Activity log for bug #1906280

Date Who What changed Old value New value Message
2020-11-30 16:29:35 Michael Skalka bug added bug
2020-11-30 16:30:07 Michael Skalka bug added subscriber Canonical Field High
2020-11-30 16:32:32 Michael Skalka description
Old value (the description as originally filed):
  As seen during this Focal Ussuri test run: https://solutions.qa.canonical.com/testruns/testRun/5f7ad510-f57e-40ce-beb7-5f39800fa5f0
  Crashdump here: https://oil-jenkins.canonical.com/artifacts/5f7ad510-f57e-40ce-beb7-5f39800fa5f0/generated/generated/openstack/juju-crashdump-openstack-2020-11-28-03.40.36.tar.gz
  Octavia's ovn-chassis units are stuck waiting:
    octavia/0 blocked idle 1/lxd/8 10.244.8.170 9876/tcp Awaiting leader to create required resources
      hacluster-octavia/1 active idle 10.244.8.170 Unit is ready and clustered
      logrotated/63 active idle 10.244.8.170 Unit is ready.
      octavia-ovn-chassis/1 waiting executing 10.244.8.170 'ovsdb' incomplete
      public-policy-routing/45 active idle 10.244.8.170 Unit is ready
  When the db is reporting healthy:
    ovn-central/0* active idle 1/lxd/9 10.246.64.225 6641/tcp,6642/tcp Unit is ready (leader: ovnnb_db, ovnsb_db)
      logrotated/19 active idle 10.246.64.225 Unit is ready.
    ovn-central/1 active idle 3/lxd/9 10.246.64.250 6641/tcp,6642/tcp Unit is ready (northd: active)
      logrotated/27 active idle 10.246.64.250 Unit is ready.
    ovn-central/2 active idle 5/lxd/9 10.246.65.21 6641/tcp,6642/tcp Unit is ready
      logrotated/52 active idle 10.246.65.21 Unit is ready.
  A warning in the juju unit logs indicates that the charm is blocking on a missing key in the ovsdb:
    2020-11-27 23:36:57 INFO juju-log ovsdb:195: Invoking reactive handler: hooks/relations/ovsdb-subordinate/provides.py:97:joined:ovsdb-subordinate
    2020-11-27 23:36:57 DEBUG jujuc server.go:211 running hook tool "relation-get"
    2020-11-27 23:36:57 WARNING ovsdb-relation-changed ovs-vsctl: no key "ovn-remote" in Open_vSwitch record "." column external_ids
    2020-11-27 23:36:57 DEBUG jujuc server.go:211 running hook tool "juju-log"
    2020-11-27 23:36:57 INFO juju-log ovsdb:195: Invoking reactive handler: hooks/relations/ovsdb/requires.py:34:joined:ovsdb
New value: the same text with one line added after the crashdump link: "Full history of occurrences can be found here: https://solutions.qa.canonical.com/bugs/bugs/bug/1906280" (a short sketch of how to inspect the key that warning refers to follows this entry).
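For reference, the key the warning complains about can be inspected directly with ovs-vsctl on an affected unit. A minimal sketch, assuming a shell on the octavia-ovn-chassis machine; the remote address in the last line is only illustrative (taken from the ovn-central addresses above) and is normally written by the charm, not by hand:

    # show the whole Open_vSwitch record, including the external_ids column
    ovs-vsctl list Open_vSwitch .
    # query the specific key; while it is unset this prints the same
    # 'no key "ovn-remote"' error seen in the unit log above
    ovs-vsctl get Open_vSwitch . external_ids:ovn-remote
    # for experimentation only, the key could be set manually, e.g.:
    #   ovs-vsctl set Open_vSwitch . external_ids:ovn-remote="tcp:10.246.64.225:6642"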
2020-12-01 15:29:11 Michael Skalka bug added subscriber Canonical Field Critical
2020-12-01 15:29:15 Michael Skalka removed subscriber Canonical Field High
2020-12-01 18:15:09 Billy Olsen charm-ovn-chassis: assignee Billy Olsen (billy-olsen)
2020-12-01 23:35:43 Billy Olsen charm-ovn-chassis: assignee Billy Olsen (billy-olsen) Dmitrii Shcherbakov (dmitriis)
2020-12-02 14:59:52 Michael Skalka charm-ovn-chassis: status New Incomplete
2020-12-03 09:24:50 Dmitrii Shcherbakov charm-ovn-chassis: status Incomplete Confirmed
2020-12-04 08:18:09 Dmitrii Shcherbakov bug task added charm-neutron-openvswitch
2020-12-04 08:36:28 Dmitrii Shcherbakov charm-ovn-chassis: status Confirmed Triaged
2020-12-04 08:36:30 Dmitrii Shcherbakov charm-neutron-openvswitch: status New Triaged
2020-12-04 08:36:32 Dmitrii Shcherbakov charm-neutron-openvswitch: status Triaged In Progress
2020-12-04 08:36:34 Dmitrii Shcherbakov charm-ovn-chassis: status Triaged In Progress
2020-12-04 08:36:40 Dmitrii Shcherbakov charm-neutron-openvswitch: importance Undecided Critical
2020-12-04 08:36:42 Dmitrii Shcherbakov charm-ovn-chassis: importance Undecided Critical
2020-12-04 08:36:45 Dmitrii Shcherbakov charm-neutron-openvswitch: assignee Dmitrii Shcherbakov (dmitriis)
2020-12-04 08:51:33 Dmitrii Shcherbakov charm-ovn-chassis: milestone 21.01
2020-12-04 08:51:34 Dmitrii Shcherbakov charm-neutron-openvswitch: milestone 21.01
2020-12-12 01:34:22 Dominique Poulain bug added subscriber Dominique Poulain
2020-12-12 15:00:13 Dan Streetman bug added subscriber Dan Streetman
2020-12-14 14:46:59 Billy Olsen charm-neutron-openvswitch: assignee Dmitrii Shcherbakov (dmitriis) Corey Bryant (corey.bryant)
2020-12-14 14:47:13 Billy Olsen charm-ovn-chassis: assignee Dmitrii Shcherbakov (dmitriis) Corey Bryant (corey.bryant)
2020-12-14 19:20:47 Corey Bryant nominated for series Ubuntu Hirsute
2020-12-14 19:20:48 Corey Bryant bug task added Ubuntu Hirsute
2020-12-14 19:23:12 Corey Bryant bug task deleted Ubuntu Hirsute
2020-12-14 19:23:14 Corey Bryant bug task deleted ubuntu
2020-12-14 19:23:32 Corey Bryant bug task added openvswitch (Ubuntu)
2020-12-14 19:23:48 Corey Bryant nominated for series Ubuntu Hirsute
2020-12-14 19:23:49 Corey Bryant bug task added openvswitch (Ubuntu Hirsute)
2020-12-14 19:23:50 Corey Bryant openvswitch (Ubuntu Hirsute): importance Undecided Critical
2020-12-14 19:23:50 Corey Bryant openvswitch (Ubuntu Hirsute): status New Triaged
2020-12-14 19:24:22 Corey Bryant nominated for series Ubuntu Focal
2020-12-14 19:24:22 Corey Bryant bug task added openvswitch (Ubuntu Focal)
2020-12-14 19:24:26 Corey Bryant openvswitch (Ubuntu Focal): importance Undecided Critical
2020-12-14 19:24:26 Corey Bryant openvswitch (Ubuntu Focal): status New Triaged
2020-12-14 19:24:44 Corey Bryant nominated for series Ubuntu Bionic
2020-12-14 19:24:45 Corey Bryant bug task added openvswitch (Ubuntu Bionic)
2020-12-14 19:24:48 Corey Bryant openvswitch (Ubuntu Bionic): importance Undecided Critical
2020-12-14 19:24:48 Corey Bryant openvswitch (Ubuntu Bionic): status New Triaged
2020-12-14 19:25:03 Corey Bryant nominated for series Ubuntu Xenial
2020-12-14 19:25:04 Corey Bryant bug task added openvswitch (Ubuntu Xenial)
2020-12-14 19:25:06 Corey Bryant openvswitch (Ubuntu Xenial): importance Undecided Critical
2020-12-14 19:25:06 Corey Bryant openvswitch (Ubuntu Xenial): status New Triaged
2020-12-14 19:25:48 Corey Bryant cloud-archive: importance Undecided Critical
2020-12-14 19:25:48 Corey Bryant cloud-archive: status New Triaged
2020-12-14 19:26:02 Corey Bryant nominated for series cloud-archive/ussuri
2020-12-14 19:26:03 Corey Bryant bug task added cloud-archive/ussuri
2020-12-14 19:26:07 Corey Bryant cloud-archive/ussuri: importance Undecided Critical
2020-12-14 19:26:07 Corey Bryant cloud-archive/ussuri: status New Triaged
2020-12-14 19:26:17 Corey Bryant nominated for series cloud-archive/train
2020-12-14 19:26:18 Corey Bryant bug task added cloud-archive/train
2020-12-14 19:26:20 Corey Bryant cloud-archive/train: importance Undecided Critical
2020-12-14 19:26:20 Corey Bryant cloud-archive/train: status New Triaged
2020-12-14 19:26:25 Corey Bryant cloud-archive: status Triaged Invalid
2020-12-14 19:26:28 Corey Bryant cloud-archive: importance Critical Undecided
2020-12-14 19:26:37 Corey Bryant nominated for series cloud-archive/stein
2020-12-14 19:26:37 Corey Bryant bug task added cloud-archive/stein
2020-12-14 19:26:39 Corey Bryant cloud-archive/stein: importance Undecided Critical
2020-12-14 19:26:39 Corey Bryant cloud-archive/stein: status New Triaged
2020-12-14 19:26:55 Corey Bryant nominated for series cloud-archive/queens
2020-12-14 19:26:56 Corey Bryant bug task added cloud-archive/queens
2020-12-14 19:26:59 Corey Bryant cloud-archive/queens: importance Undecided Critical
2020-12-14 19:26:59 Corey Bryant cloud-archive/queens: status New Triaged
2020-12-14 19:27:10 Corey Bryant nominated for series cloud-archive/mitaka
2020-12-14 19:27:10 Corey Bryant bug task added cloud-archive/mitaka
2020-12-14 19:27:14 Corey Bryant cloud-archive/mitaka: importance Undecided Critical
2020-12-14 19:27:14 Corey Bryant cloud-archive/mitaka: status New Triaged
2020-12-14 19:27:39 Corey Bryant cloud-archive/stein: assignee Corey Bryant (corey.bryant)
2020-12-14 19:45:47 Corey Bryant cloud-archive/mitaka: assignee Corey Bryant (corey.bryant)
2020-12-14 19:46:23 Corey Bryant cloud-archive/queens: assignee Corey Bryant (corey.bryant)
2020-12-14 19:46:45 Corey Bryant cloud-archive/train: assignee Corey Bryant (corey.bryant)
2020-12-14 20:11:11 Corey Bryant cloud-archive/ussuri: assignee Corey Bryant (corey.bryant)
2020-12-14 20:11:22 Corey Bryant openvswitch (Ubuntu Xenial): assignee Corey Bryant (corey.bryant)
2020-12-14 20:11:35 Corey Bryant openvswitch (Ubuntu Bionic): assignee Corey Bryant (corey.bryant)
2020-12-14 20:11:48 Corey Bryant openvswitch (Ubuntu Focal): assignee Corey Bryant (corey.bryant)
2020-12-14 20:12:01 Corey Bryant openvswitch (Ubuntu Hirsute): assignee Corey Bryant (corey.bryant)
2020-12-14 20:16:33 Corey Bryant nominated for series Ubuntu Groovy
2020-12-14 20:16:33 Corey Bryant bug task added openvswitch (Ubuntu Groovy)
2020-12-14 20:16:47 Corey Bryant openvswitch (Ubuntu Groovy): importance Undecided Critical
2020-12-14 20:16:47 Corey Bryant openvswitch (Ubuntu Groovy): status New Triaged
2020-12-14 20:16:47 Corey Bryant openvswitch (Ubuntu Groovy): assignee Corey Bryant (corey.bryant)
2020-12-14 20:53:56 Corey Bryant bug task deleted cloud-archive/mitaka
2020-12-14 20:54:03 Corey Bryant bug task deleted openvswitch (Ubuntu Xenial)
2020-12-15 18:57:37 Igor Filipovic bug added subscriber Igor Filipovic
2020-12-15 20:14:59 Corey Bryant summary Charm stuck waiting for ovsdb 'no key "ovn-remote" in Open_vSwitch record' [SRU] Add support for disabling mlockall() calls in ovs-vswitchd
2020-12-15 20:16:11 Corey Bryant description (new value: the same description, now opening with an "[Impact]" heading and the note "Original bug title: Charm stuck waiting for ovsdb 'no key "ovn-remote" in Open_vSwitch record'"; the remainder, already quoted in full in the 2020-11-30 16:32:32 entry above, is unchanged)
2020-12-15 20:17:42 Corey Bryant description (new value: adds, directly under the "[Impact]" heading, links to the analysis in comments 16 and 19: https://bugs.launchpad.net/charm-neutron-openvswitch/+bug/1906280/comments/16 and https://bugs.launchpad.net/charm-neutron-openvswitch/+bug/1906280/comments/19; the rest of the description is unchanged)
2020-12-15 20:21:54 Corey Bryant description
New value: the "[Impact]" section is replaced with the actual SRU rationale, the comment links are moved under a "More details" note, and the original report is labelled as the original bug details; the quoted juju status and unit-log excerpts themselves are unchanged. The new opening reads:
  [Impact]
  Recent changes to systemd rlimit are resulting in memory exhaustion with ovs-vswitchd's use of mlockall(). mlockall() can be disabled via /etc/defaults/openvswitch-vswitch, however there is currently a bug in the shipped ovs-vswitchd systemd unit file and default environment variable file. The package will be fixed in this SRU. Additionally the neutron-openvswitch charm will be updated to enable disabling of mlockall() use in ovs-vswitchd.
  More details on the above summary can be found in the following comments:
  https://bugs.launchpad.net/charm-neutron-openvswitch/+bug/1906280/comments/16
  https://bugs.launchpad.net/charm-neutron-openvswitch/+bug/1906280/comments/19
  Original bug title: Charm stuck waiting for ovsdb 'no key "ovn-remote" in Open_vSwitch record'
  Original bug details: (as quoted in the 2020-11-30 16:32:32 entry above)
(a sketch of how disabling mlockall() is expected to work follows this entry)
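For context, a minimal sketch of how disabling mlockall() is normally driven on Ubuntu, assuming the stock defaults file /etc/default/openvswitch-switch (the path quoted in the description above appears to be a typo), the OVS_CTL_OPTS hook it documents, and ovs-ctl's --no-mlockall option; whether the shipped systemd unit actually honours this is exactly the gap this SRU addresses:

    # /etc/default/openvswitch-switch
    OVS_CTL_OPTS='--no-mlockall'

    # restart the daemon and check how much memory it has locked
    systemctl restart openvswitch-switch
    grep VmLck /proc/$(pidof -s ovs-vswitchd)/status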
2020-12-15 20:22:38 Corey Bryant description (new value: the second [Impact] sentence now ends "...however there is currently a bug in the shipped ovs-vswitchd systemd unit file that prevents it." instead of "...systemd unit file and default environment variable file."; nothing else changes)
2020-12-15 20:22:58 Corey Bryant description (new value: the final [Impact] sentence now ends "...to enable disabling of mlockall() use in ovs-vswitchd via a config option."; nothing else changes)
2020-12-15 20:23:34 Corey Bryant description (new value: the original report is now fenced between an "==== Original bug details ===" marker and a closing "==============================" line; nothing else changes)
2020-12-15 20:30:08 Corey Bryant description
New value: appends [Test Case] and [Regression Potential] sections after the original bug details (the test steps are spelled out as commands after this entry):
  [Test Case]
  The easiest way to test this is to deploy openstack with the neutron-openvswitch charm, using the new charm updates. Once deployed, edit /usr/share/openvswitch/scripts/ovs-ctl with an echo to show what MLOCKALL is set to. Then toggle the charm config option [1] and look at journalctl -xe to find the echo output, which should correspond to the mlockall setting.
  [1]
  juju config neutron-openvswitch disable-mlockall=true
  juju config neutron-openvswitch disable-mlockall=false
  [Regression Potential]
  There's potential that this will break users who have come to depend on the incorrect EnvironmentFile setting and environment variable in the systemd unit file for ovs-vswitchd. If that is the case they must be running with modified systemd unit files anyway, so it is probably a moot point.
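Spelled out as commands, the test above looks roughly like this; the echo text and the grep pattern are only one way of surfacing the value (MLOCKALL is the variable the ovs-ctl script itself uses, per the test case):

    # 1. on the unit, add a temporary trace to the packaged helper, e.g.
    #      echo "MLOCKALL is set to: $MLOCKALL"
    #    near the point in /usr/share/openvswitch/scripts/ovs-ctl where it
    #    decides whether to pass --mlockall to ovs-vswitchd
    # 2. toggle the new charm option either way
    juju config neutron-openvswitch disable-mlockall=true
    juju config neutron-openvswitch disable-mlockall=false
    # 3. after each toggle, confirm the trace in the journal matches the setting
    journalctl -xe | grep -i mlockall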
2020-12-15 20:39:13 Corey Bryant description
New value: appends a [Discussion] section after [Regression Potential] (commands for pulling apart the referenced debdiff follow this entry):
  [Discussion]
  I have a query out to James and Christian about an undocumented commit that is getting picked up in the groovy upload. See debian/ltmain-whole-archive.diff and debian/rules in the upload debdiff at http://launchpadlibrarian.net/511453613/openvswitch_2.13.1-0ubuntu1_2.13.1-0ubuntu1.1.diff.gz
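To look at the two files called out above without unpacking the source, the debdiff can be fetched and filtered directly; a small sketch (filterdiff comes from the patchutils package):

    wget http://launchpadlibrarian.net/511453613/openvswitch_2.13.1-0ubuntu1_2.13.1-0ubuntu1.1.diff.gz
    # page through the whole upload diff
    zcat openvswitch_2.13.1-0ubuntu1_2.13.1-0ubuntu1.1.diff.gz | less
    # or keep only the hunks touching debian/rules and the new patch
    zcat openvswitch_2.13.1-0ubuntu1_2.13.1-0ubuntu1.1.diff.gz | \
        filterdiff -i '*/debian/rules' -i '*/debian/ltmain-whole-archive.diff'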
2020-12-15 20:39:57 Corey Bryant description (new value: adds one sentence to [Discussion] after its first sentence: "It is committed to the ubuntu/groovy branch of the package Vcs."; nothing else changes)
2020-12-15 20:46:39 Corey Bryant description [Impact] Recent changes to systemd rlimit are resulting in memory exhaustion with ovs-vswitchd's use of mlockall(). mlockall() can be disabled via /etc/defaults/openvswitch-vswitch, however there is currently a bug in the shipped ovs-vswitchd systemd unit file that prevents it. The package will be fixed in this SRU. Additionally the neutron-openvswitch charm will be updated to enable disabling of mlockall() use in ovs-vswitchd via a config option. More details on the above summary can be found in the following comments: https://bugs.launchpad.net/charm-neutron-openvswitch/+bug/1906280/comments/16 https://bugs.launchpad.net/charm-neutron-openvswitch/+bug/1906280/comments/19 ==== Original bug details === Original bug title: Charm stuck waiting for ovsdb 'no key "ovn-remote" in Open_vSwitch record' Original bug details: As seen during this Focal Ussuri test run: https://solutions.qa.canonical.com/testruns/testRun/5f7ad510-f57e-40ce-beb7-5f39800fa5f0 Crashdump here: https://oil-jenkins.canonical.com/artifacts/5f7ad510-f57e-40ce-beb7-5f39800fa5f0/generated/generated/openstack/juju-crashdump-openstack-2020-11-28-03.40.36.tar.gz Full history of occurrences can be found here: https://solutions.qa.canonical.com/bugs/bugs/bug/1906280 Octavia's ovn-chassis units are stuck waiting: octavia/0 blocked idle 1/lxd/8 10.244.8.170 9876/tcp Awaiting leader to create required resources   hacluster-octavia/1 active idle 10.244.8.170 Unit is ready and clustered   logrotated/63 active idle 10.244.8.170 Unit is ready.   octavia-ovn-chassis/1 waiting executing 10.244.8.170 'ovsdb' incomplete   public-policy-routing/45 active idle 10.244.8.170 Unit is ready When the db is reporting healthy: ovn-central/0* active idle 1/lxd/9 10.246.64.225 6641/tcp,6642/tcp Unit is ready (leader: ovnnb_db, ovnsb_db)   logrotated/19 active idle 10.246.64.225 Unit is ready. ovn-central/1 active idle 3/lxd/9 10.246.64.250 6641/tcp,6642/tcp Unit is ready (northd: active)   logrotated/27 active idle 10.246.64.250 Unit is ready. ovn-central/2 active idle 5/lxd/9 10.246.65.21 6641/tcp,6642/tcp Unit is ready   logrotated/52 active idle 10.246.65.21 Unit is ready. Warning in the juju unit logs indicates that the charm is blocking on a missing key in the ovsdb: 2020-11-27 23:36:57 INFO juju-log ovsdb:195: Invoking reactive handler: hooks/relations/ovsdb-subordinate/provides.py:97:joined:ovsdb-subordinate 2020-11-27 23:36:57 DEBUG jujuc server.go:211 running hook tool "relation-get" 2020-11-27 23:36:57 WARNING ovsdb-relation-changed ovs-vsctl: no key "ovn-remote" in Open_vSwitch record "." column external_ids 2020-11-27 23:36:57 DEBUG jujuc server.go:211 running hook tool "juju-log" 2020-11-27 23:36:57 INFO juju-log ovsdb:195: Invoking reactive handler: hooks/relations/ovsdb/requires.py:34:joined:ovsdb ============================== [Test Case] The easiest way to test this is to deploy openstack with the neutron-openvswitch charm, using the new charm updates. Once deployed, edit /usr/share/openvswitch/scripts/ovs-ctl with an echo to show what MLOCKALL is set to. Then toggle the charm config option [1] and look at journalctl -xe to find the echo output, which should correspond to the mlockall setting. 
[1] juju config neutron-openvswitch disable-mlockall=true juju config neutron-openvswitch disable-mlockall=false [Regression Potential] There's potential that this will break users who have come to depend on the incorrect EnvironmentFile setting and environment variable in the systemd unit file for ovs-vswitchd. If that is the case they must be running with modified systemd unit files anyway so it is probably a moot point. [Discussion] I have a query out to James and Christian about an undocumented commit that is getting picked up in the groovy upload. It is committed to the ubuntu/groovy branch of the package Vcs. See debian/ltmain-whole-archive.diff and debian/rules in the upload debdiff at http://launchpadlibrarian.net/511453613/openvswitch_2.13.1-0ubuntu1_2.13.1-0ubuntu1.1.diff.gz [Impact] Recent changes to systemd rlimit are resulting in memory exhaustion with ovs-vswitchd's use of mlockall(). mlockall() can be disabled via /etc/defaults/openvswitch-vswitch, however there is currently a bug in the shipped ovs-vswitchd systemd unit file that prevents it. The package will be fixed in this SRU. Additionally the neutron-openvswitch charm will be updated to enable disabling of mlockall() use in ovs-vswitchd via a config option. More details on the above summary can be found in the following comments: https://bugs.launchpad.net/charm-neutron-openvswitch/+bug/1906280/comments/16 https://bugs.launchpad.net/charm-neutron-openvswitch/+bug/1906280/comments/19 ==== Original bug details === Original bug title: Charm stuck waiting for ovsdb 'no key "ovn-remote" in Open_vSwitch record' Original bug details: As seen during this Focal Ussuri test run: https://solutions.qa.canonical.com/testruns/testRun/5f7ad510-f57e-40ce-beb7-5f39800fa5f0 Crashdump here: https://oil-jenkins.canonical.com/artifacts/5f7ad510-f57e-40ce-beb7-5f39800fa5f0/generated/generated/openstack/juju-crashdump-openstack-2020-11-28-03.40.36.tar.gz Full history of occurrences can be found here: https://solutions.qa.canonical.com/bugs/bugs/bug/1906280 Octavia's ovn-chassis units are stuck waiting: octavia/0 blocked idle 1/lxd/8 10.244.8.170 9876/tcp Awaiting leader to create required resources   hacluster-octavia/1 active idle 10.244.8.170 Unit is ready and clustered   logrotated/63 active idle 10.244.8.170 Unit is ready.   octavia-ovn-chassis/1 waiting executing 10.244.8.170 'ovsdb' incomplete   public-policy-routing/45 active idle 10.244.8.170 Unit is ready When the db is reporting healthy: ovn-central/0* active idle 1/lxd/9 10.246.64.225 6641/tcp,6642/tcp Unit is ready (leader: ovnnb_db, ovnsb_db)   logrotated/19 active idle 10.246.64.225 Unit is ready. ovn-central/1 active idle 3/lxd/9 10.246.64.250 6641/tcp,6642/tcp Unit is ready (northd: active)   logrotated/27 active idle 10.246.64.250 Unit is ready. ovn-central/2 active idle 5/lxd/9 10.246.65.21 6641/tcp,6642/tcp Unit is ready   logrotated/52 active idle 10.246.65.21 Unit is ready. Warning in the juju unit logs indicates that the charm is blocking on a missing key in the ovsdb: 2020-11-27 23:36:57 INFO juju-log ovsdb:195: Invoking reactive handler: hooks/relations/ovsdb-subordinate/provides.py:97:joined:ovsdb-subordinate 2020-11-27 23:36:57 DEBUG jujuc server.go:211 running hook tool "relation-get" 2020-11-27 23:36:57 WARNING ovsdb-relation-changed ovs-vsctl: no key "ovn-remote" in Open_vSwitch record "." 
column external_ids 2020-11-27 23:36:57 DEBUG jujuc server.go:211 running hook tool "juju-log" 2020-11-27 23:36:57 INFO juju-log ovsdb:195: Invoking reactive handler: hooks/relations/ovsdb/requires.py:34:joined:ovsdb ============================== [Test Case] The easiest way to test this is to deploy openstack with the neutron-openvswitch charm, using the new charm updates. Once deployed, edit /usr/share/openvswitch/scripts/ovs-ctl with an echo to show what MLOCKALL is set to. Then toggle the charm config option [1] and look at journalctl -xe to find the echo output, which should correspond to the mlockall setting. [1] juju config neutron-openvswitch disable-mlockall=true juju config neutron-openvswitch disable-mlockall=false [Regression Potential] There's potential that this will break users who have come to depend on the incorrect EnvironmentFile setting and environment variable in the systemd unit file for ovs-vswitchd. If that is the case they must be running with modified systemd unit files anyway so it is probably a moot point. [Discussion] == Groovy == I have a query out to James and Christian about an undocumented commit that is getting picked up in the groovy upload. It is committed to the ubuntu/groovy branch of the package Vcs. See debian/ltmain-whole-archive.diff and debian/rules in the upload debdiff at http://launchpadlibrarian.net/511453613/openvswitch_2.13.1-0ubuntu1_2.13.1-0ubuntu1.1.diff.gz == Bionic == The bionic upload is paired with the following SRUs which will also require verification: https://bugs.launchpad.net/bugs/1823295 https://bugs.launchpad.net/bugs/1881077
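For readers following the [Impact] text repeated above: independent of the charm, mlockall() can be switched off by handing --no-mlockall to ovs-ctl through the packaging defaults file, which is what the fixed unit file is meant to make possible. A minimal sketch, assuming the Ubuntu path /etc/default/openvswitch-switch (the description's /etc/defaults/openvswitch-vswitch spelling) and assuming the repaired ovs-vswitchd unit passes OVS_CTL_OPTS through to ovs-ctl:

# /etc/default/openvswitch-switch
# Extra options handed to ovs-ctl when the service starts; --no-mlockall stops
# ovs-vswitchd from locking all of its memory at startup.
OVS_CTL_OPTS='--no-mlockall'

# Apply and confirm that no memory is locked any more:
sudo systemctl restart openvswitch-switch
grep VmLck /proc/$(pidof ovs-vswitchd)/status    # expect "VmLck: 0 kB"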
2020-12-15 20:48:22 Corey Bryant summary [SRU] Add support for disabling memlockall() calls in ovs-vswitchd [SRU] Add support for disabling mlockall() calls in ovs-vswitchd
2020-12-15 20:56:05 Corey Bryant description [Impact] Recent changes to systemd rlimit are resulting in memory exhaustion with ovs-vswitchd's use of mlockall(). mlockall() can be disabled via /etc/defaults/openvswitch-vswitch, however there is currently a bug in the shipped ovs-vswitchd systemd unit file that prevents it. The package will be fixed in this SRU. Additionally the neutron-openvswitch charm will be updated to enable disabling of mlockall() use in ovs-vswitchd via a config option. More details on the above summary can be found in the following comments: https://bugs.launchpad.net/charm-neutron-openvswitch/+bug/1906280/comments/16 https://bugs.launchpad.net/charm-neutron-openvswitch/+bug/1906280/comments/19 ==== Original bug details === Original bug title: Charm stuck waiting for ovsdb 'no key "ovn-remote" in Open_vSwitch record' Original bug details: As seen during this Focal Ussuri test run: https://solutions.qa.canonical.com/testruns/testRun/5f7ad510-f57e-40ce-beb7-5f39800fa5f0 Crashdump here: https://oil-jenkins.canonical.com/artifacts/5f7ad510-f57e-40ce-beb7-5f39800fa5f0/generated/generated/openstack/juju-crashdump-openstack-2020-11-28-03.40.36.tar.gz Full history of occurrences can be found here: https://solutions.qa.canonical.com/bugs/bugs/bug/1906280 Octavia's ovn-chassis units are stuck waiting: octavia/0 blocked idle 1/lxd/8 10.244.8.170 9876/tcp Awaiting leader to create required resources   hacluster-octavia/1 active idle 10.244.8.170 Unit is ready and clustered   logrotated/63 active idle 10.244.8.170 Unit is ready.   octavia-ovn-chassis/1 waiting executing 10.244.8.170 'ovsdb' incomplete   public-policy-routing/45 active idle 10.244.8.170 Unit is ready When the db is reporting healthy: ovn-central/0* active idle 1/lxd/9 10.246.64.225 6641/tcp,6642/tcp Unit is ready (leader: ovnnb_db, ovnsb_db)   logrotated/19 active idle 10.246.64.225 Unit is ready. ovn-central/1 active idle 3/lxd/9 10.246.64.250 6641/tcp,6642/tcp Unit is ready (northd: active)   logrotated/27 active idle 10.246.64.250 Unit is ready. ovn-central/2 active idle 5/lxd/9 10.246.65.21 6641/tcp,6642/tcp Unit is ready   logrotated/52 active idle 10.246.65.21 Unit is ready. Warning in the juju unit logs indicates that the charm is blocking on a missing key in the ovsdb: 2020-11-27 23:36:57 INFO juju-log ovsdb:195: Invoking reactive handler: hooks/relations/ovsdb-subordinate/provides.py:97:joined:ovsdb-subordinate 2020-11-27 23:36:57 DEBUG jujuc server.go:211 running hook tool "relation-get" 2020-11-27 23:36:57 WARNING ovsdb-relation-changed ovs-vsctl: no key "ovn-remote" in Open_vSwitch record "." column external_ids 2020-11-27 23:36:57 DEBUG jujuc server.go:211 running hook tool "juju-log" 2020-11-27 23:36:57 INFO juju-log ovsdb:195: Invoking reactive handler: hooks/relations/ovsdb/requires.py:34:joined:ovsdb ============================== [Test Case] The easiest way to test this is to deploy openstack with the neutron-openvswitch charm, using the new charm updates. Once deployed, edit /usr/share/openvswitch/scripts/ovs-ctl with an echo to show what MLOCKALL is set to. Then toggle the charm config option [1] and look at journalctl -xe to find the echo output, which should correspond to the mlockall setting. 
[1] juju config neutron-openvswitch disable-mlockall=true juju config neutron-openvswitch disable-mlockall=false [Regression Potential] There's potential that this will break users who have come to depend on the incorrect EnvironmentFile setting and environment variable in the systemd unit file for ovs-vswitchd. If that is the case they must be running with modified systemd unit files anyway so it is probably a moot point. [Discussion] == Groovy == I have a query out to James and Christian about an undocumented commit that is getting picked up in the groovy upload. It is committed to the ubuntu/groovy branch of the package Vcs. See debian/ltmain-whole-archive.diff and debian/rules in the upload debdiff at http://launchpadlibrarian.net/511453613/openvswitch_2.13.1-0ubuntu1_2.13.1-0ubuntu1.1.diff.gz == Bionic == The bionic upload is paired with the following SRUs which will also require verification: https://bugs.launchpad.net/bugs/1823295 https://bugs.launchpad.net/bugs/1881077 [Impact] Recent changes to systemd rlimit are resulting in memory exhaustion with ovs-vswitchd's use of mlockall(). mlockall() can be disabled via /etc/defaults/openvswitch-vswitch, however there is currently a bug in the shipped ovs-vswitchd systemd unit file that prevents it. The package will be fixed in this SRU. Additionally the neutron-openvswitch charm will be updated to enable disabling of mlockall() use in ovs-vswitchd via a config option. More details on the above summary can be found in the following comments: https://bugs.launchpad.net/charm-neutron-openvswitch/+bug/1906280/comments/16 https://bugs.launchpad.net/charm-neutron-openvswitch/+bug/1906280/comments/19 ==== Original bug details === Original bug title: Charm stuck waiting for ovsdb 'no key "ovn-remote" in Open_vSwitch record' Original bug details: As seen during this Focal Ussuri test run: https://solutions.qa.canonical.com/testruns/testRun/5f7ad510-f57e-40ce-beb7-5f39800fa5f0 Crashdump here: https://oil-jenkins.canonical.com/artifacts/5f7ad510-f57e-40ce-beb7-5f39800fa5f0/generated/generated/openstack/juju-crashdump-openstack-2020-11-28-03.40.36.tar.gz Full history of occurrences can be found here: https://solutions.qa.canonical.com/bugs/bugs/bug/1906280 Octavia's ovn-chassis units are stuck waiting: octavia/0 blocked idle 1/lxd/8 10.244.8.170 9876/tcp Awaiting leader to create required resources   hacluster-octavia/1 active idle 10.244.8.170 Unit is ready and clustered   logrotated/63 active idle 10.244.8.170 Unit is ready.   octavia-ovn-chassis/1 waiting executing 10.244.8.170 'ovsdb' incomplete   public-policy-routing/45 active idle 10.244.8.170 Unit is ready When the db is reporting healthy: ovn-central/0* active idle 1/lxd/9 10.246.64.225 6641/tcp,6642/tcp Unit is ready (leader: ovnnb_db, ovnsb_db)   logrotated/19 active idle 10.246.64.225 Unit is ready. ovn-central/1 active idle 3/lxd/9 10.246.64.250 6641/tcp,6642/tcp Unit is ready (northd: active)   logrotated/27 active idle 10.246.64.250 Unit is ready. ovn-central/2 active idle 5/lxd/9 10.246.65.21 6641/tcp,6642/tcp Unit is ready   logrotated/52 active idle 10.246.65.21 Unit is ready. 
Warning in the juju unit logs indicates that the charm is blocking on a missing key in the ovsdb: 2020-11-27 23:36:57 INFO juju-log ovsdb:195: Invoking reactive handler: hooks/relations/ovsdb-subordinate/provides.py:97:joined:ovsdb-subordinate 2020-11-27 23:36:57 DEBUG jujuc server.go:211 running hook tool "relation-get" 2020-11-27 23:36:57 WARNING ovsdb-relation-changed ovs-vsctl: no key "ovn-remote" in Open_vSwitch record "." column external_ids 2020-11-27 23:36:57 DEBUG jujuc server.go:211 running hook tool "juju-log" 2020-11-27 23:36:57 INFO juju-log ovsdb:195: Invoking reactive handler: hooks/relations/ovsdb/requires.py:34:joined:ovsdb ============================== [Test Case] The easiest way to test this is to deploy openstack with the neutron-openvswitch charm, using the new charm updates. Once deployed, edit /usr/share/openvswitch/scripts/ovs-ctl with an echo to show what MLOCKALL is set to. Then toggle the charm config option [1] and look at journalctl -xe to find the echo output, which should correspond to the mlockall setting. [1] juju config neutron-openvswitch disable-mlockall=true juju config neutron-openvswitch disable-mlockall=false [Regression Potential] There's potential that this will break users who have come to depend on the incorrect EnvironmentFile setting and environment variable in the systemd unit file for ovs-vswitchd. If that is the case they must be running with modified systemd unit files anyway so it is probably a moot point. [Discussion] == Groovy == I have a query out to James and Christian about an undocumented commit that is getting picked up in the groovy upload. It is committed to the ubuntu/groovy branch of the package Vcs. See debian/ltmain-whole-archive.diff and debian/rules in the upload debdiff at http://launchpadlibrarian.net/511453613/openvswitch_2.13.1-0ubuntu1_2.13.1-0ubuntu1.1.diff.gz == Bionic == The bionic upload is paired with the following SRUs which will also require verification: https://bugs.launchpad.net/bugs/1823295 https://bugs.launchpad.net/bugs/1881077 == Package details == New package versions are in progress and can be found at: hirsute: https://launchpad.net/ubuntu/+source/openvswitch/2.14.0-0ubuntu2 groovy: https://launchpad.net/ubuntu/groovy/+queue?queue_state=1&queue_text=openvswitch focal: https://launchpad.net/ubuntu/focal/+queue?queue_state=1&queue_text=openvswitch train: https://launchpad.net/~ubuntu-cloud-archive/+archive/ubuntu/train-staging/+packages?field.name_filter=openvswitch&field.status_filter=published&field.series_filter= stein: https://launchpad.net/~ubuntu-cloud-archive/+archive/ubuntu/stein-staging/+packages?field.name_filter=openvswitch&field.status_filter=published&field.series_filter= bionic: https://launchpad.net/ubuntu/bionic/+queue?queue_state=1&queue_text=openvswitch == Charm update == https://review.opendev.org/c/openstack/charm-neutron-openvswitch/+/767212
2020-12-15 21:09:04 Corey Bryant description [Impact] Recent changes to systemd rlimit are resulting in memory exhaustion with ovs-vswitchd's use of mlockall(). mlockall() can be disabled via /etc/defaults/openvswitch-vswitch, however there is currently a bug in the shipped ovs-vswitchd systemd unit file that prevents it. The package will be fixed in this SRU. Additionally the neutron-openvswitch charm will be updated to enable disabling of mlockall() use in ovs-vswitchd via a config option. More details on the above summary can be found in the following comments: https://bugs.launchpad.net/charm-neutron-openvswitch/+bug/1906280/comments/16 https://bugs.launchpad.net/charm-neutron-openvswitch/+bug/1906280/comments/19 ==== Original bug details === Original bug title: Charm stuck waiting for ovsdb 'no key "ovn-remote" in Open_vSwitch record' Original bug details: As seen during this Focal Ussuri test run: https://solutions.qa.canonical.com/testruns/testRun/5f7ad510-f57e-40ce-beb7-5f39800fa5f0 Crashdump here: https://oil-jenkins.canonical.com/artifacts/5f7ad510-f57e-40ce-beb7-5f39800fa5f0/generated/generated/openstack/juju-crashdump-openstack-2020-11-28-03.40.36.tar.gz Full history of occurrences can be found here: https://solutions.qa.canonical.com/bugs/bugs/bug/1906280 Octavia's ovn-chassis units are stuck waiting: octavia/0 blocked idle 1/lxd/8 10.244.8.170 9876/tcp Awaiting leader to create required resources   hacluster-octavia/1 active idle 10.244.8.170 Unit is ready and clustered   logrotated/63 active idle 10.244.8.170 Unit is ready.   octavia-ovn-chassis/1 waiting executing 10.244.8.170 'ovsdb' incomplete   public-policy-routing/45 active idle 10.244.8.170 Unit is ready When the db is reporting healthy: ovn-central/0* active idle 1/lxd/9 10.246.64.225 6641/tcp,6642/tcp Unit is ready (leader: ovnnb_db, ovnsb_db)   logrotated/19 active idle 10.246.64.225 Unit is ready. ovn-central/1 active idle 3/lxd/9 10.246.64.250 6641/tcp,6642/tcp Unit is ready (northd: active)   logrotated/27 active idle 10.246.64.250 Unit is ready. ovn-central/2 active idle 5/lxd/9 10.246.65.21 6641/tcp,6642/tcp Unit is ready   logrotated/52 active idle 10.246.65.21 Unit is ready. Warning in the juju unit logs indicates that the charm is blocking on a missing key in the ovsdb: 2020-11-27 23:36:57 INFO juju-log ovsdb:195: Invoking reactive handler: hooks/relations/ovsdb-subordinate/provides.py:97:joined:ovsdb-subordinate 2020-11-27 23:36:57 DEBUG jujuc server.go:211 running hook tool "relation-get" 2020-11-27 23:36:57 WARNING ovsdb-relation-changed ovs-vsctl: no key "ovn-remote" in Open_vSwitch record "." column external_ids 2020-11-27 23:36:57 DEBUG jujuc server.go:211 running hook tool "juju-log" 2020-11-27 23:36:57 INFO juju-log ovsdb:195: Invoking reactive handler: hooks/relations/ovsdb/requires.py:34:joined:ovsdb ============================== [Test Case] The easiest way to test this is to deploy openstack with the neutron-openvswitch charm, using the new charm updates. Once deployed, edit /usr/share/openvswitch/scripts/ovs-ctl with an echo to show what MLOCKALL is set to. Then toggle the charm config option [1] and look at journalctl -xe to find the echo output, which should correspond to the mlockall setting. 
[1] juju config neutron-openvswitch disable-mlockall=true juju config neutron-openvswitch disable-mlockall=false [Regression Potential] There's potential that this will break users who have come to depend on the incorrect EnvironmentFile setting and environment variable in the systemd unit file for ovs-vswitchd. If that is the case they must be running with modified systemd unit files anyway so it is probably a moot point. [Discussion] == Groovy == I have a query out to James and Christian about an undocumented commit that is getting picked up in the groovy upload. It is committed to the ubuntu/groovy branch of the package Vcs. See debian/ltmain-whole-archive.diff and debian/rules in the upload debdiff at http://launchpadlibrarian.net/511453613/openvswitch_2.13.1-0ubuntu1_2.13.1-0ubuntu1.1.diff.gz == Bionic == The bionic upload is paired with the following SRUs which will also require verification: https://bugs.launchpad.net/bugs/1823295 https://bugs.launchpad.net/bugs/1881077 == Package details == New package versions are in progress and can be found at: hirsute: https://launchpad.net/ubuntu/+source/openvswitch/2.14.0-0ubuntu2 groovy: https://launchpad.net/ubuntu/groovy/+queue?queue_state=1&queue_text=openvswitch focal: https://launchpad.net/ubuntu/focal/+queue?queue_state=1&queue_text=openvswitch train: https://launchpad.net/~ubuntu-cloud-archive/+archive/ubuntu/train-staging/+packages?field.name_filter=openvswitch&field.status_filter=published&field.series_filter= stein: https://launchpad.net/~ubuntu-cloud-archive/+archive/ubuntu/stein-staging/+packages?field.name_filter=openvswitch&field.status_filter=published&field.series_filter= bionic: https://launchpad.net/ubuntu/bionic/+queue?queue_state=1&queue_text=openvswitch == Charm update == https://review.opendev.org/c/openstack/charm-neutron-openvswitch/+/767212 [Impact] Recent changes to systemd rlimit are resulting in memory exhaustion with ovs-vswitchd's use of mlockall(). mlockall() can be disabled via /etc/defaults/openvswitch-vswitch, however there is currently a bug in the shipped ovs-vswitchd systemd unit file that prevents it. The package will be fixed in this SRU. Additionally the neutron-openvswitch charm will be updated to enable disabling of mlockall() use in ovs-vswitchd via a config option. More details on the above summary can be found in the following comments: https://bugs.launchpad.net/charm-neutron-openvswitch/+bug/1906280/comments/16 https://bugs.launchpad.net/charm-neutron-openvswitch/+bug/1906280/comments/19 ==== Original bug details === Original bug title: Charm stuck waiting for ovsdb 'no key "ovn-remote" in Open_vSwitch record' Original bug details: As seen during this Focal Ussuri test run: https://solutions.qa.canonical.com/testruns/testRun/5f7ad510-f57e-40ce-beb7-5f39800fa5f0 Crashdump here: https://oil-jenkins.canonical.com/artifacts/5f7ad510-f57e-40ce-beb7-5f39800fa5f0/generated/generated/openstack/juju-crashdump-openstack-2020-11-28-03.40.36.tar.gz Full history of occurrences can be found here: https://solutions.qa.canonical.com/bugs/bugs/bug/1906280 Octavia's ovn-chassis units are stuck waiting: octavia/0 blocked idle 1/lxd/8 10.244.8.170 9876/tcp Awaiting leader to create required resources   hacluster-octavia/1 active idle 10.244.8.170 Unit is ready and clustered   logrotated/63 active idle 10.244.8.170 Unit is ready.   
octavia-ovn-chassis/1 waiting executing 10.244.8.170 'ovsdb' incomplete   public-policy-routing/45 active idle 10.244.8.170 Unit is ready When the db is reporting healthy: ovn-central/0* active idle 1/lxd/9 10.246.64.225 6641/tcp,6642/tcp Unit is ready (leader: ovnnb_db, ovnsb_db)   logrotated/19 active idle 10.246.64.225 Unit is ready. ovn-central/1 active idle 3/lxd/9 10.246.64.250 6641/tcp,6642/tcp Unit is ready (northd: active)   logrotated/27 active idle 10.246.64.250 Unit is ready. ovn-central/2 active idle 5/lxd/9 10.246.65.21 6641/tcp,6642/tcp Unit is ready   logrotated/52 active idle 10.246.65.21 Unit is ready. Warning in the juju unit logs indicates that the charm is blocking on a missing key in the ovsdb: 2020-11-27 23:36:57 INFO juju-log ovsdb:195: Invoking reactive handler: hooks/relations/ovsdb-subordinate/provides.py:97:joined:ovsdb-subordinate 2020-11-27 23:36:57 DEBUG jujuc server.go:211 running hook tool "relation-get" 2020-11-27 23:36:57 WARNING ovsdb-relation-changed ovs-vsctl: no key "ovn-remote" in Open_vSwitch record "." column external_ids 2020-11-27 23:36:57 DEBUG jujuc server.go:211 running hook tool "juju-log" 2020-11-27 23:36:57 INFO juju-log ovsdb:195: Invoking reactive handler: hooks/relations/ovsdb/requires.py:34:joined:ovsdb ============================== [Test Case] Note: Bionic requires additional testing due to pairing with other SRUS. The easiest way to test this is to deploy openstack with the neutron-openvswitch charm, using the new charm updates. Once deployed, edit /usr/share/openvswitch/scripts/ovs-ctl with an echo to show what MLOCKALL is set to. Then toggle the charm config option [1] and look at journalctl -xe to find the echo output, which should correspond to the mlockall setting. [1] juju config neutron-openvswitch disable-mlockall=true juju config neutron-openvswitch disable-mlockall=false [Regression Potential] There's potential that this will break users who have come to depend on the incorrect EnvironmentFile setting and environment variable in the systemd unit file for ovs-vswitchd. If that is the case they must be running with modified systemd unit files anyway so it is probably a moot point. [Discussion] == Groovy == I have a query out to James and Christian about an undocumented commit that is getting picked up in the groovy upload. It is committed to the ubuntu/groovy branch of the package Vcs. 
See debian/ltmain-whole-archive.diff and debian/rules in the upload debdiff at http://launchpadlibrarian.net/511453613/openvswitch_2.13.1-0ubuntu1_2.13.1-0ubuntu1.1.diff.gz == Bionic == The bionic upload is paired with the following SRUs which will also require verification: https://bugs.launchpad.net/bugs/1823295 https://bugs.launchpad.net/bugs/1881077 == Package details == New package versions are in progress and can be found at: hirsute: https://launchpad.net/ubuntu/+source/openvswitch/2.14.0-0ubuntu2 groovy: https://launchpad.net/ubuntu/groovy/+queue?queue_state=1&queue_text=openvswitch focal: https://launchpad.net/ubuntu/focal/+queue?queue_state=1&queue_text=openvswitch train: https://launchpad.net/~ubuntu-cloud-archive/+archive/ubuntu/train-staging/+packages?field.name_filter=openvswitch&field.status_filter=published&field.series_filter= stein: https://launchpad.net/~ubuntu-cloud-archive/+archive/ubuntu/stein-staging/+packages?field.name_filter=openvswitch&field.status_filter=published&field.series_filter= bionic: https://launchpad.net/ubuntu/bionic/+queue?queue_state=1&queue_text=openvswitch == Charm update == https://review.opendev.org/c/openstack/charm-neutron-openvswitch/+/767212
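The package and charm pointers collected in the entry above can be checked against a running deployment. A sketch using stock Juju 2.x and apt tooling; no assumption is made about the exact version strings that land in each pocket or in the cloud archive:

# Which openvswitch build is each unit actually running, and from which pocket?
juju run --application neutron-openvswitch 'dpkg-query -W openvswitch-switch'
juju run --application neutron-openvswitch 'apt-cache policy openvswitch-switch'

# Does the deployed charm expose the new option?
juju config neutron-openvswitch disable-mlockall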
2020-12-16 15:09:16 Corey Bryant bug added subscriber Ubuntu Stable Release Updates Team
2020-12-16 15:10:10 Corey Bryant description [Impact] Recent changes to systemd rlimit are resulting in memory exhaustion with ovs-vswitchd's use of mlockall(). mlockall() can be disabled via /etc/defaults/openvswitch-vswitch, however there is currently a bug in the shipped ovs-vswitchd systemd unit file that prevents it. The package will be fixed in this SRU. Additionally the neutron-openvswitch charm will be updated to enable disabling of mlockall() use in ovs-vswitchd via a config option. More details on the above summary can be found in the following comments: https://bugs.launchpad.net/charm-neutron-openvswitch/+bug/1906280/comments/16 https://bugs.launchpad.net/charm-neutron-openvswitch/+bug/1906280/comments/19 ==== Original bug details === Original bug title: Charm stuck waiting for ovsdb 'no key "ovn-remote" in Open_vSwitch record' Original bug details: As seen during this Focal Ussuri test run: https://solutions.qa.canonical.com/testruns/testRun/5f7ad510-f57e-40ce-beb7-5f39800fa5f0 Crashdump here: https://oil-jenkins.canonical.com/artifacts/5f7ad510-f57e-40ce-beb7-5f39800fa5f0/generated/generated/openstack/juju-crashdump-openstack-2020-11-28-03.40.36.tar.gz Full history of occurrences can be found here: https://solutions.qa.canonical.com/bugs/bugs/bug/1906280 Octavia's ovn-chassis units are stuck waiting: octavia/0 blocked idle 1/lxd/8 10.244.8.170 9876/tcp Awaiting leader to create required resources   hacluster-octavia/1 active idle 10.244.8.170 Unit is ready and clustered   logrotated/63 active idle 10.244.8.170 Unit is ready.   octavia-ovn-chassis/1 waiting executing 10.244.8.170 'ovsdb' incomplete   public-policy-routing/45 active idle 10.244.8.170 Unit is ready When the db is reporting healthy: ovn-central/0* active idle 1/lxd/9 10.246.64.225 6641/tcp,6642/tcp Unit is ready (leader: ovnnb_db, ovnsb_db)   logrotated/19 active idle 10.246.64.225 Unit is ready. ovn-central/1 active idle 3/lxd/9 10.246.64.250 6641/tcp,6642/tcp Unit is ready (northd: active)   logrotated/27 active idle 10.246.64.250 Unit is ready. ovn-central/2 active idle 5/lxd/9 10.246.65.21 6641/tcp,6642/tcp Unit is ready   logrotated/52 active idle 10.246.65.21 Unit is ready. Warning in the juju unit logs indicates that the charm is blocking on a missing key in the ovsdb: 2020-11-27 23:36:57 INFO juju-log ovsdb:195: Invoking reactive handler: hooks/relations/ovsdb-subordinate/provides.py:97:joined:ovsdb-subordinate 2020-11-27 23:36:57 DEBUG jujuc server.go:211 running hook tool "relation-get" 2020-11-27 23:36:57 WARNING ovsdb-relation-changed ovs-vsctl: no key "ovn-remote" in Open_vSwitch record "." column external_ids 2020-11-27 23:36:57 DEBUG jujuc server.go:211 running hook tool "juju-log" 2020-11-27 23:36:57 INFO juju-log ovsdb:195: Invoking reactive handler: hooks/relations/ovsdb/requires.py:34:joined:ovsdb ============================== [Test Case] Note: Bionic requires additional testing due to pairing with other SRUS. The easiest way to test this is to deploy openstack with the neutron-openvswitch charm, using the new charm updates. Once deployed, edit /usr/share/openvswitch/scripts/ovs-ctl with an echo to show what MLOCKALL is set to. Then toggle the charm config option [1] and look at journalctl -xe to find the echo output, which should correspond to the mlockall setting. 
[1] juju config neutron-openvswitch disable-mlockall=true juju config neutron-openvswitch disable-mlockall=false [Regression Potential] There's potential that this will break users who have come to depend on the incorrect EnvironmentFile setting and environment variable in the systemd unit file for ovs-vswitchd. If that is the case they must be running with modified systemd unit files anyway so it is probably a moot point. [Discussion] == Groovy == I have a query out to James and Christian about an undocumented commit that is getting picked up in the groovy upload. It is committed to the ubuntu/groovy branch of the package Vcs. See debian/ltmain-whole-archive.diff and debian/rules in the upload debdiff at http://launchpadlibrarian.net/511453613/openvswitch_2.13.1-0ubuntu1_2.13.1-0ubuntu1.1.diff.gz == Bionic == The bionic upload is paired with the following SRUs which will also require verification: https://bugs.launchpad.net/bugs/1823295 https://bugs.launchpad.net/bugs/1881077 == Package details == New package versions are in progress and can be found at: hirsute: https://launchpad.net/ubuntu/+source/openvswitch/2.14.0-0ubuntu2 groovy: https://launchpad.net/ubuntu/groovy/+queue?queue_state=1&queue_text=openvswitch focal: https://launchpad.net/ubuntu/focal/+queue?queue_state=1&queue_text=openvswitch train: https://launchpad.net/~ubuntu-cloud-archive/+archive/ubuntu/train-staging/+packages?field.name_filter=openvswitch&field.status_filter=published&field.series_filter= stein: https://launchpad.net/~ubuntu-cloud-archive/+archive/ubuntu/stein-staging/+packages?field.name_filter=openvswitch&field.status_filter=published&field.series_filter= bionic: https://launchpad.net/ubuntu/bionic/+queue?queue_state=1&queue_text=openvswitch == Charm update == https://review.opendev.org/c/openstack/charm-neutron-openvswitch/+/767212 [Impact] Recent changes to systemd rlimit are resulting in memory exhaustion with ovs-vswitchd's use of mlockall(). mlockall() can be disabled via /etc/defaults/openvswitch-vswitch, however there is currently a bug in the shipped ovs-vswitchd systemd unit file that prevents it. The package will be fixed in this SRU. Additionally the neutron-openvswitch charm will be updated to enable disabling of mlockall() use in ovs-vswitchd via a config option. More details on the above summary can be found in the following comments: https://bugs.launchpad.net/charm-neutron-openvswitch/+bug/1906280/comments/16 https://bugs.launchpad.net/charm-neutron-openvswitch/+bug/1906280/comments/19 ==== Original bug details === Original bug title: Charm stuck waiting for ovsdb 'no key "ovn-remote" in Open_vSwitch record' Original bug details: As seen during this Focal Ussuri test run: https://solutions.qa.canonical.com/testruns/testRun/5f7ad510-f57e-40ce-beb7-5f39800fa5f0 Crashdump here: https://oil-jenkins.canonical.com/artifacts/5f7ad510-f57e-40ce-beb7-5f39800fa5f0/generated/generated/openstack/juju-crashdump-openstack-2020-11-28-03.40.36.tar.gz Full history of occurrences can be found here: https://solutions.qa.canonical.com/bugs/bugs/bug/1906280 Octavia's ovn-chassis units are stuck waiting: octavia/0 blocked idle 1/lxd/8 10.244.8.170 9876/tcp Awaiting leader to create required resources   hacluster-octavia/1 active idle 10.244.8.170 Unit is ready and clustered   logrotated/63 active idle 10.244.8.170 Unit is ready.   
octavia-ovn-chassis/1 waiting executing 10.244.8.170 'ovsdb' incomplete   public-policy-routing/45 active idle 10.244.8.170 Unit is ready When the db is reporting healthy: ovn-central/0* active idle 1/lxd/9 10.246.64.225 6641/tcp,6642/tcp Unit is ready (leader: ovnnb_db, ovnsb_db)   logrotated/19 active idle 10.246.64.225 Unit is ready. ovn-central/1 active idle 3/lxd/9 10.246.64.250 6641/tcp,6642/tcp Unit is ready (northd: active)   logrotated/27 active idle 10.246.64.250 Unit is ready. ovn-central/2 active idle 5/lxd/9 10.246.65.21 6641/tcp,6642/tcp Unit is ready   logrotated/52 active idle 10.246.65.21 Unit is ready. Warning in the juju unit logs indicates that the charm is blocking on a missing key in the ovsdb: 2020-11-27 23:36:57 INFO juju-log ovsdb:195: Invoking reactive handler: hooks/relations/ovsdb-subordinate/provides.py:97:joined:ovsdb-subordinate 2020-11-27 23:36:57 DEBUG jujuc server.go:211 running hook tool "relation-get" 2020-11-27 23:36:57 WARNING ovsdb-relation-changed ovs-vsctl: no key "ovn-remote" in Open_vSwitch record "." column external_ids 2020-11-27 23:36:57 DEBUG jujuc server.go:211 running hook tool "juju-log" 2020-11-27 23:36:57 INFO juju-log ovsdb:195: Invoking reactive handler: hooks/relations/ovsdb/requires.py:34:joined:ovsdb ============================== [Test Case] Note: Bionic requires additional testing due to pairing with other SRUS. The easiest way to test this is to deploy openstack with the neutron-openvswitch charm, using the new charm updates. Once deployed, edit /usr/share/openvswitch/scripts/ovs-ctl with an echo to show what MLOCKALL is set to. Then toggle the charm config option [1] and look at journalctl -xe to find the echo output, which should correspond to the mlockall setting. [1] juju config neutron-openvswitch disable-mlockall=true juju config neutron-openvswitch disable-mlockall=false [Regression Potential] There's potential that this will break users who have come to depend on the incorrect EnvironmentFile setting and environment variable in the systemd unit file for ovs-vswitchd. If that is the case they must be running with modified systemd unit files anyway so it is probably a moot point. [Discussion] == Groovy == Update (16-12-2020): I chatted briefly with Christian and it sounds like the ltmain-whole-archive.diff may be optional, so I've dropped it from this upload. There are now 2 openvswitch's in the groovy unapproved queue. Please reject the upload from 15-12-2020 and consider accepting the upload from 16-12-2020. I have a query out to James and Christian about an undocumented commit that is getting picked up in the groovy upload. It is committed to the ubuntu/groovy branch of the package Vcs. 
See debian/ltmain-whole-archive.diff and debian/rules in the upload debdiff at http://launchpadlibrarian.net/511453613/openvswitch_2.13.1-0ubuntu1_2.13.1-0ubuntu1.1.diff.gz == Bionic == The bionic upload is paired with the following SRUs which will also require verification: https://bugs.launchpad.net/bugs/1823295 https://bugs.launchpad.net/bugs/1881077 == Package details == New package versions are in progress and can be found at: hirsute: https://launchpad.net/ubuntu/+source/openvswitch/2.14.0-0ubuntu2 groovy: https://launchpad.net/ubuntu/groovy/+queue?queue_state=1&queue_text=openvswitch focal: https://launchpad.net/ubuntu/focal/+queue?queue_state=1&queue_text=openvswitch train: https://launchpad.net/~ubuntu-cloud-archive/+archive/ubuntu/train-staging/+packages?field.name_filter=openvswitch&field.status_filter=published&field.series_filter= stein: https://launchpad.net/~ubuntu-cloud-archive/+archive/ubuntu/stein-staging/+packages?field.name_filter=openvswitch&field.status_filter=published&field.series_filter= bionic: https://launchpad.net/ubuntu/bionic/+queue?queue_state=1&queue_text=openvswitch == Charm update == https://review.opendev.org/c/openstack/charm-neutron-openvswitch/+/767212
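The verification-needed/verification-done tag changes recorded below normally correspond to installing the candidate build from the proposed pocket on an affected machine and re-running the test case. A sketch for a Focal host; the pocket name and steps follow standard SRU verification practice rather than anything stated in this log:

# Enable focal-proposed for this test only, pull in the candidate openvswitch,
# then repeat the mlockall checks from the test case above.
echo 'deb http://archive.ubuntu.com/ubuntu focal-proposed main universe' | \
    sudo tee /etc/apt/sources.list.d/focal-proposed.list
sudo apt-get update
sudo apt-get install --only-upgrade openvswitch-switch
apt-cache policy openvswitch-switch    # confirm the -proposed version is now installed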
2020-12-17 05:33:31 Launchpad Janitor openvswitch (Ubuntu Hirsute): status Triaged Fix Released
2020-12-23 03:11:35 Nobuto Murata bug added subscriber Nobuto Murata
2021-01-05 01:35:01 Hua Zhang bug added subscriber Hua Zhang
2021-01-05 14:03:23 Pedro Victor Lourenço Fragola bug added subscriber Pedro Victor Lourenço Fragola
2021-01-05 21:28:19 Brian Murray openvswitch (Ubuntu Focal): status Triaged Fix Committed
2021-01-05 21:28:24 Brian Murray bug added subscriber SRU Verification
2021-01-05 21:28:32 Brian Murray tags verification-needed verification-needed-focal
2021-01-05 21:29:31 Brian Murray openvswitch (Ubuntu Groovy): status Triaged Fix Committed
2021-01-05 21:29:43 Brian Murray tags verification-needed verification-needed-focal verification-needed verification-needed-focal verification-needed-groovy
2021-01-06 15:01:44 Corey Bryant cloud-archive/ussuri: status Triaged Fix Committed
2021-01-06 15:01:46 Corey Bryant tags verification-needed verification-needed-focal verification-needed-groovy verification-needed verification-needed-focal verification-needed-groovy verification-ussuri-needed
2021-01-06 15:03:17 Corey Bryant cloud-archive/train: status Triaged Fix Committed
2021-01-06 15:03:19 Corey Bryant tags verification-needed verification-needed-focal verification-needed-groovy verification-ussuri-needed verification-needed verification-needed-focal verification-needed-groovy verification-train-needed verification-ussuri-needed
2021-01-06 15:04:08 Corey Bryant cloud-archive/stein: status Triaged Fix Committed
2021-01-06 15:04:10 Corey Bryant tags verification-needed verification-needed-focal verification-needed-groovy verification-train-needed verification-ussuri-needed verification-needed verification-needed-focal verification-needed-groovy verification-stein-needed verification-train-needed verification-ussuri-needed
2021-01-07 18:45:02 Łukasz Zemczak openvswitch (Ubuntu Bionic): status Triaged Fix Committed
2021-01-07 18:45:11 Łukasz Zemczak tags verification-needed verification-needed-focal verification-needed-groovy verification-stein-needed verification-train-needed verification-ussuri-needed verification-needed verification-needed-bionic verification-needed-focal verification-needed-groovy verification-stein-needed verification-train-needed verification-ussuri-needed
2021-01-07 23:55:50 Corey Bryant cloud-archive/queens: status Triaged Fix Committed
2021-01-07 23:55:53 Corey Bryant tags verification-needed verification-needed-bionic verification-needed-focal verification-needed-groovy verification-stein-needed verification-train-needed verification-ussuri-needed verification-needed verification-needed-bionic verification-needed-focal verification-needed-groovy verification-queens-needed verification-stein-needed verification-train-needed verification-ussuri-needed
2021-01-08 02:12:51 Corey Bryant attachment added bug-1906280-verification.txt https://bugs.launchpad.net/charm-ovn-chassis/+bug/1906280/+attachment/5450495/+files/bug-1906280-verification.txt
2021-01-08 02:13:57 Corey Bryant tags verification-needed verification-needed-bionic verification-needed-focal verification-needed-groovy verification-queens-needed verification-stein-needed verification-train-needed verification-ussuri-needed verification-done verification-done-bionic verification-done-focal verification-done-groovy verification-queens-done verification-stein-done verification-train-done verification-ussuri-done
2021-01-08 19:31:20 Corey Bryant charm-neutron-openvswitch: status In Progress Fix Committed
2021-01-13 17:08:14 Launchpad Janitor openvswitch (Ubuntu Bionic): status Fix Committed Fix Released
2021-01-13 17:08:14 Launchpad Janitor cve linked 2015-8011
2021-01-13 17:08:14 Launchpad Janitor cve linked 2020-27827
2021-01-13 17:58:32 Launchpad Janitor openvswitch (Ubuntu Focal): status Fix Committed Fix Released
2021-01-13 18:17:21 Michael Skalka removed subscriber Canonical Field Critical
2021-01-13 18:33:25 Corey Bryant cloud-archive/ussuri: status Fix Committed Fix Released
2021-01-13 18:33:41 Corey Bryant cloud-archive/train: status Fix Committed Fix Released
2021-01-13 18:33:55 Corey Bryant cloud-archive/stein: status Fix Committed Fix Released
2021-01-13 18:34:05 Corey Bryant cloud-archive/queens: status Fix Committed Fix Released
2021-01-13 18:40:52 Corey Bryant openvswitch (Ubuntu Groovy): status Fix Committed Fix Released
2021-01-19 19:49:33 Frode Nordahl charm-neutron-openvswitch: status Fix Committed In Progress
2021-01-19 19:49:33 Frode Nordahl charm-neutron-openvswitch: assignee Corey Bryant (corey.bryant) Frode Nordahl (fnordahl)
2021-01-20 13:49:34 Aurelien Lourot charm-neutron-openvswitch: status In Progress Fix Committed
2021-01-22 12:46:44 Aurelien Lourot charm-ovn-chassis: status In Progress Fix Committed
2021-01-27 18:52:27 Michael Skalka tags verification-done verification-done-bionic verification-done-focal verification-done-groovy verification-queens-done verification-stein-done verification-train-done verification-ussuri-done cdo-release-blocker verification-done verification-done-bionic verification-done-focal verification-done-groovy verification-queens-done verification-stein-done verification-train-done verification-ussuri-done
2021-01-28 20:24:26 Drew Freiberger bug task added charm-neutron-gateway
2021-02-10 22:49:10 David Ames charm-ovn-chassis: status Fix Committed Fix Released
2021-02-10 22:49:12 David Ames charm-neutron-openvswitch: status Fix Committed Fix Released
2021-02-12 15:51:44 Frode Nordahl bug task added charm-ovn-dedicated-chassis
2021-02-12 15:52:12 Frode Nordahl charm-ovn-dedicated-chassis: importance Undecided Critical
2021-02-12 15:52:12 Frode Nordahl charm-ovn-dedicated-chassis: status New In Progress
2021-03-09 14:37:10 Corey Bryant cloud-archive: status Invalid Fix Committed
2021-03-09 14:50:39 Corey Bryant cloud-archive: status Fix Committed Fix Released
2023-07-25 18:30:54 Felipe Reyes charm-ovn-dedicated-chassis: status In Progress Fix Released
2023-07-31 13:35:35 Dan Streetman removed subscriber Dan Streetman