Activity log for bug #1750777

Date Who What changed Old value New value Message
2018-02-21 10:21:27 Thomas Morin bug added bug
2018-02-21 10:21:42 Thomas Morin bug added subscriber Brian Haley
2018-02-21 10:29:32 Thomas Morin description
  Old value: We ran into a case where the openvswitch agent (current master branch) eats 100% of CPU time. Pyflame profiling show the time being largely spent in neutron.agent.linux.ip_conntrack, line 95. https://github.com/openstack/neutron/blob/master/neutron/agent/linux/ip_conntrack.py#L95 The code around this line is:
        while True:
            pool.spawn_n(self._process_queue)
    The documentation of eventlet.spawn_n says: "The same as spawn(), but it’s not possible to know how the function terminated (i.e. no return value or exceptions). This makes execution faster. See spawn_n for more details." I suspect that GreenPool.spaw_n may behave similarly. It seems plausible that spawn_n is returning very quickly because of some error, and then all time is quickly spent in a short circuited while loop.
  New value: We just ran into a case where the openvswitch agent (local dev destack, current master branch) eats 100% of CPU time. Pyflame profiling show the time being largely spent in neutron.agent.linux.ip_conntrack, line 95. https://github.com/openstack/neutron/blob/master/neutron/agent/linux/ip_conntrack.py#L95 The code around this line is:
        while True:
            pool.spawn_n(self._process_queue)
    The documentation of eventlet.spawn_n says: "The same as spawn(), but it’s not possible to know how the function terminated (i.e. no return value or exceptions). This makes execution faster. See spawn_n for more details." I suspect that GreenPool.spaw_n may behave similarly. It seems plausible that spawn_n is returning very quickly because of some error, and then all time is quickly spent in a short circuited while loop.
2018-02-21 10:38:59 Thomas Morin bug added subscriber Miguel Lavalle
2018-02-22 10:31:51 Thomas Morin bug added subscriber Pierre Crégut
2018-03-01 15:45:16 Brian Haley neutron: importance Undecided → High
2018-03-01 15:47:16 OpenStack Infra neutron: status New → In Progress
2018-03-01 15:47:16 OpenStack Infra neutron: assignee Brian Haley (brian-haley)
2018-03-07 20:40:02 OpenStack Infra neutron: status In Progress → Fix Released
2018-03-19 15:48:41 Thomas Morin description
  Old value: We just ran into a case where the openvswitch agent (local dev destack, current master branch) eats 100% of CPU time. Pyflame profiling show the time being largely spent in neutron.agent.linux.ip_conntrack, line 95. https://github.com/openstack/neutron/blob/master/neutron/agent/linux/ip_conntrack.py#L95 The code around this line is:
        while True:
            pool.spawn_n(self._process_queue)
    The documentation of eventlet.spawn_n says: "The same as spawn(), but it’s not possible to know how the function terminated (i.e. no return value or exceptions). This makes execution faster. See spawn_n for more details." I suspect that GreenPool.spaw_n may behave similarly. It seems plausible that spawn_n is returning very quickly because of some error, and then all time is quickly spent in a short circuited while loop.
  New value: We just ran into a case where the openvswitch agent (local dev destack, current master branch) eats 100% of CPU time. Pyflame profiling show the time being largely spent in neutron.agent.linux.ip_conntrack, line 95. https://github.com/openstack/neutron/blob/master/neutron/agent/linux/ip_conntrack.py#L95 The code around this line is:
        while True:
            pool.spawn_n(self._process_queue)
    The documentation of eventlet.spawn_n says: "The same as spawn(), but it’s not possible to know how the function terminated (i.e. no return value or exceptions). This makes execution faster. See spawn_n for more details." I suspect that GreenPool.spawn_n may behave similarly. It seems plausible that spawn_n is returning very quickly because of some error, and then all time is quickly spent in a short circuited while loop.
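For illustration only, the short sketch below uses plain eventlet; it is not the Neutron code and not the merged fix, and the helper names (process_queue_once, busy_pattern, worker_pattern) are made up. It shows why a "while True: pool.spawn_n(...)" loop can pin a CPU once the spawned function starts returning immediately, and one common remedy: spawning a fixed number of long-lived workers that block on a queue instead of being respawned in a loop.

    # Minimal sketch, assuming only stock eventlet APIs (GreenPool, Queue);
    # names are illustrative, not taken from neutron.agent.linux.ip_conntrack.
    import eventlet
    eventlet.monkey_patch()
    from eventlet.queue import Queue

    events = Queue()

    def process_queue_once():
        # Stands in for a _process_queue that returns almost immediately
        # (empty queue, or an early error swallowed by spawn_n).
        if events.empty():
            return
        print("handled", events.get())

    def busy_pattern():
        # The reported pattern: each spawned green thread exits at once,
        # its pool slot frees up, spawn_n returns, and the loop respawns
        # it forever without ever sleeping -> 100% CPU.
        pool = eventlet.GreenPool(10)
        while True:
            pool.spawn_n(process_queue_once)

    def worker_pattern():
        # One common remedy: spawn a bounded number of long-lived workers
        # once; each blocks cooperatively on Queue.get() until real work
        # arrives, so an idle agent stays idle.
        pool = eventlet.GreenPool(10)

        def worker():
            while True:
                print("handled", events.get())  # yields until an item arrives

        for _ in range(4):
            pool.spawn_n(worker)
        return pool

    if __name__ == "__main__":
        worker_pattern()
        for i in range(3):
            events.put(i)
        eventlet.sleep(0.1)  # give the workers time to drain the queue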
2018-04-05 12:22:28 OpenStack Infra tags in-stable-queens
2018-05-02 18:33:24 Corey Bryant bug task added neutron (Ubuntu)
2018-05-02 18:33:36 Corey Bryant neutron (Ubuntu): status New → Triaged
2018-05-02 18:33:42 Corey Bryant neutron (Ubuntu): importance Undecided → High
2018-05-02 18:34:31 Corey Bryant nominated for series Ubuntu Bionic
2018-05-02 18:34:31 Corey Bryant bug task added neutron (Ubuntu Bionic)
2018-05-02 18:34:31 Corey Bryant nominated for series Ubuntu Cosmic
2018-05-02 18:34:31 Corey Bryant bug task added neutron (Ubuntu Cosmic)
2018-05-02 18:35:18 Corey Bryant bug task added cloud-archive
2018-05-02 18:35:29 Corey Bryant nominated for series cloud-archive/queens
2018-05-02 18:35:29 Corey Bryant bug task added cloud-archive/queens
2018-05-02 18:35:37 Corey Bryant cloud-archive/queens: status New → Triaged
2018-05-02 18:35:43 Corey Bryant neutron (Ubuntu Bionic): importance Undecided → High
2018-05-02 18:35:48 Corey Bryant neutron (Ubuntu Bionic): status New → Triaged
2018-05-02 18:35:55 Corey Bryant cloud-archive/queens: importance Undecided → High
2018-05-03 13:20:52 Corey Bryant description
  Old value: We just ran into a case where the openvswitch agent (local dev destack, current master branch) eats 100% of CPU time. Pyflame profiling show the time being largely spent in neutron.agent.linux.ip_conntrack, line 95. https://github.com/openstack/neutron/blob/master/neutron/agent/linux/ip_conntrack.py#L95 The code around this line is:
        while True:
            pool.spawn_n(self._process_queue)
    The documentation of eventlet.spawn_n says: "The same as spawn(), but it’s not possible to know how the function terminated (i.e. no return value or exceptions). This makes execution faster. See spawn_n for more details." I suspect that GreenPool.spawn_n may behave similarly. It seems plausible that spawn_n is returning very quickly because of some error, and then all time is quickly spent in a short circuited while loop.
  New value: We just ran into a case where the openvswitch agent (local dev destack, current master branch) eats 100% of CPU time. Pyflame profiling show the time being largely spent in neutron.agent.linux.ip_conntrack, line 95. https://github.com/openstack/neutron/blob/master/neutron/agent/linux/ip_conntrack.py#L95 The code around this line is:
        while True:
            pool.spawn_n(self._process_queue)
    The documentation of eventlet.spawn_n says: "The same as spawn(), but it’s not possible to know how the function terminated (i.e. no return value or exceptions). This makes execution faster. See spawn_n for more details." I suspect that GreenPool.spawn_n may behave similarly. It seems plausible that spawn_n is returning very quickly because of some error, and then all time is quickly spent in a short circuited while loop.
    SRU details for Ubuntu:
    -----------------------
    [Impact]
    We're cherry-picking a single bug-fix patch here from the upstream stable/queens branch as there is not currently an upstream stable point release available that includes this fix. We'd like to make sure all of our supported customers have access to this fix as there is a significant performance hit without it.
    [Test Case]
    The following SRU process was followed: https://wiki.ubuntu.com/OpenStackUpdates
    In order to avoid regression of existing consumers, the OpenStack team will run their continuous integration test against the packages that are in -proposed. A successful run of all available tests will be required before the proposed packages can be let into -updates. The OpenStack team will be in charge of attaching the output summary of the executed tests. The OpenStack team members will not mark ‘verification-done’ until this has happened.
    [Regression Potential]
    In order to mitigate the regression potential, the results of the aforementioned tests are attached to this bug.
2018-05-03 13:21:02 Corey Bryant bug added subscriber Ubuntu Stable Release Updates Team
2018-05-03 20:05:31 Brian Murray neutron (Ubuntu Bionic): status Triaged → Fix Committed
2018-05-03 20:05:38 Brian Murray bug added subscriber SRU Verification
2018-05-03 20:05:46 Brian Murray tags in-stable-queens → in-stable-queens verification-needed verification-needed-bionic
2018-05-04 12:38:32 Corey Bryant cloud-archive/queens: status Triaged → Fix Committed
2018-05-04 12:38:41 Corey Bryant tags in-stable-queens verification-needed verification-needed-bionic → in-stable-queens verification-needed verification-needed-bionic verification-queens-needed
2018-05-04 16:50:01 Launchpad Janitor neutron (Ubuntu Cosmic): status Triaged → Fix Released
2018-05-07 05:35:34 Trent Lloyd tags in-stable-queens verification-needed verification-needed-bionic verification-queens-needed → in-stable-queens verification-needed verification-needed-bionic verification-queens-done
2018-05-07 15:56:44 Corey Bryant tags in-stable-queens verification-needed verification-needed-bionic verification-queens-done → in-stable-queens verification-done-bionic verification-needed verification-queens-done
2018-05-14 07:56:17 Łukasz Zemczak removed subscriber Ubuntu Stable Release Updates Team
2018-05-14 08:06:21 Launchpad Janitor neutron (Ubuntu Bionic): status Fix Committed → Fix Released
2018-05-14 12:25:08 Corey Bryant cloud-archive/queens: status Fix Committed → Fix Released
2018-05-31 10:04:41 Bernard Cafarelli tags in-stable-queens verification-done-bionic verification-needed verification-queens-done → in-stable-queens neutron-proactive-backport-potential verification-done-bionic verification-needed verification-queens-done
2018-06-08 14:16:58 Bernard Cafarelli tags in-stable-queens neutron-proactive-backport-potential verification-done-bionic verification-needed verification-queens-done → in-stable-queens verification-done-bionic verification-needed verification-queens-done
2020-09-08 13:55:52 Chris MacNaughton cloud-archive: status Fix Committed → Fix Released