Activity log for bug #1572085

Date Who What changed Old value → New value Message
2016-04-19 11:32:36 Dmitry Mescheryakov bug added bug
2016-04-19 11:32:44 Dmitry Mescheryakov nominated for series mos/9.0.x
2016-04-19 11:32:44 Dmitry Mescheryakov bug task added mos/9.0.x
2016-04-19 11:32:44 Dmitry Mescheryakov nominated for series mos/10.0.x
2016-04-19 11:32:44 Dmitry Mescheryakov bug task added mos/10.0.x
2016-04-19 11:32:49 Dmitry Mescheryakov mos/10.0.x: importance Undecided → High
2016-04-19 11:32:50 Dmitry Mescheryakov mos/9.0.x: importance Undecided → High
2016-04-19 11:32:56 Dmitry Mescheryakov mos/10.0.x: assignee MOS Oslo (mos-oslo)
2016-04-19 11:33:02 Dmitry Mescheryakov mos/9.0.x: assignee MOS Oslo (mos-oslo)
2016-04-19 11:33:05 Dmitry Mescheryakov mos/10.0.x: milestone 10.0
2016-04-19 11:33:07 Dmitry Mescheryakov mos/9.0.x: milestone 9.0
2016-04-19 11:33:09 Dmitry Mescheryakov mos/10.0.x: status New → Confirmed
2016-04-19 11:33:11 Dmitry Mescheryakov mos/9.0.x: status New → Confirmed
2016-04-19 11:34:38 Dmitry Mescheryakov description reworded: "In rabbit logs on node-61 one grep find the following entries" became "In rabbit logs on node-61 with grep one can find the following entries". The description after this edit reads:

Version: 9.0

Steps to reproduce:
1. Deploy a MOS environment.
2. Run some tests on it (the exact cause is unknown yet).

Expected results:
All logs are clean.

Actual results:
In the log of one of the OpenStack components you find a lot of exceptions like

NotFound: Basic.consume: (404) NOT_FOUND - no queue 'reply_4b5920a6600d4d779c61c1a82dd7b81a' in vhost '/'

(full stack trace from the neutron-server logs: http://paste.openstack.org/show/494399/)

This indicates that the process lost a queue it was listening on, and the situation does not resolve by itself. Losing a queue means the server stops processing messages from it, which might be crucial to its work (depending on the queue).

In the RabbitMQ logs on node-61, grep finds the following entries: http://paste.openstack.org/show/494589/

Note the pattern: first, two queue.declare operations timed out, and then basic.consume fails in an endless loop. It seems that RabbitMQ failed to create the queue, possibly because of overload, and oslo.messaging did not notice that. Unfortunately, the relevant neutron-server logs were already rotated, so it is not clear what happened in oslo.messaging at the time of the queue declaration.

2016-04-19 11:35:28 Dmitry Mescheryakov description updated: added "(only several earliest are shown)" after the reference to the RabbitMQ log entries from node-61.
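The failure pattern described above, a consumer that keeps retrying basic.consume against a reply queue the broker never managed to create, can be illustrated with a small standalone consumer. The following is a minimal sketch, not the fix that was committed for this bug; it uses the pika client rather than oslo.messaging, the queue name is copied from the report, and the host and auto_delete flag are assumptions. It shows the recovery idea the report points at: when the broker answers 404 NOT_FOUND, declare the queue again instead of looping on basic.consume.

import time

import pika
from pika.exceptions import ChannelClosedByBroker

# Queue name taken from the bug report; host and auto_delete are assumptions.
QUEUE = 'reply_4b5920a6600d4d779c61c1a82dd7b81a'


def on_message(channel, method, properties, body):
    # Print and acknowledge each message received from the reply queue.
    print('got message:', body)
    channel.basic_ack(method.delivery_tag)


def consume_forever(host='localhost'):
    connection = pika.BlockingConnection(pika.ConnectionParameters(host))
    while True:
        channel = connection.channel()
        try:
            # Re-assert the queue before consuming. If an earlier
            # queue.declare timed out (as in the rabbit log on node-61),
            # this is what recreates the queue instead of hitting
            # basic.consume 404s forever.
            channel.queue_declare(QUEUE, auto_delete=True)
            channel.basic_consume(QUEUE, on_message_callback=on_message)
            channel.start_consuming()
        except ChannelClosedByBroker as exc:
            if exc.reply_code != 404:
                raise
            # 404 NOT_FOUND ("no queue ... in vhost '/'"): the queue is gone.
            # Back off briefly; the next loop iteration opens a fresh channel
            # and declares the queue again.
            time.sleep(1)


if __name__ == '__main__':
    consume_forever()

The key point of the sketch is that a 404 from the broker is treated as "declare again", not "consume again"; whether the fix committed for this bug takes exactly this shape is not visible from the activity log.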
2016-05-20 15:21:04 Dmitry Mescheryakov mos/10.0.x: assignee MOS Oslo (mos-oslo) → Kirill Bespalov (k-besplv)
2016-05-20 15:21:17 Dmitry Mescheryakov mos/9.0.x: assignee MOS Oslo (mos-oslo) → Kirill Bespalov (k-besplv)
2016-05-20 15:39:46 Dmitry Mescheryakov mos/10.0.x: status Confirmed → In Progress
2016-05-20 15:39:50 Dmitry Mescheryakov mos/9.0.x: status Confirmed → In Progress
2016-05-24 15:39:01 Dmitry Mescheryakov mos/10.0.x: status In Progress → Fix Committed
2016-05-24 15:39:02 Dmitry Mescheryakov mos/9.0.x: status In Progress → Fix Committed
2016-06-14 10:39:16 Sergii Turivnyi tags on-verification
2016-06-22 10:49:18 Sergii Turivnyi mos/10.0.x: status Fix Committed → Fix Released
2016-06-22 10:49:21 Sergii Turivnyi mos/9.0.x: status Fix Committed → Fix Released
2016-06-22 10:49:25 Sergii Turivnyi mos/10.0.x: status Fix Released → Fix Committed
2016-06-22 10:49:41 Sergii Turivnyi tags on-verification
2017-05-22 13:30:08 Denis Meltsaykin nominated for series mos/7.0.x
2017-05-22 13:30:08 Denis Meltsaykin bug task added mos/7.0.x
2017-05-22 13:30:08 Denis Meltsaykin nominated for series mos/8.0.x
2017-05-22 13:30:08 Denis Meltsaykin bug task added mos/8.0.x
2017-05-22 13:30:14 Denis Meltsaykin mos/8.0.x: milestone 8.0-mu-5
2017-05-22 13:30:21 Denis Meltsaykin mos/8.0.x: assignee Kirill Bespalov (k-besplv)
2017-05-22 13:30:23 Denis Meltsaykin mos/8.0.x: importance Undecided → High
2017-05-22 13:30:26 Denis Meltsaykin mos/8.0.x: status New → Fix Committed
2017-05-22 13:30:30 Denis Meltsaykin mos/7.0.x: status New → In Progress
2017-05-22 13:30:32 Denis Meltsaykin mos/7.0.x: importance Undecided → High
2017-05-22 13:30:39 Denis Meltsaykin mos/7.0.x: assignee Kirill Bespalov (k-besplv)
2017-05-22 13:30:42 Denis Meltsaykin mos/7.0.x: milestone 7.0-mu-8
2017-05-22 16:23:33 Denis Meltsaykin mos/7.0.x: status In Progress → Fix Committed
2017-05-24 12:52:47 TatyanaGladysheva mos/7.0.x: status Fix Committed → Fix Released
2017-07-14 15:57:43 Dmitry tags on-verification
2017-07-14 15:57:49 Dmitry tags on-verification
2017-07-14 15:58:11 Dmitry mos/8.0.x: status Fix Committed → Fix Released