[mos] Critical error: neutron [-] AssertionError: Trying to re-send() an already-triggered event.

Bug #1347612 reported by Anastasia Palkina
This bug affects 4 people
Affects             Status     Importance  Assigned to        Milestone
Mirantis OpenStack  Won't Fix  Low         Eugene Nikanorov
5.0.x               Won't Fix  Low         Eugene Nikanorov
5.1.x               Won't Fix  Low         Eugene Nikanorov
6.0.x               Won't Fix  Low         Eugene Nikanorov
6.1.x               Won't Fix  Low         Eugene Nikanorov

Bug Description

"build_id": "2014-07-23_02-01-14",
"ostf_sha": "c1b60d4bcee7cd26823079a86e99f3f65414498e",
"build_number": "347",
"auth_required": false,
"api": "1.0",
"nailgun_sha": "f5775d6b7f5a3853b28096e8c502ace566e7041f",
"production": "docker",
"fuelmain_sha": "74b9200955201fe763526ceb51607592274929cd",
"astute_sha": "fd9b8e3b6f59b2727b1b037054f10e0dd7bd37f1",
"feature_groups": ["mirantis"],
"release": "5.1",
"fuellib_sha": "fb0e84c954a33c912584bf35054b60914d2a2360"

1. Create new environment (Ubuntu, simple mode)
2. Choose GRE segmentation
3. Choose both Ceph options
4. Choose Sahara and Ceilometer installation
5. Add controller+ceph, compute+ceph, mongo
6. Start deployment. It completed successfully.
7. However, there is an error in neutron-openvswitch-agent.log on the compute node (node-6):

2014-07-23 11:04:41 CRITICAL

neutron [-] AssertionError: Trying to re-send() an already-triggered event.
2014-07-23 10:04:41.147 31786 TRACE neutron Traceback (most recent call last):
2014-07-23 10:04:41.147 31786 TRACE neutron File "/usr/bin/neutron-openvswitch-agent", line 10, in <module>
2014-07-23 10:04:41.147 31786 TRACE neutron sys.exit(main())
2014-07-23 10:04:41.147 31786 TRACE neutron File "/usr/lib/python2.7/dist-packages/neutron/plugins/openvswitch/agent/ovs_neutron_agent.py", line 1434, in main
2014-07-23 10:04:41.147 31786 TRACE neutron agent.daemon_loop()
2014-07-23 10:04:41.147 31786 TRACE neutron File "/usr/lib/python2.7/dist-packages/neutron/plugins/openvswitch/agent/ovs_neutron_agent.py", line 1362, in daemon_loop
2014-07-23 10:04:41.147 31786 TRACE neutron self.rpc_loop(polling_manager=pm)
2014-07-23 10:04:41.147 31786 TRACE neutron File "/usr/lib/python2.7/contextlib.py", line 35, in __exit__
2014-07-23 10:04:41.147 31786 TRACE neutron self.gen.throw(type, value, traceback)
2014-07-23 10:04:41.147 31786 TRACE neutron File "/usr/lib/python2.7/dist-packages/neutron/agent/linux/polling.py", line 41, in get_polling_manager
2014-07-23 10:04:41.147 31786 TRACE neutron pm.stop()
2014-07-23 10:04:41.147 31786 TRACE neutron File "/usr/lib/python2.7/dist-packages/neutron/agent/linux/polling.py", line 108, in stop
2014-07-23 10:04:41.147 31786 TRACE neutron self._monitor.stop()
2014-07-23 10:04:41.147 31786 TRACE neutron File "/usr/lib/python2.7/dist-packages/neutron/agent/linux/async_process.py", line 91, in stop
2014-07-23 10:04:41.147 31786 TRACE neutron self._kill()
2014-07-23 10:04:41.147 31786 TRACE neutron File "/usr/lib/python2.7/dist-packages/neutron/agent/linux/ovsdb_monitor.py", line 108, in _kill
2014-07-23 10:04:41.147 31786 TRACE neutron super(SimpleInterfaceMonitor, self)._kill(*args, **kwargs)
2014-07-23 10:04:41.147 31786 TRACE neutron File "/usr/lib/python2.7/dist-packages/neutron/agent/linux/async_process.py", line 118, in _kill
2014-07-23 10:04:41.147 31786 TRACE neutron self._kill_event.send()
2014-07-23 10:04:41.147 31786 TRACE neutron File "/usr/lib/python2.7/dist-packages/eventlet/event.py", line 150, in send
2014-07-23 10:04:41.147 31786 TRACE neutron assert self._result is NOT_USED, 'Trying to re-send() an already-triggered event.'
2014-07-23 10:04:41.147 31786 TRACE neutron AssertionError: Trying to re-send() an already-triggered event.
2014-07-23 10:04:41.147 31786 TRACE neutron
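
For context: the assertion comes from eventlet's Event object, which may only be triggered once. A minimal sketch (not taken from this environment, for illustration only) that reproduces the same message:

    from eventlet import event

    evt = event.Event()
    evt.send('stopped')        # the first trigger succeeds
    try:
        evt.send('stopped')    # a second send() on the same Event
    except AssertionError as exc:
        # prints: Trying to re-send() an already-triggered event.
        print(exc)

In the trace above, this is hit when AsyncProcess._kill() calls self._kill_event.send() even though that event has already been triggered during agent shutdown.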

Revision history for this message
Anastasia Palkina (apalkina) wrote :
Revision history for this message
Bogdan Dobrelya (bogdando) wrote :

This could somehow be related to https://bugs.launchpad.net/nova/+bug/1246848 (nova).

Changed in fuel:
assignee: Fuel Library Team (fuel-library) → MOS Neutron (mos-neutron)
Dmitry Ilyin (idv1985)
summary: - Critical error: neutron [-] AssertionError: Trying to re-send() an
+ [mos] Critical error: neutron [-] AssertionError: Trying to re-send() an
already-triggered event.
Revision history for this message
Eugene Nikanorov (enikanorov) wrote :

Am I correct in assuming that this is a fresh stable-icehouse branch issue?

Changed in fuel:
assignee: MOS Neutron (mos-neutron) → Eugene Nikanorov (enikanorov)
Mike Scherbakov (mihgen)
Changed in mos:
importance: Undecided → High
assignee: nobody → Eugene Nikanorov (enikanorov)
milestone: none → 5.1
no longer affects: fuel
Revision history for this message
Anastasia Palkina (apalkina) wrote :

Reproduced on master ISO #373
"build_id": "2014-07-30_02-30-36",
"ostf_sha": "9c0454b2197756051fc9cee3cfd856cf2a4f0875",
"build_number": "373",
"auth_required": true,
"api": "1.0",
"nailgun_sha": "8cf375f7687d7d0797e7f085a909df8087fc82a6",
"production": "docker",
"fuelmain_sha": "11ef72a20409ba34535ec9e6e093a2e1695161de",
"astute_sha": "b16efcec6b4af1fb8669055c053fbabe188afa67",
"feature_groups": ["mirantis", "experimental"],
"release": "5.1",
"fuellib_sha": "8729e696e0653920bf937329e45a9c23a8f20a1f"

1. Create new environment (Ubuntu, simple mode)
2. Choose GRE segmentation
3. Add controller, compute, cinder, zabbix
4. Start deployment. It completed successfully.
5. However, this bug appeared again.

Changed in mos:
status: New → In Progress
Revision history for this message
Eugene Nikanorov (enikanorov) wrote :

Lowering importance, as this trace is only logged when the ovs-agent is terminated.

Changed in mos:
importance: High → Medium
Changed in mos:
milestone: 5.1 → 6.0
Revision history for this message
Andrew Woodward (xarses) wrote :

New reproducer; setting importance to Critical, milestone to 5.1, and status to New so that we can re-evaluate the status of the bug.

Changed in mos:
milestone: 6.0 → 5.1
importance: Medium → Critical
status: In Progress → New
Revision history for this message
Tyler Wilson (loth) wrote :

{"build_id": "2014-08-26_21-42-16", "ostf_sha": "4dcd99cc4bfa19f52d4b87ed321eb84ff03844da", "build_number": "9", "auth_required": true, "api": "1.0", "nailgun_sha": "04e3f9d9ad3140cd63a9b5a1a302c03ebe64fd0a", "production": "docker", "fuelmain_sha": "74a97d500bb2fe9528f99771ccc2ec657ae3f76e", "astute_sha": "bc60b7d027ab244039f48c505ac52ab8eb0a990c", "feature_groups": ["experimental"], "release": "5.1", "fuellib_sha": "1e43ca00fe4fb05a485de4bea55bd00d16bda532"}

Reproduced on the latest master build. Per request, here is the output of 'neutron agent-list' after deployment:

+--------------------------------------+--------------------+--------+-------+----------------+
| id                                   | agent_type         | host   | alive | admin_state_up |
+--------------------------------------+--------------------+--------+-------+----------------+
| 11e86165-8b49-4bf5-a67a-53adc65797c2 | DHCP agent         | node-3 | :-)   | True           |
| 131c1bef-2ebf-467e-b9db-0058e85638c6 | Open vSwitch agent | node-1 | :-)   | True           |
| 210d7f7e-b404-47c3-b05d-04e5d68859db | Open vSwitch agent | node-8 | :-)   | True           |
| 22516581-1f85-4226-b57d-df2343a859b8 | L3 agent           | node-1 | :-)   | True           |
| 452ae239-7304-46ec-82b0-4865bfdf6527 | Open vSwitch agent | node-4 | :-)   | True           |
| 5fcdf96c-01bc-4a14-97af-a4ee721a101e | Open vSwitch agent | node-3 | :-)   | True           |
| 616d1c5f-75bf-4da1-9f06-7c3dbce5c1ae | Metadata agent     | node-3 | :-)   | True           |
| 75599e30-17ce-4b6d-a6b3-ee26169672ce | Open vSwitch agent | node-7 | :-)   | True           |
| 7cd6f9b9-c22d-4139-a946-6d872f963b3c | Metadata agent     | node-2 | :-)   | True           |
| 8ea2f52d-8ede-42b2-bf40-2c6dca64d861 | Metadata agent     | node-1 | :-)   | True           |
| a99e8632-5ee2-4ca1-a7a2-992c0ce20cfd | Open vSwitch agent | node-5 | :-)   | True           |
| d0706d42-c1ac-40be-baa6-f1e38a25d6db | Open vSwitch agent | node-2 | :-)   | True           |
| eeb4a8b9-7cef-4095-8f98-13f00e54bc75 | Open vSwitch agent | node-6 | :-)   | True           |
+--------------------------------------+--------------------+--------+-------+----------------+

Revision history for this message
Andrew Woodward (xarses) wrote :
Revision history for this message
Eugene Nikanorov (enikanorov) wrote :

I don't really see why that is critical.
It only happens when the agent is restarted, so it is going to die anyway.

The fix for the issue has been submitted upstream, and I am waiting to pick up the merged fix.
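
The upstream change itself is not quoted here, but the usual defensive pattern for this class of problem (an illustrative assumption, not necessarily the merged fix) is to fire the kill event at most once and then drop the reference, so a repeated stop() becomes a no-op instead of re-send()ing a triggered Event:

    from eventlet import event

    class AsyncProcessSketch(object):
        # Hypothetical stand-in for neutron's AsyncProcess, for illustration only.

        def __init__(self):
            self._kill_event = None

        def start(self):
            self._kill_event = event.Event()

        def _kill(self):
            # Only send the kill event if it exists and has not been triggered
            # yet, then drop the reference so a second _kill() cannot re-send() it.
            if self._kill_event is not None and not self._kill_event.ready():
                self._kill_event.send()
            self._kill_event = None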

Changed in mos:
importance: Critical → Medium
Revision history for this message
Dmitry Mescheryakov (dmitrymex) wrote :

The bug is just about a scary message in the log, so I agree with Eugene that the bug is not severe and should be Medium at maximum. Since we have already reached soft code freeze, we do not fix bugs with priority lower than High. Hence, moving to 6.0.

Changed in mos:
milestone: 5.1 → 6.0
Changed in mos:
status: New → Triaged
tags: added: release-notes
Revision history for this message
Dmitry Borodaenko (angdraug) wrote :

If this is Medium priority, it doesn't need a backport.

Revision history for this message
Alexander Ignatov (aignatov) wrote :

Decreased severity. The impact is only in log messages.

Changed in mos:
status: Triaged → Won't Fix
Revision history for this message
Dmitry Mescheryakov (dmitrymex) wrote :

Text for the release note (taken from 5.1.1 release notes):
During OpenStack deployment, a spurious critical error may appear in a log related to the ovs-agent. The error is misleading; no actual malfunction has occurred.

Revision history for this message
Eugene Nikanorov (enikanorov) wrote :

This will go away as soon as Kilo is merged.
It is not critical and has no negative effect on cloud functionality, so setting as Won't Fix for 6.1.

Revision history for this message
Alexander Ignatov (aignatov) wrote :

Setting as Won't Fix for 6.0.1 since it won't be fixed at all and the issue is old, having occurred only several months ago.

Changed in mos:
status: Incomplete → Won't Fix
tags: added: release-notes-done
removed: release-notes