Seeing this error, but not yet sure how it's triggered:
nd': u'router-manage'} from (pid=13733) info /opt/stack/astara/astara/notifications.py:153
2016-01-21 21:07:45.892 DEBUG astara.scheduler:13611:pmain:tmain [-] target 9b1783947cc8452aa56116e4ec22a4b6 maps to worker 0 from (pid=13611) pick_workers /opt/stack/astara/astara/scheduler.py:95
2016-01-21 21:07:45.896 DEBUG astara.worker:13741:p00:tmain [req-fd920c8c-abba-44a1-b629-061bdf3121c6 None None] got: 9b1783947cc8452aa56116e4ec22a4b6 <Event (resource=<Resource (driver=*, id=0e06ab0a-bbee-4228-8808-db72e9e78776, tenant_id=9b1783947cc8452aa56116e4ec22a4b6)>, crud=command, body={u'router_id': u'0e06ab0a-bbee-4228-8808-db72e9e78776', u'tenant_id': u'9b1783947cc8452aa56116e4ec22a4b6', u'reason': None, u'command': u'router-manage'})> from (pid=13741) handle_message /opt/stack/astara/astara/worker.py:396
2016-01-21 21:07:45.928 INFO astara.worker:13741:p00:tmain [req-fd920c8c-abba-44a1-b629-061bdf3121c6 None None] Resuming management of resource 0e06ab0a-bbee-4228-8808-db72e9e78776
2016-01-21 21:07:45.929 INFO astara.worker:13741:p00:tmain [req-fd920c8c-abba-44a1-b629-061bdf3121c6 None None] Unlocked resource 0e06ab0a-bbee-4228-8808-db72e9e78776
2016-01-21 21:07:46.063 DEBUG oslo_messaging._drivers.amqpdriver:13741:p00:Thread-3 [-] received reply msg_id: 99617e5d475b49129085a538830e934d from (pid=13741) __call__ /usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py:296
2016-01-21 21:07:46.082 DEBUG ak-router-0e06ab0a-bbee-4228-8808-db72e9e78776:13741:p00:t00 [req-382c771a-a207-4c85-ab01-08c3a66d5f55 None None] ConfigureInstance.execute -> poll instance.state=configured from (pid=13741) update /opt/stack/astara/astara/state.py:449
2016-01-21 21:07:46.082 DEBUG ak-router-0e06ab0a-bbee-4228-8808-db72e9e78776:13741:p00:t00 [req-382c771a-a207-4c85-ab01-08c3a66d5f55 None None] ConfigureInstance.transition(poll) -> CalcAction instance.state=configured from (pid=13741) update /opt/stack/astara/astara/state.py:467
Exception in thread t00:
Traceback (most recent call last):
  File "/usr/lib/python2.7/threading.py", line 810, in __bootstrap_inner
    self.run()
  File "/usr/lib/python2.7/threading.py", line 763, in run
    self.__target(*self.__args, **self.__kwargs)
  File "/opt/stack/astara/astara/worker.py", line 246, in _thread_target
    self._release_resource_lock(sm)
  File "/opt/stack/astara/astara/worker.py", line 681, in _release_resource_lock
    self._resource_locks[sm.resource_id].release()
error: release unlocked lock
This ends up gumming up the state machine for that resource, and messages start to pile up in the queue. Details to come.
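For reference, the traceback is the standard behavior of releasing a `threading.Lock` that is not currently held: Python 2 raises `thread.error: release unlocked lock` (Python 3 raises `RuntimeError`). A minimal sketch of the failure mode, plus a hypothetical guarded-release helper (`safe_release` is an illustration only, not astara's actual fix; see the review below):

```python
import threading

# Reproduce the error from the traceback: releasing a lock that is not
# held raises thread.error on Python 2 (RuntimeError on Python 3).
lock = threading.Lock()
try:
    lock.release()
except Exception:
    print("release unlocked lock")


def safe_release(lk):
    """Release lk if held; swallow the double-release error.

    Hypothetical defensive wrapper, analogous to what
    _release_resource_lock could do around its release() call.
    """
    try:
        lk.release()
        return True
    except Exception:  # lock was already unlocked
        return False


lk = threading.Lock()
lk.acquire()
safe_release(lk)   # first release succeeds
safe_release(lk)   # second release is swallowed instead of killing the thread
```

Because the unhandled exception escapes `_thread_target`, the worker thread `t00` dies, which is consistent with the state machine stalling and messages queuing up.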
Fix proposed to branch: master
Review: https://review.openstack.org/271158