Another log from another test looks similar:
2015-04-30 09:03:51.570 | 2015-04-30 09:03:51.550 | Captured traceback:
2015-04-30 09:03:51.572 | 2015-04-30 09:03:51.552 | ~~~~~~~~~~~~~~~~~~~
2015-04-30 09:03:51.573 | 2015-04-30 09:03:51.553 | Traceback (most recent call last):
2015-04-30 09:03:51.575 | 2015-04-30 09:03:51.555 |   File "/opt/stack/new/heat/heat_integrationtests/functional/test_template_resource.py", line 399, in test_update_on_failed_create
2015-04-30 09:03:51.606 | 2015-04-30 09:03:51.556 |     files={'server_fail.yaml': self.nested_templ})
2015-04-30 09:03:51.607 | 2015-04-30 09:03:51.558 |   File "/opt/stack/new/heat/heat_integrationtests/common/test.py", line 317, in update_stack
2015-04-30 09:03:51.607 | 2015-04-30 09:03:51.559 |     self._wait_for_stack_status(**kwargs)
2015-04-30 09:03:51.607 | 2015-04-30 09:03:51.561 |   File "/opt/stack/new/heat/heat_integrationtests/common/test.py", line 275, in _wait_for_stack_status
2015-04-30 09:03:51.607 | 2015-04-30 09:03:51.562 |     stack_status_reason=stack.stack_status_reason)
2015-04-30 09:03:51.607 | 2015-04-30 09:03:51.564 | heat_integrationtests.common.exceptions.StackBuildErrorException: Stack TemplateResourceUpdateFailedTest-1508413043/861a0ffb-f79f-4d47-ba6d-e1f54dd2f3ca is in UPDATE_FAILED status due to 'ActionInProgress: Stack TemplateResourceUpdateFailedTest-1508413043-server-zcn4zjtjjfry already has an action (CREATE) in progress.'
Also logs from heat-engine:
2015-04-30 09:03:47.269 5485 INFO heat.engine.stack [-] Stack CREATE COMPLETE (TemplateResourceUpdateFailedTest-1508413043-server-zcn4zjtjjfry): Stack CREATE completed successfully
2015-04-30 09:03:47.298 5485 DEBUG oslo_messaging.rpc.dispatcher [req-4da843af-91c9-41d9-b4cb-01acd8bebb1d demo demo] Expected exception during message handling (Stack TemplateResourceUpdateFailedTest-1508413043-server-zcn4zjtjjfry already has an action (CREATE) in progress.) _dispatch_and_reply /usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py:145
So it really looks like an issue with our handling of stack updates dispatched via RPC: the update for the nested stack is rejected with ActionInProgress because the CREATE action has not released its lock yet, even though the engine had just logged CREATE COMPLETE.
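To illustrate the race suggested by the logs, here is a minimal sketch of a per-stack action lock that rejects a second action instead of waiting. All names here (StackLock, ActionInProgress, the stack name) are hypothetical stand-ins, not actual Heat code; the point is only that an UPDATE arriving over RPC before the CREATE action has released the lock produces exactly the error message seen above.

```python
class ActionInProgress(Exception):
    """Raised when a stack already has another action running."""


class StackLock:
    """Toy model of a per-stack action lock (hypothetical, not Heat's)."""

    def __init__(self, stack_name):
        self.stack_name = stack_name
        self.current_action = None  # e.g. "CREATE", "UPDATE"

    def acquire(self, action):
        # A second action is rejected outright rather than queued or retried.
        if self.current_action is not None:
            raise ActionInProgress(
                "Stack %s already has an action (%s) in progress."
                % (self.stack_name, self.current_action))
        self.current_action = action

    def release(self):
        self.current_action = None


lock = StackLock("TemplateResourceUpdateFailedTest-server")
lock.acquire("CREATE")          # engine starts the nested stack create
try:
    lock.acquire("UPDATE")      # RPC update arrives before CREATE releases
except ActionInProgress as exc:
    print(exc)
```

Under this model the fix would be either to release the lock before the CREATE COMPLETE status becomes externally visible, or to have the update path wait/retry instead of failing immediately.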