race condition occurs when two policies in one autoscalinggroup are signaled

Bug #1310648 reported by Zhang Yang
This bug affects 2 people
Affects: OpenStack Heat
Status: Triaged
Importance: Low
Assigned to: Unassigned

Bug Description

Currently, updating an in-progress stack is forbidden, but there is no such limit when an AutoScalingGroup adjusts. If two or more policies are signaled within a short interval, some resources end up out of control because only one of the new templates is saved, and other unexpected errors may occur.
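
A minimal sketch of the race in plain Python (not Heat code; the template structure is invented for illustration): two concurrent signals each load the stored template, apply their adjustment, and store the result, so the last writer silently discards the other's work:

import threading
import time

stored_template = {"size": 2}   # hypothetical persisted group template
created = []                    # adjustments that actually ran

def signal_policy(adjustment):
    template = dict(stored_template)   # both threads load size == 2
    time.sleep(0.1)                    # widen the race window
    template["size"] += adjustment     # compute the new template
    created.append(adjustment)         # ...and create the resources
    stored_template.update(template)   # store: last writer wins

t1 = threading.Thread(target=signal_policy, args=(1,))
t2 = threading.Thread(target=signal_policy, args=(2,))
t1.start(); t2.start(); t1.join(); t2.join()

# Adjustments totalling 3 were applied, but the saved template records
# a size of 3 or 4, never 5 -- the other change is untracked.
print(stored_template, created)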

Changed in heat:
assignee: nobody → Zhang Yang (neil-zhangyang)
description: updated
Revision history for this message
Openstack Gerrit (openstack-gerrit) wrote : Fix proposed to heat (master)

Fix proposed to branch: master
Review: https://review.openstack.org/90325

Changed in heat:
status: New → In Progress
summary: - race condition occurs when two policies in one autoscalinggroup a
+ race condition occurs when two policies in one autoscalinggroup are
signaled
Revision history for this message
Zhang Yang (neil-zhangyang) wrote :

I reproduced this bug and found that the error occurs only when policies are signaled at almost the same time, like this:
signal 1: start --- load_nested_stack(status: COMPLETE) --- check_status --- set_status --- update --- set_status(COMPLETE) --- store

signal 2: start --- load_nested_stack(status: COMPLETE) --- check_status --- set_status --- update --- set_status(COMPLETE) --- store

And I don't think there are real use cases like this.

But in theory there is a problem when signaling an IN_PROGRESS AutoScalingGroup: currently, if we update an IN_PROGRESS stack, we set its status to FAILED.

signal 1: start --- load_nested_stack(status: COMPLETE) --- check_status --- set_status(IN_PROGRESS) --- update --- set_status(COMPLETE) --- store

signal 2: start --- load_nested_stack(status: IN_PROGRESS) --- set_status(FAILED)

If thread_signal_2 sets the status before thread_signal_1 finishes, the status is set to FAILED and then COMPLETE; that is acceptable (only the intermediate status is confusing). But if the FAILED write lands last, the stack ends up FAILED and this group can never adjust again.
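
A sketch of the two orderings, assuming set_status is a plain last-writer-wins write to a shared field (names hypothetical):

status = {"value": "COMPLETE"}

def set_status(new):
    status["value"] = new

# Ordering A: signal 2's FAILED lands before signal 1 finishes.
set_status("IN_PROGRESS")   # signal 1 starts its update
set_status("FAILED")        # signal 2 sees IN_PROGRESS and marks FAILED
set_status("COMPLETE")      # signal 1 finishes; confusing but recoverable

# Ordering B: signal 2's FAILED lands after signal 1's COMPLETE.
set_status("IN_PROGRESS")
set_status("COMPLETE")      # signal 1 finishes
set_status("FAILED")        # signal 2's write lands last
print(status["value"])      # FAILED -- the group is wedged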

So I think we should check the nested stack's status and raise an exception directly if the stack is not in a stable status, as stack-update does.
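
A sketch of that check (the class and exception names here are hypothetical illustrations, not the actual Heat API):

STABLE_STATUSES = ("CREATE_COMPLETE", "UPDATE_COMPLETE")

class ResourceInProgress(Exception):
    pass

class Stack(object):
    def __init__(self, name, status):
        self.name = name
        self.status = status

def adjust(nested_stack, adjustment):
    # Refuse the signal outright if another adjustment is still running,
    # mirroring the guard that stack-update already applies.
    if nested_stack.status not in STABLE_STATUSES:
        raise ResourceInProgress(
            "cannot adjust %s: stack status is %s"
            % (nested_stack.name, nested_stack.status))
    # ... otherwise proceed with the normal
    # IN_PROGRESS -> update -> COMPLETE -> store flow ...

busy = Stack("asg-nested", "UPDATE_IN_PROGRESS")
try:
    adjust(busy, 1)
except ResourceInProgress as exc:
    print(exc)  # the second signal is rejected instead of wedging the group

With this guard the worst case is a rejected signal, which the caller can retry, rather than an untracked resource or a permanently FAILED group.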

Changed in heat:
assignee: Zhang Yang (neil-zhangyang) → nobody
Angus Salkeld (asalkeld)
Changed in heat:
status: In Progress → Triaged
Angus Salkeld (asalkeld)
Changed in heat:
importance: Undecided → Low
Changed in heat:
assignee: nobody → ismdev (fernandez-molina-ismael)
Changed in heat:
assignee: ismdev (fernandez-molina-ismael) → nobody
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Change abandoned on heat (master)

Change abandoned by Zane Bitter (<email address hidden>) on branch: master
Review: https://review.openstack.org/90325

Rico Lin (rico-lin)
Changed in heat:
milestone: none → no-priority-tag-bugs