get_param is not working with stack-update

Bug #1291411 reported by Pavlo Shchelokovskyy
Affects: OpenStack Heat
Status: Fix Released
Importance: High
Assigned to: Zane Bitter
Milestone: 2014.1

Bug Description

While fixing https://bugs.launchpad.net/heat/+bug/1269686 (Gerrit review https://review.openstack.org/#/c/72681/), I found the following problem:

When updating a stack with a template that uses get_param for a resource property that is being updated, I cannot reference the old property values of the resource inside the handle_update method, because self.properties at this stage already holds the new values.

In contrast, when not using get_param and putting the updated values directly in the resources section of the template, self.properties still holds the old stack's values during handle_update, which allows the old values to be used while updating the resource.
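
To make the failure concrete, here is a minimal sketch of the update pattern that breaks (illustrative only, not actual Heat source; _detach and _attach are hypothetical helpers standing in for the real Cinder calls):

# Illustrative sketch, not actual Heat source.
def handle_update(self, json_snippet, tmpl_diff, prop_diff):
    if 'volume_id' in prop_diff:
        # Intended: the value the resource was created with.
        # With get_param, this already resolves to the NEW value.
        old_volume_id = self.properties['volume_id']
        new_volume_id = prop_diff['volume_id']
        self._detach(old_volume_id)  # hypothetical helper; detaches the wrong volume
        self._attach(new_volume_id)  # hypothetical helper; old volume still holds the mountpoint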

Unfortunately, only a few resources in Heat currently use old property values during update, so I cannot (yet) provide an example with existing resources. Instead, here are the templates I was using while making VolumeAttachment updatable (the code in the Gerrit review linked above).

I have two volumes in Cinder:
$> cinder list
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
|                  ID                  |   Status  | Display Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
| e0383cb9-ccfa-4d56-915c-66c7c09df009 | available |     vol2     |  2   |     None    |  false   |             |
| e90d0e9f-3cf4-4513-8112-0387fc75378c | available |     vol1     |  1   |     None    |  false   |             |
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+

vol1.yaml
=======
heat_template_version: 2013-05-23

description: >
  HOT template that creates one instance and attaches a cinder volume to it.

resources:
  nova_instance:
    type: OS::Nova::Server
    properties:
      availability_zone: nova
      image: cirros-0.3.1-x86_64-disk
      flavor: m1.nano
  volume_attachment:
    type: OS::Cinder::VolumeAttachment
    properties:
      instance_uuid: { get_resource: nova_instance }
      volume_id: e90d0e9f-3cf4-4513-8112-0387fc75378c
      mountpoint: '/dev/vdc'

outputs:
  instance_ip:
    description: The IP address of the deployed instance
    value: { get_attr: [nova_instance, first_address] }

vol2.yaml
=======
heat_template_version: 2013-05-23

description: >
  HOT template that creates one instance and attaches a cinder volume to it.

resources:
  nova_instance:
    type: OS::Nova::Server
    properties:
      availability_zone: nova
      image: cirros-0.3.1-x86_64-disk
      flavor: m1.nano
  volume_attachment:
    type: OS::Cinder::VolumeAttachment
    properties:
      instance_uuid: { get_resource: nova_instance }
      volume_id: e0383cb9-ccfa-4d56-915c-66c7c09df009
      mountpoint: '/dev/vdc'

outputs:
  instance_ip:
    description: The IP address of the deployed instance
    value: { get_attr: [nova_instance, first_address] }

vols.yaml
=======

heat_template_version: 2013-05-23

description: >
  HOT template that creates one instance and attaches a cinder volume to it.

parameters:
  volume:
    type: string
    description: ID of the volume to attach

resources:
  nova_instance:
    type: OS::Nova::Server
    properties:
      availability_zone: nova
      image: cirros-0.3.1-x86_64-disk
      flavor: m1.nano
  volume_attachment:
    type: OS::Cinder::VolumeAttachment
    properties:
      instance_uuid: { get_resource: nova_instance }
      volume_id: { get_param: volume }
      mountpoint: '/dev/vdc'

outputs:
  instance_ip:
    description: The IP address of the deployed instance
    value: { get_attr: [nova_instance, first_address] }

When using templates without get_param:
$> heat stack-create -f vol1.yaml test
$> cinder list
+--------------------------------------+-----------+--------------+------+-------------+----------+--------------------------------------+
|                  ID                  |   Status  | Display Name | Size | Volume Type | Bootable |              Attached to             |
+--------------------------------------+-----------+--------------+------+-------------+----------+--------------------------------------+
| e0383cb9-ccfa-4d56-915c-66c7c09df009 | available |     vol2     |  2   |     None    |  false   |                                      |
| e90d0e9f-3cf4-4513-8112-0387fc75378c |   in-use  |     vol1     |  1   |     None    |  false   | 33e068e5-cf09-4ea1-8c68-5ed313faa011 |
+--------------------------------------+-----------+--------------+------+-------------+----------+--------------------------------------+
$> heat stack-update -f vol2.yaml test
$> cinder list
+--------------------------------------+-----------+--------------+------+-------------+----------+--------------------------------------+
|                  ID                  |   Status  | Display Name | Size | Volume Type | Bootable |              Attached to             |
+--------------------------------------+-----------+--------------+------+-------------+----------+--------------------------------------+
| e0383cb9-ccfa-4d56-915c-66c7c09df009 |   in-use  |     vol2     |  2   |     None    |  false   | 33e068e5-cf09-4ea1-8c68-5ed313faa011 |
| e90d0e9f-3cf4-4513-8112-0387fc75378c | available |     vol1     |  1   |     None    |  false   |                                      |
+--------------------------------------+-----------+--------------+------+-------------+----------+--------------------------------------+
The volume attachment is successfully updated.

When using the template with get_param:
$> heat stack-create -f vols.yaml -P volume=e90d0e9f-3cf4-4513-8112-0387fc75378c test
$> cinder list
+--------------------------------------+-----------+--------------+------+-------------+----------+--------------------------------------+
|                  ID                  |   Status  | Display Name | Size | Volume Type | Bootable |              Attached to             |
+--------------------------------------+-----------+--------------+------+-------------+----------+--------------------------------------+
| e0383cb9-ccfa-4d56-915c-66c7c09df009 | available |     vol2     |  2   |     None    |  false   |                                      |
| e90d0e9f-3cf4-4513-8112-0387fc75378c |   in-use  |     vol1     |  1   |     None    |  false   | a53ef924-37a9-4d7f-ab77-9e59998c7733 |
+--------------------------------------+-----------+--------------+------+-------------+----------+--------------------------------------+
$> heat stack-update -f vols.yaml -P volume=e0383cb9-ccfa-4d56-915c-66c7c09df009 test
The update fails.

Below are what I consider the relevant lines from the heat-engine log:

2014-03-12 14:13:29.449 INFO heat.engine.resource [-] updating CinderVolumeAttachment "volume_attachment" [e90d0e9f-3cf4-4513-8112-0387fc75378c] Stack "vols" [b3fba4a5-8602-4dfe-8ce5-da920d2eb79a]
2014-03-12 14:13:29.525 DEBUG heat.engine.scheduler [-] Task VolumeDetachTask(e0383cb9-ccfa-4d56-915c-66c7c09df009 -/> a53ef924-37a9-4d7f-ab77-9e59998c7733) starting from (pid=17057) start /opt/stack/heat/heat/engine/scheduler.py:153
-----
2014-03-12 14:13:29.845 WARNING heat.engine.resources.volume [-] Volume e0383cb9-ccfa-4d56-915c-66c7c09df009 is already datached from the server a53ef924-37a9-4d7f-ab77-9e59998c7733
-----
2014-03-12 14:13:31.149 TRACE heat.engine.resource Conflict: The supplied device path (/dev/vdc) is in use. (HTTP 409)

Notice that Heat immediately tries to detach the new volume (which is naturally not attached) instead of the old one, and then proceeds to attach the new volume while the old volume is still attached at the same mountpoint, which fails.

I checked it with pdb, and the problem seems to be what I described above: self.properties holds different values while handle_update is executing.

I will try to come up with an example that uses code that is already in the master branch.

Revision history for this message
Pavlo Shchelokovskyy (pshchelo) wrote:

Going through the code, I have found only one resource that seems to use old property values in a fashion vulnerable to the above behavior, and that is OS::Neutron::Pool. But while digging through its handle_update, several more bugs were found, so I am waiting for them to be fixed in order to prove my point with this bug.

Revision history for this message
Pavlo Shchelokovskyy (pshchelo) wrote:

As this is already fixed (or on the path to being fixed) in the affected resources on a per-resource basis, I am marking this as Invalid. Bottom line: do not use self.properties to access the old values of a resource's properties during update.
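
A hedged sketch of that per-resource approach, persisting the needed value at create time instead of reading self.properties at update time (_attach/_detach are hypothetical helpers; data_set()/data() are assumed to be the Resource data-store methods):

# Hedged sketch of the per-resource workaround, not actual Heat source.
def handle_create(self):
    volume_id = self.properties['volume_id']
    self._attach(volume_id)                         # hypothetical helper
    self.data_set('attached_volume_id', volume_id)  # remember what we attached

def handle_update(self, json_snippet, tmpl_diff, prop_diff):
    if 'volume_id' in prop_diff:
        # Read the persisted value rather than self.properties.
        old_volume_id = self.data().get('attached_volume_id')
        self._detach(old_volume_id)                 # hypothetical helper
        new_volume_id = prop_diff['volume_id']
        self._attach(new_volume_id)
        self.data_set('attached_volume_id', new_volume_id)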

Changed in heat:
status: New → Invalid
Zane Bitter (zaneb)
Changed in heat:
status: Invalid → New
assignee: nobody → Zane Bitter (zaneb)
importance: Undecided → High
Changed in heat:
status: New → Triaged
milestone: none → icehouse-rc1
Zane Bitter (zaneb)
Changed in heat:
status: Triaged → In Progress
Revision history for this message
OpenStack Infra (hudson-openstack) wrote: Fix proposed to heat (master)

Fix proposed to branch: master
Review: https://review.openstack.org/81678

Revision history for this message
OpenStack Infra (hudson-openstack) wrote: Fix merged to heat (master)

Reviewed: https://review.openstack.org/81678
Committed: https://git.openstack.org/cgit/openstack/heat/commit/?id=a674ac35d143ab63b18872497ccc61f25d00873c
Submitter: Jenkins
Branch: master

commit a674ac35d143ab63b18872497ccc61f25d00873c
Author: Zane Bitter <email address hidden>
Date: Wed Mar 19 20:56:43 2014 -0400

    Don't re-bind parameters during stack update

    While we're in the process of updating a stack, we set the stack's parameters
    to the new, updated values. However, we don't want to change existing
    resources' idea of their own values until we have explicitly done an update of
    them to bring them into line with the new template/parameters. So when parsing
    a template snippet, store the parameters object to which it will refer.
    References will resolve to the new parameters only when the template has been
    re-resolved.

    Closes-bug: #1291411

    Change-Id: Iebbb518c5d15cfba858db001648f3412a7d8b1c0
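
Conceptually, the fix amounts to binding the parameters object at template-parse time, so that get_param references keep resolving against the snapshot taken when the snippet was parsed rather than against the stack's live, already-updated parameters. A simplified illustration (not the actual Heat code from the review above):

# Simplified illustration of the idea behind the fix, not the actual change.
class GetParam(object):
    def __init__(self, param_name, parameters):
        self.param_name = param_name
        # Bound at parse time: an in-progress stack update replaces the
        # stack's parameters, but this reference keeps resolving against
        # the values that were current when the snippet was parsed.
        self._parameters = parameters

    def resolve(self):
        return self._parameters[self.param_name]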

Changed in heat:
status: In Progress → Fix Committed
Thierry Carrez (ttx)
Changed in heat:
status: Fix Committed → Fix Released
Thierry Carrez (ttx)
Changed in heat:
milestone: icehouse-rc1 → 2014.1