Unable to delete overcloud node when identifying --stack by UUID; identifying it by name works

Bug #1646283 reported by John Fulton
This bug affects 1 person
Affects: tripleo
Status: New
Importance: Undecided
Assigned to: Unassigned
Milestone: (none)

Bug Description

Running `openstack overcloud node delete --stack $stack $node_uuid` fails if $stack is set to the UUID of a Heat stack; setting it to the stack name (e.g. "overcloud") succeeds. According to the output of `openstack overcloud node delete --help`, the two should be interchangeable. The gist of the error is "Environment not found [name=23e7c364-7303-4af6-b54d-cfbf1b737680]".
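
In short, the two invocations below differ only in how the stack is identified; both are taken from the transcripts further down (the UUID and node ID are the ones shown there). The first fails, the second succeeds:

openstack overcloud node delete --stack 23e7c364-7303-4af6-b54d-cfbf1b737680 6b2a2e71-f9c8-4d5b-aaf8-dada97c90821   # fails: Environment not found
openstack overcloud node delete --stack overcloud 6b2a2e71-f9c8-4d5b-aaf8-dada97c90821                              # succeeds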

Full error message below:

[stack@hci-director ~]$ nova_id=$(openstack server list | grep compute-3 | awk {'print $2'} | egrep -vi 'id|^$')
[stack@hci-director ~]$ stack_id=$(openstack stack list | awk {'print $2'} | egrep -vi 'id|^$')
[stack@hci-director ~]$ time openstack overcloud node delete --stack $stack_id $nova_id
deleting nodes [u'6b2a2e71-f9c8-4d5b-aaf8-dada97c90821'] from stack 23e7c364-7303-4af6-b54d-cfbf1b737680
Started Mistral Workflow. Execution ID: 4864b1df-a170-4d51-b411-79f839d11ecd
{u'execution': {u'id': u'4864b1df-a170-4d51-b411-79f839d11ecd',
                u'input': {u'container': u'23e7c364-7303-4af6-b54d-cfbf1b737680',
                           u'nodes': [u'6b2a2e71-f9c8-4d5b-aaf8-dada97c90821'],
                           u'queue_name': u'b0c40c06-be37-402d-9636-6071ba3e28b2',
                           u'timeout': 240},
                u'name': u'tripleo.scale.v1.delete_node',
                u'params': {},
                u'spec': {u'description': u'deletes given overcloud nodes and updates the stack',
                          u'input': [u'container',
                                     u'nodes',
                                     {u'timeout': 240},
                                     {u'queue_name': u'tripleo'}],
                          u'name': u'delete_node',
                          u'tasks': {u'delete_node': {u'action': u'tripleo.scale.delete_node nodes=<% $.nodes %> timeout=<% $.timeout %> container=<% $.container %>',
                                                      u'name': u'delete_node',
                                                      u'on-error': u'set_delete_node_failed',
                                                      u'on-success': u'send_message',
                                                      u'type': u'direct',
                                                      u'version': u'2.0'},
                                     u'send_message': {u'action': u'zaqar.queue_post',
                                                       u'input': {u'messages': {u'body': {u'payload': {u'execution': u'<% execution() %>',
                                                                                                       u'message': u"<% $.get('message', '') %>",
                                                                                                       u'status': u"<% $.get('status', 'SUCCESS') %>"},
                                                                                          u'type': u'tripleo.scale.v1.delete_node'}},
                                                                  u'queue_name': u'<% $.queue_name %>'},
                                                       u'name': u'send_message',
                                                       u'retry': u'count=5 delay=1',
                                                       u'type': u'direct',
                                                       u'version': u'2.0'},
                                     u'set_delete_node_failed': {u'name': u'set_delete_node_failed',
                                                                 u'on-success': u'send_message',
                                                                 u'publish': {u'message': u'<% task(delete_node).result %>',
                                                                              u'status': u'FAILED'},
                                                                 u'type': u'direct',
                                                                 u'version': u'2.0'}},
                          u'version': u'2.0'}},
 u'message': u"Failed to run action [action_ex_id=c2e44ffe-00fc-4131-b29c-981e33f50ea1, action_cls='<class 'mistral.actions.action_factory.ScaleDownAction'>', attributes='{}', params='{u'nodes': [u'6b2a2e71-f9c8-4d5b-aaf8-dada97c90821'], u'container': u'23e7c364-7303-4af6-b54d-cfbf1b737680', u'timeout': 240}']\n Environment not found [name=23e7c364-7303-4af6-b54d-cfbf1b737680]",
 u'status': u'FAILED'}

real 1m39.169s
user 0m0.530s
sys 0m0.104s
[stack@hci-director ~]$
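
For anyone digging further, the workflow and action execution IDs in the failure above can be inspected directly on the undercloud. A minimal sketch, assuming the python-mistralclient CLI is available (the IDs below are the ones from the transcript above):

mistral execution-get 4864b1df-a170-4d51-b411-79f839d11ecd                  # overall workflow execution and its state
mistral action-execution-get c2e44ffe-00fc-4131-b29c-981e33f50ea1           # the failed ScaleDownAction
mistral action-execution-get-output c2e44ffe-00fc-4131-b29c-981e33f50ea1    # its full error output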

Additional Details:

The stack ID was correct:

[stack@hci-director ~]$ heat stack-list
WARNING (shell) "heat stack-list" is deprecated, please use "openstack stack list" instead
+--------------------------------------+------------+-----------------+----------------------+----------------------+
| id                                   | stack_name | stack_status    | creation_time        | updated_time         |
+--------------------------------------+------------+-----------------+----------------------+----------------------+
| 23e7c364-7303-4af6-b54d-cfbf1b737680 | overcloud  | UPDATE_COMPLETE | 2016-11-24T03:24:56Z | 2016-11-30T17:16:48Z |
+--------------------------------------+------------+-----------------+----------------------+----------------------+
[stack@hci-director ~]$
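
The UUID also maps back to the expected stack name. A quick check (a sketch using the non-deprecated client; -f/-c are standard openstackclient output filters):

openstack stack show 23e7c364-7303-4af6-b54d-cfbf1b737680 -f value -c stack_name   # expected to print: overcloud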

As well as the nova id:

[stack@hci-director ~]$ openstack server list | grep osd-compute-3
| 6b2a2e71-f9c8-4d5b-aaf8-dada97c90821 | overcloud-osd-compute-3 | ACTIVE | ctlplane=192.168.1.27 | overcloud-full |
[stack@hci-director ~]$

Note that the delete works if I identify the stack by its name.

[stack@hci-director ~]$ time openstack overcloud node delete --stack overcloud 6b2a2e71-f9c8-4d5b-aaf8-dada97c90821
deleting nodes [u'6b2a2e71-f9c8-4d5b-aaf8-dada97c90821'] from stack overcloud
Started Mistral Workflow. Execution ID: 396f123d-df5b-4f37-b137-83d33969b52b

real 1m50.662s
user 0m0.563s
sys 0m0.099s
[stack@hci-director ~]$
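
Until the name/UUID handling is fixed, a possible workaround is to resolve the UUID to the stack name first and pass the name to --stack. A minimal sketch, assuming the same $stack_id and $nova_id variables as above:

stack_name=$(openstack stack show "$stack_id" -f value -c stack_name)
openstack overcloud node delete --stack "$stack_name" "$nova_id"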

Julie Pichon (jpichon) wrote:

Thank you for the detailed bug report! This looks like a duplicate of bug 1640933, which was just fixed on master last week; it doesn't look like a backport has made it through yet, though.
