Heat doesn't delete LBaaS v2 pools before deleting the LBaaS load balancer

Bug #1672182 reported by Turbo Fredriksson
Affects: OpenStack Heat
Status: New
Importance: Undecided
Assigned to: Unassigned
Milestone: no-priority-tag-bugs

Bug Description

I have a loadbalancer (v2), which I attach several listeners to. These listeners have a pool each.

This is the tree structure of the loadbalancer:

lbaas-admin-consul: ACTIVE (10.0.17.28)
        listener-admin-consul-8301-TCP: ONLINE
                hapool-admin-consul-8301-TCP: ONLINE (TCP)
        listener-admin-consul-8302-TCP: ONLINE
                hapool-admin-consul-8302-TCP: ONLINE (TCP)
        listener-admin-consul-8600-TCP: ONLINE
                hapool-admin-consul-8600-TCP: ONLINE (TCP)
        listener-admin-consul-8400-TCP: ONLINE
                hapool-admin-consul-8400-TCP: ONLINE (TCP)
        listener-admin-consul-8500-HTTP: ONLINE
                hapool-admin-consul-8500-HTTP: ONLINE (HTTP)
        listener-admin-consul-8300-TCP: ONLINE
                hapool-admin-consul-8300-TCP: ONLINE (TCP)
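
For context, a chain like this is built against the LBaaS v2 API in parent-to-child order: load balancer first, then listener, then pool. Below is a minimal python-neutronclient sketch of one branch of the tree above; the credentials, subnet ID and names are illustrative, not taken from this deployment:

# Build one listener+pool branch of the tree; values are illustrative.
from neutronclient.v2_0 import client

neutron = client.Client(username='admin', password='secret',
                        tenant_name='admin',
                        auth_url='http://keystone:5000/v2.0')

lb = neutron.create_loadbalancer(
    {'loadbalancer': {'name': 'lbaas-admin-consul',
                      'vip_subnet_id': 'SUBNET-ID'}})['loadbalancer']

listener = neutron.create_listener(
    {'listener': {'name': 'listener-admin-consul-8301-TCP',
                  'loadbalancer_id': lb['id'],
                  'protocol': 'TCP',
                  'protocol_port': 8301}})['listener']

pool = neutron.create_lbaas_pool(
    {'pool': {'name': 'hapool-admin-consul-8301-TCP',
              'listener_id': listener['id'],
              'protocol': 'TCP',
              'lb_algorithm': 'ROUND_ROBIN'}})['pool']

Deletion has to happen in the reverse order, which is where the stack delete below goes wrong.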

Trying to destroy the stack that contains this LBaaS gives:

2017-03-12 17:33:11.813 31481 INFO heat.engine.resource [req-c588a9a9-8f9e-4775-be57-f5f521677f1f - - - - -] DELETE: LoadBalancer "lbaas" [e5f0a131-1b80-4f8c-9c02-961ac7ab53e4] Stack "admin-consul-lbaas-ahjbvbsacfrh" [56dbfcf8-8b02-439f-b639-f20ae90ba1cb]
2017-03-12 17:33:11.813 31481 ERROR heat.engine.resource Traceback (most recent call last):
2017-03-12 17:33:11.813 31481 ERROR heat.engine.resource File "/usr/lib/python2.7/dist-packages/heat/engine/resource.py", line 753, in _action_recorder
2017-03-12 17:33:11.813 31481 ERROR heat.engine.resource yield
2017-03-12 17:33:11.813 31481 ERROR heat.engine.resource File "/usr/lib/python2.7/dist-packages/heat/engine/resource.py", line 1669, in delete
2017-03-12 17:33:11.813 31481 ERROR heat.engine.resource *action_args)
2017-03-12 17:33:11.813 31481 ERROR heat.engine.resource File "/usr/lib/python2.7/dist-packages/heat/engine/scheduler.py", line 353, in wrapper
2017-03-12 17:33:11.813 31481 ERROR heat.engine.resource step = next(subtask)
2017-03-12 17:33:11.813 31481 ERROR heat.engine.resource File "/usr/lib/python2.7/dist-packages/heat/engine/resource.py", line 806, in action_handler_task
2017-03-12 17:33:11.813 31481 ERROR heat.engine.resource done = check(handler_data)
2017-03-12 17:33:11.813 31481 ERROR heat.engine.resource File "/usr/lib/python2.7/dist-packages/heat/engine/resources/openstack/neutron/lbaas/loadbalancer.py", line 172, in check_delete_complete
2017-03-12 17:33:11.813 31481 ERROR heat.engine.resource self.client().delete_loadbalancer(self.resource_id)
2017-03-12 17:33:11.813 31481 ERROR heat.engine.resource File "/usr/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", line 1089, in delete_loadbalancer
2017-03-12 17:33:11.813 31481 ERROR heat.engine.resource (lbaas_loadbalancer))
2017-03-12 17:33:11.813 31481 ERROR heat.engine.resource File "/usr/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", line 356, in delete
2017-03-12 17:33:11.813 31481 ERROR heat.engine.resource headers=headers, params=params)
2017-03-12 17:33:11.813 31481 ERROR heat.engine.resource File "/usr/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", line 337, in retry_request
2017-03-12 17:33:11.813 31481 ERROR heat.engine.resource headers=headers, params=params)
2017-03-12 17:33:11.813 31481 ERROR heat.engine.resource File "/usr/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", line 300, in do_request
2017-03-12 17:33:11.813 31481 ERROR heat.engine.resource self._handle_fault_response(status_code, replybody, resp)
2017-03-12 17:33:11.813 31481 ERROR heat.engine.resource File "/usr/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", line 275, in _handle_fault_response
2017-03-12 17:33:11.813 31481 ERROR heat.engine.resource exception_handler_v20(status_code, error_body)
2017-03-12 17:33:11.813 31481 ERROR heat.engine.resource File "/usr/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", line 91, in exception_handler_v20
2017-03-12 17:33:11.813 31481 ERROR heat.engine.resource request_ids=request_ids)
2017-03-12 17:33:11.813 31481 ERROR heat.engine.resource Conflict: pool 17123a61-47bf-42b6-93ba-760b2f666ade is using this loadbalancer
2017-03-12 17:33:11.813 31481 ERROR heat.engine.resource Neutron server returns request_ids: ['req-dbdecf35-c493-45a7-acc3-08bcf796860d']
2017-03-12 17:33:11.813 31481 ERROR heat.engine.resource
2017-03-12 17:33:11.976 31481 INFO heat.engine.stack [req-c588a9a9-8f9e-4775-be57-f5f521677f1f - - - - -] Stack DELETE FAILED (admin-consul-lbaas-ahjbvbsacfrh): Resource DELETE failed: Conflict: resources.lbaas: pool 17123a61-47bf-42b6-93ba-760b2f666ade is using this loadbalancer
Neutron server returns request_ids: ['req-dbdecf35-c493-45a7-acc3-08bcf796860d']
2017-03-12 17:33:13.270 31474 INFO heat.engine.service [req-b1e74969-7f05-4ec5-a55c-0857d0246563 9bb439da8e6b4c28b372671f7e495c24 0fc3d7b2705b46d2a0895071358477b2 - - -] Deleting stack admin-consul-lbaas-ahjbvbsacfrh
2017-03-12 17:33:13.480 31474 INFO heat.engine.stack [req-b1e74969-7f05-4ec5-a55c-0857d0246563 9bb439da8e6b4c28b372671f7e495c24 0fc3d7b2705b46d2a0895071358477b2 - - -] Stack DELETE IN_PROGRESS (admin-consul-lbaas-ahjbvbsacfrh): Stack DELETE started
2017-03-12 17:33:13.561 31474 INFO heat.engine.resource [req-b1e74969-7f05-4ec5-a55c-0857d0246563 9bb439da8e6b4c28b372671f7e495c24 0fc3d7b2705b46d2a0895071358477b2 - - -] deleting LoadBalancer "lbaas" [e5f0a131-1b80-4f8c-9c02-961ac7ab53e4] Stack "admin-consul-lbaas-ahjbvbsacfrh" [56dbfcf8-8b02-439f-b639-f20ae90ba1cb]
2017-03-12 17:33:13.651 31474 INFO heat.engine.resource [req-b1e74969-7f05-4ec5-a55c-0857d0246563 9bb439da8e6b4c28b372671f7e495c24 0fc3d7b2705b46d2a0895071358477b2 - - -] delete LoadBalancer "lbaas" [e5f0a131-1b80-4f8c-9c02-961ac7ab53e4] Stack "admin-consul-lbaas-ahjbvbsacfrh" [56dbfcf8-8b02-439f-b639-f20ae90ba1cb] attempt 1

After the delete fails, I check which pools are left:

=> lbaas-listener-list:
+--------------------------------------+--------------------------------------+------------------------------------+----------+---------------+----------------+
| id | default_pool_id | name | protocol | protocol_port | admin_state_up |
+--------------------------------------+--------------------------------------+------------------------------------+----------+---------------+----------------+
| 0fa87276-22b3-4b8d-af7a-f323f429842e | 0915b44f-ac3f-4d89-bfa5-69ff94d831e1 | listener-service-mongodb-mongodb | TCP | 27017 | True |
| dd68f2d5-afd1-41ef-8ccd-664a0ff1b198 | 126d0746-f1c8-4809-8f12-ad5c068a59d2 | listener-service-rabbitmq-rabbitmq | TCP | 5672 | True |
| 04477ec1-3052-4f9d-a0bb-06c184bfc0e9 | e236cf27-bdc3-4d7e-9bf0-66a48df4b941 | listener-admin-vault-8200-HTTP | HTTP | 8200 | True |
| d78fe42b-d242-44cc-9b0c-98f3688cacc9 | 372b6468-6448-43cc-94bf-b0f53b8b6d87 | listener-admin-puppet-8140-TCP | TCP | 8140 | True |
+--------------------------------------+--------------------------------------+------------------------------------+----------+---------------+----------------+

=> lbaas-pool-list:
+--------------------------------------+----------------------------------+----------+----------------+
| id | name | protocol | admin_state_up |
+--------------------------------------+----------------------------------+----------+----------------+
| 0915b44f-ac3f-4d89-bfa5-69ff94d831e1 | hapool-service-mongodb-mongodb | TCP | True |
| 126d0746-f1c8-4809-8f12-ad5c068a59d2 | hapool-service-rabbitmq-rabbitmq | TCP | True |
| 372b6468-6448-43cc-94bf-b0f53b8b6d87 | hapool-admin-puppet | TCP | True |
| e236cf27-bdc3-4d7e-9bf0-66a48df4b941 | hapool-admin-vault | HTTP | True |
| 17123a61-47bf-42b6-93ba-760b2f666ade | hapool-admin-consul-8400-TCP | TCP | True |
| 3f8de3e8-d2aa-450c-ae45-ca3ce6682385 | hapool-admin-consul-8301-TCP | TCP | True |
| dd69b9f4-60c0-4a56-872f-7287409aa720 | hapool-admin-consul-8301-TCP | TCP | True |
+--------------------------------------+----------------------------------+----------+----------------+

After deleting these three *-consul-* pools manually, deleting the stack again succeeds.
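
The manual cleanup can be scripted; here is a sketch using the same illustrative client as above, assuming each pool body carries a 'loadbalancers' list as in the LBaaS v2 API:

# Delete any pools still referencing the stuck load balancer.
LB_ID = 'e5f0a131-1b80-4f8c-9c02-961ac7ab53e4'  # from the traceback above

for pool in neutron.list_lbaas_pools()['pools']:
    if any(lb['id'] == LB_ID for lb in pool.get('loadbalancers', [])):
        print('deleting leftover pool %s (%s)' % (pool['id'], pool['name']))
        neutron.delete_lbaas_pool(pool['id'])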

I started out with six listeners and six pools, but after Heat's delete ran I was left with three pools. So it clearly managed to delete some of them: all the listeners were deleted, but three pools remained.

This happens every time, so I'm fairly certain I can reproduce it.

This is Heat 7.0.0 and Neutron 9.1.1 on Debian GNU/Linux Jessie with OpenStack Newton.

Revision history for this message
huangtianhua (huangtianhua) wrote :

Heat deletes the stack's resources and checks that each delete has completed. In the case above Heat deletes the pools first, then the listeners, and finally the load balancer. So in your test all the listeners were deleted but three pools remained, which suggests something is wrong on the Neutron side. Could you look into the Neutron log to find out the reason?
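
The child-first ordering described here looks roughly like the following when done by hand (an illustrative sketch, not Heat's actual code; it waits for the load balancer to leave its PENDING_* provisioning state between steps, since LBaaS v2 serializes operations on a load balancer):

import time

def wait_until_settled(neutron, lb_id):
    # LBaaS v2 applies one change at a time; poll until the LB settles.
    while neutron.show_loadbalancer(lb_id)['loadbalancer'][
            'provisioning_status'].startswith('PENDING'):
        time.sleep(2)

def teardown(neutron, lb_id, listener_ids, pool_ids):
    for pool_id in pool_ids:              # pools first ...
        neutron.delete_lbaas_pool(pool_id)
        wait_until_settled(neutron, lb_id)
    for listener_id in listener_ids:      # ... then listeners ...
        neutron.delete_listener(listener_id)
        wait_until_settled(neutron, lb_id)
    neutron.delete_loadbalancer(lb_id)    # ... load balancer last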

Revision history for this message
Turbo Fredriksson (turbo-bayour) wrote :

I only see the following (several times), which isn't very revealing:

2017-03-12 17:34:37.037 8832 INFO neutron.wsgi [req-8b4a5096-7305-4a6b-945a-a32961077fe4 9bb439da8e6b4c28b372671f7e495c24 0fc3d7b2705b46d2a0895071358477b2 - - -] 10.0.3.252 - - [12/Mar/2017 17:34:37] "GET /v2.0/lbaas/loadbalancers/e5f0a131-1b80-4f8c-9c02-961ac7ab53e4.json HTTP/1.1" 200 827 0.913516
2017-03-12 17:34:37.735 8832 INFO neutron.api.v2.resource [req-047c5281-5c4f-4a2e-a987-a67fd8d4e290 9bb439da8e6b4c28b372671f7e495c24 0fc3d7b2705b46d2a0895071358477b2 - - -] delete failed (client error): There was a conflict when trying to complete your request.
2017-03-12 17:34:37.737 8832 INFO neutron.wsgi [req-047c5281-5c4f-4a2e-a987-a67fd8d4e290 9bb439da8e6b4c28b372671f7e495c24 0fc3d7b2705b46d2a0895071358477b2 - - -] 10.0.3.252 - - [12/Mar/2017 17:34:37] "DELETE /v2.0/lbaas/loadbalancers/e5f0a131-1b80-4f8c-9c02-961ac7ab53e4.json HTTP/1.1" 409 343 0.695675

Revision history for this message
Turbo Fredriksson (turbo-bayour) wrote :

But because there ARE pools left, Heat shouldn't have gone on to try to delete the LB, so the error is "irrelevant" (from Heat's point of view).
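
In other words, a guard along these lines before the DELETE call would avoid the conflict entirely (a hypothetical sketch, not the actual Heat code; it assumes pool bodies expose a 'loadbalancers' list):

def lb_is_unreferenced(neutron, lb_id):
    # Only attempt delete_loadbalancer once no pool still points at this LB.
    return not any(lb['id'] == lb_id
                   for pool in neutron.list_lbaas_pools()['pools']
                   for lb in pool.get('loadbalancers', []))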

Rico Lin (rico-lin)
Changed in heat:
milestone: none → no-priority-tag-bugs