Too many pools created from heat template when both listeners and pools depend on an item
| Affects | Status | Importance | Assigned to | Milestone |
|---|---|---|---|---|
| octavia | Fix Released | Critical | Unassigned | |
Bug Description
When you deploy a heat template in which both listeners and pools depend on an item, the order of locking can cause additional pools to be created erroneously.
Excerpt of heat template showing the issue:
  ##### LOADBALANCERS #####
  test-loadbalancer:
    type: OS::Neutron::LBaaS::LoadBalancer
    properties:
      name: test
      description: test
      vip_subnet: { get_param: subnet }

  ##### LISTENERS #####
  http-listener:
    type: OS::Neutron::LBaaS::Listener
    depends_on: test-loadbalancer
    properties:
      name: listener1
      description: listener1
      protocol_port: 80
      loadbalancer: { get_resource: test-loadbalancer }
      protocol: HTTP

  https-listener:
    type: OS::Neutron::LBaaS::Listener
    depends_on: http-listener
    properties:
      name: listener2
      description: listener2
      protocol_port: 443
      loadbalancer: { get_resource: test-loadbalancer }
      protocol: TERMINATED_HTTPS
      default_tls_container_ref: { get_param: tls_container_ref }

  ##### POOLS #####
  http-pool:
    type: OS::Neutron::LBaaS::Pool
    depends_on: http-listener
    properties:
      name: pool1
      description: pool1
      lb_algorithm: 'ROUND_ROBIN'
      listener: { get_resource: http-listener }
      protocol: HTTP

  https-pool:
    type: OS::Neutron::LBaaS::Pool
    depends_on: https-listener
    properties:
      name: pool2
      description: pool2
      lb_algorithm: 'ROUND_ROBIN'
      listener: { get_resource: https-listener }
      protocol: HTTP
After the http-listener is created, a pool and another listener both attempt to create concurrently, and we end up with extra pools (not always the same number).
affects: neutron → octavia
We believe that this may be due to https://github.com/openstack/neutron-lbaas/blob/master/neutron_lbaas/services/loadbalancer/plugin.py#L537-L543 performing test_and_set_status first and then creating the listener, while https://github.com/openstack/neutron-lbaas/blob/master/neutron_lbaas/services/loadbalancer/plugin.py#L668-L671 creates the pool first and then performs the test_and_set_status.
We suspect that the listener creation puts the load balancer into PENDING_UPDATE while it is being set up, but pool creation attempts keep persisting pool records until the listener finally completes (once the status goes back to ACTIVE), leaving the extra pools behind.
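A minimal Python sketch of the suspected ordering bug. The class, function names, and retry loop here are illustrative stand-ins, not the actual neutron-lbaas code; the point is only the difference between lock-then-create (listener path) and create-then-lock (pool path):

```python
import threading

class LoadBalancer:
    def __init__(self):
        self.status = "ACTIVE"
        self._lock = threading.Lock()
        self.pools = []

    def test_and_set_status(self, new_status):
        # Atomically flip ACTIVE -> new_status; refuse when another
        # operation already holds the load balancer.
        with self._lock:
            if self.status != "ACTIVE":
                raise RuntimeError("immutable: %s" % self.status)
            self.status = new_status

def create_pool(lb, name):
    # Pool path (analogue of plugin.py L668-L671): the record is
    # persisted *before* the status check, so a failed check still
    # leaves a pool behind.
    lb.pools.append(name)
    lb.test_and_set_status("PENDING_UPDATE")
    lb.status = "ACTIVE"

lb = LoadBalancer()
lb.test_and_set_status("PENDING_UPDATE")  # a listener create is in flight
for attempt in range(3):                  # heat retries while the LB is busy
    try:
        create_pool(lb, "pool1")
    except RuntimeError:
        pass
print(len(lb.pools))  # three pool records despite zero successful creates
```

Doing the `test_and_set_status` before persisting the pool record, as the listener path does, would make each rejected retry side-effect free.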