Trying to remove a load balancer pool (which contains members) via Horizon ends with an error

Bug #1242338 reported by Rami Vaknin
Affects                        Status        Importance  Assigned to       Milestone
OpenStack Dashboard (Horizon)  Invalid       Undecided   Akihiro Motoki
neutron                        Fix Released  Medium      Eugene Nikanorov

Bug Description

I tried to remove a pool that has 2 members and a health monitor; the operation failed with the following popup:
"Error: Unable to delete pool. 409-{u'NeutronError': {u'message': u'Pool f5004d04-4461-4a9a-aa7c-04a9bdfde974 is still in use', u'type': u'PoolInUse', u'detail': u''}}"

I did expect this operation to fail; I just didn't expect it to be available in Horizon while the pool still had other objects associated with it, and I didn't expect it to leave the pool in "PENDING_DELETE" status.

The exception from the log file:

2013-10-20 16:12:13.564 22804 ERROR neutron.services.loadbalancer.drivers.haproxy.agent_manager [-] Unable to destroy device for pool: f5004d04-4461-4a9a-aa7c-04a9bdfde974
2013-10-20 16:12:13.564 22804 TRACE neutron.services.loadbalancer.drivers.haproxy.agent_manager Traceback (most recent call last):
2013-10-20 16:12:13.564 22804 TRACE neutron.services.loadbalancer.drivers.haproxy.agent_manager File "/usr/lib/python2.6/site-packages/neutron/services/loadbalancer/drivers/haproxy/agent_manager.py", line 244, in destroy_device
2013-10-20 16:12:13.564 22804 TRACE neutron.services.loadbalancer.drivers.haproxy.agent_manager self.driver.destroy(pool_id)
2013-10-20 16:12:13.564 22804 TRACE neutron.services.loadbalancer.drivers.haproxy.agent_manager File "/usr/lib/python2.6/site-packages/neutron/services/loadbalancer/drivers/haproxy/namespace_driver.py", line 92, in destroy
2013-10-20 16:12:13.564 22804 TRACE neutron.services.loadbalancer.drivers.haproxy.agent_manager ns.garbage_collect_namespace()
2013-10-20 16:12:13.564 22804 TRACE neutron.services.loadbalancer.drivers.haproxy.agent_manager File "/usr/lib/python2.6/site-packages/neutron/agent/linux/ip_lib.py", line 141, in garbage_collect_namespace
2013-10-20 16:12:13.564 22804 TRACE neutron.services.loadbalancer.drivers.haproxy.agent_manager self.netns.delete(self.namespace)
2013-10-20 16:12:13.564 22804 TRACE neutron.services.loadbalancer.drivers.haproxy.agent_manager File "/usr/lib/python2.6/site-packages/neutron/agent/linux/ip_lib.py", line 440, in delete
2013-10-20 16:12:13.564 22804 TRACE neutron.services.loadbalancer.drivers.haproxy.agent_manager self._as_root('delete', name, use_root_namespace=True)
2013-10-20 16:12:13.564 22804 TRACE neutron.services.loadbalancer.drivers.haproxy.agent_manager File "/usr/lib/python2.6/site-packages/neutron/agent/linux/ip_lib.py", line 206, in _as_root
2013-10-20 16:12:13.564 22804 TRACE neutron.services.loadbalancer.drivers.haproxy.agent_manager kwargs.get('use_root_namespace', False))
2013-10-20 16:12:13.564 22804 TRACE neutron.services.loadbalancer.drivers.haproxy.agent_manager File "/usr/lib/python2.6/site-packages/neutron/agent/linux/ip_lib.py", line 65, in _as_root
2013-10-20 16:12:13.564 22804 TRACE neutron.services.loadbalancer.drivers.haproxy.agent_manager namespace)
2013-10-20 16:12:13.564 22804 TRACE neutron.services.loadbalancer.drivers.haproxy.agent_manager File "/usr/lib/python2.6/site-packages/neutron/agent/linux/ip_lib.py", line 76, in _execute
2013-10-20 16:12:13.564 22804 TRACE neutron.services.loadbalancer.drivers.haproxy.agent_manager root_helper=root_helper)
2013-10-20 16:12:13.564 22804 TRACE neutron.services.loadbalancer.drivers.haproxy.agent_manager File "/usr/lib/python2.6/site-packages/neutron/agent/linux/utils.py", line 61, in execute
2013-10-20 16:12:13.564 22804 TRACE neutron.services.loadbalancer.drivers.haproxy.agent_manager raise RuntimeError(m)
2013-10-20 16:12:13.564 22804 TRACE neutron.services.loadbalancer.drivers.haproxy.agent_manager RuntimeError:
2013-10-20 16:12:13.564 22804 TRACE neutron.services.loadbalancer.drivers.haproxy.agent_manager Command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'delete', 'qlbaas-f5004d04-4461-4a9a-aa7c-04a9bdfde974']
2013-10-20 16:12:13.564 22804 TRACE neutron.services.loadbalancer.drivers.haproxy.agent_manager Exit code: 255
2013-10-20 16:12:13.564 22804 TRACE neutron.services.loadbalancer.drivers.haproxy.agent_manager Stdout: ''
2013-10-20 16:12:13.564 22804 TRACE neutron.services.loadbalancer.drivers.haproxy.agent_manager Stderr: 'Cannot remove /var/run/netns/qlbaas-f5004d04-4461-4a9a-aa7c-04a9bdfde974: Device or resource busy\n'
2013-10-20 16:12:13.564 22804 TRACE neutron.services.loadbalancer.drivers.haproxy.agent_manager

Changed in neutron:
assignee: nobody → Eugene Nikanorov (enikanorov)
affects: neutron → horizon
Changed in horizon:
status: New → Confirmed
tags: added: lbaas
affects: horizon → neutron
Changed in neutron:
importance: Undecided → Medium
Revision history for this message
Akihiro Motoki (amotoki) wrote :

I believe this is a bug in neutron that also affects Horizon.

I will check whether Horizon handles an exception from neutronclient properly after the Neutron bug is fixed.
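The handling being checked can be sketched as follows. This is an illustrative sketch only, with assumed names (Conflict stands in for neutronclient's 409 error, and delete_pool_via_api for the real client call); it is not actual Horizon code:

```python
# Sketch of surfacing a backend 409 as a user-facing message instead of
# an internal server error. All names here are assumptions for
# illustration, not real Horizon or neutronclient identifiers.

class Conflict(Exception):
    """Stand-in for neutronclient's 409 Conflict (e.g. PoolInUse)."""

def delete_pool_via_api(pool_id):
    # Simulates the API call failing with 409 for an in-use pool.
    raise Conflict("Pool %s is still in use" % pool_id)

def handle_delete(pool_id):
    """Return a user-facing message instead of letting a 500 escape."""
    try:
        delete_pool_via_api(pool_id)
        return "Deleted pool %s" % pool_id
    except Conflict as e:
        # Surface the backend error as a popup message, not a traceback.
        return "Error: Unable to delete pool. %s" % e
```

With this pattern the user sees the popup reported above rather than an internal server error.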

tags: added: havana-backport-potential
Changed in horizon:
assignee: nobody → Akihiro Motoki (amotoki)
Revision history for this message
Eugene Nikanorov (enikanorov) wrote :

For Horizon it would be helpful to evaluate the UX of the delete operation: does it make sense to allow an operation that is known to fail?

Revision history for this message
Akihiro Motoki (amotoki) wrote :

The UX topic is related to this bug, but I would like to deal with UX topics in a separate bug.
What I see in this bug is that Horizon raises an internal server error, though it is currently sometimes caused by the backend project.

I agree that whether some operations are allowed should be determined by the associated resources: pool-delete should not be allowed while a VIP or member is associated with the pool. I plan to fix this in a separate patch for better UX.
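The guard described above can be sketched as a simple precondition check. The pool shape here (a dict with "vip_id" and "members" keys) is an assumption for illustration, not the actual Horizon or Neutron data model:

```python
# Hypothetical guard for the rule above: pool deletion is only offered
# when no VIP and no members are associated with the pool.
# The dict keys are assumed, not real API field names.

def can_delete_pool(pool):
    """Return True only when the pool has no VIP and no members."""
    return not pool.get("vip_id") and not pool.get("members")
```

In Horizon, a check like this would drive the delete action's availability, so the button is disabled up front instead of failing with a 409 after the fact.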

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix proposed to neutron (master)

Fix proposed to branch: master
Review: https://review.openstack.org/53122

Changed in neutron:
status: Confirmed → In Progress
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix merged to neutron (master)

Reviewed: https://review.openstack.org/53122
Committed: http://github.com/openstack/neutron/commit/c9b6c15d5de4d5ee326d7a870c2b2668f7909efa
Submitter: Jenkins
Branch: master

commit c9b6c15d5de4d5ee326d7a870c2b2668f7909efa
Author: Eugene Nikanorov <email address hidden>
Date: Tue Oct 22 17:49:00 2013 +0400

    LBaaS: Fix incorrect pool status change

    Avoid incorrect status change when deleting the pool.
    We can check for the conditions prior putting the pool
    to PENDING_DELETE state, in case delete conditions are met
    it is safe to change the state to PENDING_DELETE.
    Also, change create_vip and update_vip operations to respect
    PENDING_DELETE and avoid race conditions.

    Change-Id: I9f526901eb85bdb83cf4ff8131460eb592c900f8
    Closes-Bug: #1242338
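The ordering described in the commit message can be sketched as follows. This is an illustrative sketch with assumed names (delete_pool and the dict-based pool are not the actual Neutron code; PoolInUse matches the error type from the report):

```python
# Sketch of the fix's ordering: validate delete preconditions *before*
# the status transition, so a refused delete never leaves the pool
# stuck in PENDING_DELETE. Names and data shapes are assumptions.

class PoolInUse(Exception):
    pass

def delete_pool(pool):
    # Check the conditions first; only when they are met is it safe to
    # move the pool to PENDING_DELETE.
    if pool.get("members") or pool.get("vip_id"):
        raise PoolInUse("Pool %s is still in use" % pool["id"])
    pool["status"] = "PENDING_DELETE"
    # ... the driver then destroys the backend device and the record
    # is removed.
    return pool
```

The buggy ordering did the status change first, so a rejected delete (as in this report) left the pool in PENDING_DELETE.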

Changed in neutron:
status: In Progress → Fix Committed
Akihiro Motoki (amotoki)
Changed in neutron:
milestone: none → icehouse-1
Thierry Carrez (ttx)
Changed in neutron:
status: Fix Committed → Fix Released
Revision history for this message
Akihiro Motoki (amotoki) wrote :

The point related to UX perspective (is it reasonable to allow deleting a VIP with members) is filed as bug 1296419.

I confirmed that Horizon does not return an internal server error even when deleting a VIP with members, and that the VIP is successfully deleted. I am marking this bug as Invalid for Horizon.

Changed in horizon:
status: New → Invalid
Thierry Carrez (ttx)
Changed in neutron:
milestone: icehouse-1 → 2014.1