Unable to attach service subnet to private router when gateway is already in use

Bug #1410221 reported by Rodrigo Barbieri
Affects: OpenStack Shared File Systems Service (Manila)
Status: Fix Released
Importance: Medium
Assigned to: Unassigned
Milestone: 2015.1.0

Bug Description

Steps to reproduce:

1) Clean devstack environment
2) Create a Share Network (share_net_1) on private network
3) Create a Share (share_1) on share_net_1, wait for available status
4) Remove private network interface from router1
5) Create a router (router2)
6) Attach private network to router2
7) Create a Share Network (share_net_2) on private network
8) Create a share (share_2) on share_net_2 and check the trace below in the m-shr log (a sketch of steps 4-6 follows this list).
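For steps 4) through 6), a minimal Python sketch using python-neutronclient is shown below. The credentials, auth_url, and the private subnet / router IDs are placeholders for the devstack environment; steps 2, 3, 7 and 8 are the usual manila share-network-create / manila create operations.

# Minimal sketch of steps 4-6, assuming python-neutronclient (Kilo era) and
# placeholder credentials/IDs for the devstack environment.
from neutronclient.v2_0 import client as neutron_client

neutron = neutron_client.Client(username='admin',
                                password='secret',
                                tenant_name='demo',
                                auth_url='http://127.0.0.1:5000/v2.0')

private_subnet_id = '<private-subnet-uuid>'  # subnet of the "private" network
router1_id = '<router1-uuid>'

# Step 4: remove the private subnet interface from router1.
neutron.remove_interface_router(router1_id, {'subnet_id': private_subnet_id})

# Step 5: create router2.
router2 = neutron.create_router({'router': {'name': 'router2'}})['router']

# Step 6: attach the private subnet to router2.
neutron.add_interface_router(router2['id'], {'subnet_id': private_subnet_id})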

Explanation: The default (generic) driver workflow creates a share server for each share network. It also creates a subnet in the Manila service network for each tenant private network and attaches that service subnet to the private network's router. Because share_net_2 refers to the same private network as share_net_1, no new service subnet is created; the driver reuses the one created for share_net_1. When the driver then tries to attach that existing service subnet to the private network's current router, the call fails because the subnet's gateway IP is already in use, having been allocated to router1.
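To illustrate the failing Neutron call (a sketch only; IDs and credentials are placeholders, and it mimics what manila.network.neutron.api.router_add_interface does rather than reusing the driver code):

# Sketch: re-attaching the service subnet, whose gateway IP (10.254.0.1) is
# still allocated to the port that connected it to router1, makes Neutron
# return a 409, which Manila surfaces as NetworkException.
from neutronclient.common import exceptions as neutron_exc
from neutronclient.v2_0 import client as neutron_client

neutron = neutron_client.Client(username='admin', password='secret',
                                tenant_name='demo',
                                auth_url='http://127.0.0.1:5000/v2.0')

service_subnet_id = '<service-subnet-uuid>'  # created when share_net_1 was set up

try:
    neutron.add_interface_router('<router2-uuid>', {'subnet_id': service_subnet_id})
except neutron_exc.NeutronClientException as e:
    print('Attach failed: %s' % e)  # "The IP address 10.254.0.1 is in use."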

2015-01-12 13:11:38.293 ERROR manila.share.manager [req-9a727f10-2424-47d4-92bf-c78d3aa787ac b3324d3fdcea41bb8e56e737032839eb fe12ff75372b491ea54cf8fb41f0401a] Failed to get share server for share creation.
2015-01-12 13:11:38.327 ERROR oslo.messaging.rpc.dispatcher [req-9a727f10-2424-47d4-92bf-c78d3aa787ac b3324d3fdcea41bb8e56e737032839eb fe12ff75372b491ea54cf8fb41f0401a] Exception during message handling: Unable to complete operation for network f2ba1dcd-a489-4642-8dd4-eb5ce211260c. The IP address 10.254.0.1 is in use.
2015-01-12 13:11:38.327 TRACE oslo.messaging.rpc.dispatcher Traceback (most recent call last):
2015-01-12 13:11:38.327 TRACE oslo.messaging.rpc.dispatcher File "/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 137, in _dispatch_and_reply
2015-01-12 13:11:38.327 TRACE oslo.messaging.rpc.dispatcher incoming.message))
2015-01-12 13:11:38.327 TRACE oslo.messaging.rpc.dispatcher File "/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 180, in _dispatch
2015-01-12 13:11:38.327 TRACE oslo.messaging.rpc.dispatcher return self._do_dispatch(endpoint, method, ctxt, args)
2015-01-12 13:11:38.327 TRACE oslo.messaging.rpc.dispatcher File "/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 126, in _do_dispatch
2015-01-12 13:11:38.327 TRACE oslo.messaging.rpc.dispatcher result = getattr(endpoint, method)(ctxt, **new_args)
2015-01-12 13:11:38.327 TRACE oslo.messaging.rpc.dispatcher File "/opt/stack/manila/manila/share/manager.py", line 236, in create_share
2015-01-12 13:11:38.327 TRACE oslo.messaging.rpc.dispatcher {'status': 'error'})
2015-01-12 13:11:38.327 TRACE oslo.messaging.rpc.dispatcher File "/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 82, in __exit__
2015-01-12 13:11:38.327 TRACE oslo.messaging.rpc.dispatcher six.reraise(self.type_, self.value, self.tb)
2015-01-12 13:11:38.327 TRACE oslo.messaging.rpc.dispatcher File "/opt/stack/manila/manila/share/manager.py", line 230, in create_share
2015-01-12 13:11:38.327 TRACE oslo.messaging.rpc.dispatcher context, share_network_id, share_id)
2015-01-12 13:11:38.327 TRACE oslo.messaging.rpc.dispatcher File "/opt/stack/manila/manila/share/manager.py", line 181, in _provide_share_server_for_share
2015-01-12 13:11:38.327 TRACE oslo.messaging.rpc.dispatcher return _provide_share_server_for_share()
2015-01-12 13:11:38.327 TRACE oslo.messaging.rpc.dispatcher File "/usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py", line 431, in inner
2015-01-12 13:11:38.327 TRACE oslo.messaging.rpc.dispatcher return f(*args, **kwargs)
2015-01-12 13:11:38.327 TRACE oslo.messaging.rpc.dispatcher File "/opt/stack/manila/manila/share/manager.py", line 173, in _provide_share_server_for_share
2015-01-12 13:11:38.327 TRACE oslo.messaging.rpc.dispatcher share_server = self._setup_server(context, share_server)
2015-01-12 13:11:38.327 TRACE oslo.messaging.rpc.dispatcher File "/opt/stack/manila/manila/share/manager.py", line 482, in _setup_server
2015-01-12 13:11:38.327 TRACE oslo.messaging.rpc.dispatcher self.driver.deallocate_network(context, share_server['id'])
2015-01-12 13:11:38.327 TRACE oslo.messaging.rpc.dispatcher File "/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 82, in __exit__
2015-01-12 13:11:38.327 TRACE oslo.messaging.rpc.dispatcher six.reraise(self.type_, self.value, self.tb)
2015-01-12 13:11:38.327 TRACE oslo.messaging.rpc.dispatcher File "/opt/stack/manila/manila/share/manager.py", line 455, in _setup_server
2015-01-12 13:11:38.327 TRACE oslo.messaging.rpc.dispatcher metadata=metadata)
2015-01-12 13:11:38.327 TRACE oslo.messaging.rpc.dispatcher File "/opt/stack/manila/manila/share/drivers/generic.py", line 607, in setup_server
2015-01-12 13:11:38.327 TRACE oslo.messaging.rpc.dispatcher network_info['neutron_subnet_id'],
2015-01-12 13:11:38.327 TRACE oslo.messaging.rpc.dispatcher File "/opt/stack/manila/manila/share/drivers/service_instance.py", line 297, in set_up_service_instance
2015-01-12 13:11:38.327 TRACE oslo.messaging.rpc.dispatcher neutron_subnet_id)
2015-01-12 13:11:38.327 TRACE oslo.messaging.rpc.dispatcher File "/opt/stack/manila/manila/share/drivers/service_instance.py", line 392, in _create_service_instance
2015-01-12 13:11:38.327 TRACE oslo.messaging.rpc.dispatcher neutron_subnet_id)
2015-01-12 13:11:38.327 TRACE oslo.messaging.rpc.dispatcher File "/usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py", line 431, in inner
2015-01-12 13:11:38.327 TRACE oslo.messaging.rpc.dispatcher return f(*args, **kwargs)
2015-01-12 13:11:38.327 TRACE oslo.messaging.rpc.dispatcher File "/opt/stack/manila/manila/share/drivers/service_instance.py", line 506, in _setup_network_for_instance
2015-01-12 13:11:38.327 TRACE oslo.messaging.rpc.dispatcher service_subnet['id'])
2015-01-12 13:11:38.327 TRACE oslo.messaging.rpc.dispatcher File "/opt/stack/manila/manila/network/neutron/api.py", line 275, in router_add_interface
2015-01-12 13:11:38.327 TRACE oslo.messaging.rpc.dispatcher message=e.message)
2015-01-12 13:11:38.327 TRACE oslo.messaging.rpc.dispatcher NetworkException: Unable to complete operation for network f2ba1dcd-a489-4642-8dd4-eb5ce211260c. The IP address 10.254.0.1 is in use.
2015-01-12 13:11:38.327 TRACE oslo.messaging.rpc.dispatcher

Tags: driver generic
Changed in manila:
importance: Undecided → High
assignee: nobody → Valeriy Ponomaryov (vponomaryov)
milestone: none → kilo-2
Changed in manila:
assignee: Valeriy Ponomaryov (vponomaryov) → nobody
Revision history for this message
Valeriy Ponomaryov (vponomaryov) wrote:

I am not sure there is anything to do here.

According to your reproduction steps, you lost access to share_1 after removing your subnet from the router, but you did not do the same for the service subnet.

The recommendation is to set the config option "connect_share_server_to_tenant_network" to True to remove the dependency on the router. However, this is currently blocked by another bug: https://bugs.launchpad.net/manila/+bug/1410246

So we just need to merge the fix (https://review.openstack.org/148230) for that other bug and use the direct connection.
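For reference, the option lives in manila.conf. The snippet below is a minimal sketch, assuming a Kilo-era devstack layout where the generic driver reads its options from the [DEFAULT] section (later releases may expect it in a per-backend section instead):

[DEFAULT]
# Plug the share server (service instance) directly into the tenant network
# instead of reaching it through a router on the service network.
connect_share_server_to_tenant_network = True

With the direct connection enabled, the driver no longer needs to attach the service subnet to the tenant's router, so the gateway conflict described above cannot occur.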

tags: added: driver generic
Changed in manila:
importance: High → Medium
Changed in manila:
milestone: kilo-2 → kilo-3
Revision history for this message
Rodrigo Barbieri (rodrigo-barbieri2010) wrote:

After https://bugs.launchpad.net/manila/+bug/1410246 was fixed by https://review.openstack.org/148230, this bug no longer occurs when "connect_share_server_to_tenant_network" is set to True. I believe this is the only correct way to handle this scenario, since nothing can be done to fix it when "connect_share_server_to_tenant_network" is set to False.

Changed in manila:
status: New → Fix Committed
Changed in manila:
milestone: kilo-3 → kilo-2
Thierry Carrez (ttx)
Changed in manila:
status: Fix Committed → Fix Released
Thierry Carrez (ttx)
Changed in manila:
milestone: kilo-2 → 2015.1.0