Incorrect port unbind request on instance delete

Bug #1634269 reported by AJAY KALAMBUR on 2016-10-17
This bug affects 1 person
Affects Status Importance Assigned to Milestone
OpenStack Compute (nova)
Low
AJAY KALAMBUR

Bug Description

Nova sends a port unbind request with binding host "", and this can cause problems when nova's cached instance-port mappings differ from Neutron's view.

Steps
1. Create a neutron port
2. Create a VM and launch instance with this port
3. Shutdown nova compute and neutron agents on compute node where this VM was spawned
4. Unbind the port from the VM and delete the VM (note that this is an offline delete, a pure DB operation, since the request cannot be completed on the compute node while it is down)
5. Now create a new VM with same port on a different compute node
6. Bring up nova compute on the first node

After 30 minutes the periodic cleanup of running deleted instances runs and detects a VM in libvirt that is not present in the nova database. It cleans up the VM, but in the process it unbinds the pre-existing port, which is now attached to a completely different VM on a different compute node.

Root cause: nova's unbind request does not check whether, from Neutron's standpoint, the port is still bound to the current instance.

AJAY KALAMBUR (akalambu) wrote :

I tried the following fix and it seemed to work:
diff --git a/nova/network/neutronv2/api.py b/nova/network/neutronv2/api.py
index 7192031..c526393 100644
--- a/nova/network/neutronv2/api.py
+++ b/nova/network/neutronv2/api.py
@@ -1170,7 +1170,11 @@ class API(base_api.NetworkAPI):
         ports = set(ports) - ports_to_skip

         # Reset device_id and device_owner for the ports that are skipped
-        self._unbind_ports(context, ports_to_skip, neutron)
+        # Only unbind port if neutron sees this port as bound
+        if data.get('ports', []):
+            self._unbind_ports(context, ports_to_skip, neutron)
+        else:
+            LOG.debug("Skipping port unbind as neutron does not see any ports attached")
         # Delete the rest of the ports
         self._delete_ports(neutron, instance, ports, raise_if_fail=True)
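The idea behind the patch can be illustrated in isolation: before unbinding, consult Neutron's current view of the port and only proceed if it still belongs to this instance. The sketch below is a minimal stand-alone version under stated assumptions; `should_unbind_port` is a hypothetical helper (not nova's actual code), and the dicts mimic the shape of a Neutron port as returned by `show_port()['port']`.

```python
# Minimal sketch (not nova's actual code): decide whether to unbind a
# port based on Neutron's current view rather than nova's possibly
# stale cache. `should_unbind_port` is a hypothetical helper.

def should_unbind_port(port, instance_uuid):
    """Return True only if Neutron still ties the port to this instance.

    `port` mimics a Neutron port dict, e.g.
    {'id': ..., 'device_id': ..., 'binding:host_id': ...}.
    """
    # If device_id no longer matches, the port was re-bound to another
    # VM (or freed) after nova's cache went stale; leave it alone.
    return port.get('device_id') == instance_uuid


# Scenario from the bug: while the first compute node was down, the
# port was attached to a new VM on another node.
stale_port = {'id': 'port-1', 'device_id': 'vm-2', 'binding:host_id': 'node-2'}
print(should_unbind_port(stale_port, 'vm-1'))   # False: skip the unbind
print(should_unbind_port(stale_port, 'vm-2'))   # True: safe to unbind
```

The author's diff approximates this by checking whether Neutron reports any ports attached at all; a per-port `device_id` comparison, as sketched here, is the stricter form of the same check.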

AJAY KALAMBUR (akalambu) on 2016-10-17
Changed in nova:
assignee: nobody → AJAY KALAMBUR (akalambu)
Changed in nova:
status: New → In Progress
Changed in nova:
assignee: AJAY KALAMBUR (akalambu) → Ian Wells (ijw-ubuntu)
Jay Pipes (jaypipes) wrote :

Marking as Low priority as I don't see this as anything more than a corner case.

Changed in nova:
importance: Undecided → Low
Ian Wells (ijw-ubuntu) wrote :

This is a means to re-use the port (and its allocated address and MAC) without deleting the port and creating a new one with the same address and MAC. Obviously, you can't create a port before deletion with the same address and MAC, but you also can't create the port reliably after deletion because you can lose either address to another port - it's a race condition.

In the original test case, the issue was a host going down and causing a VM failure. The VM doesn't delete promptly on a freshly downed host (and deleting a VM takes an uncertain amount of time in any case), so it's impossible to free the port by conventional means, and replacing the VM requires recycling its address.

Sean Dague (sdague) wrote :

There are no currently open reviews on this bug, changing
the status back to the previous state and unassigning. If
there are active reviews related to this bug, please include
links in comments.

Changed in nova:
status: In Progress → New
assignee: Ian Wells (ijw-ubuntu) → nobody
Sean Dague (sdague) wrote :

Found open reviews for this bug in gerrit, setting to In Progress.

review: https://review.openstack.org/387687 in branch: master

Changed in nova:
status: New → In Progress
assignee: nobody → Ian Wells (ijw-ubuntu)
Changed in nova:
assignee: Ian Wells (ijw-ubuntu) → AJAY KALAMBUR (akalambu)
Matt Riedemann (mriedem) on 2019-03-12
tags: added: neutron
Changed in nova:
assignee: AJAY KALAMBUR (akalambu) → Matt Riedemann (mriedem)
Matt Riedemann (mriedem) on 2019-03-12
Changed in nova:
assignee: Matt Riedemann (mriedem) → AJAY KALAMBUR (akalambu)