OVS drops RARP packets by QEMU upon live-migration - VM temporarily disconnected

Bug #1414559 reported by Liaz Kamper on 2015-01-26
This bug affects 12 people
Affects: OpenStack Compute (nova)
Assigned to: Oleg Bondarev

Bug Description

When live-migrating a VM, QEMU sends five RARP packets to let the network re-learn the new location of the VM's MAC address.
However, the VIF creation scheme between nova-compute and neutron-ovs-agent drops these RARPs:
1. nova creates a port on OVS, but without the internal VLAN tag.
2. At this stage, all packets coming out of the VM (or the QEMU process it runs in) are dropped.
3. QEMU sends five RARP packets to allow MAC learning. These packets are dropped as described in #2.
4. Meanwhile, neutron-ovs-agent loops every POLLING_INTERVAL and scans for new ports. Once it detects that a new port has been added, it reads the port's properties and assigns the correct internal tag, which allows the VM to connect.

The flow above suggests that:
1. RARP packets are dropped, so MAC learning takes much longer and depends on internal traffic and advertising by the VM.
2. The VM is disconnected from the network for a mean period of POLLING_INTERVAL/2.

It seems this could be solved by direct messaging between the nova VIF driver and neutron-ovs-agent.
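The race described above can be modelled with a toy version of the agent's scan loop (names are illustrative, not actual neutron code): a port plugged at a random moment inside the polling interval stays untagged, and therefore disconnected, for POLLING_INTERVAL/2 on average.

```python
POLLING_INTERVAL = 2.0  # seconds; illustrative default for the agent loop


def scan_once(current_ports, known_ports, assign_tag):
    """One iteration of a simplified neutron-ovs-agent scan loop.

    Ports plugged since the previous iteration have no internal VLAN
    tag yet, so everything they emit (including QEMU's five RARP
    frames after live migration) is dropped until assign_tag() runs.
    Returns the set of newly detected ports.
    """
    new_ports = current_ports - known_ports
    for port in sorted(new_ports):
        assign_tag(port)  # only now does the VM's traffic start flowing
    known_ports |= new_ports
    return new_ports


def mean_disconnect_time(polling_interval=POLLING_INTERVAL):
    # A port plugged at a uniformly random moment within the interval
    # waits, on average, half the interval for the next scan.
    return polling_interval / 2.0
```

This is only a sketch of the timing argument; the real agent does considerably more work per iteration.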

Dongcan Ye (hellochosen) wrote :

Also encountered in my environment.
I use OVS + VLAN mode; after live migration, the VM sends RARP packets, but they do not carry the VLAN tag, so they cannot reach the outside network.

Changed in neutron:
assignee: nobody → sean mooney (sean-k-mooney)
status: New → Incomplete
sean mooney (sean-k-mooney) wrote :

Hi, I have been looking into this today and have not been able to reproduce it.

I have been testing with the head of master, and each time I live-migrate I receive the RARP packets with the appropriate VLAN tag.

I have not found a specific commit yet that fixes this issue, but it appears to be resolved on current master.

Can you revalidate or provide more information?

If not, I think this can be marked as invalid, as the behavior is no longer present.

sean mooney (sean-k-mooney) wrote :

I will try to revalidate tomorrow with larger Fedora VMs running iperf, to generate more CPU, memory and network load in the VMs I am live-migrating and see if that affects the result.

sean mooney (sean-k-mooney) wrote :

I have tried to recreate this bug using both the CirrOS nano image and
a 4-CPU, 1 GB Ubuntu image, and have not been able to reproduce the behavior cited.

I used the CirrOS nano image to test live-migrating a very small VM, which should migrate on a very short time scale.
In the larger image I ran an iperf server and an iperf client connected over the loopback interface, to generate a lot of CPU and memory load and test a longer migration.

Bringing up the br-int interface and sniffing for RARP packets shows that the RARP packets are correctly VLAN-tagged in both cases.

sudo tshark -V -i br-int rarp
Frame 11: 64 bytes on wire (512 bits), 64 bytes captured (512 bits) on interface 0
    Interface id: 0
    Encapsulation type: Ethernet (1)
    Arrival Time: Aug 26, 2015 11:36:39.533142000 IST
    [Time shift for this packet: 0.000000000 seconds]
    Epoch Time: 1440585399.533142000 seconds
    [Time delta from previous captured frame: 0.350099000 seconds]
    [Time delta from previous displayed frame: 0.350099000 seconds]
    [Time since reference or first frame: 465.125834000 seconds]
    Frame Number: 11
    Frame Length: 64 bytes (512 bits)
    Capture Length: 64 bytes (512 bits)
    [Frame is marked: False]
    [Frame is ignored: False]
    [Protocols in frame: eth:vlan:arp]
Ethernet II, Src: fa:16:3e:47:8b:b0 (fa:16:3e:47:8b:b0), Dst: Broadcast (ff:ff:ff:ff:ff:ff)
    Destination: Broadcast (ff:ff:ff:ff:ff:ff)
        Address: Broadcast (ff:ff:ff:ff:ff:ff)
        .... ..1. .... .... .... .... = LG bit: Locally administered address (this is NOT the factory default)
        .... ...1 .... .... .... .... = IG bit: Group address (multicast/broadcast)
    Source: fa:16:3e:47:8b:b0 (fa:16:3e:47:8b:b0)
        Address: fa:16:3e:47:8b:b0 (fa:16:3e:47:8b:b0)
        .... ..1. .... .... .... .... = LG bit: Locally administered address (this is NOT the factory default)
        .... ...0 .... .... .... .... = IG bit: Individual address (unicast)
    Type: 802.1Q Virtual LAN (0x8100)
802.1Q Virtual LAN, PRI: 0, CFI: 0, ID: 1
    000. .... .... .... = Priority: Best Effort (default) (0)
    ...0 .... .... .... = CFI: Canonical (0)
    .... 0000 0000 0001 = ID: 1
    Type: RARP (0x8035)
    Padding: 0000000000000000000000000000
    Trailer: 00000000
Address Resolution Protocol (reverse request)
    Hardware type: Ethernet (1)
    Protocol type: IP (0x0800)
    Hardware size: 6
    Protocol size: 4
    Opcode: reverse request (3)
    Sender MAC address: fa:16:3e:47:8b:b0 (fa:16:3e:47:8b:b0)
    Sender IP address:
    Target MAC address: fa:16:3e:47:8b:b0 (fa:16:3e:47:8b:b0)
    Target IP address:

Unless the original reporter can provide more information on how to reproduce,
I will mark this as invalid at the end of the week, as it appears to be fixed.

sean mooney (sean-k-mooney) wrote :

Marking as invalid, as the bug cannot be reproduced.
Please reopen if this is still an issue for you and you can provide more information on how to recreate it.

Changed in neutron:
status: Incomplete → Invalid
Changed in neutron:
assignee: sean mooney (sean-k-mooney) → Oleg Bondarev (obondarev)
Radomir Dopieralski (deshipu) wrote :

We have a bug reported from our customers about this, and we have been able to reproduce it in our testing environments without any problems -- there is a noticeable pause in network connectivity of the guest right after migration.

We have a bug for it: https://bugs.launchpad.net/nova/+bug/1511430

Radomir Dopieralski (deshipu) wrote :

One note: this is much more noticeable when running a guest with an older kernel -- it takes several dozen seconds with a RHEL6 guest but only fractions of a second with a RHEL7 guest.

Changed in neutron:
status: Invalid → Triaged
importance: Undecided → Medium

Fix proposed to branch: master
Review: https://review.openstack.org/246898

Changed in neutron:
status: Triaged → In Progress
Oleg Bondarev (obondarev) wrote :
Changed in nova:
assignee: nobody → Oleg Bondarev (obondarev)
status: New → In Progress
Changed in neutron:
assignee: Oleg Bondarev (obondarev) → Swaminathan Vasudevan (swaminathan-vasudevan)
Changed in neutron:
assignee: Swaminathan Vasudevan (swaminathan-vasudevan) → Oleg Bondarev (obondarev)

Change abandoned by Sergey Belous (<email address hidden>) on branch: stable/mitaka
Review: https://review.openstack.org/297170

Reviewed: https://review.openstack.org/281137
Committed: https://git.openstack.org/cgit/openstack/nova/commit/?id=62bcfc595dc49a7035b95daadc72e8744c48c8e7
Submitter: Jenkins
Branch: master

commit 62bcfc595dc49a7035b95daadc72e8744c48c8e7
Author: Oleg Bondarev <email address hidden>
Date: Tue Feb 16 19:10:03 2016 +0300

    Add ability to filter migrations by instance uuid

    This will be used by dependent patch.

    Partial-Bug: #1414559
    Change-Id: I20470487287fa2a7aa919507073f75181368c3c0

Reviewed: https://review.openstack.org/246898
Committed: https://git.openstack.org/cgit/openstack/neutron/commit/?id=b7c303ee0a16a05c1fdb476dc7f4c7ca623a3f58
Submitter: Jenkins
Branch: master

commit b7c303ee0a16a05c1fdb476dc7f4c7ca623a3f58
Author: Oleg Bondarev <email address hidden>
Date: Wed Nov 18 12:15:09 2015 +0300

    Notify nova with network-vif-plugged in case of live migration

     - during live migration on pre migration step nova plugs instance
       vif device on the destination compute node;
     - L2 agent on destination host detects new device and requests device
       info from server;
     - server does not change port status since port is bound to another
       host (source host);
     - L2 agent processes device and sends update_device_up to server;
     - again server does not update status as port is bound to another host;

    Nova notifications are sent only in case of a port status change, so in
    this case no notifications are sent.

    The fix is to explicitly notify nova if agent reports device up from a host
    other than port's current host.

    This is the fix on neutron side, the actual fix of the bug is on nova side:
    change-id Ib1cb9c2f6eb2f5ce6280c685ae44a691665b4e98

    Closes-Bug: #1414559
    Change-Id: Ifa919a9076a3cc2696688af3feadf8d7fa9e6fc2
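The neutron-side behavior described in the commit message can be sketched as follows (function and field names here are illustrative, not the actual neutron ML2 code): when the device comes up on a host the port is not bound to, the port status does not change, so the usual status-change notification to nova never fires; the fix notifies nova explicitly in that case.

```python
def handle_update_device_up(port, reporting_host, notify_nova):
    """Sketch of the fix for the live-migration notification gap.

    port           -- dict with illustrative keys "id", "binding_host",
                      and "status"
    reporting_host -- host whose L2 agent reported the device up
    notify_nova    -- callback sending an event to nova

    Returns which path was taken, for illustration.
    """
    if reporting_host == port["binding_host"]:
        # Normal path: the status transition itself triggers the
        # nova notification machinery.
        port["status"] = "ACTIVE"
        return "status-change"
    # Live-migration path: the device is up on the destination host
    # while the port is still bound to the source host, so no status
    # change occurs -- notify nova explicitly instead.
    notify_nova("network-vif-plugged", port["id"])
    return "explicit-notify"
```

A rough sketch of the decision only; the real change lives in neutron's RPC handlers and nova notifier.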

Changed in neutron:
status: In Progress → Fix Released
tags: added: neutron-proactive-backport-potential
Miguel Angel Ajo (mangelajo) wrote :

Let's look at backportability, I agree.

We are hitting the same issue with the neutron ml2 linuxbridge driver. As pointed out by Andreas Scheuring in https://review.openstack.org/#/c/246910/ nova just creates the bridge but does not plug the physical interface into it. If the VM migration completes quickly, the RARP packets are sent out before the neutron-linuxbridge-agent polling loop has had a chance to detect the new interface and correctly set up the bridge's physical interface.

This issue was fixed in the openstack/neutron development milestone.

tags: removed: neutron-proactive-backport-potential

Change abandoned by Matt Riedemann (<email address hidden>) on branch: stable/mitaka
Review: https://review.openstack.org/415190

Change abandoned by Sean Dague (<email address hidden>) on branch: master
Review: https://review.openstack.org/246910
Reason: This review is > 4 weeks without comment, and is not mergeable in its current state. We are abandoning this for now. Feel free to reactivate the review by pressing the restore button and leaving a 'recheck' comment to get fresh test results.

Change abandoned by Sean Dague (<email address hidden>) on branch: master
Review: https://review.openstack.org/413555
Reason: This review is > 4 weeks without comment, and is not mergeable in its current state. We are abandoning this for now. Feel free to reactivate the review by pressing the restore button and leaving a 'recheck' comment to get fresh test results.
