Packet loss with DVR, router MAC learned and flapping

Bug #1596473 reported by wondra
This bug affects 1 person

Bug Description

Already posted on the operators mailing list without an answer.

I've stumbled upon a weird condition in Neutron and couldn't find a bug
filed for it, so even though it is happening on the Kilo release, it could
still be relevant. I've also read the commit logs without finding anything related.

The setup has 3 network nodes and 1 compute node currently hosting a virtual
network (GRE based). DVR is enabled. I have just added IPv6 to this network
and to the external network (VLAN based). The virtual network is set to SLAAC.

Now, all four mentioned nodes have spawned a radvd process and VMs are
getting globally routable addresses. Traffic has been statically routed to
the subnet, so reachability is fine in both directions.

However, the link-local router address and the associated MAC address are the
same in all 4 qr namespaces. About 16% of packets get lost in randomly occurring
bursts. The Open vSwitch forwarding tables are flapping, and I think the
packet loss occurs at the moment when all 4 switches simultaneously learn the
MAC address from another machine through a GRE tunnel. With a second VM on the
network on another compute node, the packet loss is 12%.

Another router address and the external gateway address reside in a snat
namespace, which exists in only one copy. When I tell the VM to route
through that, there is no packet loss. My best workaround so far is
passing a script to the VM through user-data that changes the gateway and
installs an rc script to do the same on reboot.
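The user-data workaround could be sketched roughly as follows. This is only an illustration, not the script actually used: the gateway address and the interface name are placeholders, and the rc mechanism depends on the guest image.

```shell
#!/bin/sh
# Placeholder: link-local address of the centralized snat-namespace
# gateway (NOT the distributed qr address with the conflicting MAC).
GW="fe80::f816:3eff:fe00:1"

# Generate a small rc script that re-applies the route on every boot;
# at provision time the same command would also be run directly:
#   ip -6 route replace default via $GW dev eth0
cat > rc-fixgw.sh <<EOF
#!/bin/sh
ip -6 route replace default via ${GW} dev eth0
EOF
chmod +x rc-fixgw.sh
```

On a sysvinit-style image the generated file could be installed as /etc/rc.local; a systemd image would need a small unit instead.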

Is there any way to change this behavior to get rid of the MAC address
conflict? I have determined that pushing a host route to the VMs is not supported
for IPv6, so the workaround is not feasible if uninformed users will be
launching VMs.

Revision history for this message
Brian Haley (brian-haley) wrote :

So you are seeing this on Kilo? If so, can you reproduce it with Mitaka? There have been a number of bug fixes for IPv6 and DVR, and this problem could already be fixed.

tags: added: l3-dvr-backlog l3-ipam-dhcp
Changed in neutron:
status: New → Incomplete
Revision history for this message
wondra (wondra) wrote :

I still haven't upgraded to Mitaka, but I have some more insight into this. It also affects IPv4.
A customer complained about connectivity issues. Pings to his instance had about 2% packet loss. I watched the forwarding table of Open vSwitch on a compute node with DVR:
watch --differences=permanent -n0.1 "ovs-appctl fdb/show br-int | grep fa:16:3e:a5:d8:e7"
..where the MAC belongs to the .1 address of the distributed router, the one that exists on every compute and network node.

From time to time, the port number there flipped back and forth. This coincided with the lost pings.

I thought that enabling l2population could solve the issue, but alas, that only populates the br-tun bridge, not br-int. (!)

Then I ran tcpdump in the router namespace and on the instance's iptables bridge:
ip netns exec qrouter-ba8c8b17-5649-474b-ac81-4960c2358611 tcpdump -i qr-2f1aa754-89 -ln ether host fa:16:3e:a5:d8:e7
tcpdump -i qbre6b1046f-7c -ln ether host fa:16:3e:a5:d8:e7

a) The ping requests showed up on both; the reply was missing only in the router namespace.
b) Around the time of a lost ping, I saw a connection attempt to another IP address (a TCP SYN), even on the instance's bridge. It was flooded from another compute node, flipping the Open vSwitch switching table and causing packets from all nodes in the cloud to go, for a short time, to the node that sent the flooded frame.

Steps to reproduce:
1. On a compute node cmp01, run an instance and start pinging its floating IP from the outside (not a requirement, but the traffic needs to pass through DVR).
2. On a compute node cmp02, run an instance and then stop it (shutoff state). Ping the floating IP of the shutoff instance.
3. Observe the flooded packets, flipping switching table and packet loss.

My original bug report and this one are closely related: both are caused by duplicate MAC addresses of the router in the DVR model. l2population does not save the day, because the conflict happens on br-int, not br-tun.
Is this a design flaw of DVR?

Revision history for this message
wondra (wondra) wrote :

..the reason the packet loss was 12% with IPv6 vs. only 2% with IPv4 is that with IPv6 there is radvd running and broadcasting on its own, while with IPv4 this specific condition was required to trigger it.

wondra (wondra)
Changed in neutron:
status: Incomplete → New
wondra (wondra)
summary: - Packet loss with DVR and IPv6
+ Packet loss with DVR, router MAC learned and flapping
Revision history for this message
wondra (wondra) wrote :

Reading the article, I think I have found the source of my problems.
The dvr_host_macs MySQL table is empty; that's why I have MAC address collisions.
Why could the table be empty? Who should populate it?
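A quick way to check is to query the table directly. This is a sketch that assumes the default database name neutron; the exact columns vary between releases.

```sql
-- Per-host MAC allocations used by DVR to avoid the conflict above;
-- an empty result matches the symptom described here.
SELECT * FROM neutron.dvr_host_macs;
```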

Revision history for this message
wondra (wondra) wrote :

I've probably found the problem. On the compute nodes, I did not have
enable_distributed_routing = True
in the [agent] section of ml2_conf.ini. As a result, the mechanism that prevents MAC address conflicts was disabled. It is interesting that it worked as well as it did without it.
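For reference, the fragment in question on each compute node would look like this (in ml2_conf.ini, as described above):

```ini
[agent]
# Enables DVR support in the L2 agent; with this set, each host gets a
# unique MAC from dvr_host_macs instead of colliding on the router MAC.
enable_distributed_routing = True
```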

Changed in neutron:
status: New → Invalid