Neutron OpenvSwitch DVR - connection problem

Bug #1883321 reported by Philipp Krivanec
This bug affects 3 people
Affects: neutron
Importance: High
Assigned to: LIU Yulong

Bug Description

Hello,

I am seeing strange behaviour on my CentOS 8/OpenStack Ussuri test cluster. Note that I am using OpenvSwitch in DVR mode with the OpenvSwitch firewall driver without router HA configured.

I have two VMs on a compute node in the same L2 segment, one of them with a floating IP and one without. Incoming connections to the floating IP work as expected until the other VM sends any traffic to the Internet. As soon as the second VM sends traffic, the incoming connection to the floating IP stops working, and new connections cannot be established either.

After stopping incoming traffic to the floating IP and outgoing traffic from the second VM, and subsequently waiting 30-60s, new incoming connections to the floating IP can be established again.

Traffic between the private IPs of both VMs works flawlessly and has no impact on incoming connections to the floating IP.

With explicitly_egress_direct set to True, the incoming traffic is forwarded to the network node, and I can capture the traffic on the vxlan_sys_4789 interface on both nodes (the compute and the network node).

If explicitly_egress_direct is not set in the configuration, traffic is
broadcast on the br-int of the compute node and is also forwarded to the
network node.
The traffic reaches the VM with the floating IP, which sends return traffic, so the already established connection keeps working, but I cannot establish new connections.
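For reference, this is where the option in question lives (a fragment only; my full agent configs are further below):

```ini
# openvswitch_agent.ini (fragment) - location of the option discussed above
[agent]
explicitly_egress_direct = True
```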

Packet captures on vxlan_sys_4789 show the traffic both on the compute and
network node.

If I use no firewall driver and explicitly_egress_direct is not set in the
configuration, the incoming traffic is also broadcast on br-int. The
established connection keeps working, and I can establish new connections, but all the incoming traffic is broadcast.

The packet capture shows that the destination MAC of the incoming traffic is the correct MAC of the VM.

The established connection is listed in the conntrack table, but the new connection attempts do not show up.
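To compare the working and broken states, a saved conntrack listing can be filtered for the floating IP; a small sketch with an illustrative sample line (the remote addresses here are made up, not from my capture):

```shell
# Sketch: filter a saved `conntrack -L` dump for entries involving the
# floating IP, to compare working vs. broken periods.
cat > /tmp/conntrack.sample <<'EOF'
tcp      6 431999 ESTABLISHED src=203.0.113.7 dst=192.168.97.161 sport=51514 dport=22 src=10.10.10.185 dst=203.0.113.7 sport=22 dport=51514 [ASSURED] mark=0 use=1
udp      17 29 src=10.10.10.242 dst=8.8.8.8 sport=40000 dport=53 src=8.8.8.8 dst=10.0.2.100 sport=53 dport=40000 mark=0 use=1
EOF
# Count entries that involve the floating IP in either direction:
grep -c '192\.168\.97\.161' /tmp/conntrack.sample   # -> 1
```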

What else can I do to isolate the problem?

Best Regards
Phil

ovs_version: "2.12.0"
Kernel 4.18.0-147.8.1.el8_1.x86_64

Flow dumps on compute node
==========================================

VXLAN underlay Net: 10.0.2.0/24
Provider Net: 192.168.97.0/24
Internal Net: 10.10.10.0/24

Floating IP: 192.168.97.161
VM with floating IP: 10.10.10.185 (fa:16:3e:e7:c3:cb)
VM without floating IP: 10.10.10.242 (fa:16:3e:f9:c2:b7)

ovs-appctl dpctl/show
system@ovs-system:
  lookups: hit:317 missed:117 lost:0
  flows: 4
  masks: hit:1176 total:3 hit/pkt:2.71
  port 0: ovs-system (internal)
  port 1: br-ex (internal)
  port 2: enp3s0 <<===== External Interface
  port 3: br-int (internal)
  port 4: br-tun (internal)
  port 5: qr-1f6bbe11-9b (internal)
  port 6: fg-0c191c26-85 (internal)
  port 7: vxlan_sys_4789 (vxlan: packet_type=ptap)
  port 8: tapf494600d-62 <<==== VM with floating IP
  port 9: tapbd3a7589-3f <<==== VM without floating IP

firewall NONE / explicitly_egress_direct TRUE
---------------------------------------------

WORKING
--------------------------------
ovs-appctl dpctl/dump-flows
recirc_id(0),in_port(5),eth(src=fa:16:3e:4f:ac:f8,dst=fa:16:3e:e7:c3:cb),eth_type(0x0800),ipv4(frag=no), packets:4, bytes:392, used:0.957s, actions:8
recirc_id(0),in_port(8),eth(src=fa:16:3e:e7:c3:cb,dst=fa:16:3e:4f:ac:f8),eth_type(0x0800),ipv4(frag=no), packets:4, bytes:392, used:0.957s, actions:5
recirc_id(0),in_port(2),eth(src=00:11:0a:66:b2:68,dst=fa:16:3e:5a:f0:65),eth_type(0x8100),vlan(vid=97,pcp=0),encap(eth_type(0x0800),ipv4(frag=no)), packets:4, bytes:408, used:0.957s, actions:pop_vlan,push_vlan(vid=1,pcp=0),3,pop_vlan,6
recirc_id(0),in_port(6),eth(src=fa:16:3e:5a:f0:65,dst=00:11:0a:66:b2:68),eth_type(0x0800),ipv4(frag=no), packets:4, bytes:392, used:0.957s, actions:push_vlan(vid=97,pcp=0),2
recirc_id(0),in_port(2),eth(src=00:17:e0:1f:63:94,dst=01:00:0c:cc:cc:cc),eth_type(0/0xffff), packets:0, bytes:0, used:never, actions:drop
recirc_id(0),in_port(2),eth(src=00:17:e0:1f:63:94,dst=00:17:e0:1f:63:94),eth_type(0x9000), packets:66, bytes:3960, used:0.305s, actions:drop

NOT WORKING - after ping from VM
--------------------------------
ovs-appctl dpctl/dump-flows
recirc_id(0),in_port(5),eth(src=fa:16:3e:4f:ac:f8,dst=fa:16:3e:e7:c3:cb),eth_type(0x0800),ipv4(frag=no), packets:127, bytes:12446, used:5.742s, actions:8
recirc_id(0),in_port(8),eth(src=fa:16:3e:e7:c3:cb,dst=fa:16:3e:4f:ac:f8),eth_type(0x0800),ipv4(frag=no), packets:127, bytes:12446, used:5.741s, actions:5
recirc_id(0),in_port(5),skb_mark(0x4000000),eth(src=fa:16:3e:4f:ac:f8),eth_type(0x0800),ipv4(tos=0/0x3,frag=no), packets:5, bytes:490, used:0.734s, actions:set(tunnel(tun_id=0x1,src=10.0.2.100,dst=10.0.2.20,ttl=64,tp_dst=4789,flags(df|key))),set(eth(src=fa:16:3f:9c:aa:5e)),set(skb_mark(0)),7
recirc_id(0),tunnel(tun_id=0x1,src=10.0.2.20,dst=10.0.2.100,flags(-df-csum+key)),in_port(7),eth(src=fa:16:3e:82:99:59,dst=fa:16:3e:f9:c2:b7),eth_type(0x0800),ipv4(frag=no), packets:0, bytes:0, used:never, actions:9
recirc_id(0),in_port(2),eth(src=00:11:0a:66:b2:68,dst=fa:16:3e:5a:f0:65),eth_type(0x8100),vlan(vid=97,pcp=0),encap(eth_type(0x0800),ipv4(frag=no)), packets:132, bytes:13464, used:0.734s, actions:pop_vlan,push_vlan(vid=1,pcp=0),3,pop_vlan,6
recirc_id(0),in_port(6),eth(src=fa:16:3e:5a:f0:65,dst=00:11:0a:66:b2:68),eth_type(0x0800),ipv4(frag=no), packets:127, bytes:12446, used:5.741s, actions:push_vlan(vid=97,pcp=0),2
recirc_id(0),in_port(2),eth(src=00:17:e0:1f:63:94,dst=01:00:0c:cc:cc:cc),eth_type(0/0xffff), packets:0, bytes:0, used:never, actions:drop
recirc_id(0),in_port(2),eth(src=00:17:e0:1f:63:94,dst=00:17:e0:1f:63:94),eth_type(0x9000), packets:8, bytes:480, used:8.269s, actions:drop
recirc_id(0),in_port(9),eth(src=fa:16:3e:f9:c2:b7,dst=fa:16:3e:4f:ac:f8),eth_type(0x0800),ipv4(frag=no), packets:0, bytes:0, used:never, actions:5

firewall NONE / explicitly_egress_direct False
---------------------------------------------

WORKING
--------------------------------
ovs-appctl dpctl/dump-flows
recirc_id(0),in_port(8),eth(src=fa:16:3e:e7:c3:cb,dst=fa:16:3e:4f:ac:f8),eth_type(0x0806),arp(sip=10.10.10.185), packets:0, bytes:0, used:never, actions:5
recirc_id(0),in_port(2),eth(src=00:17:e0:1f:63:94,dst=00:17:e0:1f:63:94),eth_type(0x9000), packets:38, bytes:2280, used:5.542s, actions:drop
recirc_id(0),in_port(2),eth(src=00:11:0a:66:b2:68,dst=fa:16:3e:5a:f0:65),eth_type(0x8100),vlan(vid=97,pcp=0),encap(eth_type(0x0806)), packets:0, bytes:0, used:never, actions:pop_vlan,6
recirc_id(0),in_port(5),eth(src=fa:16:3e:4f:ac:f8,dst=fa:16:3e:e7:c3:cb),eth_type(0x0806), packets:0, bytes:0, used:never, actions:8
recirc_id(0),in_port(2),eth(src=00:11:0a:66:b2:68,dst=fa:16:3e:5a:f0:65),eth_type(0x8100),vlan(vid=97,pcp=0),encap(eth_type(0x0800),ipv4(frag=no)), packets:42, bytes:4284, used:0.665s, actions:pop_vlan,6
recirc_id(0),in_port(5),eth(src=fa:16:3e:4f:ac:f8,dst=fa:16:3e:e7:c3:cb),eth_type(0x0800),ipv4(frag=no), packets:42, bytes:4116, used:0.664s, actions:8
recirc_id(0),in_port(2),eth(src=fa:16:3e:86:13:c3,dst=33:33:00:00:00:02),eth_type(0x8100),vlan(vid=97,pcp=0),encap(eth_type(0x86dd),ipv6(frag=no)), packets:0, bytes:0, used:never, actions:1,pop_vlan,push_vlan(vid=1,pcp=0),3,pop_vlan,6
recirc_id(0),in_port(6),eth(src=fa:16:3e:5a:f0:65,dst=00:11:0a:66:b2:68),eth_type(0x0806), packets:0, bytes:0, used:never, actions:push_vlan(vid=97,pcp=0),2
recirc_id(0),in_port(2),eth(src=00:17:e0:1f:63:94,dst=01:00:0c:cc:cc:cc),eth_type(0/0xffff), packets:0, bytes:0, used:never, actions:drop
recirc_id(0),in_port(8),eth(src=fa:16:3e:e7:c3:cb,dst=fa:16:3e:4f:ac:f8),eth_type(0x0800),ipv4(frag=no), packets:42, bytes:4116, used:0.664s, actions:5
recirc_id(0),in_port(6),eth(src=fa:16:3e:5a:f0:65,dst=00:11:0a:66:b2:68),eth_type(0x0800),ipv4(frag=no), packets:42, bytes:4116, used:0.664s, actions:push_vlan(vid=97,pcp=0),2

NOT WORKING - after ping from VM
--------------------------------
ovs-appctl dpctl/dump-flows
recirc_id(0),in_port(8),eth(src=fa:16:3e:e7:c3:cb,dst=fa:16:3e:4f:ac:f8),eth_type(0x0806),arp(sip=10.10.10.185), packets:0, bytes:0, used:never, actions:5
recirc_id(0),in_port(2),eth(src=00:17:e0:1f:63:94,dst=00:17:e0:1f:63:94),eth_type(0x9000), packets:44, bytes:2640, used:1.590s, actions:drop
recirc_id(0),in_port(2),eth(src=00:11:0a:66:b2:68,dst=fa:16:3e:5a:f0:65),eth_type(0x8100),vlan(vid=97,pcp=0),encap(eth_type(0x0806)), packets:0, bytes:0, used:never, actions:pop_vlan,6
recirc_id(0),in_port(9),eth(src=fa:16:3e:f9:c2:b7,dst=fa:16:3e:4f:ac:f8),eth_type(0x0800),ipv4(frag=no), packets:0, bytes:0, used:never, actions:5
recirc_id(0),in_port(5),eth(src=fa:16:3e:4f:ac:f8,dst=fa:16:3e:e7:c3:cb),eth_type(0x0806), packets:0, bytes:0, used:never, actions:8
recirc_id(0),tunnel(tun_id=0x1,src=10.0.2.20,dst=10.0.2.100,flags(-df-csum+key)),in_port(7),eth(src=fa:16:3e:82:99:59,dst=fa:16:3e:f9:c2:b7),eth_type(0x0800),ipv4(frag=no), packets:0, bytes:0, used:never, actions:9
recirc_id(0),in_port(2),eth(src=00:11:0a:66:b2:68,dst=fa:16:3e:5a:f0:65),eth_type(0x8100),vlan(vid=97,pcp=0),encap(eth_type(0x0800),ipv4(frag=no)), packets:92, bytes:9384, used:0.181s, actions:pop_vlan,6
recirc_id(0),in_port(9),skb_mark(0),eth(src=fa:16:3e:f9:c2:b7,dst=33:33:00:00:00:02),eth_type(0x86dd),ipv6(proto=58,tclass=0/0x3,frag=no),icmpv6(type=128/0xf8), packets:0, bytes:0, used:never, actions:push_vlan(vid=2,pcp=0),3,set(tunnel(tun_id=0x1,src=10.0.2.100,dst=10.0.2.20,ttl=64,tp_dst=4789,flags(df|key))),pop_vlan,7,set(tunnel(tun_id=0x1,src=10.0.2.100,dst=10.0.2.102,ttl=64,tp_dst=4789,flags(df|key))),7,set(tunnel(tun_id=0x1,src=10.0.2.100,dst=10.0.2.101,ttl=64,tp_dst=4789,flags(df|key))),7,set(tunnel(tun_id=0x1,src=10.0.2.100,dst=10.0.2.103,ttl=64,tp_dst=4789,flags(df|key))),7,5,8
recirc_id(0),in_port(9),eth(src=fa:16:3e:f9:c2:b7,dst=fa:16:3e:4f:ac:f8),eth_type(0x0806),arp(sip=10.10.10.242), packets:0, bytes:0, used:never, actions:5
recirc_id(0),in_port(5),skb_mark(0x4000000),eth(src=fa:16:3e:4f:ac:f8),eth_type(0x0800),ipv4(tos=0/0x3,frag=no), packets:30, bytes:2940, used:0.181s, actions:push_vlan(vid=2,pcp=0),3,set(tunnel(tun_id=0x1,src=10.0.2.100,dst=10.0.2.20,ttl=64,tp_dst=4789,flags(df|key))),set(eth(src=fa:16:3f:9c:aa:5e)),pop_vlan,set(skb_mark(0)),7,set(eth(src=fa:16:3e:4f:ac:f8)),set(skb_mark(0x4000000)),8,9
recirc_id(0),in_port(6),eth(src=fa:16:3e:5a:f0:65,dst=00:11:0a:66:b2:68),eth_type(0x0806), packets:0, bytes:0, used:never, actions:push_vlan(vid=97,pcp=0),2
recirc_id(0),in_port(2),eth(src=00:17:e0:1f:63:94,dst=01:00:0c:cc:cc:cc),eth_type(0/0xffff), packets:0, bytes:0, used:never, actions:drop
recirc_id(0),in_port(8),eth(src=fa:16:3e:e7:c3:cb,dst=fa:16:3e:4f:ac:f8),eth_type(0x0800),ipv4(frag=no), packets:13, bytes:1274, used:0.180s, actions:5
recirc_id(0),in_port(6),eth(src=fa:16:3e:5a:f0:65,dst=00:11:0a:66:b2:68),eth_type(0x0800),ipv4(frag=no), packets:13, bytes:1274, used:0.180s, actions:push_vlan(vid=97,pcp=0),2
recirc_id(0),in_port(5),eth(src=fa:16:3e:4f:ac:f8,dst=fa:16:3e:f9:c2:b7),eth_type(0x0806), packets:0, bytes:0, used:never, actions:9

Neutron config
==========================================

Compute1
-----------------------------------------

neutron.conf
-----------------------
[DEFAULT]
transport_url = rabbit://openstack:*********@controller
auth_strategy = keystone
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = true
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true
global_physnet_mtu = 9000
max_l3_agents_per_router = 0
min_l3_agents_per_router = 1
[database]
connection = mysql+pymysql://neutron:*********@controller/neutron
[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = *********
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp

l3_agent.ini
-----------------------
[DEFAULT]
interface_driver = openvswitch
router_delete_namespaces = True
agent_mode = dvr
external_network_bridge =

ml2_conf.ini
------------------------
[DEFAULT]
[l2pop]
[ml2]
type_drivers = flat,vlan,gre,vxlan
tenant_network_types = vxlan
mechanism_drivers = openvswitch
segment_mtu = 1500
path_mtu = 9000
physical_network_mtus = provider:1500
extension_drivers = port_security
[ml2_type_flat]
flat_networks = provider
[ml2_type_geneve]
[ml2_type_gre]
[ml2_type_vlan]
network_vlan_ranges = provider
[ml2_type_vxlan]
vni_ranges = 1:1000

openvswitch_agent.ini
---------------------
[DEFAULT]
[agent]
tunnel_types = vxlan
veth_mtu = 9000
enable_distributed_routing = True
l2_population = True
arp_responder = True
[ovs]
local_ip = 10.0.2.100
bridge_mappings = provider:br-ex
integration_bridge = br-int
tunnel_bridge = br-tun
[securitygroup]
enable_security_group = True
enable_ipset = True
firewall_driver = openvswitch

Network
-----------------------------------------

neutron.conf
-----------------------
[DEFAULT]
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = true
transport_url = rabbit://openstack:*********@controller
auth_strategy = keystone
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true
global_physnet_mtu = 9000
max_l3_agents_per_router = 0
min_l3_agents_per_router = 1
[database]
connection = mysql+pymysql://neutron:*********@controller/neutron
[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = **********
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp

l3_agent.ini
-----------------------
[DEFAULT]
interface_driver = openvswitch
router_delete_namespaces = True
agent_mode = dvr_snat
external_network_bridge =

ml2_conf.ini
-----------------------
[DEFAULT]
[l2pop]
[ml2]
type_drivers = flat,vlan,gre,vxlan
tenant_network_types = vxlan
mechanism_drivers = openvswitch
segment_mtu = 1500
path_mtu = 9000
physical_network_mtus = provider:1500
extension_drivers = port_security
[ml2_type_flat]
flat_networks = provider
[ml2_type_geneve]
[ml2_type_gre]
[ml2_type_vlan]
network_vlan_ranges = provider
[ml2_type_vxlan]
vni_ranges = 1:1000

openvswitch_agent.ini
----------------------
[DEFAULT]
[agent]
tunnel_types = vxlan
veth_mtu = 9000
enable_distributed_routing = True
l2_population = True
arp_responder = True
[network_log]
[ovs]
local_ip = 10.0.2.20
bridge_mappings = provider:br-ex
integration_bridge = br-int
tunnel_bridge = br-tun
[securitygroup]
enable_security_group = true
enable_ipset = true
firewall_driver = openvswitch
[xenapi]

Controller
-----------------------------------------

neutron.conf
-----------------------
[DEFAULT]
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = true
transport_url = rabbit://openstack:***********@controller
auth_strategy = keystone
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true
global_physnet_mtu = 9000
router_distributed = True
debug = true
[cors]
[database]
connection = mysql+pymysql://neutron:***********@controller/neutron
[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = ***********
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
driver = messagingv2
[oslo_messaging_rabbit]
[oslo_middleware]
[oslo_policy]
policy_file = /etc/neutron/policy.yaml
policy_default_rule = default
[privsep]
[ssl]
[nova]
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = ***********

l3_agent.ini
-----------------------
[DEFAULT]
interface_driver = openvswitch
router_delete_namespaces = True
external_network_bridge =

ml2_conf.ini
-----------------------
[DEFAULT]
[l2pop]
[ml2]
type_drivers = flat,vlan,gre,vxlan
tenant_network_types = vxlan
mechanism_drivers = openvswitch,l2population
path_mtu = 9000
physical_network_mtus = provider:1500
extension_drivers = port_security
[ml2_type_flat]
flat_networks = provider
[ml2_type_vlan]
network_vlan_ranges = provider
[ml2_type_vxlan]
vni_ranges = 1:1000

openvswitch_agent.ini
---------------------
[DEFAULT]
[agent]
tunnel_types = vxlan
veth_mtu = 9000
enable_distributed_routing = True
l2_population = True
arp_responder = True
explicitly_egress_direct = True
[ovs]
local_ip = 10.0.2.10
bridge_mappings = provider:br-ex
integration_bridge = br-int
tunnel_bridge = br-tun
[securitygroup]

description: updated
Revision history for this message
Philipp Krivanec (pkrivanec) wrote :

Hello,

I have new information.

If I use dvr_no_external as agent_mode on the compute nodes, it works.

Incoming connections to the floating IP are routed via the network node; the outgoing traffic also works and does not interrupt incoming connections.

If explicitly_egress_direct is set to false, the return traffic from the VM is broadcast on br-int, which is the expected behaviour.
If explicitly_egress_direct is set to true, the return traffic is no longer broadcast on br-int and the incoming connections continue to work.

But as soon as I switch back to dvr as agent_mode, it is broken again.

Best Regards
Phil

Changed in neutron:
importance: Undecided → High
tags: added: l3-dvr-backlog
Revision history for this message
Philipp Krivanec (pkrivanec) wrote :

Hello,

I am seeing more strange behavior on my test cluster.

In this scenario I have created two L2 networks; two VMs on one compute
node in the first L2 network; a third VM on another compute node in the
second L2 network.

Compute1
============
L2 Net 1:
----------
VMa
VMb

Compute3
============
L2 Net 2:
----------
VMc

Pinging from VMa to VMc works as expected, but I cannot ping from VMb
to VMc at the same time.
VMa -> VMc OK
VMb -> VMc Broken

If the ping from VMb is started first it works, but then the ping from
VMa is broken.

As can be seen in the packet dump, traffic from VMb reaches VMc but the
return traffic is forwarded to VMa, which is incorrect.

The destination MAC address of the return traffic is correct, but
OpenvSwitch forwards the traffic to the wrong VM.

As can be seen in the dump-flows output on compute1, there is no flow to
output 9, only a flow to output 5.

I tested with agent_mode=dvr_no_external and with agent_mode=dvr with
identical results.

If I deploy a centralized router it works.

In my opinion this is the same bug.

Best Regards
Phil

tcpdump on VMs
==========================================

DUMP VMa
------------------
eth0:
  link/ether fa:16:3e:e7:c3:cb
  inet 10.10.10.185/24

debian@debian:~$ sudo tcpdump -netti eth0 icmp
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 262144 bytes
1592330132.615290 fa:16:3e:5b:19:10 > fa:16:3e:f9:c2:b7, ethertype IPv4 (0x0800), length 98: 10.10.20.57 > 10.10.10.242: ICMP echo reply, id 577, seq 33, length 64
1592330133.639309 fa:16:3e:5b:19:10 > fa:16:3e:f9:c2:b7, ethertype IPv4 (0x0800), length 98: 10.10.20.57 > 10.10.10.242: ICMP echo reply, id 577, seq 34, length 64

DUMP VMb
------------------
eth0:
  link/ether fa:16:3e:f9:c2:b7
  inet 10.10.10.242/24

debian@debian:~$ sudo tcpdump -netti eth0 icmp
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 262144 bytes
1592330136.797903 fa:16:3e:f9:c2:b7 > fa:16:3e:5b:19:10, ethertype IPv4 (0x0800), length 98: 10.10.10.242 > 10.10.20.57: ICMP echo request, id 577, seq 37, length 64
1592330137.821900 fa:16:3e:f9:c2:b7 > fa:16:3e:5b:19:10, ethertype IPv4 (0x0800), length 98: 10.10.10.242 > 10.10.20.57: ICMP echo request, id 577, seq 38, length 64

DUMP VMc
------------------
eth0:
  link/ether fa:16:3e:b2:47:43
  inet 10.10.20.57/24

debian@debian:~$ sudo tcpdump -netti eth0 icmp
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 262144 bytes
1592330137.026450 fa:16:3e:b2:47:43 > fa:16:3e:c0:43:39, ethertype IPv4 (0x0800), length 98: 10.10.20.57 > 10.10.10.242: ICMP echo reply, id 577, seq 38, length 64
1592330138.050433 fa:16:3e:c0:43:39 > fa:16:3e:b2:47:43, ethertype IPv4 (0x0800), length 98: 10.10.10.242 > 10.10.20.57: ICMP echo request, id 577, seq 39, length 64
1592330138.050461 fa:16:3e:b2:47:43 > fa:16:3e:c0:43:39, ethertype IPv4 (0x0800), length 98: 10.10.20.57 > 10.10.10.242: ICMP echo reply, id 577, seq 39, length 64
1592330139.074473 fa:16:3e:c0:43:39 > fa...


Revision history for this message
Philipp Krivanec (pkrivanec) wrote :

Hi,

one more observation from my test system.

If I ping from VMa to VMf, which are in different L2 networks but on the same compute node, the traffic reaches VMf but the return traffic is not forwarded correctly.

I also updated my test system to CentOS 8.2, but nothing changed.

I've made a list of all the cases I've tested.
All tests were done with agent_mode=dvr_no_external, and only VMa has a floating IP.

Best Regards
Phil

| Compute1   || Compute2   || Compute3   || Compute4   |
|============||============||============||============|
| Subnet a   || Subnet a   || Subnet a   || Subnet a   |
|------------||------------||------------||------------|
| VMa        || VMc        || VMd        || VMe        |
| VMb        ||            ||            ||            |
|            ||            ||            ||            |
| Subnet b   || Subnet b   || Subnet b   || Subnet b   |
|------------||------------||------------||------------|
| VMf        || VMg        || VMh        || VMi        |
|            ||            ||            ||            |

-> First Ping
+ -> Next Ping, running at the same time

Case 1
---------------------------
VMa -> VMb WORKING

Case 2
---------------------------
VMa -> VMc WORKING

Case 3
---------------------------
VMa -> VMb WORKING
VMa + -> VMc WORKING

Case 4
---------------------------
VMa -> VMg WORKING
VMa + -> VMh NOT WORKING

Case 5
---------------------------
VMa -> VMg WORKING
VMb + -> VMg NOT WORKING

Case 6
---------------------------
VMa -> VMg WORKING
VMb + -> VMh NOT WORKING

Case 7
---------------------------
VMa -> VMf NOT WORKING

Case 8
---------------------------
VMa -> VMf NOT WORKING
VMa + -> VMg NOT WORKING
VMb + -> VMg NOT WORKING

Case 9
---------------------------
VMa -> VMf NOT WORKING
VMa + -> VMb WORKING

Case 10
---------------------------
VMa -> VMf NOT WORKING
VMa + -> VMc WORKING

Case 11
---------------------------
VMa -> VMf NOT WORKING
VMb + -> VMh NOT WORKING

Case 12
---------------------------
VMa -> VMf NOT WORKING
VMc + -> VMh WORKING

Case 13
---------------------------
VMa -> VMf NOT WORKING
VMc + -> VMf NOT WORKING

Revision history for this message
LIU Yulong (dragon889) wrote :

Changing the agent mode from dvr_no_external to dvr is not related to the config option "explicitly_egress_direct"; it should be a bug in floating IP migration.

Revision history for this message
Philipp Krivanec (pkrivanec) wrote :

Hello,

yes, the option "explicitly_egress_direct" is used to avoid egress packets being flooded on br-int.

In my opinion, the problem is that no flow entry is created for the return traffic if the traffic is routed with a distributed router.

If I use a centralized router, all my test cases work.

Best Regards
Phil

Revision history for this message
LIU Yulong (dragon889) wrote :

@Philipp Krivanec,
Hi, please take a look at bug https://bugs.launchpad.net/neutron/+bug/1884708, which is related to the config `explicitly_egress_direct` when the ovs-agent is using the noop or iptables hybrid security group driver.
And if you have more time, please try this patch:
https://review.opendev.org/#/c/738551/

Thanks in advance.

Revision history for this message
LIU Yulong (dragon889) wrote :

For now, I can say, a valid procedure for this is:
1. disassociate the floating IPs
2. disable the routers before changing the agent mode
3. change the agent mode
4. enable the routers
5. associate the floating IPs back

Please try this to see if the floating IP migration works.
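The steps above can be sketched with the openstack CLI; the router name, floating IP, and port UUID below are hypothetical placeholders, and the script only echoes the commands (dry run) so they can be reviewed before executing:

```shell
#!/usr/bin/env sh
# Dry-run sketch of the agent-mode migration steps above.
# ROUTER, FIP and PORT_UUID are hypothetical placeholders.
ROUTER=router1
FIP=192.168.97.161

run() { echo "+ $*"; }   # change to: run() { "$@"; } to really execute

run openstack floating ip unset --port "$FIP"          # 1. disassociate the FIP
run openstack router set --disable "$ROUTER"           # 2. disable the router
echo '+ (edit l3_agent.ini agent_mode, restart neutron-l3-agent)'  # 3. change mode
run openstack router set --enable "$ROUTER"            # 4. enable the router
run openstack floating ip set --port PORT_UUID "$FIP"  # 5. associate the FIP back
```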

Changed in neutron:
assignee: nobody → LIU Yulong (dragon889)
Revision history for this message
Philipp Krivanec (pkrivanec) wrote :

@LIU Yulong
Hi,

I tested the patch, but the behavior has not changed with the patch installed.
The problem exists with the noop firewall driver and also with the openvswitch firewall driver.

Regarding bug 1732067: there the traffic is flooded between the nodes, but in my test system I see that the return traffic is not forwarded correctly.

My procedure between the tests looks like this:

- Stopping all Neutron Services on all hosts
- Remove all ovs bridges
- Remove all network namespaces
- Create the external bridge and add the external interface
- Start neutron-openvswitch-agent on all hosts
- Start the neutron-l3-agent and neutron-dhcp-agent

So I have the system in the same state for all tests.
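The procedure above roughly corresponds to the following sketch; the bridge and service names match my setup, EXT_IF is a placeholder, and the script prints the commands by default (set DRY_RUN=0 to actually run them):

```shell
#!/usr/bin/env sh
# Sketch of the reset procedure between tests. EXT_IF is a placeholder;
# by default (DRY_RUN unset or 1) the commands are only printed.
EXT_IF=enp3s0
run() { echo "+ $*"; if [ "${DRY_RUN:-1}" = "0" ]; then "$@"; fi; }

run systemctl stop neutron-openvswitch-agent neutron-l3-agent neutron-dhcp-agent
run ovs-vsctl --if-exists del-br br-tun   # remove all ovs bridges
run ovs-vsctl --if-exists del-br br-int
run ovs-vsctl --if-exists del-br br-ex
for ns in $(ip netns list 2>/dev/null | awk '{print $1}'); do
  run ip netns delete "$ns"               # remove all network namespaces
done
run ovs-vsctl add-br br-ex                # recreate the external bridge
run ovs-vsctl add-port br-ex "$EXT_IF"
run systemctl start neutron-openvswitch-agent neutron-l3-agent neutron-dhcp-agent
```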

Best Regards
Phil

Revision history for this message
Erik Panter (epanter) wrote :

Hi,

I noticed the same issue you mentioned in #2 with openvswitch 2.12.0 and OpenStack Train.

This datapath flow (on compute1 in your case) causes all tunneled traffic to be forwarded to the first VM's tap interface:

recirc_id(0),tunnel(...),in_port(vxlan_sys_*),eth(src=<dvr-mac>),eth_type(0x0800),ipv4(frag=no),actions:set(eth(src=fa:16:3e:5b:19:10)),tap*

it is missing the destination MAC address, even though the OpenFlow flows actually match the instance's MAC address in tables 1 and 60 when tracing incoming traffic on br-int:

--------------------
Router DVR MAC: fa:16:3f:d8:de:ca

VMa MAC: fa:16:3e:e4:f2:f2
VMb MAC: fa:16:3e:0d:a5:98

VMa tap: tap34269161-f2
VMb tap: tap83b8d780-fb
-------------------

$ ovs-appctl ofproto/trace br-tun in_port=vxlan-0a221012,tun_id=0x1,dl_src=fa:16:3f:d8:de:ca,dl_dst=fa:16:3e:0d:a5:98
Flow: tun_id=0x1,in_port=2,vlan_tci=0x0000,dl_src=fa:16:3f:d8:de:ca,dl_dst=fa:16:3e:0d:a5:98,dl_type=0x0000

bridge("br-tun")
----------------
 0. in_port=2, priority 1, cookie 0xb1a5a3f5821cab3e
    goto_table:4
 4. tun_id=0x1, priority 1, cookie 0xb1a5a3f5821cab3e
    push_vlan:0x8100
    set_field:4098->vlan_vid
    goto_table:9
 9. dl_src=fa:16:3f:d8:de:ca, priority 1, cookie 0xb1a5a3f5821cab3e
    output:1

bridge("br-int")
----------------
 0. in_port=3,dl_src=fa:16:3f:d8:de:ca, priority 2, cookie 0xb0d51fe3de606c9
    goto_table:1
 1. dl_vlan=2,dl_dst=fa:16:3e:0d:a5:98, priority 20, cookie 0xb0d51fe3de606c9
    set_field:fa:16:3e:0d:8c:ea->eth_src
    goto_table:60
60. dl_vlan=2,dl_dst=fa:16:3e:0d:a5:98, priority 20, cookie 0xb0d51fe3de606c9
    pop_vlan
    output:665

Final flow: tun_id=0x1,in_port=2,dl_vlan=2,dl_vlan_pcp=0,vlan_tci1=0x0000,dl_src=fa:16:3f:d8:de:ca,dl_dst=fa:16:3e:0d:a5:98,dl_type=0x0000
Megaflow: recirc_id=0,eth,tun_id=0x1,in_port=2,vlan_tci=0x0000/0x1fff,dl_src=fa:16:3f:d8:de:ca,dl_dst=fa:16:3e:0d:a5:98,dl_type=0x0000
Datapath actions: set(eth(src=fa:16:3e:0d:8c:ea,dst=fa:16:3e:0d:a5:98)),13

After I downgraded to openvswitch 2.11.4 and recreated the setup, the datapath flows seem correct and the two pings work simultaneously:

$ ovs-appctl dpctl/dump-flows --names | grep 'fa:16:3f:d8:de:ca'

recirc_id(0),tunnel(...),in_port(vxlan_sys_4789),eth(src=fa:16:3f:d8:de:ca,dst=fa:16:3e:0d:a5:98),eth_type(0x0800),ipv4(frag=no), ... actions:set(eth(src=fa:16:3e:0d:8c:ea,dst=fa:16:3e:0d:a5:98)),tap83b8d780-fb

recirc_id(0),tunnel(...),in_port(vxlan_sys_4789),eth(src=fa:16:3f:d8:de:ca,dst=fa:16:3e:e4:f2:f2),eth_type(0x0800),ipv4(frag=no), ... actions:set(eth(src=fa:16:3e:0d:8c:ea,dst=fa:16:3e:e4:f2:f2)),tap34269161-f2

could this be an issue in openvswitch?
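A saved dump can be scanned for such under-specified megaflows with a quick grep; a sketch over illustrative sample lines (shortened from the dumps above, not a verbatim capture):

```shell
# Sketch: find datapath megaflows from the tunnel port whose eth() match
# carries a source MAC but no dst= field -- these forward all tunneled
# traffic to a single port regardless of destination.
cat > /tmp/dpflows.sample <<'EOF'
recirc_id(0),tunnel(tun_id=0x1),in_port(vxlan_sys_4789),eth(src=fa:16:3f:d8:de:ca),eth_type(0x0800),ipv4(frag=no), actions:set(eth(src=fa:16:3e:5b:19:10)),tap34269161-f2
recirc_id(0),tunnel(tun_id=0x1),in_port(vxlan_sys_4789),eth(src=fa:16:3f:d8:de:ca,dst=fa:16:3e:0d:a5:98),eth_type(0x0800),ipv4(frag=no), actions:tap83b8d780-fb
EOF
# Print the suspect flows (tunnel ingress, no destination MAC match):
grep 'in_port(vxlan_sys' /tmp/dpflows.sample | grep -v 'dst='
```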

Revision history for this message
Erik Panter (epanter) wrote :

The issue I mentioned is fixed with openvswitch v2.12.1 in https://github.com/openvswitch/ovs/commit/044c8406c403e76326248855dabc3131afe9d8aa

Revision history for this message
Philipp Krivanec (pkrivanec) wrote :

Hi,

that is interesting; I will update openvswitch to v2.12.1 and test my config.

At the moment I'm busy debugging a VXLAN overlay, but I will run the tests as soon as possible.

Best Regards
Phil

Revision history for this message
Philipp Krivanec (pkrivanec) wrote :

Hi,

I updated openvswitch to 2.12.1, and my first test shows that the problem is resolved.

I will continue testing my configuration over the next week.

Best regards,
Phil

Revision history for this message
Philipp Krivanec (pkrivanec) wrote :

Hi,

I've gone through all of my test cases and everything looks fine.

The update of openvswitch to v2.12.1 fixed the problem.

Best regards,

Phil

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Related fix merged to neutron (master)

Reviewed: https://review.opendev.org/738551
Committed: https://git.openstack.org/cgit/openstack/neutron/commit/?id=959d8b6d73e2a6ab1a45c9a7b0b05ae163e650fc
Submitter: Zuul
Branch: master

commit 959d8b6d73e2a6ab1a45c9a7b0b05ae163e650fc
Author: LIU Yulong <email address hidden>
Date: Fri Jul 10 17:25:15 2020 +0800

    Local mac direct flow for non-openflow firewall

    When there is no openflow firewall, aka the ovs agent security group
    is disabled or Noop/HybridIptable, this patch will introduce a different
    ingress pipeline for bridge ports which will avoid ingress flood:
    (1) table=0, in_port=patch_bridge,dl_vlan=physical_vlan action=mod_vlan:local_vlan,goto:60 (original)
    (2) table=60, in_port=patch_bridge action=goto:61 (new)
    (3) table=61, dl_dst=local_port_mac,dl_vlan=local_vlan, action=strip_vlan,output:<ofport> (changes)

    And changes the local ports pipeline:
    (1) table=0, in_port=local_ofport action=goto:25 (original)
    (2) table=25, in_port=local_ofport,dl_src=local_port_mac action=goto:60 (original)
    (3) table=60, in_port=local_ofport,dl_src=local_port_mac action=local_vlan->reg6,goto:61 (changes)
    (4) table=61, dl_dst=local_port_mac,reg6=local_vlan, action=output:<ofport> (changes)

    Closes-Bug: #1884708
    Closes-Bug: #1881070
    Related-Bug: #1732067
    Related-Bug: #1866445
    Related-Bug: #1883321

    Change-Id: Iecf9cffaf02616342f1727ad7db85545d8adbec2

tags: added: neutron-proactive-backport-potential
Revision history for this message
Trent Lloyd (lathiat) wrote :

I hit and reproduced the same issue in a test environment; it is also fixed by the upgrade to openvswitch v2.12.1.

Note that v2.13.0 also has the same bug; it is fixed in v2.13.1.
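Whether an installed version already contains the fix can be checked with a simple `sort -V` comparison; a sketch (the version is hardcoded here for illustration, normally it would come from `ovs-vsctl --version`):

```shell
# Sketch: check whether the installed OVS version is at least the fixed
# release for its series (2.12.1 for 2.12.x, 2.13.1 for 2.13.x).
version_ge() {  # true if $1 >= $2, using version sort
  [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$2" ]
}

ver=2.12.0   # e.g. from: ovs-vsctl --version | awk 'NR==1{print $NF}'
case "$ver" in
  2.12.*) version_ge "$ver" 2.12.1 && echo fixed || echo affected ;;
  2.13.*) version_ge "$ver" 2.13.1 && echo fixed || echo affected ;;
  *)      echo "check release notes" ;;
esac   # -> prints "affected" for 2.12.0
```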

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Related fix proposed to neutron (stable/victoria)

Related fix proposed to branch: stable/victoria
Review: https://review.opendev.org/759363

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Related fix proposed to neutron (stable/ussuri)

Related fix proposed to branch: stable/ussuri
Review: https://review.opendev.org/759364

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Related fix proposed to neutron (stable/train)

Related fix proposed to branch: stable/train
Review: https://review.opendev.org/759365

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Related fix proposed to neutron (stable/stein)

Related fix proposed to branch: stable/stein
Review: https://review.opendev.org/759366

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Related fix proposed to neutron (stable/rocky)

Related fix proposed to branch: stable/rocky
Review: https://review.opendev.org/759367

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Related fix proposed to neutron (stable/queens)

Related fix proposed to branch: stable/queens
Review: https://review.opendev.org/759369

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Related fix merged to neutron (stable/ussuri)

Reviewed: https://review.opendev.org/759364
Committed: https://git.openstack.org/cgit/openstack/neutron/commit/?id=ef14d258eea91ef563c63334b2da1623d93418f3
Submitter: Zuul
Branch: stable/ussuri

commit ef14d258eea91ef563c63334b2da1623d93418f3
Author: LIU Yulong <email address hidden>
Date: Fri Jul 10 17:25:15 2020 +0800

    Local mac direct flow for non-openflow firewall

    When there is no openflow firewall, aka the ovs agent security group
    is disabled or Noop/HybridIptable, this patch will introduce a different
    ingress pipeline for bridge ports which will avoid ingress flood:
    (1) table=0, in_port=patch_bridge,dl_vlan=physical_vlan action=mod_vlan:local_vlan,goto:60 (original)
    (2) table=60, in_port=patch_bridge action=goto:61 (new)
    (3) table=61, dl_dst=local_port_mac,dl_vlan=local_vlan, action=strip_vlan,output:<ofport> (changes)

    And changes the local ports pipeline:
    (1) table=0, in_port=local_ofport action=goto:25 (original)
    (2) table=25, in_port=local_ofport,dl_src=local_port_mac action=goto:60 (original)
    (3) table=60, in_port=local_ofport,dl_src=local_port_mac action=local_vlan->reg6,goto:61 (changes)
    (4) table=61, dl_dst=local_port_mac,reg6=local_vlan, action=output:<ofport> (changes)

    Closes-Bug: #1884708
    Closes-Bug: #1881070
    Related-Bug: #1732067
    Related-Bug: #1866445
    Related-Bug: #1883321

    Change-Id: Iecf9cffaf02616342f1727ad7db85545d8adbec2
    (cherry picked from commit 959d8b6d73e2a6ab1a45c9a7b0b05ae163e650fc)

tags: added: in-stable-ussuri
tags: added: in-stable-train
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Related fix merged to neutron (stable/train)

Reviewed: https://review.opendev.org/759365
Committed: https://git.openstack.org/cgit/openstack/neutron/commit/?id=c06895e8e78de06c25d36cb347313240432953cf
Submitter: Zuul
Branch: stable/train

commit c06895e8e78de06c25d36cb347313240432953cf
Author: LIU Yulong <email address hidden>
Date: Fri Jul 10 17:25:15 2020 +0800

    Local mac direct flow for non-openflow firewall

    When there is no openflow firewall, aka the ovs agent security group
    is disabled or Noop/HybridIptable, this patch will introduce a different
    ingress pipeline for bridge ports which will avoid ingress flood:
    (1) table=0, in_port=patch_bridge,dl_vlan=physical_vlan action=mod_vlan:local_vlan,goto:60 (original)
    (2) table=60, in_port=patch_bridge action=goto:61 (new)
    (3) table=61, dl_dst=local_port_mac,dl_vlan=local_vlan, action=strip_vlan,output:<ofport> (changes)

    And changes the local ports pipeline:
    (1) table=0, in_port=local_ofport action=goto:25 (original)
    (2) table=25, in_port=local_ofport,dl_src=local_port_mac action=goto:60 (original)
    (3) table=60, in_port=local_ofport,dl_src=local_port_mac action=local_vlan->reg6,goto:61 (changes)
    (4) table=61, dl_dst=local_port_mac,reg6=local_vlan, action=output:<ofport> (changes)

    Closes-Bug: #1884708
    Closes-Bug: #1881070
    Related-Bug: #1732067
    Related-Bug: #1866445
    Related-Bug: #1883321

    Change-Id: Iecf9cffaf02616342f1727ad7db85545d8adbec2
    (cherry picked from commit 959d8b6d73e2a6ab1a45c9a7b0b05ae163e650fc)

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Related fix merged to neutron (stable/stein)

Reviewed: https://review.opendev.org/759366
Committed: https://git.openstack.org/cgit/openstack/neutron/commit/?id=7c757ad3372b5fe015ae4c5e3949c804e8515d20
Submitter: Zuul
Branch: stable/stein

commit 7c757ad3372b5fe015ae4c5e3949c804e8515d20
Author: LIU Yulong <email address hidden>
Date: Fri Jul 10 17:25:15 2020 +0800

    Local mac direct flow for non-openflow firewall

    When there is no openflow firewall, aka the ovs agent security group
    is disabled or Noop/HybridIptable, this patch will introduce a different
    ingress pipeline for bridge ports which will avoid ingress flood:
    (1) table=0, in_port=patch_bridge,dl_vlan=physical_vlan action=mod_vlan:local_vlan,goto:60 (original)
    (2) table=60, in_port=patch_bridge action=goto:61 (new)
    (3) table=61, dl_dst=local_port_mac,dl_vlan=local_vlan, action=strip_vlan,output:<ofport> (changes)

    And changes the local ports pipeline:
    (1) table=0, in_port=local_ofport action=goto:25 (original)
    (2) table=25, in_port=local_ofport,dl_src=local_port_mac action=goto:60 (original)
    (3) table=60, in_port=local_ofport,dl_src=local_port_mac action=local_vlan->reg6,goto:61 (changes)
    (4) table=61, dl_dst=local_port_mac,reg6=local_vlan, action=output:<ofport> (changes)

    Closes-Bug: #1884708
    Closes-Bug: #1881070
    Related-Bug: #1732067
    Related-Bug: #1866445
    Related-Bug: #1883321

    Change-Id: Iecf9cffaf02616342f1727ad7db85545d8adbec2
    (cherry picked from commit 959d8b6d73e2a6ab1a45c9a7b0b05ae163e650fc)

tags: added: in-stable-stein
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Related fix merged to neutron (stable/victoria)

Reviewed: https://review.opendev.org/759363
Committed: https://git.openstack.org/cgit/openstack/neutron/commit/?id=88bbb58c80b9c888371e25267715b155851d9278
Submitter: Zuul
Branch: stable/victoria

commit 88bbb58c80b9c888371e25267715b155851d9278
Author: LIU Yulong <email address hidden>
Date: Fri Jul 10 17:25:15 2020 +0800

    Local mac direct flow for non-openflow firewall

    When there is no openflow firewall, aka the ovs agent security group
    is disabled or Noop/HybridIptable, this patch will introduce a different
    ingress pipeline for bridge ports which will avoid ingress flood:
    (1) table=0, in_port=patch_bridge,dl_vlan=physical_vlan action=mod_vlan:local_vlan,goto:60 (original)
    (2) table=60, in_port=patch_bridge action=goto:61 (new)
    (3) table=61, dl_dst=local_port_mac,dl_vlan=local_vlan, action=strip_vlan,output:<ofport> (changes)

    And changes the local ports pipeline:
    (1) table=0, in_port=local_ofport action=goto:25 (original)
    (2) table=25, in_port=local_ofport,dl_src=local_port_mac action=goto:60 (original)
    (3) table=60, in_port=local_ofport,dl_src=local_port_mac action=local_vlan->reg6,goto:61 (changes)
    (4) table=61, dl_dst=local_port_mac,reg6=local_vlan, action=output:<ofport> (changes)

    Closes-Bug: #1884708
    Closes-Bug: #1881070
    Related-Bug: #1732067
    Related-Bug: #1866445
    Related-Bug: #1883321

    Change-Id: Iecf9cffaf02616342f1727ad7db85545d8adbec2
    (cherry picked from commit 959d8b6d73e2a6ab1a45c9a7b0b05ae163e650fc)

tags: added: in-stable-victoria
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Related fix merged to neutron (stable/rocky)

Reviewed: https://review.opendev.org/759367
Committed: https://git.openstack.org/cgit/openstack/neutron/commit/?id=7ce65c94786d2d144a49cb991575534d0771bb20
Submitter: Zuul
Branch: stable/rocky

commit 7ce65c94786d2d144a49cb991575534d0771bb20
Author: LIU Yulong <email address hidden>
Date: Fri Jul 10 17:25:15 2020 +0800

    Local mac direct flow for non-openflow firewall

    When there is no openflow firewall, aka the ovs agent security group
    is disabled or Noop/HybridIptable, this patch will introduce a different
    ingress pipeline for bridge ports which will avoid ingress flood:
    (1) table=0, in_port=patch_bridge,dl_vlan=physical_vlan action=mod_vlan:local_vlan,goto:60 (original)
    (2) table=60, in_port=patch_bridge action=goto:61 (new)
    (3) table=61, dl_dst=local_port_mac,dl_vlan=local_vlan, action=strip_vlan,output:<ofport> (changes)

    And changes the local ports pipeline:
    (1) table=0, in_port=local_ofport action=goto:25 (original)
    (2) table=25, in_port=local_ofport,dl_src=local_port_mac action=goto:60 (original)
    (3) table=60, in_port=local_ofport,dl_src=local_port_mac action=local_vlan->reg6,goto:61 (changes)
    (4) table=61, dl_dst=local_port_mac,reg6=local_vlan, action=output:<ofport> (changes)

    Closes-Bug: #1884708
    Closes-Bug: #1881070
    Related-Bug: #1732067
    Related-Bug: #1866445
    Related-Bug: #1883321

    Conflicts:
        neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py
        neutron/tests/unit/plugins/ml2/drivers/openvswitch/agent/openflow/native/test_br_int.py

    Change-Id: Iecf9cffaf02616342f1727ad7db85545d8adbec2
    (cherry picked from commit 959d8b6d73e2a6ab1a45c9a7b0b05ae163e650fc)

tags: added: in-stable-rocky
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Related fix merged to neutron (stable/queens)

Reviewed: https://review.opendev.org/759369
Committed: https://git.openstack.org/cgit/openstack/neutron/commit/?id=7fe3e3d1e04d206245480d28534d9ee11949a9fa
Submitter: Zuul
Branch: stable/queens

commit 7fe3e3d1e04d206245480d28534d9ee11949a9fa
Author: LIU Yulong <email address hidden>
Date: Fri Jul 10 17:25:15 2020 +0800

    Local mac direct flow for non-openflow firewall

    When there is no openflow firewall, aka the ovs agent security group
    is disabled or Noop/HybridIptable, this patch will introduce a different
    ingress pipeline for bridge ports which will avoid ingress flood:
    (1) table=0, in_port=patch_bridge,dl_vlan=physical_vlan action=mod_vlan:local_vlan,goto:60 (original)
    (2) table=60, in_port=patch_bridge action=goto:61 (new)
    (3) table=61, dl_dst=local_port_mac,dl_vlan=local_vlan, action=strip_vlan,output:<ofport> (changes)

    And changes the local ports pipeline:
    (1) table=0, in_port=local_ofport action=goto:25 (original)
    (2) table=25, in_port=local_ofport,dl_src=local_port_mac action=goto:60 (original)
    (3) table=60, in_port=local_ofport,dl_src=local_port_mac action=local_vlan->reg6,goto:61 (changes)
    (4) table=61, dl_dst=local_port_mac,reg6=local_vlan, action=output:<ofport> (changes)

    Closes-Bug: #1884708
    Closes-Bug: #1881070
    Related-Bug: #1732067
    Related-Bug: #1866445
    Related-Bug: #1883321

    Conflicts:
        neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py
        neutron/tests/unit/plugins/ml2/drivers/openvswitch/agent/openflow/native/test_br_int.py

    Change-Id: Iecf9cffaf02616342f1727ad7db85545d8adbec2
    (cherry picked from commit 959d8b6d73e2a6ab1a45c9a7b0b05ae163e650fc)

tags: added: in-stable-queens
tags: removed: neutron-proactive-backport-potential