openvswitch firewall flows cause flooding on integration bridge

Bug #1732067 reported by James Denton on 2017-11-14
This bug affects 17 people
Affects: OpenStack Security Advisory | Importance: Undecided | Assigned to: Unassigned
Affects: neutron | Importance: High | Assigned to: LIU Yulong

Bug Description

Environment: OpenStack Newton
Driver: ML2 w/ OVS
Firewall: openvswitch

In this environment, we have observed OVS flooding network traffic across all ports in a given VLAN on the integration bridge due to the lack of an FDB entry for the destination MAC address. Across a large fleet of 240+ nodes, this causes a considerable amount of noise on any given node.

In this test, we have 3 machines:

Client: fa:16:3e:e8:59:00 (10.10.60.2)
Server: fa:16:3e:80:cb:0a (10.10.60.9)
Bystander: fa:16:3e:a0:ee:02 (10.10.60.10)

The server is running a web server using netcat:

while true ; do sudo nc -l -p 80 < index.html ; done

Client requests page using curl:

ip netns exec qdhcp-b07e6cb3-0943-45a2-b5ff-efb7e99e4d3d curl http://10.10.60.9/

We should expect the communication to be limited to the client and server. However, the captures below show the server->client responses being flooded out all tap interfaces connected to br-int in the same local VLAN:

root@osa-newton-ovs-compute01:~# tcpdump -i tap5f03424d-1c -ne port 80
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on tap5f03424d-1c, link-type EN10MB (Ethernet), capture size 262144 bytes
02:20:30.190675 fa:16:3e:e8:59:00 > fa:16:3e:80:cb:0a, ethertype IPv4 (0x0800), length 74: 10.10.60.2.54796 > 10.10.60.9.80: Flags [S], seq 213484442, win 29200, options [mss 1460,sackOK,TS val 140883559 ecr 0,nop,wscale 7], length 0
02:20:30.191926 fa:16:3e:80:cb:0a > fa:16:3e:e8:59:00, ethertype IPv4 (0x0800), length 74: 10.10.60.9.80 > 10.10.60.2.54796: Flags [S.], seq 90006557, ack 213484443, win 14480, options [mss 1460,sackOK,TS val 95716 ecr 140883559,nop,wscale 4], length 0
02:20:30.192837 fa:16:3e:e8:59:00 > fa:16:3e:80:cb:0a, ethertype IPv4 (0x0800), length 66: 10.10.60.2.54796 > 10.10.60.9.80: Flags [.], ack 1, win 229, options [nop,nop,TS val 140883560 ecr 95716], length 0
02:20:30.192986 fa:16:3e:e8:59:00 > fa:16:3e:80:cb:0a, ethertype IPv4 (0x0800), length 140: 10.10.60.2.54796 > 10.10.60.9.80: Flags [P.], seq 1:75, ack 1, win 229, options [nop,nop,TS val 140883560 ecr 95716], length 74: HTTP: GET / HTTP/1.1
02:20:30.195806 fa:16:3e:80:cb:0a > fa:16:3e:e8:59:00, ethertype IPv4 (0x0800), length 79: 10.10.60.9.80 > 10.10.60.2.54796: Flags [P.], seq 1:14, ack 1, win 905, options [nop,nop,TS val 95717 ecr 140883560], length 13: HTTP
02:20:30.196207 fa:16:3e:e8:59:00 > fa:16:3e:80:cb:0a, ethertype IPv4 (0x0800), length 66: 10.10.60.2.54796 > 10.10.60.9.80: Flags [.], ack 14, win 229, options [nop,nop,TS val 140883561 ecr 95717], length 0
02:20:30.197481 fa:16:3e:80:cb:0a > fa:16:3e:e8:59:00, ethertype IPv4 (0x0800), length 66: 10.10.60.9.80 > 10.10.60.2.54796: Flags [.], ack 75, win 905, options [nop,nop,TS val 95717 ecr 140883560], length 0

^^^ On the server tap we see the bi-directional traffic

root@osa-newton-ovs-compute01:/home/ubuntu# tcpdump -i tapb8051da9-60 -ne port 80
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on tapb8051da9-60, link-type EN10MB (Ethernet), capture size 262144 bytes
02:20:30.192165 fa:16:3e:80:cb:0a > fa:16:3e:e8:59:00, ethertype IPv4 (0x0800), length 74: 10.10.60.9.80 > 10.10.60.2.54796: Flags [S.], seq 90006557, ack 213484443, win 14480, options [mss 1460,sackOK,TS val 95716 ecr 140883559,nop,wscale 4], length 0
02:20:30.195827 fa:16:3e:80:cb:0a > fa:16:3e:e8:59:00, ethertype IPv4 (0x0800), length 79: 10.10.60.9.80 > 10.10.60.2.54796: Flags [P.], seq 1:14, ack 1, win 905, options [nop,nop,TS val 95717 ecr 140883560], length 13: HTTP
02:20:30.197500 fa:16:3e:80:cb:0a > fa:16:3e:e8:59:00, ethertype IPv4 (0x0800), length 66: 10.10.60.9.80 > 10.10.60.2.54796: Flags [.], ack 75, win 905, options [nop,nop,TS val 95717 ecr 140883560], length 0

^^^ On the bystander tap we see the flooded traffic

The FDB tables reflect the lack of a CAM entry for the client on the br-int bridge. I would expect to see the MAC address learned on the patch uplink:

root@osa-newton-ovs-compute01:/home/ubuntu# ovs-appctl fdb/show br-int | grep 'fa:16:3e:e8:59:00'
root@osa-newton-ovs-compute01:/home/ubuntu# ovs-appctl fdb/show br-provider | grep 'fa:16:3e:e8:59:00'
    2 850 fa:16:3e:e8:59:00 3

Sources[1] point to the fact that an 'output' action bypasses the MAC learning mechanism in OVS. The related table 82 entries are below, and the code is here[2]:

cookie=0x94ebb7913c37a0ec, duration=415.490s, table=82, n_packets=5, n_bytes=424, idle_age=31, priority=70,ct_state=+est-rel-rpl,tcp,reg5=0xd,dl_dst=fa:16:3e:80:cb:0a,tp_dst=80 actions=strip_vlan,output:13
cookie=0x94ebb7913c37a0ec, duration=415.489s, table=82, n_packets=354, n_bytes=35229, idle_age=154, priority=70,ct_state=+est-rel-rpl,tcp,reg5=0xd,dl_dst=fa:16:3e:80:cb:0a,tp_dst=22 actions=strip_vlan,output:13
cookie=0x94ebb7913c37a0ec, duration=415.489s, table=82, n_packets=1, n_bytes=78, idle_age=154, priority=70,ct_state=+new-est,tcp,reg5=0xd,dl_dst=fa:16:3e:80:cb:0a,tp_dst=80 actions=ct(commit,zone=NXM_NX_REG6[0..15]),strip_vlan,output:13
cookie=0x94ebb7913c37a0ec, duration=415.489s, table=82, n_packets=1, n_bytes=78, idle_age=415, priority=70,ct_state=+new-est,tcp,reg5=0xd,dl_dst=fa:16:3e:80:cb:0a,tp_dst=22 actions=ct(commit,zone=NXM_NX_REG6[0..15]),strip_vlan,output:13
cookie=0x94ebb7913c37a0ec, duration=415.491s, table=82, n_packets=120, n_bytes=7920, idle_age=305, priority=50,ct_state=+est-rel+rpl,ct_zone=4,ct_mark=0,reg5=0xd,dl_dst=fa:16:3e:80:cb:0a actions=strip_vlan,output:13
cookie=0x94ebb7913c37a0ec, duration=415.491s, table=82, n_packets=0, n_bytes=0, idle_age=415, priority=50,ct_state=-new-est+rel-inv,ct_zone=4,ct_mark=0,reg5=0xd,dl_dst=fa:16:3e:80:cb:0a actions=strip_vlan,output:13
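
(For reference, these entries can be pulled on a live node with something like the following, assuming the default integration bridge name:)

ovs-ofctl dump-flows br-int table=82 | grep 'fa:16:3e:80:cb:0a'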

My testing shows that massaging the flow rules to remove the 'output' action and instead use a 'mod_vlan_vid' action (just for the sake of getting it working) results in the expected behavior:

cookie=0x85cd1a977dd54be0, duration=0.359s, table=82, n_packets=0, n_bytes=0, idle_age=2110, priority=70,ct_state=+est-rel-rpl,tcp,reg5=0xd,dl_dst=fa:16:3e:80:cb:0a,tp_dst=80 actions=mod_vlan_vid:4,NORMAL
cookie=0x85cd1a977dd54be0, duration=0.359s, table=82, n_packets=0, n_bytes=0, idle_age=518, priority=70,ct_state=+est-rel-rpl,tcp,reg5=0xd,dl_dst=fa:16:3e:80:cb:0a,tp_dst=22 actions=mod_vlan_vid:4,NORMAL
cookie=0x85cd1a977dd54be0, duration=0.359s, table=82, n_packets=0, n_bytes=0, idle_age=392, priority=70,ct_state=+new-est,tcp,reg5=0xd,dl_dst=fa:16:3e:80:cb:0a,tp_dst=80 actions=ct(commit,zone=NXM_NX_REG6[0..15]),mod_vlan_vid:4,NORMAL
cookie=0x85cd1a977dd54be0, duration=0.359s, table=82, n_packets=0, n_bytes=0, idle_age=185, priority=70,ct_state=+new-est,tcp,reg5=0xd,dl_dst=fa:16:3e:80:cb:0a,tp_dst=22 actions=ct(commit,zone=NXM_NX_REG6[0..15]),mod_vlan_vid:4,NORMAL
cookie=0x85cd1a977dd54be0, duration=0.361s, table=82, n_packets=0, n_bytes=0, idle_age=5263, priority=50,ct_state=+est-rel+rpl,ct_zone=4,ct_mark=0,reg5=0xd,dl_dst=fa:16:3e:80:cb:0a actions=strip_vlan,output:13
cookie=0x85cd1a977dd54be0, duration=0.361s, table=82, n_packets=0, n_bytes=0, idle_age=5373, priority=50,ct_state=-new-est+rel-inv,ct_zone=4,ct_mark=0,reg5=0xd,dl_dst=fa:16:3e:80:cb:0a actions=strip_vlan,output:13
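
The massaging above was done by hand; a hypothetical ovs-ofctl invocation for the first rule might look like this (match copied verbatim from the dump, --strict so that only the exact rule is replaced; note the agent will eventually re-install its own flows):

ovs-ofctl --strict mod-flows br-int "table=82,priority=70,ct_state=+est-rel-rpl,tcp,reg5=0xd,dl_dst=fa:16:3e:80:cb:0a,tp_dst=80,actions=mod_vlan_vid:4,NORMAL"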

The client's MAC now shows up in the br-int FDB:

root@osa-newton-ovs-compute01:/home/ubuntu# ovs-appctl fdb/show br-int | grep 'fa:16:3e:e8:59:00'
    1 4 fa:16:3e:e8:59:00 2

The test below shows that traffic is only seen on the server tap and not the bystander tap:

root@osa-newton-ovs-compute01:/home/ubuntu# tcpdump -i tap5f03424d-1c -ne port 80
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on tap5f03424d-1c, link-type EN10MB (Ethernet), capture size 262144 bytes
03:46:52.606940 fa:16:3e:e8:59:00 > fa:16:3e:80:cb:0a, ethertype IPv4 (0x0800), length 74: 10.10.60.2.55808 > 10.10.60.9.80: Flags [S], seq 3645914146, win 29200, options [mss 1460,sackOK,TS val 142179163 ecr 0,nop,wscale 7], length 0
03:46:52.608880 fa:16:3e:80:cb:0a > fa:16:3e:e8:59:00, ethertype IPv4 (0x0800), length 74: 10.10.60.9.80 > 10.10.60.2.55808: Flags [S.], seq 3531519972, ack 3645914147, win 14480, options [mss 1460,sackOK,TS val 1391324 ecr 142179163,nop,wscale 4], length 0
03:46:52.610175 fa:16:3e:e8:59:00 > fa:16:3e:80:cb:0a, ethertype IPv4 (0x0800), length 66: 10.10.60.2.55808 > 10.10.60.9.80: Flags [.], ack 1, win 229, options [nop,nop,TS val 142179164 ecr 1391324], length 0
03:46:52.610273 fa:16:3e:e8:59:00 > fa:16:3e:80:cb:0a, ethertype IPv4 (0x0800), length 140: 10.10.60.2.55808 > 10.10.60.9.80: Flags [P.], seq 1:75, ack 1, win 229, options [nop,nop,TS val 142179164 ecr 1391324], length 74: HTTP: GET / HTTP/1.1
03:46:52.613851 fa:16:3e:80:cb:0a > fa:16:3e:e8:59:00, ethertype IPv4 (0x0800), length 66: 10.10.60.9.80 > 10.10.60.2.55808: Flags [.], ack 75, win 905, options [nop,nop,TS val 1391325 ecr 142179164], length 0
03:46:52.614007 fa:16:3e:80:cb:0a > fa:16:3e:e8:59:00, ethertype IPv4 (0x0800), length 79: 10.10.60.9.80 > 10.10.60.2.55808: Flags [P.], seq 1:14, ack 75, win 905, options [nop,nop,TS val 1391325 ecr 142179164], length 13: HTTP
03:46:52.614314 fa:16:3e:e8:59:00 > fa:16:3e:80:cb:0a, ethertype IPv4 (0x0800), length 66: 10.10.60.2.55808 > 10.10.60.9.80: Flags [.], ack 14, win 229, options [nop,nop,TS val 142179165 ecr 1391325], length 0

root@osa-newton-ovs-compute01:/home/ubuntu# tcpdump -i tapb8051da9-60 -ne port 80
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on tapb8051da9-60, link-type EN10MB (Ethernet), capture size 262144 bytes

>> Nothing! As expected.

I need to build out an environment using the master branch, but the code at [3] seems to indicate the 'output' action is still specified.

Thanks for taking a look and let me know if you have any questions.

[1] https://mail.openvswitch.org/pipermail/ovs-discuss/2016-August/042276.html
[2] https://github.com/openstack/neutron/blob/newton-eol/neutron/agent/linux/openvswitch_firewall/rules.py#L73
[3] https://github.com/openstack/neutron/blob/master/neutron/agent/linux/openvswitch_firewall/rules.py#L80

James Denton (james-denton) wrote :

Just an update -- I was able to replicate this using neutron-openvswitch-agent 11.0.0.0rc2.dev368.

Server:

root@osa-master-ovs:/home/ubuntu# tcpdump -i tap849fa737-65 -ne port 80
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on tap849fa737-65, link-type EN10MB (Ethernet), capture size 262144 bytes
04:29:21.667498 fa:16:3e:32:40:3d > fa:16:3e:f3:2c:61, ethertype IPv4 (0x0800), length 74: 192.168.1.2.42444 > 192.168.1.3.80: Flags [S], seq 364460666, win 29200, options [mss 1460,sackOK,TS val 2559349669 ecr 0,nop,wscale 7], length 0
04:29:21.668573 fa:16:3e:f3:2c:61 > fa:16:3e:32:40:3d, ethertype IPv4 (0x0800), length 74: 192.168.1.3.80 > 192.168.1.2.42444: Flags [S.], seq 4137876027, ack 364460667, win 14480, options [mss 1460,sackOK,TS val 4294931608 ecr 2559349669,nop,wscale 4], length 0
04:29:21.669023 fa:16:3e:32:40:3d > fa:16:3e:f3:2c:61, ethertype IPv4 (0x0800), length 66: 192.168.1.2.42444 > 192.168.1.3.80: Flags [.], ack 1, win 229, options [nop,nop,TS val 2559349670 ecr 4294931608], length 0
04:29:21.669077 fa:16:3e:32:40:3d > fa:16:3e:f3:2c:61, ethertype IPv4 (0x0800), length 141: 192.168.1.2.42444 > 192.168.1.3.80: Flags [P.], seq 1:76, ack 1, win 229, options [nop,nop,TS val 2559349670 ecr 4294931608], length 75: HTTP: GET / HTTP/1.1
04:29:21.669979 fa:16:3e:f3:2c:61 > fa:16:3e:32:40:3d, ethertype IPv4 (0x0800), length 66: 192.168.1.3.80 > 192.168.1.2.42444: Flags [.], ack 76, win 905, options [nop,nop,TS val 4294931609 ecr 2559349670], length 0
04:29:21.672658 fa:16:3e:f3:2c:61 > fa:16:3e:32:40:3d, ethertype IPv4 (0x0800), length 90: 192.168.1.3.80 > 192.168.1.2.42444: Flags [P.], seq 1:25, ack 76, win 905, options [nop,nop,TS val 4294931610 ecr 2559349670], length 24: HTTP
04:29:21.672710 fa:16:3e:32:40:3d > fa:16:3e:f3:2c:61, ethertype IPv4 (0x0800), length 66: 192.168.1.2.42444 > 192.168.1.3.80: Flags [.], ack 25, win 229, options [nop,nop,TS val 2559349671 ecr 4294931610], length 0
04:29:55.543336 fa:16:3e:f3:2c:61 > fa:16:3e:32:40:3d, ethertype IPv4 (0x0800), length 66: 192.168.1.3.80 > 192.168.1.2.42444: Flags [F.], seq 25, ack 76, win 905, options [nop,nop,TS val 4294940076 ecr 2559349671], length 0
04:29:55.579959 fa:16:3e:32:40:3d > fa:16:3e:f3:2c:61, ethertype IPv4 (0x0800), length 66: 192.168.1.2.42444 > 192.168.1.3.80: Flags [F.], seq 76, ack 26, win 229, options [nop,nop,TS val 2559358148 ecr 4294940076], length 0
04:29:55.580711 fa:16:3e:f3:2c:61 > fa:16:3e:32:40:3d, ethertype IPv4 (0x0800), length 66: 192.168.1.3.80 > 192.168.1.2.42444: Flags [.], ack 77, win 905, options [nop,nop,TS val 4294940087 ecr 2559358148], length 0
^C
10 packets captured
10 packets received by filter
0 packets dropped by kernel

Bystander:

root@osa-master-ovs:/home/ubuntu# tcpdump -i tap2bceb97c-0b -ne port 80
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on tap2bceb97c-0b, link-type EN10MB (Ethernet), capture size 262144 bytes
04:29:21.668768 fa:16:3e:f3:2c:61 > fa:16:3e:32:40:3d, ethertype IPv4 (0x0800), length 74: 192.168.1.3.80 > 192.168.1.2.42444: Flags [S.], seq 4137876027, ack 364460667, win 14480, options [mss 1460,sackOK,TS val 4294931608 ecr 2559349669...


tags: added: ovs-fw
Peter Slovak (slovak-peto) wrote :

@james-denton, do you by any chance use IP addresses inside your VMs that Neutron isn't aware of? We spotted the exact same behavior in our setup when we created Neutron ports with IP addresses A, B, C and additionally used IPs X, Y, Z inside the VMs.

The X, Y and Z IPs were managed by Corosync and to be able to communicate with them, we used a somewhat lazy allowed_address_pair 0.0.0.0/0 on the three ports. As we realized later, the OVS firewall driver takes the port IPs and allowed_address_pairs into account when creating some egress flows [1].

Then we ended up with groups of flows matching on "nw_src=10.10.1.1" (the IP on the Neutron port) and "nw_src=0.0.0.0" (the IP from allowed_address_pairs). The first IP never had any problems with flooding: when it was used as the source, the target MAC was learned by OVS and appeared in the br-int FDB. I'm guessing that adding IPs X, Y and Z to allowed_address_pairs will solve this, but I have yet to verify it.

[1] https://github.com/openstack/neutron/blob/newton-eol/neutron/agent/linux/openvswitch_firewall/firewall.py#L405
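
(For context, a catch-all allowed address pair like the one described above is typically configured with something along these lines; the port name here is hypothetical.)

openstack port set --allowed-address ip-address=0.0.0.0/0 corosync-port-a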

Peter Slovak (slovak-peto) wrote :

Nevermind, the allowed_address_pairs fix only worked intermittently in our setup. I haven't successfully traced why it worked on some occasions and not on others. Consider my previous advice invalid.

Yang Li (yang-li) wrote :

I am also seeing the same problem: the MAC is never in the FDB table, which causes TCP ACK packets to flood to other ports. Is there a solution for this problem?

Anton Kurbatov (akurbatov) wrote :

Also seeing the same. It looks like a VM hosted on the same node as a victim VM can sniff all of its traffic, so passwords/cookies/etc. may be leaked.

James Denton (james-denton) wrote :

This may have been resolved around the Pike timeframe, IIRC. Can you please confirm which version of OpenStack you’re using?

Yang Li (yang-li) wrote :

My OpenStack version is Newton; can you confirm which patch solved this problem? Thanks a lot!
BTW, it's not only VMs on the same node; VMs on different nodes also have this problem.

Yang Li (yang-li) wrote :

I modified the OpenFlow rule as below, and it worked; no more flooded packets.
cookie=0x85cd1a977dd54be0, duration=0.359s, table=82, n_packets=0, n_bytes=0, idle_age=518, priority=70,ct_state=+est-rel-rpl,tcp,reg5=0xd,dl_dst=fa:16:3e:80:cb:0a,tp_dst=22 actions=NORMAL

I'm not sure whether there are any problems with this modification; it's a workaround, and may not be compatible with later versions of Neutron.

Jesse (jesse-5) wrote :

After testing, it seems that this issue is caused by the ovs-fw ingress flows with strip_vlan,output:<port_id> in table=81 and table=82.
Take pinging a VM's floating IP from outside as an example: the VM's floating IP is 172.24.0.157, its internal IP is 192.168.111.18 (fa:16:3e:b2:c2:84), and the outside host is 172.24.0.3 (e6:73:51:97:74:4e).
If the outside host has no ARP entry for 172.24.0.157, it sends an ARP broadcast, and br-int on the host the VM runs on updates its FDB entry:

[root@node-2 ~]# ovs-appctl fdb/show br-int | grep e6:73:51:97:74:4e
    2 3 e6:73:51:97:74:4e 3

This FDB entry times out after 300s. Even though the VM's ping continues, the entry is not refreshed, because the strip_vlan,output:<port_id> flow forwards the ping packets to the VM without updating the FDB entry in br-int.
After 300s the entry disappears and the ICMP replies are flooded across the br-int bridge. (This does not affect the original ping, but the flooding does affect other VMs on the host; if other VMs have ingress QoS, the flood degrades their network connections.)
If you delete the 172.24.0.157 ARP entry on the outside host with `arp -d 172.24.0.157`, the FDB entry in br-int comes back and the flooding stops.

To solve this problem, comment #8 is one solution: for ingress packets, finish with actions=NORMAL so that br-int updates its FDB entries.
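
(Incidentally, the 300s figure is OVS's default MAC aging time. It can be inspected and raised per bridge, as sketched below with an arbitrary value, though raising it only postpones the expiry; it does not restore MAC learning.)

ovs-appctl fdb/show br-int
ovs-vsctl set bridge br-int other_config:mac-aging-time=900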

Anton Kurbatov (akurbatov) wrote :

I also agree that the flooding is caused by the combination of:
- action=NORMAL for egress traffic
- direct action=output:PORT for ingress traffic
in the OpenFlow rules.

The NORMAL action causes Open vSwitch to flood traffic when there is no entry for the destination MAC in the FDB table
(https://github.com/openvswitch/ovs/blob/master/ofproto/ofproto-dpif-xlate.c#L3109).
At the same time, a MAC address is put into the FDB table only when the NORMAL action is performed (note: no MAC learning is performed for a direct output:PORT action).

E.g. (a standalone illustration follows this list):
- a VM with vm_mac sends packets to a node with node_mac -> this forces OVS to learn vm_mac and record it in the br-int FDB table, then the packet is flooded to all OVS ports;
- the node responds to vm_mac via the output:VM_PORT rule -> the packet is sent directly to the VM port (without learning node_mac);
- the VM again sends a packet to the node -> there is still no node_mac entry in the FDB table, which forces OVS to flood the packet to all ports again.
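
This can be demonstrated on a scratch bridge outside of Neutron; all names below are made up. Traffic entering port 2 is forwarded straight out port 1 and is delivered, but its source MACs are never learned, while anything taking the NORMAL path shows up in the FDB:

ovs-vsctl add-br br-test
ovs-vsctl add-port br-test p0 -- set interface p0 type=internal ofport_request=1
ovs-vsctl add-port br-test p1 -- set interface p1 type=internal ofport_request=2
ovs-ofctl add-flow br-test "priority=10,in_port=2,actions=output:1"
ovs-ofctl add-flow br-test "priority=0,actions=NORMAL"
ovs-appctl fdb/show br-test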

Fix proposed to branch: master
Review: https://review.openstack.org/639009

Changed in neutron:
assignee: nobody → Yang Li (yang-li)
status: New → In Progress
Changed in neutron:
assignee: Yang Li (yang-li) → zhengyong (zhengy23)
Changed in neutron:
assignee: zhengyong (zhengy23) → Zhengdong Wu (zhengdong.wu)
Junien Fridrick (axino) wrote :

Note: an ARP request with the broadcast Ethernet address (ff:ff:ff:ff:ff:ff) will trigger MAC learning. VMs use these when an IP is not already in their ARP table; after that, they switch to unicast ARP requests, which do _not_ trigger MAC learning, so the FDB entry expires after 5 minutes (by default).
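
(One way to watch this transition is to capture ARP on a tap interface; the name below is reused from the captures earlier in this report. The initial requests show a destination of ff:ff:ff:ff:ff:ff, while later refresh requests are unicast to the learned MAC.)

tcpdump -i tap5f03424d-1c -ne arp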

Junien Fridrick (axino) wrote :

This is also GREATLY impacting VM throughput if you have multiple VMs in the same neutron network on a hypervisor.

Hongbin Lu (hongbin.lu) on 2019-04-19
Changed in neutron:
importance: Undecided → High
Junien Fridrick (axino) wrote :

Note: I'm also seeing this behaviour with the iptables_hybrid firewall driver.

Changed in neutron:
assignee: Zhengdong Wu (zhengdong.wu) → yangjianfeng (yangjianfeng)

Fix proposed to branch: master
Review: https://review.opendev.org/666991

Changed in neutron:
assignee: yangjianfeng (yangjianfeng) → LIU Yulong (dragon889)
Jeremy Stanley (fungi) wrote :

Since this report concerns a possible security risk, an incomplete security advisory task has been added while the core security reviewers for the affected project or projects confirm the bug and discuss the scope of any vulnerability along with potential solutions.

(The duplicate bug 1813439 was previously being tracked as a potential vulnerability report.)

Changed in ossa:
status: New → Incomplete
information type: Public → Public Security
Jeremy Stanley (fungi) wrote :

Is this related to bug 1837252? The symptoms seem very similar.

sean mooney (sean-k-mooney) wrote :

No, this is not related to bug 1837252.

ML2/OVS with the OVS firewall does not use hybrid plug, so there is no Linux bridge between OVS and the tap device; they are completely different bugs caused by different issues.

The effect is similar, but the underlying cause and network backend are not the same.

Hello:

We are currently testing this bug in our development systems and will provide feedback as soon as possible.

Regards.

Miguel Lavalle (minsel) wrote :

@fungi,

The testing Rodolfo is talking about in #19 will help decide the scope of any potential vulnerability. Will keep you posted.

LIU Yulong (dragon889) wrote :

We have an updated fix locally that covers both the enabled and disabled OpenFlow security group (firewall) cases.
I will update this patch set with it: https://review.opendev.org/#/c/666991/

I have also filed a new bug for the new solution; it adds a flow table that acts something like a switch FDB table, and accepted egress flows will be taken care of there. For more information please see:
https://bugs.launchpad.net/neutron/+bug/1841622
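
(For background, the standard OVS idiom for such an FDB-like flow table is the learn() action. The generic sketch below, on the same hypothetical scratch bridge as earlier, follows the example in the ovs-actions documentation and is not the actual Neutron patch: each packet hitting table 0 installs a table=1 flow that sends future traffic destined to its source MAC back out the port it arrived on, with a 300s timeout, while unknown destinations are flooded.)

ovs-ofctl add-flow br-test "table=0 actions=learn(table=1,hard_timeout=300,NXM_OF_ETH_DST[]=NXM_OF_ETH_SRC[],output:NXM_OF_IN_PORT[]),resubmit(,1)"
ovs-ofctl add-flow br-test "table=1,priority=0,actions=flood"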
