Openvswitch Agent - Connexion openvswitch DB Broken

Bug #1864822 reported by Acoss69
This bug affects 3 people
Affects / Importance / Assigned to:
neutron: Medium, Slawek Kaplonski
neutron (Ubuntu): Undecided, Unassigned
neutron (Ubuntu) Bionic: Undecided, Unassigned

Bug Description

(For SRU template, please see bug 1869808, as the SRU info there applies to this bug also)

Hi all,

We have deployed several OpenStack platforms in our company.
We used Kolla Ansible to deploy our platforms.
Here is the configuration that we applied:
kolla_base_distro: "centos"
kolla_install_type: "binary"
openstack_version: "stein"

Neutron architecture:
HA L3 enabled
DVR enabled
SNAT enabled
multiple VLAN providers: True

Note: our platforms are multi-region

Recently, we upgraded a master region from Rocky to Stein using the Kolla Ansible upgrade procedure.
Since the upgrade, the openvswitch agent sometimes loses its connection to ovsdb.
We found this error in neutron-openvswitch-agent.log: "tcp:127.0.0.1:6640: send error: Broken pipe".
And we found these errors in ovsdb-server.log:
2020-02-24T23:13:22.644Z|00009|reconnect|ERR|tcp:127.0.0.1:50260: no response to inactivity probe after 5 seconds, disconnecting
2020-02-25T04:10:55.893Z|00010|reconnect|ERR|tcp:127.0.0.1:58544: no response to inactivity probe after 5 seconds, disconnecting
2020-02-25T07:21:12.301Z|00011|reconnect|ERR|tcp:127.0.0.1:34918: no response to inactivity probe after 5 seconds, disconnecting
2020-02-25T09:21:45.533Z|00012|reconnect|ERR|tcp:127.0.0.1:37782: no response to inactivity probe after 5 seconds, disconnecting

When we experience this issue, traffic matching the "NORMAL" flows inside br-ex no longer gets out.
Example of stuck flows:
(neutron-openvswitch-agent)[root@cnp69s12p07 /]# ovs-ofctl dump-flows br-ex | grep NORMAL
 cookie=0x7adbd675f988912b, duration=72705.077s, table=0, n_packets=185, n_bytes=16024, idle_age=65534, hard_age=65534, priority=0 actions=NORMAL
 cookie=0x7adbd675f988912b, duration=72695.007s, table=2, n_packets=11835702, n_bytes=5166123797, idle_age=0, hard_age=65534, priority=4,in_port=5,dl_vlan=1 actions=mod_vlan_vid:12,NORMAL
 cookie=0x7adbd675f988912b, duration=72694.928s, table=2, n_packets=4133243, n_bytes=349654412, idle_age=0, hard_age=65534, priority=4,in_port=5,dl_vlan=9 actions=mod_vlan_vid:18,NORMAL

Workaround to solve this issue:
- stop the containers: openvswitch_db, openvswitch_vswitchd, neutron_openvswitch_agent, neutron_l3_agent
- start the containers: openvswitch_db, openvswitch_vswitchd
- start neutron_l3_agent and neutron_openvswitch_agent

Note: we have kept the OVS connection timeout options at their defaults:
- of_connect_timeout: 300
- of_request_timeout: 300
- of_inactivity_probe: 10
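For reference, these are agent-side OpenFlow connection settings; as a sketch (assuming the usual neutron openvswitch_agent.ini layout, where these options live under the [OVS] section), the defaults above would look like:

```ini
[OVS]
# OpenFlow connection timeouts and probe interval, in seconds
of_connect_timeout = 300
of_request_timeout = 300
of_inactivity_probe = 10
```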

Thank you in advance for your help.

Acoss69 (acoss69)
summary: - Openvswitch Agent - connexion openvswitch DB Broken
+ Openvswitch Agent - Connexion openvswitch DB Broken
Revision history for this message
James Denton (james-denton) wrote :

Hi,

When this issue occurs, what does the entire flow table for br-ex look like? I am most curious about the first flow of table 0.

Thanks,
James

Revision history for this message
Acoss69 (acoss69) wrote :

Hi,

I have attached a screenshot showing the entire flow table for br-ex during the issue (for security reasons, I have hidden the VLAN IDs).

Thank you for your help.

Revision history for this message
Acoss69 (acoss69) wrote :

Hi,
Sorry, I forgot to attach the screenshot.
Thanks.

Revision history for this message
James Denton (james-denton) wrote :

It is that first flow, the 'drop' flow, that is responsible for the issue. I am having a similar issue in a Stein environment and am trying to reproduce in a lab. That drop flow is implemented shortly after the disconnect observed in the logs.

Do you see a log message like this in the neutron-openvswitch-agent.log file?

> "Physical bridge br-ex was just re-created"

Shortly after that message, the drop flow is implemented and the cookies are changed for a subset of the flows. A restart of the agent only temporarily addresses the issue.

I am not positive, but think this patch may have something to do with it:

https://github.com/openstack/neutron/commit/8d8f66eddd43658e59de157dfcbd0f9dc9c716c6#diff-9cca2f63ca397a7e93909a7119fdd16f

To address this temporarily, I made this change to the ovs agent code:

--- ovs_neutron_agent.py 2020-02-22 20:23:43.606606000 +0100
+++ ovs_neutron_agent.py.mod 2020-02-23 00:25:01.014768189 +0100
@@ -2223,7 +2223,7 @@
             # Check if any physical bridge wasn't recreated recently
             added_bridges = idl_monitor.bridges_added + self.added_bridges
             bridges_recreated = self._reconfigure_physical_bridges(
-                added_bridges)
+                idl_monitor.bridges_added)
             sync |= bridges_recreated
             # Notify the plugin of tunnel IP
             if self.enable_tunneling and tunnel_sync:

So far, it's working as expected. Still trying to find the root cause.

Revision history for this message
Acoss69 (acoss69) wrote :

In fact, we also found "Physical bridge br-ex was just re-created" in the openvswitch agent log (just after the broken pipe error with the openvswitch DB).

Regarding the patch indicated in your message, it is applied in our configuration.
We suppose that when the "broken pipe" message appears, a table 0 flow with a drop action is created and br-ex is recreated.

But we don't understand why the table 0 flow that drops packets stays active until we restart the openvswitch agent/db/vswitchd containers.
Thanks in advance.

Revision history for this message
Acoss69 (acoss69) wrote :

Out of curiosity, can you tell me whether you have reproduced this issue in your lab with the Kolla "centos7/binary" release?
Note: our platform currently uses the openvswitch 2.11.0-4 release.
Thanks in advance.

Revision history for this message
YAMAMOTO Takashi (yamamoto) wrote :

confirmed by James Denton

tags: added: ovs
Changed in neutron:
importance: Undecided → Medium
status: New → Confirmed
Revision history for this message
James Denton (james-denton) wrote :

I experienced this issue in OpenStack-Ansible Stein (Originally 19.0.2 and upgraded to 19.0.10) release on Ubuntu 18.04 LTS. So far I have been unable to reproduce in a virtual-based lab.

Can you humor me and share the make/model of your NIC connected to br-ex?

Revision history for this message
James Denton (james-denton) wrote :

Also meant to mention OVS 2.11.0. Thanks.

Revision history for this message
Acoss69 (acoss69) wrote :

Our compute node model is an HP BL Gen10 blade server.
The NIC connected to br-ex is: FLB Adapter 1: HP FlexFabric 20Gb 2-port 630FLB Adapter.
We noticed that we are not experiencing this issue on the network node, which is a bare-metal node (NIC model: HPE Ethernet 10Gb 2-port 562FLR-SFP+ Adpt).
Thanks in advance.

Revision history for this message
James Denton (james-denton) wrote :

So, I have not seen this issue in production since implementing that small patch in https://bugs.launchpad.net/neutron/+bug/1864822/comments/4.

However, I can sorta simulate what happens if/when the connection to :6640 is lost, which we did experience in production and Acoss69 referenced in the opening comment. This may help with developing a patch to the OVS agent that could help recover from this condition.

What we see is this: a normal set of flows on the provider bridge (br-ex or br-vlan, in this example):

Every 1.0s: ovs-ofctl dump-flows br-vlan compute1: Tue Mar 3 07:42:42 2020

NXST_FLOW reply (xid=0x4):
 cookie=0xbe35f1e76f2f0e27, duration=468.374s, table=0, n_packets=0, n_bytes=0, idle_age=532, priority=2,in_port=1 actions=resubmit(,1)
 cookie=0xbe35f1e76f2f0e27, duration=469.071s, table=0, n_packets=0, n_bytes=0, idle_age=532, priority=0 actions=NORMAL
 cookie=0xbe35f1e76f2f0e27, duration=468.373s, table=0, n_packets=2, n_bytes=140, idle_age=184, priority=1 actions=resubmit(,3)
 cookie=0xbe35f1e76f2f0e27, duration=468.371s, table=1, n_packets=0, n_bytes=0, idle_age=532, priority=0 actions=resubmit(,2)
 cookie=0xbe35f1e76f2f0e27, duration=467.008s, table=2, n_packets=0, n_bytes=0, idle_age=532, priority=4,in_port=1,dl_vlan=1 actions=mod_vlan_vid:1111,NORMAL
 cookie=0xbe35f1e76f2f0e27, duration=468.370s, table=2, n_packets=0, n_bytes=0, idle_age=532, priority=2,in_port=1 actions=drop
 cookie=0xbe35f1e76f2f0e27, duration=468.339s, table=3, n_packets=0, n_bytes=0, idle_age=532, priority=2,dl_src=fa:16:3f:01:ad:70 actions=output:1
 cookie=0xbe35f1e76f2f0e27, duration=468.329s, table=3, n_packets=0, n_bytes=0, idle_age=532, priority=2,dl_src=fa:16:3f:15:73:1b actions=output:1
 cookie=0xbe35f1e76f2f0e27, duration=468.322s, table=3, n_packets=0, n_bytes=0, idle_age=532, priority=2,dl_src=fa:16:3f:49:67:3e actions=output:1
 cookie=0xbe35f1e76f2f0e27, duration=468.312s, table=3, n_packets=0, n_bytes=0, idle_age=532, priority=2,dl_src=fa:16:3f:b8:7d:b0 actions=output:1
 cookie=0xbe35f1e76f2f0e27, duration=468.368s, table=3, n_packets=2, n_bytes=140, idle_age=184, priority=1 actions=NORMAL

When we see "tcp:127.0.0.1:6640: send error: Broken pipe" in the neutron-openvswitch-agent.log file, it is followed up with something like this:

...
2020-03-03 07:33:50.061 3705 INFO neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent [req-51eb4348-565b-492b-8910-30c8bca078c5 - - - - -] Mapping physical network vlan to bridge br-vlan
2020-03-03 07:33:50.065 3705 INFO neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent [req-51eb4348-565b-492b-8910-30c8bca078c5 - - - - -] Bridge br-vlan datapath-id = 0x000086ce24d0d14a
2020-03-03 07:33:50.153 3705 INFO neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_bridge [req-51eb4348-565b-492b-8910-30c8bca078c5 - - - - -] Bridge br-vlan has datapath-ID 000086ce24d0d14a
2020-03-03 07:33:50.271 3705 INFO neutron.plugins.ml2.drivers.openvswitch.agent.ovs_dvr_neutron_agent [req-51eb4348-565b-492b-8910-30c8bca078c5 - - - - -] L2 Agent operating ...

Revision history for this message
Acoss69 (acoss69) wrote :

Hi,

We suffered from the bug described above again this weekend.
We noticed that the connection breaks when our system is overloaded.
In fact, we had accumulated another problem on our compute node: 35000 mount points.
@James Denton, can you tell us whether you have integrated your patch "https://bugs.launchpad.net/neutron/+bug/1864822/comments/4" into the stable/stein branch?

Thanks in advance.
Regards,

Revision history for this message
James Denton (james-denton) wrote :

Hi -

I think my patch is/was just a bandaid for whatever is really happening.

We have seen similar issues in other environments lately, and further research led me to this bug:

https://bugs.launchpad.net/neutron/+bug/1817022

Your logs indicated "no response to inactivity probe after 5 seconds", and you had bumped the probe to 10. Ours is already at 10, which may not be sufficient under heavy load. We are considering bumping it to 30 seconds to see if that addresses the issue.
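As a sketch of that change, assuming it is the same of_inactivity_probe option quoted in the opening report (in the [OVS] section of openvswitch_agent.ini, value in seconds):

```ini
[OVS]
# Tentatively raise the inactivity probe from 10s to 30s to ride out load spikes
of_inactivity_probe = 30
```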

Revision history for this message
James Denton (james-denton) wrote :

Update:

Modifying those inactivity probe timers did not appear to make a difference. In a few different Stein environments we are seeing similar behavior. If the connection to OVS is lost, like so:

2020-04-02 14:50:21.372 3526056 WARNING ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: send error: Broken pipe
2020-04-02 14:50:41.987 3526056 ERROR OfctlService [-] unknown dpid 112915483458894
2020-04-02 14:50:42.050 3526056 WARNING ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connection dropped (Broken pipe)

Then the agent will implement a drop flow and a few other flows (with new cookies) similar to this:

https://bugs.launchpad.net/neutron/+bug/1864822/comments/11

Here are some agent log messages around the time of the issue:

http://paste.openstack.org/show/791823/

Revision history for this message
Slawek Kaplonski (slaweq) wrote :

I looked into this issue today. I wasn't able to reproduce this issue on my local devstack setup.
The problem that I see (and that you already mentioned) is that some rules with the old cookie id stay in the br-ex bridge. So maybe a solution/workaround would be to always clean rules with the old cookie whenever a bridge is "re-created".
Normally this wouldn't be needed, as a newly created bridge will not have any OF rules, but forcing a cleanup of such leftovers might help neutron-ovs-agent recover from this kind of problem with the connection to ovsdb.
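The cleanup described above can be sketched roughly like this (a simplified model, not the actual agent code; the flow representation and function name are hypothetical):

```python
def clean_stale_flows(flows, current_cookie):
    """Split flows into (kept, stale): flows whose cookie differs from the
    agent's current per-bridge cookie are leftovers from before the bridge
    was re-created and should be deleted."""
    kept = [f for f in flows if f["cookie"] == current_cookie]
    stale = [f for f in flows if f["cookie"] != current_cookie]
    return kept, stale

# Example: after br-ex is re-created the agent installs flows with a new
# cookie, but some flows with the old cookie remain and keep matching traffic.
flows = [
    {"cookie": 0x532069C53A5426A6, "flow": "priority=0 actions=NORMAL"},
    {"cookie": 0xAE778D2777DBC1A9, "flow": "priority=2,in_port=1 actions=drop"},
]
kept, stale = clean_stale_flows(flows, current_cookie=0x532069C53A5426A6)
# "stale" now holds the leftover flow that should be removed from the bridge
```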

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix proposed to neutron (master)

Fix proposed to branch: master
Review: https://review.opendev.org/718690

Changed in neutron:
assignee: nobody → Slawek Kaplonski (slaweq)
status: Confirmed → In Progress
Revision history for this message
James Denton (james-denton) wrote :

Thanks for looking into this.

There is definitely the issue of stale flows, but more important IMO is the DROP flow implemented in table 0 when the agent sees the bridge was "re-created":

NXST_FLOW reply (xid=0x4):
 cookie=0xfc7afcb358f7936e, duration=2.522s, table=0, n_packets=0, n_bytes=0, idle_age=2, priority=2,in_port=1 actions=drop

That ends up dropping all outbound traffic from br-int, and only gets replaced on a restart of the agent.
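One quick way to spot that offending flow in ovs-ofctl output is a filter like this (a diagnostic sketch only; it assumes the usual `ovs-ofctl dump-flows` line format shown in this thread):

```python
def find_drop_flows(dump_flows_output):
    """Return table-0 flows whose action is a plain drop, as printed by
    'ovs-ofctl dump-flows <bridge>'."""
    hits = []
    for line in dump_flows_output.splitlines():
        line = line.strip()
        # Table-0 entries ending in a bare drop action are the suspects
        if "table=0" in line and line.endswith("actions=drop"):
            hits.append(line)
    return hits

# Sample taken from the flow dumps earlier in this bug
sample = """\
NXST_FLOW reply (xid=0x4):
 cookie=0xfc7afcb358f7936e, duration=2.522s, table=0, n_packets=0, n_bytes=0, idle_age=2, priority=2,in_port=1 actions=drop
 cookie=0xfc7afcb358f7936e, duration=2.5s, table=0, n_packets=0, n_bytes=0, priority=0 actions=NORMAL
"""
hits = find_drop_flows(sample)
```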

Revision history for this message
James Denton (james-denton) wrote :

Hi Slawek,

I've tested the patch. Here's a before and after:

BEFORE:

root@aio1:/home/ubuntu# ovs-ofctl dump-flows br-ex
 cookie=0xae778d2777dbc1a9, duration=1103273.530s, table=0, n_packets=13043, n_bytes=1020882, priority=2,in_port="phy-br-ex" actions=resubmit(,1)
 cookie=0xae778d2777dbc1a9, duration=1103273.850s, table=0, n_packets=0, n_bytes=0, priority=0 actions=NORMAL
 cookie=0xae778d2777dbc1a9, duration=1103273.528s, table=0, n_packets=6389596, n_bytes=462254263, priority=1 actions=resubmit(,3)
 cookie=0xae778d2777dbc1a9, duration=1103273.527s, table=1, n_packets=13043, n_bytes=1020882, priority=0 actions=resubmit(,2)
 cookie=0xae778d2777dbc1a9, duration=854136.480s, table=2, n_packets=6, n_bytes=1702, priority=4,in_port="phy-br-ex",dl_vlan=3 actions=mod_vlan_vid:4,NORMAL
 cookie=0xae778d2777dbc1a9, duration=850505.570s, table=2, n_packets=393, n_bytes=51593, priority=4,in_port="phy-br-ex",dl_vlan=4 actions=mod_vlan_vid:1087,NORMAL
 cookie=0xae778d2777dbc1a9, duration=850505.563s, table=2, n_packets=59, n_bytes=18864, priority=4,in_port="phy-br-ex",dl_vlan=5 actions=mod_vlan_vid:6,NORMAL
 cookie=0xae778d2777dbc1a9, duration=848974.554s, table=2, n_packets=470, n_bytes=63722, priority=4,in_port="phy-br-ex",dl_vlan=6 actions=mod_vlan_vid:1041,NORMAL
 cookie=0xae778d2777dbc1a9, duration=1103273.525s, table=2, n_packets=12115, n_bytes=885001, priority=2,in_port="phy-br-ex" actions=drop
 cookie=0xae778d2777dbc1a9, duration=1103273.524s, table=3, n_packets=6389596, n_bytes=462254263, priority=1 actions=NORMAL

Simulate disconnect with a restart of openvswitch-switch:

root@aio1:/home/ubuntu# systemctl restart openvswitch-switch

Check flows. Notice the new cookies and the DROP flow in table 0:

root@aio1:/home/ubuntu# ovs-ofctl dump-flows br-ex
 cookie=0x532069c53a5426a6, duration=2.763s, table=0, n_packets=0, n_bytes=0, priority=2,in_port="phy-br-ex" actions=drop
 cookie=0xae778d2777dbc1a9, duration=2.863s, table=0, n_packets=11, n_bytes=719, priority=1 actions=resubmit(,3)
 cookie=0x532069c53a5426a6, duration=2.768s, table=0, n_packets=0, n_bytes=0, priority=0 actions=NORMAL
 cookie=0xae778d2777dbc1a9, duration=2.861s, table=1, n_packets=0, n_bytes=0, priority=0 actions=resubmit(,2)
 cookie=0x532069c53a5426a6, duration=2.654s, table=2, n_packets=0, n_bytes=0, priority=4,in_port="phy-br-ex",dl_vlan=6 actions=mod_vlan_vid:1041,NORMAL
 cookie=0x532069c53a5426a6, duration=2.648s, table=2, n_packets=0, n_bytes=0, priority=4,in_port="phy-br-ex",dl_vlan=5 actions=mod_vlan_vid:6,NORMAL
 cookie=0x532069c53a5426a6, duration=2.471s, table=2, n_packets=0, n_bytes=0, priority=4,in_port="phy-br-ex",dl_vlan=4 actions=mod_vlan_vid:1087,NORMAL
 cookie=0x532069c53a5426a6, duration=2.466s, table=2, n_packets=0, n_bytes=0, priority=4,in_port="phy-br-ex",dl_vlan=3 actions=mod_vlan_vid:4,NORMAL
 cookie=0xae778d2777dbc1a9, duration=2.860s, table=2, n_packets=0, n_bytes=0, priority=2,in_port="phy-br-ex" actions=drop
 cookie=0xae778d2777dbc1a9, duration=2.858s, table=3, n_packets=11, n_bytes=719, priority=1 actions=NORMAL

Restart the agent:

 root@aio1:/home/ubuntu# systemctl restart neutron-openvswitch-agent

Flows are rebuilt with a new cookie and the flow...

Revision history for this message
Slawek Kaplonski (slaweq) wrote :

I'm not sure that this DROP rule is really the culprit of the issue.
It comes from: https://github.com/openstack/neutron/blob/stable/stein/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py#L1376 so it is installed every time a physical bridge is configured.
I checked on my local env with devstack and I also have such a rule:

cookie=0xf8e18c7feb8a151, duration=13.960s, table=0, n_packets=0, n_bytes=0, idle_age=14, priority=2,in_port=1 actions=drop

but it doesn't cause any problems.
In your logs from the last comment, I also see that the number of packets matched by that drop rule is 0.

Also, even when I restart the openvswitch-switch process, I can't reproduce this issue with the flows on my env. But I'm doing it on the master branch. Tomorrow I will try on stable/stein.
Can you also tell me if you have e.g. DVR enabled in this setup? Or anything else which may help me reproduce it.

Revision history for this message
James Denton (james-denton) wrote :

Yes, this is OVS+DVR w/ openstack-ansible (fairly recent master). Neutron 16.0.0.0b2.dev61.

I simulated failure again by restarting openvswitch-switch. Here is an ongoing ping from the qdhcp namespace for a network mapped to VLAN 6 out br-ex (172.23.208.1 being the gateway, a physical firewall):

root@aio1:~# ip netns exec qdhcp-04a47906-d8f2-4600-b1da-c5f15b89ed19 ping 172.23.208.1
PING 172.23.208.1 (172.23.208.1) 56(84) bytes of data.
64 bytes from 172.23.208.1: icmp_seq=1 ttl=255 time=1.49 ms
64 bytes from 172.23.208.1: icmp_seq=2 ttl=255 time=0.576 ms
64 bytes from 172.23.208.1: icmp_seq=3 ttl=255 time=0.511 ms
64 bytes from 172.23.208.1: icmp_seq=4 ttl=255 time=0.495 ms
64 bytes from 172.23.208.1: icmp_seq=5 ttl=255 time=0.608 ms
64 bytes from 172.23.208.1: icmp_seq=6 ttl=255 time=0.502 ms
64 bytes from 172.23.208.1: icmp_seq=7 ttl=255 time=0.496 ms
64 bytes from 172.23.208.1: icmp_seq=8 ttl=255 time=0.494 ms
64 bytes from 172.23.208.1: icmp_seq=9 ttl=255 time=0.486 ms
64 bytes from 172.23.208.1: icmp_seq=10 ttl=255 time=0.551 ms
64 bytes from 172.23.208.1: icmp_seq=11 ttl=255 time=0.513 ms
64 bytes from 172.23.208.1: icmp_seq=12 ttl=255 time=0.516 ms
^C
--- 172.23.208.1 ping statistics ---
32 packets transmitted, 12 received, 62% packet loss, time 31708ms
rtt min/avg/max/mdev = 0.486/0.603/1.495/0.272 ms

The packet loss began when I restarted OVS. You can see the drop flow w/ packets matched (dropped outbound ICMP):

Every 1.0s: ovs-ofctl dump-flows br-ex aio1: Thu Apr 9 14:37:05 2020

NXST_FLOW reply (xid=0x4):
 cookie=0x16c893e7b2dab29c, duration=110.848s, table=0, n_packets=16, n_bytes=1568, idle_age=95, priority=2,in_port=1 actions=drop
 cookie=0x16c893e7b2dab29c, duration=110.852s, table=0, n_packets=865, n_bytes=63243, idle_age=0, priority=0 actions=NORMAL
 cookie=0x16c893e7b2dab29c, duration=110.746s, table=2, n_packets=0, n_bytes=0, idle_age=110, priority=4,in_port=1,dl_vlan=6 actions=mod_vlan_vid:1041,NORMAL
 cookie=0x16c893e7b2dab29c, duration=110.574s, table=2, n_packets=0, n_bytes=0, idle_age=110, priority=4,in_port=1,dl_vlan=5 actions=mod_vlan_vid:6,NORMAL
 cookie=0x16c893e7b2dab29c, duration=110.568s, table=2, n_packets=0, n_bytes=0, idle_age=110, priority=4,in_port=1,dl_vlan=3 actions=mod_vlan_vid:4,NORMAL
 cookie=0x16c893e7b2dab29c, duration=110.562s, table=2, n_packets=0, n_bytes=0, idle_age=110, priority=4,in_port=1,dl_vlan=4 actions=mod_vlan_vid:1087,NORMAL

Note: This behavior is also seen on computes. I only have an AIO at the moment. On computes, the connection between the agent and ovsdb is lost, which seems to trigger this, rather than a forced restart of openvswitch.

Revision history for this message
James Denton (james-denton) wrote :

@Slawek - I was able to reproduce this behavior in Devstack (Master) configured for DVR. For your convenience, I have provided the output from "stock" devstack (legacy routers) and DVR-enabled devstack (dvr_snat).

NOTE: This seems to impact 'vlan' networks, NOT 'flat' networks, in a DVR scenario.

-=-=-=-=-=-

== STOCK ==

local.conf:

[[local|localrc]]
ADMIN_PASSWORD=password
DATABASE_PASSWORD=password
RABBIT_PASSWORD=password
SERVICE_PASSWORD=password
Q_PLUGIN=ml2
Q_ML2_TENANT_NETWORK_TYPE=vxlan
disable_service horizon cinder swift

DevStack Version: ussuri
Change: 01826e1c5b65e8d9c88b4f195bb688137b28c0c5 Merge "Remove fixup_virtualenv" 2020-04-09 16:00:35 +0000
OS Version: CentOS 7.7.1908 Core

2020-04-11 22:23:12.009 | stack.sh completed in 1034 seconds.
[jdenton@localhost devstack]$

[jdenton@localhost ~]$ cat /etc/neutron/l3_agent.ini | grep agent_mode
#agent_mode = legacy

[jdenton@localhost devstack]$ sudo ovs-ofctl dump-flows br-ex
 cookie=0x511d1a5fb4dc2c8, duration=213.398s, table=0, n_packets=31, n_bytes=2098, priority=4,in_port="phy-br-ex",dl_vlan=2 actions=strip_vlan,NORMAL
 cookie=0x511d1a5fb4dc2c8, duration=248.562s, table=0, n_packets=41, n_bytes=3810, priority=2,in_port="phy-br-ex" actions=drop
 cookie=0x511d1a5fb4dc2c8, duration=248.580s, table=0, n_packets=12, n_bytes=1272, priority=0 actions=NORMAL

source openrc admin admin
openstack network create --provider-network-type vlan --provider-physical-network public --provider-segment 1000 vlan1000
openstack subnet create --subnet-range 192.168.77.0/24 --network vlan1000 subnet1000

[jdenton@localhost devstack]$ sudo ovs-ofctl dump-flows br-ex
 cookie=0x511d1a5fb4dc2c8, duration=250.059s, table=0, n_packets=31, n_bytes=2098, priority=4,in_port="phy-br-ex",dl_vlan=2 actions=strip_vlan,NORMAL
 cookie=0x511d1a5fb4dc2c8, duration=1.864s, table=0, n_packets=2, n_bytes=180, priority=4,in_port="phy-br-ex",dl_vlan=4 actions=mod_vlan_vid:1000,NORMAL
 cookie=0x511d1a5fb4dc2c8, duration=285.223s, table=0, n_packets=45, n_bytes=4194, priority=2,in_port="phy-br-ex" actions=drop
 cookie=0x511d1a5fb4dc2c8, duration=285.241s, table=0, n_packets=12, n_bytes=1272, priority=0 actions=NORMAL

>> Simulate disconnect by restarting OVS:
sudo systemctl restart openvswitch

[jdenton@localhost devstack]$ sudo ovs-ofctl dump-flows br-ex
 cookie=0x4957ef95c18218f4, duration=4.505s, table=0, n_packets=0, n_bytes=0, priority=4,in_port="phy-br-ex",dl_vlan=4 actions=mod_vlan_vid:1000,NORMAL
 cookie=0x4957ef95c18218f4, duration=4.477s, table=0, n_packets=0, n_bytes=0, priority=4,in_port="phy-br-ex",dl_vlan=2 actions=strip_vlan,NORMAL
 cookie=0x4957ef95c18218f4, duration=4.545s, table=0, n_packets=0, n_bytes=0, priority=2,in_port="phy-br-ex" actions=drop
 cookie=0x4957ef95c18218f4, duration=4.550s, table=0, n_packets=0, n_bytes=0, priority=0 actions=NORMAL

>> Everything looks OK.

-=-=-=-=-=-=-

== DVR ==

local.conf:

[[local|localrc]]
ADMIN_PASSWORD=password
DATABASE_PASSWORD=password
RABBIT_PASSWORD=password
SERVICE_PASSWORD=password
Q_PLUGIN=ml2
Q_ML2_TENANT_NETWORK_TYPE=vxlan
Q_DVR_MODE=dvr_snat
disable_service horizon cinder swift

DevStack Version: ussuri
Change: 01826e1c5b65e8d9c88b4f195bb688137b28...

Revision history for this message
Acoss69 (acoss69) wrote :

Hi,
We confirm that we have experienced this issue only on our platform with DVR enabled and a VLAN external provider type.
We have another platform without DVR enabled, and it has not suffered from this issue.
Thank you very much for your help and contribution.
Regards,

Revision history for this message
Slawek Kaplonski (slaweq) wrote :

Hi,

Quick update: I am able to reproduce this issue with DVR enabled. I know more or less what is wrong there, but I don't yet know exactly how to fix it. I will continue working on it.

Revision history for this message
OpenStack Infra (hudson-openstack) wrote :

Fix proposed to branch: master
Review: https://review.opendev.org/721554

Revision history for this message
Slawek Kaplonski (slaweq) wrote :

Please try my latest patch https://review.opendev.org/721554, which I hope will solve this issue for you.

Revision history for this message
Acoss69 (acoss69) wrote :

Hi,
We have applied your patch on our sandbox platform (only on the compute node).
We killed the openvswitch DB to simulate a dropped connection between the openvswitch agent and the DB.
After 5 minutes, we restarted the openvswitch DB, and all "NORMAL" type flows inside br-ex got out normally.
We will continue testing your patch on our platforms tomorrow.
Thanks a lot for everything.

Revision history for this message
Wei Hui (huiweics) wrote :

Any idea why the ovs agent lost its connection to ovsdb-server, and why the physical bridge was re-created? If the physical bridge was deleted and then recreated, all physical flows would be lost, but the flows remained.

Revision history for this message
James Denton (james-denton) wrote :

@Slawek - Happy to report that your patch appears to be working as described. I restarted the openvswitch service and observed the same flows with new cookies. The 'drop' flow was not implemented. This was tested on Ubuntu 18.04 w/ today's Master (victoria).

@Wei - I am not sure why, but we have noticed an increase in disconnects after deploying or upgrading to Stein. This would then result in the issue described here.

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Related fix proposed to neutron (master)

Related fix proposed to branch: master
Review: https://review.opendev.org/726770

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Related fix merged to neutron (master)

Reviewed: https://review.opendev.org/718690
Committed: https://git.openstack.org/cgit/openstack/neutron/commit/?id=63c45b37666efc7fb33fc154fa828050b3dc7945
Submitter: Zuul
Branch: master

commit 63c45b37666efc7fb33fc154fa828050b3dc7945
Author: Slawek Kaplonski <email address hidden>
Date: Thu Apr 9 14:37:38 2020 +0200

    Ensure that stale flows are cleaned from phys_bridges

    In case when neutron-ovs-agent will notice that any of physical
    bridges was "re-created", we should also ensure that stale Open
    Flow rules (with old cookie id) are cleaned.
    This patch is doing exactly that.

    Change-Id: I7c7c8a4c371d6f4afdaab51ed50950e2b20db30f
    Related-Bug: #1864822

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Related fix proposed to neutron (stable/ussuri)

Related fix proposed to branch: stable/ussuri
Review: https://review.opendev.org/730820

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Related fix proposed to neutron (stable/train)

Related fix proposed to branch: stable/train
Review: https://review.opendev.org/730822

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Related fix proposed to neutron (stable/stein)

Related fix proposed to branch: stable/stein
Review: https://review.opendev.org/730823

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix merged to neutron (master)

Reviewed: https://review.opendev.org/721554
Committed: https://git.openstack.org/cgit/openstack/neutron/commit/?id=91f0bf3c8511bf3b0cc63746f767d8d4dce601bd
Submitter: Zuul
Branch: master

commit 91f0bf3c8511bf3b0cc63746f767d8d4dce601bd
Author: Slawek Kaplonski <email address hidden>
Date: Tue Apr 21 10:30:52 2020 +0200

    [DVR] Reconfigure re-created physical bridges for dvr routers

    In case when physical bridge is removed and created again it
    is initialized by neutron-ovs-agent.
    But if agent has enabled distributed routing, dvr related
    flows wasn't configured again and that lead to connectivity issues
    in case of DVR routers.

    This patch fixes it by adding configuration of dvr related flows
    if distributed routing is enabled in agent's configuration.

    It also adds reset list of phys_brs in dvr_agent. Without that there
    were different objects used in ovs agent and dvr_agent classes thus
    e.g. 2 various cookie ids were set on flows in physical bridge.
    This was also the same issue in case when openvswitch was restarted and
    all bridges were reconfigured.
    Now in such case there is correctly new cookie_id configured for all
    flows.

    Change-Id: I710f00f0f542bcf7fa2fc60800797b90f9f77e14
    Closes-Bug: #1864822

Changed in neutron:
status: In Progress → Fix Released
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix proposed to neutron (stable/ussuri)

Fix proposed to branch: stable/ussuri
Review: https://review.opendev.org/731294

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix proposed to neutron (stable/train)

Fix proposed to branch: stable/train
Review: https://review.opendev.org/731295

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix proposed to neutron (stable/stein)

Fix proposed to branch: stable/stein
Review: https://review.opendev.org/731296

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix proposed to neutron (stable/rocky)

Fix proposed to branch: stable/rocky
Review: https://review.opendev.org/731321

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix proposed to neutron (stable/queens)

Fix proposed to branch: stable/queens
Review: https://review.opendev.org/731329

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Related fix merged to neutron (stable/ussuri)

Reviewed: https://review.opendev.org/730820
Committed: https://git.openstack.org/cgit/openstack/neutron/commit/?id=ae4b3edce23aa4d1597fc759a2599b240a01754c
Submitter: Zuul
Branch: stable/ussuri

commit ae4b3edce23aa4d1597fc759a2599b240a01754c
Author: Slawek Kaplonski <email address hidden>
Date: Thu Apr 9 14:37:38 2020 +0200

    Ensure that stale flows are cleaned from phys_bridges

    In case when neutron-ovs-agent will notice that any of physical
    bridges was "re-created", we should also ensure that stale Open
    Flow rules (with old cookie id) are cleaned.
    This patch is doing exactly that.

    Change-Id: I7c7c8a4c371d6f4afdaab51ed50950e2b20db30f
    Related-Bug: #1864822
    (cherry picked from commit 63c45b37666efc7fb33fc154fa828050b3dc7945)

tags: added: in-stable-ussuri
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Related fix proposed to neutron (stable/rocky)

Related fix proposed to branch: stable/rocky
Review: https://review.opendev.org/733080

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Related fix proposed to neutron (stable/queens)

Related fix proposed to branch: stable/queens
Review: https://review.opendev.org/733082

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Related fix merged to neutron (master)

Reviewed: https://review.opendev.org/726770
Committed: https://git.openstack.org/cgit/openstack/neutron/commit/?id=45482e300aab781064281b3e06a17ae488831151
Submitter: Zuul
Branch: master

commit 45482e300aab781064281b3e06a17ae488831151
Author: Slawek Kaplonski <email address hidden>
Date: Mon May 11 12:09:46 2020 +0200

    Don't check if any bridges were recrected when OVS was restarted

    In case when openvswitch was restarted, full sync of all bridges will
    be always triggered by neutron-ovs-agent so there is no need to check
    in same rpc_loop iteration if bridges were recreated.

    Change-Id: I3cc1f1b7dc480d54a7cee369e4638f9fd597c759
    Related-bug: #1864822

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Related fix proposed to neutron (stable/ussuri)

Related fix proposed to branch: stable/ussuri
Review: https://review.opendev.org/734028

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix merged to neutron (stable/ussuri)

Reviewed: https://review.opendev.org/731294
Committed: https://git.openstack.org/cgit/openstack/neutron/commit/?id=b03f9d4c433dbe32d1d18e2b2e6b19cd1f1905b2
Submitter: Zuul
Branch: stable/ussuri

commit b03f9d4c433dbe32d1d18e2b2e6b19cd1f1905b2
Author: Slawek Kaplonski <email address hidden>
Date: Tue Apr 21 10:30:52 2020 +0200

    [DVR] Reconfigure re-created physical bridges for dvr routers

    When a physical bridge is removed and created again, it is
    re-initialized by neutron-ovs-agent.
    But if the agent has distributed routing enabled, the DVR-related
    flows were not configured again, which led to connectivity issues
    for DVR routers.

    This patch fixes that by also configuring the DVR-related flows
    when distributed routing is enabled in the agent's configuration.

    It also resets the list of phys_brs in dvr_agent. Without that,
    different objects were used in the ovs agent and dvr_agent
    classes, so e.g. two different cookie ids were set on flows in the
    physical bridge.
    The same issue occurred when openvswitch was restarted and all
    bridges were reconfigured.
    Now, in such cases, the new cookie_id is correctly configured for
    all flows.

    Change-Id: I710f00f0f542bcf7fa2fc60800797b90f9f77e14
    Closes-Bug: #1864822
    (cherry picked from commit 91f0bf3c8511bf3b0cc63746f767d8d4dce601bd)
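The shape of this fix can be sketched as below. The class and method names are hypothetical, standing in for the agent's bridge-setup path; the point is that DVR flow installation is re-run on bridge (re)setup and a single cookie id covers all of a bridge's flows:

```python
# Sketch of the fix (hypothetical class, not neutron's real code):
# when a physical bridge is set up again, DVR-specific flows are
# reinstalled too whenever distributed routing is enabled, and one
# cookie id is used for every flow on the bridge.

class PhysBridgeSetup:
    def __init__(self, enable_distributed_routing, cookie):
        self.enable_distributed_routing = enable_distributed_routing
        self.cookie = cookie
        self.installed = []  # (bridge, flow-kind, cookie) records

    def setup_bridge(self, bridge):
        # Base flows are always (re)installed with the current cookie.
        self.installed.append((bridge, "base", self.cookie))
        if self.enable_distributed_routing:
            # Before the fix this step was skipped on re-creation,
            # breaking connectivity for DVR routers.
            self.installed.append((bridge, "dvr", self.cookie))

agent = PhysBridgeSetup(enable_distributed_routing=True, cookie=0x91f0)
agent.setup_bridge("br-ex")
print(agent.installed)
```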

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Related fix merged to neutron (stable/ussuri)

Reviewed: https://review.opendev.org/734028
Committed: https://git.openstack.org/cgit/openstack/neutron/commit/?id=b8e7886d8b52a144241c0cc2c30b670ce5ddf926
Submitter: Zuul
Branch: stable/ussuri

commit b8e7886d8b52a144241c0cc2c30b670ce5ddf926
Author: Slawek Kaplonski <email address hidden>
Date: Mon May 11 12:09:46 2020 +0200

    Don't check if any bridges were recreated when OVS was restarted

    When openvswitch is restarted, a full sync of all bridges is
    always triggered by neutron-ovs-agent, so there is no need to also
    check in the same rpc_loop iteration whether bridges were
    recreated.

    Change-Id: I3cc1f1b7dc480d54a7cee369e4638f9fd597c759
    Related-bug: #1864822
    (cherry picked from commit 45482e300aab781064281b3e06a17ae488831151)

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Related fix merged to neutron (stable/train)

Reviewed: https://review.opendev.org/730822
Committed: https://git.openstack.org/cgit/openstack/neutron/commit/?id=0ce032275a1c6fa6b62efa9cb258212199aef1fc
Submitter: Zuul
Branch: stable/train

commit 0ce032275a1c6fa6b62efa9cb258212199aef1fc
Author: Slawek Kaplonski <email address hidden>
Date: Thu Apr 9 14:37:38 2020 +0200

    Ensure that stale flows are cleaned from phys_bridges

    When neutron-ovs-agent notices that any of the physical bridges
    was "re-created", we should also ensure that stale OpenFlow rules
    (with the old cookie id) are cleaned.
    This patch does exactly that.

    Change-Id: I7c7c8a4c371d6f4afdaab51ed50950e2b20db30f
    Related-Bug: #1864822
    (cherry picked from commit 63c45b37666efc7fb33fc154fa828050b3dc7945)

tags: added: in-stable-train
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix merged to neutron (stable/train)

Reviewed: https://review.opendev.org/731295
Committed: https://git.openstack.org/cgit/openstack/neutron/commit/?id=2376c54000e5a6a2d7602634b93197c6a5d7c3d1
Submitter: Zuul
Branch: stable/train

commit 2376c54000e5a6a2d7602634b93197c6a5d7c3d1
Author: Slawek Kaplonski <email address hidden>
Date: Tue Apr 21 10:30:52 2020 +0200

    [DVR] Reconfigure re-created physical bridges for dvr routers

    When a physical bridge is removed and created again, it is
    re-initialized by neutron-ovs-agent.
    But if the agent has distributed routing enabled, the DVR-related
    flows were not configured again, which led to connectivity issues
    for DVR routers.

    This patch fixes that by also configuring the DVR-related flows
    when distributed routing is enabled in the agent's configuration.

    It also resets the list of phys_brs in dvr_agent. Without that,
    different objects were used in the ovs agent and dvr_agent
    classes, so e.g. two different cookie ids were set on flows in the
    physical bridge.
    The same issue occurred when openvswitch was restarted and all
    bridges were reconfigured.
    Now, in such cases, the new cookie_id is correctly configured for
    all flows.

    Change-Id: I710f00f0f542bcf7fa2fc60800797b90f9f77e14
    Closes-Bug: #1864822
    (cherry picked from commit 91f0bf3c8511bf3b0cc63746f767d8d4dce601bd)

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix merged to neutron (stable/stein)

Reviewed: https://review.opendev.org/731296
Committed: https://git.openstack.org/cgit/openstack/neutron/commit/?id=89fd5ec537f699146b6e0cf5cc7ba1aa0ce081dd
Submitter: Zuul
Branch: stable/stein

commit 89fd5ec537f699146b6e0cf5cc7ba1aa0ce081dd
Author: Slawek Kaplonski <email address hidden>
Date: Tue Apr 21 10:30:52 2020 +0200

    [DVR] Reconfigure re-created physical bridges for dvr routers

    When a physical bridge is removed and created again, it is
    re-initialized by neutron-ovs-agent.
    But if the agent has distributed routing enabled, the DVR-related
    flows were not configured again, which led to connectivity issues
    for DVR routers.

    This patch fixes that by also configuring the DVR-related flows
    when distributed routing is enabled in the agent's configuration.

    It also resets the list of phys_brs in dvr_agent. Without that,
    different objects were used in the ovs agent and dvr_agent
    classes, so e.g. two different cookie ids were set on flows in the
    physical bridge.
    The same issue occurred when openvswitch was restarted and all
    bridges were reconfigured.
    Now, in such cases, the new cookie_id is correctly configured for
    all flows.

    Change-Id: I710f00f0f542bcf7fa2fc60800797b90f9f77e14
    Closes-Bug: #1864822
    (cherry picked from commit 91f0bf3c8511bf3b0cc63746f767d8d4dce601bd)

tags: added: in-stable-stein
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Related fix merged to neutron (stable/rocky)

Reviewed: https://review.opendev.org/733080
Committed: https://git.openstack.org/cgit/openstack/neutron/commit/?id=9087c3a4ea309d3c4f0a11b297cc8573f2d6df06
Submitter: Zuul
Branch: stable/rocky

commit 9087c3a4ea309d3c4f0a11b297cc8573f2d6df06
Author: Slawek Kaplonski <email address hidden>
Date: Thu Apr 9 14:37:38 2020 +0200

    Ensure that stale flows are cleaned from phys_bridges

    When neutron-ovs-agent notices that any of the physical bridges
    was "re-created", we should also ensure that stale OpenFlow rules
    (with the old cookie id) are cleaned.
    This patch does exactly that.

    Conflicts:
        neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py

    Change-Id: I7c7c8a4c371d6f4afdaab51ed50950e2b20db30f
    Related-Bug: #1864822
    (cherry picked from commit 63c45b37666efc7fb33fc154fa828050b3dc7945)

tags: added: in-stable-rocky
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix merged to neutron (stable/rocky)

Reviewed: https://review.opendev.org/731321
Committed: https://git.openstack.org/cgit/openstack/neutron/commit/?id=d3d93b4077428c399349208df9819b8413184ef7
Submitter: Zuul
Branch: stable/rocky

commit d3d93b4077428c399349208df9819b8413184ef7
Author: Slawek Kaplonski <email address hidden>
Date: Tue Apr 21 10:30:52 2020 +0200

    [DVR] Reconfigure re-created physical bridges for dvr routers

    When a physical bridge is removed and created again, it is
    re-initialized by neutron-ovs-agent.
    But if the agent has distributed routing enabled, the DVR-related
    flows were not configured again, which led to connectivity issues
    for DVR routers.

    This patch fixes that by also configuring the DVR-related flows
    when distributed routing is enabled in the agent's configuration.

    It also resets the list of phys_brs in dvr_agent. Without that,
    different objects were used in the ovs agent and dvr_agent
    classes, so e.g. two different cookie ids were set on flows in the
    physical bridge.
    The same issue occurred when openvswitch was restarted and all
    bridges were reconfigured.
    Now, in such cases, the new cookie_id is correctly configured for
    all flows.

    Conflicts:
        neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py

    Change-Id: I710f00f0f542bcf7fa2fc60800797b90f9f77e14
    Closes-Bug: #1864822
    (cherry picked from commit 91f0bf3c8511bf3b0cc63746f767d8d4dce601bd)

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Related fix merged to neutron (stable/queens)

Reviewed: https://review.opendev.org/733082
Committed: https://git.openstack.org/cgit/openstack/neutron/commit/?id=2baf0aa83f551b5c88caad64da837630b9daea46
Submitter: Zuul
Branch: stable/queens

commit 2baf0aa83f551b5c88caad64da837630b9daea46
Author: Slawek Kaplonski <email address hidden>
Date: Thu Apr 9 14:37:38 2020 +0200

    Ensure that stale flows are cleaned from phys_bridges

    When neutron-ovs-agent notices that any of the physical bridges
    was "re-created", we should also ensure that stale OpenFlow rules
    (with the old cookie id) are cleaned.
    This patch does exactly that.

    Conflicts:
        neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py

    Change-Id: I7c7c8a4c371d6f4afdaab51ed50950e2b20db30f
    Related-Bug: #1864822
    (cherry picked from commit 63c45b37666efc7fb33fc154fa828050b3dc7945)

tags: added: in-stable-queens
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix merged to neutron (stable/queens)

Reviewed: https://review.opendev.org/731329
Committed: https://git.openstack.org/cgit/openstack/neutron/commit/?id=33217c9c43e64183b1051347de5c8a58d568808d
Submitter: Zuul
Branch: stable/queens

commit 33217c9c43e64183b1051347de5c8a58d568808d
Author: Slawek Kaplonski <email address hidden>
Date: Tue Apr 21 10:30:52 2020 +0200

    [DVR] Reconfigure re-created physical bridges for dvr routers

    When a physical bridge is removed and created again, it is
    re-initialized by neutron-ovs-agent.
    But if the agent has distributed routing enabled, the DVR-related
    flows were not configured again, which led to connectivity issues
    for DVR routers.

    This patch fixes that by also configuring the DVR-related flows
    when distributed routing is enabled in the agent's configuration.

    It also resets the list of phys_brs in dvr_agent. Without that,
    different objects were used in the ovs agent and dvr_agent
    classes, so e.g. two different cookie ids were set on flows in the
    physical bridge.
    The same issue occurred when openvswitch was restarted and all
    bridges were reconfigured.
    Now, in such cases, the new cookie_id is correctly configured for
    all flows.

    Conflicts:
        neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py

    Change-Id: I710f00f0f542bcf7fa2fc60800797b90f9f77e14
    Closes-Bug: #1864822
    (cherry picked from commit 91f0bf3c8511bf3b0cc63746f767d8d4dce601bd)

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Related fix merged to neutron (stable/stein)

Reviewed: https://review.opendev.org/730823
Committed: https://git.openstack.org/cgit/openstack/neutron/commit/?id=6cf718c5056c1d9ea5dbc40a1b41b3f3a16d00a5
Submitter: Zuul
Branch: stable/stein

commit 6cf718c5056c1d9ea5dbc40a1b41b3f3a16d00a5
Author: Slawek Kaplonski <email address hidden>
Date: Thu Apr 9 14:37:38 2020 +0200

    Ensure that stale flows are cleaned from phys_bridges

    When neutron-ovs-agent notices that any of the physical bridges
    was "re-created", we should also ensure that stale OpenFlow rules
    (with the old cookie id) are cleaned.
    This patch does exactly that.

    Change-Id: I7c7c8a4c371d6f4afdaab51ed50950e2b20db30f
    Related-Bug: #1864822
    (cherry picked from commit 63c45b37666efc7fb33fc154fa828050b3dc7945)

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Related fix proposed to neutron (stable/train)

Related fix proposed to branch: stable/train
Review: https://review.opendev.org/744436

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Related fix proposed to neutron (stable/stein)

Related fix proposed to branch: stable/stein
Review: https://review.opendev.org/744438

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Related fix proposed to neutron (stable/rocky)

Related fix proposed to branch: stable/rocky
Review: https://review.opendev.org/744439

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Related fix proposed to neutron (stable/queens)

Related fix proposed to branch: stable/queens
Review: https://review.opendev.org/744440

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Related fix merged to neutron (stable/queens)

Reviewed: https://review.opendev.org/744440
Committed: https://git.openstack.org/cgit/openstack/neutron/commit/?id=2f4bb76338433f2d3afc10facadee95876b9a5b7
Submitter: Zuul
Branch: stable/queens

commit 2f4bb76338433f2d3afc10facadee95876b9a5b7
Author: Slawek Kaplonski <email address hidden>
Date: Mon May 11 12:09:46 2020 +0200

    Don't check if any bridges were recreated when OVS was restarted

    When openvswitch is restarted, a full sync of all bridges is
    always triggered by neutron-ovs-agent, so there is no need to also
    check in the same rpc_loop iteration whether bridges were
    recreated.

    Conflicts:
        neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py

    Change-Id: I3cc1f1b7dc480d54a7cee369e4638f9fd597c759
    Related-bug: #1864822
    (cherry picked from commit 45482e300aab781064281b3e06a17ae488831151)
    (cherry picked from commit b8e7886d8b52a144241c0cc2c30b670ce5ddf926)

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Related fix merged to neutron (stable/stein)

Reviewed: https://review.opendev.org/744438
Committed: https://git.openstack.org/cgit/openstack/neutron/commit/?id=a93e6c922c397de640586469d986a713da102330
Submitter: Zuul
Branch: stable/stein

commit a93e6c922c397de640586469d986a713da102330
Author: Slawek Kaplonski <email address hidden>
Date: Mon May 11 12:09:46 2020 +0200

    Don't check if any bridges were recreated when OVS was restarted

    When openvswitch is restarted, a full sync of all bridges is
    always triggered by neutron-ovs-agent, so there is no need to also
    check in the same rpc_loop iteration whether bridges were
    recreated.

    Change-Id: I3cc1f1b7dc480d54a7cee369e4638f9fd597c759
    Related-bug: #1864822
    (cherry picked from commit 45482e300aab781064281b3e06a17ae488831151)
    (cherry picked from commit b8e7886d8b52a144241c0cc2c30b670ce5ddf926)

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Related fix merged to neutron (stable/train)

Reviewed: https://review.opendev.org/744436
Committed: https://git.openstack.org/cgit/openstack/neutron/commit/?id=25078dd730c74bf2e344b955b39fa797b13a5d0a
Submitter: Zuul
Branch: stable/train

commit 25078dd730c74bf2e344b955b39fa797b13a5d0a
Author: Slawek Kaplonski <email address hidden>
Date: Mon May 11 12:09:46 2020 +0200

    Don't check if any bridges were recreated when OVS was restarted

    When openvswitch is restarted, a full sync of all bridges is
    always triggered by neutron-ovs-agent, so there is no need to also
    check in the same rpc_loop iteration whether bridges were
    recreated.

    Change-Id: I3cc1f1b7dc480d54a7cee369e4638f9fd597c759
    Related-bug: #1864822
    (cherry picked from commit 45482e300aab781064281b3e06a17ae488831151)
    (cherry picked from commit b8e7886d8b52a144241c0cc2c30b670ce5ddf926)

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Related fix merged to neutron (stable/rocky)

Reviewed: https://review.opendev.org/744439
Committed: https://git.openstack.org/cgit/openstack/neutron/commit/?id=8bba2a8f6e6500c1b996ada7cc77e6d48379b995
Submitter: Zuul
Branch: stable/rocky

commit 8bba2a8f6e6500c1b996ada7cc77e6d48379b995
Author: Slawek Kaplonski <email address hidden>
Date: Mon May 11 12:09:46 2020 +0200

    Don't check if any bridges were recreated when OVS was restarted

    When openvswitch is restarted, a full sync of all bridges is
    always triggered by neutron-ovs-agent, so there is no need to also
    check in the same rpc_loop iteration whether bridges were
    recreated.

    Conflicts:
        neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py

    Change-Id: I3cc1f1b7dc480d54a7cee369e4638f9fd597c759
    Related-bug: #1864822
    (cherry picked from commit 45482e300aab781064281b3e06a17ae488831151)
    (cherry picked from commit b8e7886d8b52a144241c0cc2c30b670ce5ddf926)

Revision history for this message
Brian Murray (brian-murray) wrote : Missing SRU information

Thanks for uploading the fix for this bug report to -proposed. However, when reviewing the package in -proposed and the details of this bug report I noticed that the bug description is missing information required for the SRU process. You can find full details at http://wiki.ubuntu.com/StableReleaseUpdates#Procedure but essentially this bug is missing some of the following: a statement of impact, a test case and details regarding the regression potential. Thanks in advance!

Changed in neutron (Ubuntu):
status: New → Incomplete
Dan Streetman (ddstreet)
description: updated
Revision history for this message
Dan Streetman (ddstreet) wrote :

> bug description is missing information required for the SRU process

Sorry, I updated the description to refer this bug to the SRU template in bug 1869808, as that SRU info applies to all the other bugs in the upload.

Revision history for this message
Łukasz Zemczak (sil2100) wrote : Please test proposed package

Hello Acoss69, or anyone else affected,

Accepted neutron into bionic-proposed. The package will build now and be available at https://launchpad.net/ubuntu/+source/neutron/2:12.1.1-0ubuntu4 in a few hours, and then in the -proposed repository.

Please help us by testing this new package. See https://wiki.ubuntu.com/Testing/EnableProposed for documentation on how to enable and use -proposed. Your feedback will aid us getting this update out to other Ubuntu users.

If this package fixes the bug for you, please add a comment to this bug, mentioning the version of the package you tested, what testing has been performed on the package and change the tag from verification-needed-bionic to verification-done-bionic. If it does not fix the bug for you, please add a comment stating that, and change the tag to verification-failed-bionic. In either case, without details of your testing we will not be able to proceed.

Further information regarding the verification process can be found at https://wiki.ubuntu.com/QATeam/PerformingSRUVerification . Thank you in advance for helping!

N.B. The updated package will be released to -updates after the bug(s) fixed by this package have been verified and the package has been in -proposed for a minimum of 7 days.

Changed in neutron (Ubuntu Bionic):
status: New → Fix Committed
tags: added: verification-needed verification-needed-bionic
Revision history for this message
Edward Hope-Morley (hopem) wrote :

All SRU verification was completed and recorded in https://bugs.launchpad.net/neutron/+bug/1869808, so please refer to that LP bug for the results.

tags: added: verification-done verification-done-bionic
removed: verification-needed verification-needed-bionic
Revision history for this message
Łukasz Zemczak (sil2100) wrote : Update Released

The verification of the Stable Release Update for neutron has completed successfully and the package is now being released to -updates. Subsequently, the Ubuntu Stable Release Updates Team is being unsubscribed and will not receive messages about this bug report. In the event that you encounter a regression using the package from -updates please report a new bug using ubuntu-bug and tag the bug report regression-update so we can easily find any regressions.

Revision history for this message
Launchpad Janitor (janitor) wrote :

This bug was fixed in the package neutron - 2:12.1.1-0ubuntu4

---------------
neutron (2:12.1.1-0ubuntu4) bionic; urgency=medium

  * Fix interrupt of VLAN traffic on reboot of neutron-ovs-agent:
  - d/p/0001-ovs-agent-signal-to-plugin-if-tunnel-refresh-needed.patch (LP: #1853613)
  - d/p/0002-Do-not-block-connection-between-br-int-and-br-phys-o.patch (LP: #1869808)
  - d/p/0003-Ensure-that-stale-flows-are-cleaned-from-phys_bridge.patch (LP: #1864822)
  - d/p/0004-DVR-Reconfigure-re-created-physical-bridges-for-dvr-.patch (LP: #1864822)
  - d/p/0005-Ensure-drop-flows-on-br-int-at-agent-startup-for-DVR.patch (LP: #1887148)
  - d/p/0006-Don-t-check-if-any-bridges-were-recrected-when-OVS-w.patch (LP: #1864822)
  - d/p/0007-Not-remove-the-running-router-when-MQ-is-unreachable.patch (LP: #1871850)

 -- Edward Hope-Morley <email address hidden> Mon, 22 Feb 2021 16:55:40 +0000

Changed in neutron (Ubuntu Bionic):
status: Fix Committed → Fix Released