We are affected by the same problem: flows on physical bridges are deleted upon restart of the neutron-openvswitch-agent on a _compute_ host.

OS: Ubuntu Xenial 16.04
Kernel: 4.4.0-83-generic
OpenStack distribution: Fuel Community Edition 10
neutron-openvswitch-agent: 2:9.2.0-1~u16.04+mos15
openvswitch: 2.6.1-0~u1604+mos1

Flow dump with a working setup and one VM:

root@dl580-r4-1:~# ovs-ofctl dump-flows br-biodb
NXST_FLOW reply (xid=0x4):
 cookie=0xa870e454201864c5, duration=31.624s, table=0, n_packets=34, n_bytes=2992, idle_age=1, priority=4,in_port=1,dl_vlan=3 actions=strip_vlan,NORMAL
 cookie=0xa870e454201864c5, duration=62679.598s, table=0, n_packets=491, n_bytes=69574, idle_age=1, priority=2,in_port=1 actions=drop
 cookie=0xa870e454201864c5, duration=62679.717s, table=0, n_packets=634, n_bytes=51232, idle_age=1, priority=0 actions=NORMAL

Bridge setup:

    Bridge br-biodb
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        Port br-biodb
            Interface br-biodb
                type: internal
        Port phy-br-biodb
            Interface phy-br-biodb
                type: patch
                options: {peer=int-br-biodb}
        Port "bond0.603"
            Interface "bond0.603"

bond0.603 is a VLAN-tagged LACP bond of the ethernet interfaces. The network associated with the bridge uses the flat network type. A VM running on that host is able to ping the external router and baremetal machines outside of the cloud setup.

After a restart of the host the flows are gone (the error is not always reproducible by an agent restart alone). Side note: neutron-openvswitch-agent does not start properly on boot, stating:

2017-07-13 08:15:05.699 2659 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent [-] Tunneling can't be enabled with invalid local_ip '192.168.11.83'. IP couldn't be found on this host's interfaces.

The IP address is assigned to the br-mesh OVS bridge and is present, so this is probably a startup race; starting the agent manually afterwards works (a possible workaround is sketched after the snoop output below).

Current flows before the agent starts:

root@dl580-r4-1:~# ovs-ofctl dump-flows br-biodb
NXST_FLOW reply (xid=0x4):
root@dl580-r4-1:~#

Flows after agent start:

root@dl580-r4-1:~# ovs-ofctl dump-flows br-biodb
NXST_FLOW reply (xid=0x4):
root@dl580-r4-1:~#

ovs-ofctl snoop output:

OFPT_FEATURES_REQUEST (OF1.3) (xid=0x2a8f299c):
OFPT_FEATURES_REPLY (OF1.3) (xid=0x2a8f299c): dpid:00005cb901e425b0
n_tables:254, n_buffers:256
capabilities: FLOW_STATS TABLE_STATS PORT_STATS GROUP_STATS QUEUE_STATS
OFPST_PORT_DESC request (OF1.3) (xid=0x2a8f299d): port=ANY
OFPST_PORT_DESC reply (OF1.3) (xid=0x2a8f299d):
 1(phy-br-biodb): addr:c2:cc:e6:c1:7c:bf
     config:     0
     state:      0
     speed: 0 Mbps now, 0 Mbps max
 2(bond0.603): addr:5c:b9:01:e4:25:b0
     config:     0
     state:      0
     current:    10GB-FD
     speed: 10000 Mbps now, 0 Mbps max
 LOCAL(br-biodb): addr:5c:b9:01:e4:25:b0
     config:     PORT_DOWN
     state:      LINK_DOWN
     speed: 0 Mbps now, 0 Mbps max
OFPT_ECHO_REQUEST (OF1.3) (xid=0x0): 0 bytes of payload
OFPT_ECHO_REPLY (OF1.3) (xid=0x0): 0 bytes of payload
....
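Side note on the local_ip startup race: one stopgap we are considering is to delay the agent until the address on br-mesh is actually configured. This is only a sketch, assuming the agent runs under systemd as neutron-openvswitch-agent.service and that 192.168.11.83 lives on br-mesh as described above; the drop-in path is arbitrary and the loop relies on systemd's normal start timeout:

# /etc/systemd/system/neutron-openvswitch-agent.service.d/wait-local-ip.conf
# (hypothetical drop-in; adjust address/device to match the local_ip setting)
[Service]
ExecStartPre=/bin/sh -c 'until ip -4 addr show dev br-mesh | grep -q "192.168.11.83"; do sleep 1; done'

followed by "systemctl daemon-reload". This obviously only papers over the race and does not touch the flow-deletion problem itself.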
Second restart of the agent:

OFPT_ECHO_REQUEST (OF1.3) (xid=0x0): 0 bytes of payload
OFPT_ECHO_REPLY (OF1.3) (xid=0x0): 0 bytes of payload
OFPT_FEATURES_REQUEST (OF1.3) (xid=0xd734cd3f):
OFPT_FEATURES_REPLY (OF1.3) (xid=0xd734cd3f): dpid:00005cb901e425b0
n_tables:254, n_buffers:256
capabilities: FLOW_STATS TABLE_STATS PORT_STATS GROUP_STATS QUEUE_STATS
OFPST_PORT_DESC request (OF1.3) (xid=0xd734cd40): port=ANY
OFPST_PORT_DESC reply (OF1.3) (xid=0xd734cd40):
 1(phy-br-biodb): addr:c2:cc:e6:c1:7c:bf
     config:     0
     state:      0
     speed: 0 Mbps now, 0 Mbps max
 2(bond0.603): addr:5c:b9:01:e4:25:b0
     config:     0
     state:      0
     current:    10GB-FD
     speed: 10000 Mbps now, 0 Mbps max
 LOCAL(br-biodb): addr:5c:b9:01:e4:25:b0
     config:     PORT_DOWN
     state:      LINK_DOWN
     speed: 0 Mbps now, 0 Mbps max
OFPT_FLOW_MOD (OF1.3) (xid=0xd734cd41): ADD priority=0 cookie:0x935d84534b04db5e out_port:0 actions=NORMAL
OFPT_BARRIER_REQUEST (OF1.3) (xid=0xd734cd42):
OFPT_BARRIER_REPLY (OF1.3) (xid=0xd734cd42):
OFPT_FLOW_MOD (OF1.3) (xid=0xd734cd43): ADD priority=2,in_port=1 cookie:0x935d84534b04db5e out_port:0 actions=drop
OFPT_BARRIER_REQUEST (OF1.3) (xid=0xd734cd44):
OFPT_BARRIER_REPLY (OF1.3) (xid=0xd734cd44):
OFPT_FLOW_MOD (OF1.3) (xid=0xd734cd45): ADD priority=0 cookie:0xbeddce2ae6e0beeb out_port:0 actions=NORMAL
OFPT_BARRIER_REQUEST (OF1.3) (xid=0xd734cd46):
OFPT_BARRIER_REPLY (OF1.3) (xid=0xd734cd46):
OFPT_FLOW_MOD (OF1.3) (xid=0xd734cd47): ADD priority=2,in_port=1 cookie:0xbeddce2ae6e0beeb out_port:0 actions=drop
OFPT_BARRIER_REQUEST (OF1.3) (xid=0xd734cd48):
OFPT_BARRIER_REPLY (OF1.3) (xid=0xd734cd48):
OFPT_ECHO_REQUEST (OF1.3) (xid=0x0): 0 bytes of payload
OFPT_ECHO_REPLY (OF1.3) (xid=0x0): 0 bytes of payload
....

Flows after the second restart:

root@dl580-r4-1:~# ovs-ofctl dump-flows br-biodb
NXST_FLOW reply (xid=0x4):
 cookie=0xbeddce2ae6e0beeb, duration=33.591s, table=0, n_packets=0, n_bytes=0, idle_age=33, priority=2,in_port=1 actions=drop
 cookie=0xbeddce2ae6e0beeb, duration=33.600s, table=0, n_packets=1, n_bytes=66, idle_age=0, priority=0 actions=NORMAL

After migrating a VM with a port in the affected network back to the host:

root@dl580-r4-1:~# ovs-ofctl dump-flows br-biodb
NXST_FLOW reply (xid=0x4):
root@dl580-r4-1:~#

After two agent restarts the flows were restored for a short time and then removed again. Snoop output of the first of these restarts (no flows were created, only echo traffic):

OFPT_FEATURES_REQUEST (OF1.3) (xid=0x12c474):
OFPT_FEATURES_REPLY (OF1.3) (xid=0x12c474): dpid:00005cb901e425b0
n_tables:254, n_buffers:256
capabilities: FLOW_STATS TABLE_STATS PORT_STATS GROUP_STATS QUEUE_STATS
OFPST_PORT_DESC request (OF1.3) (xid=0x12c475): port=ANY
OFPST_PORT_DESC reply (OF1.3) (xid=0x12c475):
 1(phy-br-biodb): addr:c2:cc:e6:c1:7c:bf
     config:     0
     state:      0
     speed: 0 Mbps now, 0 Mbps max
 2(bond0.603): addr:5c:b9:01:e4:25:b0
     config:     0
     state:      0
     current:    10GB-FD
     speed: 10000 Mbps now, 0 Mbps max
 LOCAL(br-biodb): addr:5c:b9:01:e4:25:b0
     config:     PORT_DOWN
     state:      LINK_DOWN
     speed: 0 Mbps now, 0 Mbps max
OFPT_ECHO_REQUEST (OF1.3) (xid=0x0): 0 bytes of payload
OFPT_ECHO_REPLY (OF1.3) (xid=0x0): 0 bytes of payload
OFPT_ECHO_REQUEST (OF1.3) (xid=0x0): 0 bytes of payload
OFPT_ECHO_REPLY (OF1.3) (xid=0x0): 0 bytes of payload
OFPT_ECHO_REQUEST (OF1.3) (xid=0x0): 0 bytes of payload
OFPT_ECHO_REPLY (OF1.3) (xid=0x0): 0 bytes of payload
OFPT_ECHO_REQUEST (OF1.3) (xid=0x0): 0 bytes of payload
OFPT_ECHO_REPLY (OF1.3) (xid=0x0): 0 bytes of payload
....

Second restart, since no flows were created in the first:
OFPT_FEATURES_REQUEST (OF1.3) (xid=0xfa70ef0b):
OFPT_FEATURES_REPLY (OF1.3) (xid=0xfa70ef0b): dpid:00005cb901e425b0
n_tables:254, n_buffers:256
capabilities: FLOW_STATS TABLE_STATS PORT_STATS GROUP_STATS QUEUE_STATS
OFPST_PORT_DESC request (OF1.3) (xid=0xfa70ef0c): port=ANY
OFPST_PORT_DESC reply (OF1.3) (xid=0xfa70ef0c):
 1(phy-br-biodb): addr:c2:cc:e6:c1:7c:bf
     config:     0
     state:      0
     speed: 0 Mbps now, 0 Mbps max
 2(bond0.603): addr:5c:b9:01:e4:25:b0
     config:     0
     state:      0
     current:    10GB-FD
     speed: 10000 Mbps now, 0 Mbps max
 LOCAL(br-biodb): addr:5c:b9:01:e4:25:b0
     config:     PORT_DOWN
     state:      LINK_DOWN
     speed: 0 Mbps now, 0 Mbps max
OFPT_FLOW_MOD (OF1.3) (xid=0xfa70ef0d): ADD priority=0 cookie:0xb57edc818c705014 out_port:0 actions=NORMAL
OFPT_BARRIER_REQUEST (OF1.3) (xid=0xfa70ef0e):
OFPT_BARRIER_REPLY (OF1.3) (xid=0xfa70ef0e):
OFPT_FLOW_MOD (OF1.3) (xid=0xfa70ef0f): ADD priority=2,in_port=1 cookie:0xb57edc818c705014 out_port:0 actions=drop
OFPT_BARRIER_REQUEST (OF1.3) (xid=0xfa70ef10):
OFPT_BARRIER_REPLY (OF1.3) (xid=0xfa70ef10):
OFPT_FLOW_MOD (OF1.3) (xid=0xfa70ef11): ADD priority=0 cookie:0x88306669d5f2e5ea out_port:0 actions=NORMAL
OFPT_BARRIER_REQUEST (OF1.3) (xid=0xfa70ef12):
OFPT_BARRIER_REPLY (OF1.3) (xid=0xfa70ef12):
OFPT_FLOW_MOD (OF1.3) (xid=0xfa70ef13): ADD priority=2,in_port=1 cookie:0x88306669d5f2e5ea out_port:0 actions=drop
OFPT_BARRIER_REQUEST (OF1.3) (xid=0xfa70ef14):
OFPT_BARRIER_REPLY (OF1.3) (xid=0xfa70ef14):
OFPT_FLOW_MOD (OF1.3) (xid=0xfa70ef15): ADD priority=4,in_port=1,dl_vlan=1 cookie:0x88306669d5f2e5ea out_port:0 actions=pop_vlan,NORMAL
OFPT_BARRIER_REQUEST (OF1.3) (xid=0xfa70ef16):
OFPT_BARRIER_REPLY (OF1.3) (xid=0xfa70ef16):
OFPST_FLOW request (OF1.3) (xid=0xfa70ef17):
OFPST_FLOW reply (OF1.3) (xid=0xfa70ef17):
 cookie=0x88306669d5f2e5ea, duration=1.133s, table=0, n_packets=2, n_bytes=140, priority=4,in_port=1,dl_vlan=1 actions=pop_vlan,NORMAL
 cookie=0x88306669d5f2e5ea, duration=3.423s, table=0, n_packets=3, n_bytes=370, priority=2,in_port=1 actions=drop
 cookie=0x88306669d5f2e5ea, duration=3.431s, table=0, n_packets=2, n_bytes=102, priority=0 actions=NORMAL
OFPT_BARRIER_REQUEST (OF1.3) (xid=0xfa70ef18):
OFPT_BARRIER_REPLY (OF1.3) (xid=0xfa70ef18):
OFPT_FLOW_MOD (OF1.3) (xid=0xfa70ef19): DEL table:255 priority=0 cookie:0x88306669d5f2e5ea/0xffffffffffffffff actions=drop
OFPT_BARRIER_REQUEST (OF1.3) (xid=0xfa70ef1a):
OFPT_BARRIER_REPLY (OF1.3) (xid=0xfa70ef1a):
OFPST_FLOW request (OF1.3) (xid=0xfa70ef1b):
OFPST_FLOW reply (OF1.3) (xid=0xfa70ef1b):
OFPT_BARRIER_REQUEST (OF1.3) (xid=0xfa70ef1c):
OFPT_BARRIER_REPLY (OF1.3) (xid=0xfa70ef1c):
OFPT_ECHO_REQUEST (OF1.3) (xid=0x0): 0 bytes of payload
OFPT_ECHO_REPLY (OF1.3) (xid=0x0): 0 bytes of payload
...

The "DEL table:255 priority=0 cookie:0x88306669d5f2e5ea/0xffffffffffffffff actions=drop" probably removes all flows from the bridge; the OFPST_FLOW reply immediately after it is empty (a small reproduction sketch of the cookie/mask delete semantics is attached below). Result:

root@dl580-r4-1:~# ovs-ofctl dump-flows br-biodb
NXST_FLOW reply (xid=0x4):
root@dl580-r4-1:~#

There are also two different cookies (0xb57edc818c705014 and 0x88306669d5f2e5ea) used in the OFPT_FLOW_MOD ADD statements.

To restore networking for the VM, I have to remove and re-add the bridge while the agent is running, restart the agent (which sets the correct flows), and finally re-add the bond port (the commands are sketched at the end of this comment). Afterwards the VM is able to ping the other servers again.

The problem is reproducible, so patching for additional debug output is possible and more logs are available.
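Reproduction sketch for the cookie/mask delete mentioned above: issued by hand on a scratch bridge (the bridge name test-br is made up), a delete with an exact cookie mask is expected to remove only the flows carrying that cookie, which is what we also see here, since all remaining flows carried 0x88306669d5f2e5ea at that point:

root@dl580-r4-1:~# ovs-vsctl add-br test-br
root@dl580-r4-1:~# ovs-ofctl add-flow test-br "cookie=0x1,priority=10,actions=NORMAL"
root@dl580-r4-1:~# ovs-ofctl add-flow test-br "cookie=0x2,priority=20,actions=drop"
root@dl580-r4-1:~# ovs-ofctl del-flows test-br "cookie=0x2/-1"
root@dl580-r4-1:~# ovs-ofctl dump-flows test-br   # only the cookie=0x1 flow should remain
root@dl580-r4-1:~# ovs-vsctl del-br test-br

So the delete itself looks like normal stale-flow cleanup; the question is rather why the freshly added flows end up carrying the cookie that is subsequently deleted.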
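For reference, the recovery procedure described above written out as commands, roughly as I run them (bridge and port names as in this setup; the agent service name is assumed to be neutron-openvswitch-agent under systemd):

root@dl580-r4-1:~# ovs-vsctl del-br br-biodb
root@dl580-r4-1:~# ovs-vsctl add-br br-biodb
root@dl580-r4-1:~# systemctl restart neutron-openvswitch-agent   # agent re-creates the patch port and installs the correct flows
root@dl580-r4-1:~# ovs-vsctl add-port br-biodb bond0.603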