I did some deep analysis of one failed test in https://b4ee658ece9317c00235-25614151543ec2e702e81ba3282ddc61.ssl.cf5.rackcdn.com/695479/3/check/tempest-slow-py3/2060c35/testr_results.html.gz

VM ID: b99494b0-cfbe-4ced-bf0f-040759e6fdf5
Port ID: 7ae157c7-0f96-467c-8707-00d30f65a9a4

The VM was first spawned on the controller node and then unshelved on compute1. Here is what happened on the controller node while the VM's interface was being removed.

The ovsdb monitor noticed that the port was removed at:

Nov 25 16:51:57.914160 ubuntu-bionic-rax-dfw-0013031003 neutron-openvswitch-agent[11632]: DEBUG neutron.agent.common.async_process [-] Output received from [ovsdb-client monitor tcp:127.0.0.1:6640 Interface name,ofport,external_ids --format=json]: {"data":[["6493a059-28c4-4d1b-8d6f-80c93682f043","delete","tap7ae157c7-0f",-1,["map",[["attached-mac","fa:16:3e:92:8b:ff"],["iface-id","7ae157c7-0f96-467c-8707-00d30f65a9a4"],["iface-status","active"],["vm-id","b99494b0-cfbe-4ced-bf0f-040759e6fdf5"]]]]],"headings":["row","action","name","ofport","external_ids"]} {{(pid=11632) _read_stdout /opt/stack/neutron/neutron/agent/common/async_process.py:262}}

But at that point an rpc_loop iteration was already running in which this port was treated as updated:

Nov 25 16:51:57.886502 ubuntu-bionic-rax-dfw-0013031003 neutron-openvswitch-agent[11632]: DEBUG neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent [None req-dffe9755-afe2-4a99-8126-3816edb50b2b None None] Starting to process devices in:{'added': set(), 'removed': set(), 'current': {'7ae157c7-0f96-467c-8707-00d30f65a9a4', 'd25f6c96-c47d-4337-9bb8-4742749ae807', '52b47a4a-18fe-45d4-811d-d75b8190c08f', '3f98c244-3e31-479b-a31e-303d9f944f09', 'dd303d88-c2c0-4141-8e11-b395d78fd23d', 'ad668826-f453-4798-b672-5452bce056c3', 'de189df6-6ad5-44ad-a4d7-885b6ce37390', '1ca74f7a-67d6-4eee-be50-237ab4d32e53', 'c667b8dd-2d9f-4fd5-99cd-54b36d743d09'}, 'updated': {'7ae157c7-0f96-467c-8707-00d30f65a9a4'}} {{(pid=11632) rpc_loop /opt/stack/neutron/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:2438}}

Unfortunately, because the port had already been removed from the OVS bridge, its ofport was already -1, so it was not processed properly by the OVS firewall driver:

Nov 25 16:51:57.905049 ubuntu-bionic-rax-dfw-0013031003 neutron-openvswitch-agent[11632]: INFO neutron.agent.linux.openvswitch_firewall.firewall [None req-dffe9755-afe2-4a99-8126-3816edb50b2b None None] port 7ae157c7-0f96-467c-8707-00d30f65a9a4 does not exist in ovsdb: Port 7ae157c7-0f96-467c-8707-00d30f65a9a4 is not managed by this agent..

The port was therefore added to the skipped ports at:

Nov 25 16:51:57.910390 ubuntu-bionic-rax-dfw-0013031003 neutron-openvswitch-agent[11632]: INFO neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent [None req-dffe9755-afe2-4a99-8126-3816edb50b2b None None] Ports {'7ae157c7-0f96-467c-8707-00d30f65a9a4'} skipped, changing status to down

Because the port ended up in the skipped devices of this iteration, it was removed from port_info["current"]. As a consequence, in the next iteration it was not treated as deleted, even though the delete event from ovsdb-monitor had been received. That is why the firewall rules for this port were never cleaned from br-int, and traffic with dest_mac=port['mac_address'] never left the integration bridge due to the rule I posted in the comment above.
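To make the failure mode easier to follow, here is a minimal, self-contained Python sketch of the bookkeeping described above. The function names, the simplified port_info dict and the intersection step are illustrative assumptions on my side, not the actual ovs_neutron_agent code; they only model the effect that dropping a skipped port from "current" makes the later delete event a no-op.

# Illustrative sketch only - not the real agent implementation.
PORT = '7ae157c7-0f96-467c-8707-00d30f65a9a4'

def end_of_iteration(port_info, skipped_devices):
    # Skipped devices are marked DOWN and dropped from what the agent
    # still considers "current" (the effect described in the analysis).
    port_info['current'] -= set(skipped_devices)

def next_iteration(port_info, ovsdb_delete_events):
    # Ports to remove are derived only from ports the agent still tracks,
    # so a delete event for an already-forgotten port triggers no cleanup.
    port_info['removed'] = port_info['current'] & ovsdb_delete_events
    return port_info['removed']

port_info = {'current': {PORT}, 'removed': set()}
end_of_iteration(port_info, skipped_devices={PORT})
removed = next_iteration(port_info, ovsdb_delete_events={PORT})
print(removed)  # set() -> firewall rules for the port are never cleaned from br-int

Under these (simplified) assumptions the delete event arriving in the next iteration intersects with an empty "current" set, which matches the behaviour seen in the logs above.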