I dug into that issue again and I think I more or less know what is going on there (though I'm not 100% sure why it behaves that way). In the case when that test is failing, the problem is that after the instance is shelved, the port is reported as being in DOWN state:

Jan 14 14:31:19.984862 ubuntu-focal-rax-ord-0028007551 neutron-server[70556]: DEBUG neutron.plugins.ml2.rpc [None req-58bb688c-d96a-49ab-9dc8-a4845ca7d6fa None None] Device 63c13ed9-d3ad-409b-8412-134ae41c404b no longer exists at agent ovs-agent-ubuntu-focal-rax-ord-0028007551 {{(pid=70556) update_device_down /opt/stack/neutron/neutron/plugins/ml2/rpc.py:259}}

Jan 14 14:31:20.132185 ubuntu-focal-rax-ord-0028007551 neutron-server[70556]: DEBUG neutron.plugins.ml2.plugin [None req-58bb688c-d96a-49ab-9dc8-a4845ca7d6fa None None] Current status of the port 63c13ed9-d3ad-409b-8412-134ae41c404b is: ACTIVE; New status is: DOWN {{(pid=70556) _update_individual_port_db_status /opt/stack/neutron/neutron/plugins/ml2/plugin.py:2213}}

But just after that, and before nova actually unshelves the VM, there is a notification from the DHCP agent that the port's provisioning is complete:

Jan 14 14:31:24.275380 ubuntu-focal-rax-ord-0028007551 neutron-server[70556]: DEBUG neutron.db.provisioning_blocks [None req-d4f3e3de-4409-406b-adf2-34be949f3734 None None] Provisioning complete for port 63c13ed9-d3ad-409b-8412-134ae41c404b triggered by entity DHCP. {{(pid=70556) provisioning_complete /opt/stack/neutron/neutron/db/provisioning_blocks.py:139}}

Jan 14 14:31:24.275680 ubuntu-focal-rax-ord-0028007551 neutron-server[70556]: DEBUG neutron_lib.callbacks.manager [None req-d4f3e3de-4409-406b-adf2-34be949f3734 None None] Publish callbacks ['neutron.plugins.ml2.plugin.Ml2Plugin._port_provisioned-1912498'] for port (63c13ed9-d3ad-409b-8412-134ae41c404b), provisioning_complete {{(pid=70556) _notify_loop /usr/local/lib/python3.8/dist-packages/neutron_lib/callbacks/manager.py:176}}

Jan 14 14:31:24.804444 ubuntu-focal-rax-ord-0028007551 neutron-server[70556]: DEBUG neutron.plugins.ml2.plugin [None req-d4f3e3de-4409-406b-adf2-34be949f3734 None None] Current status of the port 63c13ed9-d3ad-409b-8412-134ae41c404b is: DOWN; New status is: ACTIVE {{(pid=70556) _update_individual_port_db_status /opt/stack/neutron/neutron/plugins/ml2/plugin.py:2213}}

Jan 14 14:31:24.804773 ubuntu-focal-rax-ord-0028007551 neutron-server[70556]: DEBUG neutron_lib.callbacks.manager [None req-d4f3e3de-4409-406b-adf2-34be949f3734 None None] Publish callbacks ['neutron.plugins.ml2.plugin.SecurityGroupDbMixin._ensure_default_security_group_handler-1790959'] for port (63c13ed9-d3ad-409b-8412-134ae41c404b), before_update {{(pid=70556) _notify_loop /usr/local/lib/python3.8/dist-packages/neutron_lib/callbacks/manager.py:176}}

Jan 14 14:31:24.877831 ubuntu-focal-rax-ord-0028007551 neutron-server[70556]: DEBUG neutron.notifiers.nova [-] Sending events: [{'server_uuid': 'a18584c2-4138-4c50-b426-f68b4c69b490', 'name': 'network-vif-plugged', 'status': 'completed', 'tag': '63c13ed9-d3ad-409b-8412-134ae41c404b'}] {{(pid=70556) send_events /opt/stack/neutron/neutron/notifiers/nova.py:262}}

And that triggers sending the network-vif-plugged notification to nova. But nova doesn't expect it yet, as it will create the port and start waiting for the event notification only a few seconds later.
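The gating that loses the later event can be sketched roughly like this. This is a simplified, illustrative stand-in, not the actual ml2 plugin code: nova is notified only when the port status actually changes to ACTIVE, so a subsequent ACTIVE -> ACTIVE update produces no event.

```python
# Illustrative sketch only -- function and data shapes are simplified
# stand-ins, not the real neutron/ml2 implementation.

def update_port_status(port, new_status, send_event):
    """Update a port's status; emit a network-vif-plugged event to nova
    only on a real transition into ACTIVE."""
    changed = port["status"] != new_status
    port["status"] = new_status
    if changed and new_status == "ACTIVE":
        send_event({"name": "network-vif-plugged", "tag": port["id"]})
    return changed
```

Under this gating, the DHCP-triggered DOWN -> ACTIVE update at 14:31:24 consumes the only transition, so the later L2-triggered ACTIVE -> ACTIVE update has nothing left to report.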
The port is then provisioned by the L2 agent, but because the status of the port was already ACTIVE, the notification is not sent:

Jan 14 14:31:27.994762 ubuntu-focal-rax-ord-0028007551 neutron-server[70556]: DEBUG neutron.db.provisioning_blocks [None req-58bb688c-d96a-49ab-9dc8-a4845ca7d6fa None None] Provisioning complete for port 63c13ed9-d3ad-409b-8412-134ae41c404b triggered by entity L2. {{(pid=70556) provisioning_complete /opt/stack/neutron/neutron/db/provisioning_blocks.py:139}}

Jan 14 14:31:27.995136 ubuntu-focal-rax-ord-0028007551 neutron-server[70556]: DEBUG neutron_lib.callbacks.manager [None req-58bb688c-d96a-49ab-9dc8-a4845ca7d6fa None None] Publish callbacks ['neutron.plugins.ml2.plugin.Ml2Plugin._port_provisioned-1912498'] for port (63c13ed9-d3ad-409b-8412-134ae41c404b), provisioning_complete {{(pid=70556) _notify_loop /usr/local/lib/python3.8/dist-packages/neutron_lib/callbacks/manager.py:176}}

Jan 14 14:31:28.082389 ubuntu-focal-rax-ord-0028007551 neutron-server[70556]: DEBUG neutron.plugins.ml2.plugin [None req-58bb688c-d96a-49ab-9dc8-a4845ca7d6fa None None] Current status of the port 63c13ed9-d3ad-409b-8412-134ae41c404b is: ACTIVE; New status is: ACTIVE {{(pid=70556) _update_individual_port_db_status /opt/stack/neutron/neutron/plugins/ml2/plugin.py:2213}}

------------

That is the root cause of the issue. Now, the question is why the DHCP agent sent the provisioning-complete message to the neutron-server at all, given that the subnet used in that test has DHCP disabled. In the cases when this test passes, no such notification is sent at all. It probably happened due to an error in the DHCP agent:

Jan 14 14:31:22.926902 ubuntu-focal-rax-ord-0028007551 neutron-dhcp-agent[71829]: ERROR neutron.agent.dhcp.agent oslo_messaging.rpc.client.RemoteError: Remote error: SubnetInUse Unable to complete operation on subnet 681cb5db-8776-4fe8-93e1-b7861598a00f: This subnet is being modified by another concurrent operation.
Jan 14 14:31:22.930490 ubuntu-focal-rax-ord-0028007551 neutron-dhcp-agent[71829]: ERROR neutron.agent.dhcp.agent ['Traceback (most recent call last):\n', ' File "/usr/local/lib/python3.8/dist-packages/oslo_messaging/rpc/server.py", line 165, in _process_incoming\n res = self.dispatcher.dispatch(message)\n', ' File "/usr/local/lib/python3.8/dist-packages/oslo_messaging/rpc/dispatcher.py", line 309, in dispatch\n return self._do_dispatch(endpoint, method, ctxt, args)\n', ' File "/usr/local/lib/python3.8/dist-packages/oslo_messaging/rpc/dispatcher.py", line 229, in _do_dispatch\n result = func(ctxt, **new_args)\n', ' File "/usr/local/lib/python3.8/dist-packages/oslo_messaging/rpc/server.py", line 241, in inner\n return func(*args, **kwargs)\n', ' File "/usr/local/lib/python3.8/dist-packages/neutron_lib/db/api.py", line 139, in wrapped\n setattr(e, \'_RETRY_EXCEEDED\', True)\n', ' File "/usr/local/lib/python3.8/dist-packages/oslo_utils/excutils.py", line 227, in __exit__\n self.force_reraise()\n', ' File "/usr/local/lib/python3.8/dist-packages/oslo_utils/excutils.py", line 200, in force_reraise\n raise self.value\n', ' File "/usr/local/lib/python3.8/dist-packages/neutron_lib/db/api.py", line 135, in wrapped\n return f(*args, **kwargs)\n', ' File "/usr/local/lib/python3.8/dist-packages/oslo_db/api.py", line 154, in wrapper\n ectxt.value = e.inner_exc\n', ' File "/usr/local/lib/python3.8/dist-packages/oslo_utils/excutils.py", line 227, in __exit__\n self.force_reraise()\n', ' File "/usr/local/lib/python3.8/dist-packages/oslo_utils/excutils.py", line 200, in force_reraise\n raise self.value\n', ' File "/usr/local/lib/python3.8/dist-packages/oslo_db/api.py", line 142, in wrapper\n return f(*args, **kwargs)\n', ' File "/usr/local/lib/python3.8/dist-packages/neutron_lib/db/api.py", line 183, in wrapped\n LOG.debug("Retry wrapper got retriable exception: %s", e)\n', ' File "/usr/local/lib/python3.8/dist-packages/oslo_utils/excutils.py", line 227, in __exit__\n 
self.force_reraise()\n', ' File "/usr/local/lib/python3.8/dist-packages/oslo_utils/excutils.py", line 200, in force_reraise\n raise self.value\n', ' File "/usr/local/lib/python3.8/dist-packages/neutron_lib/db/api.py", line 179, in wrapped\n return f(*dup_args, **dup_kwargs)\n', ' File "/opt/stack/neutron/neutron/quota/resource_registry.py", line 95, in wrapper\n ret_val = f(_self, context, *args, **kwargs)\n', ' File "/opt/stack/neutron/neutron/api/rpc/handlers/dhcp_rpc.py", line 306, in create_dhcp_port\n return self._port_action(plugin, context, port, \'create_port\')\n', ' File "/opt/stack/neutron/neutron/api/rpc/handlers/dhcp_rpc.py", line 116, in _port_action\n return p_utils.create_port(plugin, context, port)\n', ' File "/usr/local/lib/python3.8/dist-packages/neutron_lib/plugins/utils.py", line 337, in create_port\n return core_plugin.create_port(\n', ' File "/opt/stack/neutron/neutron/common/utils.py", line 701, in inner\n return f(*args, **kwargs)\n', ' File "/usr/local/lib/python3.8/dist-packages/neutron_lib/db/api.py", line 218, in wrapped\n return method(*args, **kwargs)\n', ' File "/usr/local/lib/python3.8/dist-packages/neutron_lib/db/api.py", line 139, in wrapped\n setattr(e, \'_RETRY_EXCEEDED\', True)\n', ' File "/usr/local/lib/python3.8/dist-packages/oslo_utils/excutils.py", line 227, in __exit__\n self.force_reraise()\n', ' File "/usr/local/lib/python3.8/dist-packages/oslo_utils/excutils.py", line 200, in force_reraise\n raise self.value\n', ' File "/usr/local/lib/python3.8/dist-packages/neutron_lib/db/api.py", line 135, in wrapped\n return f(*args, **kwargs)\n', ' File "/usr/local/lib/python3.8/dist-packages/oslo_db/api.py", line 154, in wrapper\n ectxt.value = e.inner_exc\n', ' File "/usr/local/lib/python3.8/dist-packages/oslo_utils/excutils.py", line 227, in __exit__\n self.force_reraise()\n', ' File "/usr/local/lib/python3.8/dist-packages/oslo_utils/excutils.py", line 200, in force_reraise\n raise self.value\n', ' File 
"/usr/local/lib/python3.8/dist-packages/oslo_db/api.py", line 142, in wrapper\n return f(*args, **kwargs)\n', ' File "/usr/local/lib/python3.8/dist-packages/neutron_lib/db/api.py", line 183, in wrapped\n LOG.debug("Retry wrapper got retriable exception: %s", e)\n', ' File "/usr/local/lib/python3.8/dist-packages/oslo_utils/excutils.py", line 227, in __exit__\n self.force_reraise()\n', ' File "/usr/local/lib/python3.8/dist-packages/oslo_utils/excutils.py", line 200, in force_reraise\n raise self.value\n', ' File "/usr/local/lib/python3.8/dist-packages/neutron_lib/db/api.py", line 179, in wrapped\n return f(*dup_args, **dup_kwargs)\n', ' File "/opt/stack/neutron/neutron/plugins/ml2/plugin.py", line 1485, in create_port\n result, mech_context = self._create_port_db(context, port)\n', ' File "/opt/stack/neutron/neutron/plugins/ml2/plugin.py", line 1450, in _create_port_db\n port_db = self.create_port_db(context, port)\n', ' File "/opt/stack/neutron/neutron/db/db_base_plugin_v2.py", line 1466, in create_port_db\n self.ipam.allocate_ips_for_port_and_store(\n', ' File "/opt/stack/neutron/neutron/db/ipam_pluggable_backend.py", line 219, in allocate_ips_for_port_and_store\n ips = self.allocate_ips_for_port(context, port_copy)\n', ' File "/usr/local/lib/python3.8/dist-packages/neutron_lib/db/api.py", line 218, in wrapped\n return method(*args, **kwargs)\n', ' File "/usr/local/lib/python3.8/dist-packages/oslo_db/sqlalchemy/enginefacade.py", line 1010, in wrapper\n return fn(*args, **kwargs)\n', ' File "/opt/stack/neutron/neutron/db/ipam_pluggable_backend.py", line 226, in allocate_ips_for_port\n return self._allocate_ips_for_port(context, port)\n', ' File "/opt/stack/neutron/neutron/db/ipam_pluggable_backend.py", line 258, in _allocate_ips_for_port\n subnets = self._ipam_get_subnets(\n', ' File "/opt/stack/neutron/neutron/db/ipam_backend_mixin.py", line 694, in _ipam_get_subnets\n subnet.lock_register(\n', ' File "/opt/stack/neutron/neutron/db/models_v2.py", line 55, in 
lock_register\n    raise exception\n', 'neutron_lib.exceptions.SubnetInUse: Unable to complete operation on subnet 681cb5db-8776-4fe8-93e1-b7861598a00f: This subnet is being modified by another concurrent operation.\n']. Jan 14 14:31:22.930490 ubuntu-focal-rax-ord-0028007551 neutron-dhcp-agent[71829]: ERROR neutron.agent.dhcp.agent

And that error triggered a full sync of the networks, and the network used in that test was resynced there. That, in turn, made the DHCP agent report that port provisioning was complete. Looking at the code, it seems we can do 2 things to avoid such issues (both can be applied independently):

1. Do not send PROVISIONING_COMPLETE notifications if there wasn't a provisioning block set by the entity; the relevant code is in https://github.com/openstack/neutron/blob/0a89986932bef0c7200fb731ab54832608926fbb/neutron/db/provisioning_blocks.py#L137 - that way we wouldn't trigger the switch of the port from DOWN to ACTIVE by an entity which didn't even hold a provisioning block for that port.

2. In the DHCP agent, call the driver's "enable()" method only if there are subnets with DHCP enabled in the network: https://github.com/openstack/neutron/blob/0a89986932bef0c7200fb731ab54832608926fbb/neutron/agent/dhcp/agent.py#L403 - that way we wouldn't send any notification about that specific port to the neutron-server at all.
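Both proposed fixes can be sketched with simplified stand-ins. The structures and function names below are illustrative only (an in-memory set instead of the real DB-backed provisioning-block records, plain dataclasses instead of neutron models), but they show the intended checks:

```python
# Illustrative sketch of the two proposed checks -- not the actual
# neutron implementation. `blocks` is a set of (port_id, entity)
# pairs standing in for the DB-backed provisioning block records.
from dataclasses import dataclass


@dataclass
class Subnet:
    enable_dhcp: bool


@dataclass
class Network:
    subnets: list


def provisioning_complete(blocks, port_id, entity, notify):
    """Fix 1: publish PROVISIONING_COMPLETE (and flip the port to
    ACTIVE) only if this entity actually held a provisioning block."""
    if (port_id, entity) not in blocks:
        return False  # entity never set a block -> ignore its report
    blocks.discard((port_id, entity))
    if any(pid == port_id for pid, _ in blocks):
        return False  # other entities still block this port
    notify(port_id)   # last block removed -> provisioning complete
    return True


def should_enable_dhcp(network):
    """Fix 2: the DHCP agent should call the driver's enable() only
    when at least one subnet in the network has DHCP enabled."""
    return any(subnet.enable_dhcp for subnet in network.subnets)
```

With the first check, the DHCP agent's spurious report for a port it never blocked would be dropped; with the second, the agent would not touch the network (and hence would not report at all) when every subnet has DHCP disabled.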