I spent some time adding LOG.debug statements all over the place while trying to reproduce this bug, and then read bug 1535918. They're the same bug. To summarise:

a. When spawning a VM, the libvirt driver waits for events from Neutron [1].
b. Which events it waits for is decided by _get_neutron_events [2].
c. _get_neutron_events will only return a vif-plugged event for a vif that isn't already active [3] (see the sketch after the references).
d. The network_info that _get_neutron_events gets passed in [2] ultimately comes from [4].
e. If the instance's info cache hasn't been refreshed recently when step d runs, the vif will still be marked active, and _get_neutron_events will return an empty list in step c.
f. When it comes time to actually wait for events in [5], no waiting is done because the events list is empty (see the second sketch at the end of this comment).
g. The instance boots up just fine even if the vif-plugged event never makes its way to the destination compute. Instead, it's received by the source compute service whenever it comes back online.

So really, evacuation was always broken, but we got away with it just enough times for nobody to notice, because we sometimes got "lucky" with a stale cache (which is itself a bug).

[1] https://github.com/openstack/nova/blob/master/nova/virt/libvirt/driver.py#L4824
[2] https://github.com/openstack/nova/blob/master/nova/virt/libvirt/driver.py#L4817
[3] https://github.com/openstack/nova/blob/master/nova/virt/libvirt/driver.py#L4787
[4] https://github.com/openstack/nova/blob/master/nova/compute/manager.py#L2810
[5] https://github.com/openstack/nova/blob/master/nova/compute/manager.py#L475
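For reference, the check in step c boils down to something like the sketch below. This is a simplified paraphrase, not a verbatim copy of driver.py, so names and details may differ slightly depending on which branch you're looking at:

    def _get_neutron_events(network_info):
        # Only expect a network-vif-plugged event for VIFs that the
        # (possibly stale) info cache says are not yet active. A VIF
        # cached with "active": true is assumed to already be plugged,
        # so no event is registered for it.
        return [('network-vif-plugged', vif['id'])
                for vif in network_info
                if vif.get('active', True) is False]

Run that against the network_info in the logs below (note the "active": true on the vif) and you get an empty list, which is exactly the "Returning events=[]" line.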
Snippets of logs follow, but with some LOG.debug statements that I put in, so not everything will appear the same in a vanilla devstack:

_get_neutron_events:

2016-10-08 11:31:01.824 DEBUG nova.virt.libvirt.driver [req-6a93955f-cc1e-4918-af95-53a7fa1d81fe admin admin] [{"profile": {}, "ovs_interfaceid": "aaac55e8-92c8-4280-a871-fab899661a30", "preserve_on_delete": false, "network": {"bridge": "br-int", "subnets": [{"ips": [{"meta": {}, "version": 4, "type": "fixed", "floating_ips": [], "address": "10.0.0.10"}], "version": 4, "meta": {"dhcp_server": "10.0.0.2"}, "dns": [], "routes": [], "cidr": "10.0.0.0/24", "gateway": {"meta": {}, "version": 4, "type": "gateway", "address": "10.0.0.1"}}, {"ips": [{"meta": {}, "version": 6, "type": "fixed", "floating_ips": [], "address": "2001:db8:8000:0:f816:3eff:fe0b:e6c2"}], "version": 6, "meta": {"dhcp_server": "2001:db8:8000:0:f816:3eff:fec1:b2e"}, "dns": [], "routes": [], "cidr": "2001:db8:8000::/64", "gateway": {"meta": {}, "version": 6, "type": "gateway", "address": "2001:db8:8000::1"}}], "meta": {"injected": false, "tenant_id": "c145e829d46745bf822fac809223b61e", "mtu": 1450}, "id": "296e8f10-41f9-4b81-ba40-272aa8603d3b", "label": "private"}, "devname": "tapaaac55e8-92", "vnic_type": "normal", "qbh_params": null, "meta": {}, "details": {"port_filter": true, "ovs_hybrid_plug": true}, "address": "fa:16:3e:0b:e6:c2", "active": true, "type": "ovs", "id": "aaac55e8-92c8-4280-a871-fab899661a30", "qbg_params": null}] from (pid=17819) _get_neutron_events /opt/stack/nova/nova/virt/libvirt/driver.py:4786

2016-10-08 11:31:01.824 DEBUG nova.virt.libvirt.driver [req-6a93955f-cc1e-4918-af95-53a7fa1d81fe admin admin] Returning events=[] from (pid=17819) _get_neutron_events /opt/stack/nova/nova/virt/libvirt/driver.py:4790

wait_for_instance_event:

2016-10-08 11:31:03.342 DEBUG nova.compute.manager [req-6a93955f-cc1e-4918-af95-53a7fa1d81fe admin admin] Iterating over events={} from (pid=17819) wait_for_instance_event /opt/stack/nova/nova/compute/manager.py:475

2016-10-08 11:31:03.343 DEBUG nova.compute.manager [req-6a93955f-cc1e-4918-af95-53a7fa1d81fe admin admin] Done iterating from (pid=17819) wait_for_instance_event /opt/stack/nova/nova/compute/manager.py:483
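As for why "Iterating over events={}" and "Done iterating" show up back to back: with an empty event list, wait_for_instance_event [5] has nothing to prepare and nothing to wait on. Here's a rough, self-contained sketch of that shape; this is illustrative only, not the actual compute manager / ComputeVirtAPI code, and the registry and helper names are made up:

    import contextlib
    import threading

    # Illustrative stand-in for the per-instance event registry; the real
    # code lives in nova.compute.manager and is eventlet-based.
    _registry = {}

    @contextlib.contextmanager
    def wait_for_instance_event(instance_uuid, event_names, deadline=300):
        # Register one waiter per expected event *before* yielding, so an
        # event that arrives while the caller is still plugging VIFs
        # isn't lost.
        prepared = {name: threading.Event() for name in event_names}
        _registry[instance_uuid] = prepared
        yield  # caller plugs the VIFs inside the with-block
        # If event_names was empty (the stale-cache case above), this
        # loop body never runs and the caller continues immediately,
        # hence the two log lines a millisecond apart.
        for name, event in prepared.items():
            if not event.wait(timeout=deadline):
                raise RuntimeError('timed out waiting for %s' % name)

    def pop_instance_event(instance_uuid, event_name):
        # Called when the external (Neutron) notification arrives.
        waiter = _registry.get(instance_uuid, {}).get(event_name)
        if waiter is not None:
            waiter.set()

In the non-stale case the destination compute would sit in that final loop until the deadline, because the network-vif-plugged event goes to the source compute instead, which is why I'm saying evacuation was always broken.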