Comment 5 for bug 1714247

Lucian Petrut (petrutlucian94) wrote :

The network info fetched here [1] will be empty because it relies on the info cache [2], which, as far as I can see, won't contain vif details for deleted instances.
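To illustrate the failure mode, here is a minimal sketch (hypothetical names, not actual nova code) of why cleanup driven by the info cache unplugs nothing once the instance has been deleted from the DB:

```python
# Hypothetical sketch (not nova's real classes): VIF cleanup that reads
# network info from the instance's info cache gets an empty list for an
# instance whose DB row (and cache entry) is already gone.

class InstanceInfoCache:
    """Stand-in for nova's info cache; empty once the instance is deleted."""
    def __init__(self, network_info=None):
        self.network_info = network_info or []

def get_network_info(info_cache):
    # Mirrors reading vif details from the cache: a deleted instance
    # yields no vifs, so the unplug step below has nothing to do.
    return list(info_cache.network_info)

def unplug_vifs(network_info):
    # Returns the vif ids it would unplug; an empty input means the
    # ports stay plugged (leaked).
    return [vif["id"] for vif in network_info]

live = InstanceInfoCache([{"id": "vif-1"}, {"id": "vif-2"}])
deleted = InstanceInfoCache()  # cache lost after the DB deletion

print(unplug_vifs(get_network_info(live)))     # both vifs unplugged
print(unplug_vifs(get_network_info(deleted)))  # nothing unplugged
```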

There won't be any trace in the logs; those vifs simply will not be unplugged. It's really easy to reproduce, all you have to do is:
1. boot an instance, while using ovs ports
2. kill the nova compute service and wait for it to be reported as 'down'.
3. destroy the instance
4. bring the nova compute service back up. It will destroy the instance, but the ports will not be unplugged. If iSCSI volumes were attached to that instance, the corresponding iSCSI sessions will be leaked as well.

As for the volume connections, the BDMs are properly fetched. But if the nova compute service comes back up after the instances have been deleted from the DB and the volumes disconnected on the Cinder side, we end up with stale iSCSI sessions (when using an iSCSI backend, of course). The issue is that os-brick attempts to flush an inaccessible device, which fails. Until recently it did not error out and moved on to remove the iSCSI session, but as of a recent change [3] it requires a 'force' flag in order to ignore flush exceptions.
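The behavior change can be sketched roughly like this (hypothetical function names, not os-brick's actual API surface): flushing an inaccessible device raises, and only a 'force' flag lets the teardown continue far enough to remove the iSCSI session.

```python
# Hypothetical sketch of the behavior change described above
# (placeholder names, not the real os-brick code paths).

class FlushError(Exception):
    pass

def flush_device(accessible):
    # Flushing a device whose backing volume was already disconnected
    # on the Cinder side fails.
    if not accessible:
        raise FlushError("cannot flush inaccessible device")

def disconnect_volume(device_accessible, force=False):
    """Returns True if the iSCSI session teardown completed."""
    try:
        flush_device(device_accessible)
    except FlushError:
        if not force:
            # New behavior: the flush error propagates, so the
            # session removal never runs and the session is leaked.
            raise
        # force=True: ignore the flush failure and keep tearing down,
        # which matches the old (pre-[3]) behavior.
    return True

# Device already gone on the Cinder side:
try:
    disconnect_volume(device_accessible=False)
except FlushError:
    print("flush failed, session leaked")

print(disconnect_volume(device_accessible=False, force=True))  # True
```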

[1] https://github.com/openstack/nova/blob/16.0.0.0rc2/nova/compute/manager.py#L2264
[2] https://github.com/openstack/nova/blob/16.0.0.0rc2/nova/objects/instance.py#L1169-L1172
[3] https://github.com/openstack/os-brick/commit/400ca5d6db818b966065001571e59198c6966e2f