DiskNotFound exception raised when updating available resources

Bug #1770375 reported by Raoul Hidalgo Charman
Affects                   Status   Importance  Assigned to  Milestone
OpenStack Compute (nova)  Invalid  Undecided   Unassigned
devstack                  Expired  Undecided   Unassigned

Bug Description

With a new devstack installation, when nova initially tries to update the available resources of the compute nodes, it fails, leaving no hypervisors available.

The following traceback is given in the nova compute logs:

 DEBUG oslo_service.periodic_task [None req-af656602-6b0a-4820-a5a0-f0a177286ed8 None None] Running periodic task ComputeManager.update_available_resource {{(pid=10526) run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215}}
ERROR nova.compute.manager [None req-af656602-6b0a-4820-a5a0-f0a177286ed8 None None] No compute node record for host wn1209301.in.tier2.hep.manchester.ac.uk: ComputeHostNotFound_Remote: Compute host wn1209301.in.tier2.hep.manchester.ac.uk could not be found.
DEBUG nova.compute.resource_tracker [None req-af656602-6b0a-4820-a5a0-f0a177286ed8 None None] Auditing locally available compute resources for wn1209301.in.tier2.hep.manchester.ac.uk (node: wn1209301.in.tier2.hep.manchester.ac.uk) {{(pid=10526) update_available_resource /opt/stack/nova/nova/compute/resource_tracker.py:663}}
ERROR nova.compute.manager [None req-af656602-6b0a-4820-a5a0-f0a177286ed8 None None] Error updating resources for node wn1209301.in.tier2.hep.manchester.ac.uk.: DiskNotFound: No disk at /opt/stack/data/nova/instances/d974d379-a237-4848-be27-d284a2b53696/disk
ERROR nova.compute.manager Traceback (most recent call last):
ERROR nova.compute.manager File "/opt/stack/nova/nova/compute/manager.py", line 7334, in update_available_resource_for_node
ERROR nova.compute.manager rt.update_available_resource(context, nodename)
ERROR nova.compute.manager File "/opt/stack/nova/nova/compute/resource_tracker.py", line 664, in update_available_resource
ERROR nova.compute.manager resources = self.driver.get_available_resource(nodename)
ERROR nova.compute.manager File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 6395, in get_available_resource
ERROR nova.compute.manager disk_over_committed = self._get_disk_over_committed_size_total()
ERROR nova.compute.manager File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 7992, in _get_disk_over_committed_size_total
ERROR nova.compute.manager err_ctxt.reraise = False
ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
ERROR nova.compute.manager self.force_reraise()
ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in force_reraise
ERROR nova.compute.manager six.reraise(self.type_, self.value, self.tb)
ERROR nova.compute.manager File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 7938, in _get_disk_over_committed_size_total
ERROR nova.compute.manager config, block_device_info)
ERROR nova.compute.manager File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 7846, in _get_instance_disk_info_from_config
ERROR nova.compute.manager dk_size = disk_api.get_allocated_disk_size(path)
ERROR nova.compute.manager File "/opt/stack/nova/nova/virt/disk/api.py", line 109, in get_allocated_disk_size
ERROR nova.compute.manager return images.qemu_img_info(path).disk_size
ERROR nova.compute.manager File "/opt/stack/nova/nova/virt/images.py", line 57, in qemu_img_info
ERROR nova.compute.manager raise exception.DiskNotFound(location=path)
ERROR nova.compute.manager DiskNotFound: No disk at /opt/stack/data/nova/instances/d974d379-a237-4848-be27-d284a2b53696/disk

This path doesn't exist, as no instances have been created. I thought it might be something left over in the SQL database, but no instances are listed there. The instance ID changes if you reinstall OpenStack by running unstack.sh and then stack.sh again.
This leaves the hypervisor unavailable, so instances can't be created.
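
For context, the audit fails inside nova's qemu-img probe of each defined domain's disk. A minimal sketch of that check, assuming qemu-img is on PATH (a plain RuntimeError stands in for nova's exception.DiskNotFound):

    import os
    import subprocess

    def qemu_img_info(path):
        # Sketch of the check in nova/virt/images.py: the disk file's
        # existence is verified before qemu-img runs, so a domain whose
        # backing disk was deleted fails the resource audit here.
        if not os.path.exists(path):
            raise RuntimeError('No disk at %s' % path)
        return subprocess.check_output(['qemu-img', 'info', path])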

On devstack commit b89bfa21b0e144d8160478b54a45a1087ea3e1df
On nova commit a8d3c61a579648f3ee469f81aee24d57cc8f37cb
Hypervisor: libvirt/kvm
Networking: Neutron w/ ovs

Matt Riedemann (mriedem) wrote:

I'm pretty sure unstack.sh is no longer really supported in devstack.

Changed in nova:
status: New → Invalid

Matt Riedemann (mriedem) wrote:

If this is something you think needs to be handled during unstack.sh, then you need to change it in devstack, not nova.

Raoul Hidalgo Charman (raoulhc) wrote:

Ah, didn't know unstack was unsupported.

Had more of a poke around, and it turned out that some inactive libvirt domains had been left behind, which was causing all of the fuss. So you're right, this is probably something devstack should make sure gets cleared somewhere.
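
For anyone checking for the same leftovers, a minimal sketch using the libvirt Python bindings that lists every defined domain the way virsh list --all would; a stale domain such as d974d379-... above would show up as inactive:

    import libvirt

    # Not part of nova: enumerate all defined domains, including
    # inactive ones left behind by a previous stack.sh run.
    conn = libvirt.open('qemu:///system')
    for dom in conn.listAllDomains():
        state = 'running' if dom.isActive() else 'inactive'
        print('%s %s %s' % (dom.UUIDString(), dom.name(), state))
    conn.close()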

Dr. Jens Harbott (j-harbott) wrote:

Please show the complete steps to reproduce this issue, starting from a fresh instance. What OS and version are you running this on? Also please attach your local.conf file.

Changed in devstack:
status: New → Incomplete

Launchpad Janitor (janitor) wrote:

[Expired for devstack because there has been no activity for 60 days.]

Changed in devstack:
status: Incomplete → Expired

Jason Rich (jrich) wrote:

Old "bug" but I just ran into this with Pike.

Looks like, for me, the cause was an attempt to clean up my nodes and start with a fresh OpenStack cluster. The controller node's databases were wiped out, and the nova directories on the compute hosts were also wiped out. OpenStack services were reinstalled/refreshed "from scratch"; however, the compute nodes still had old instance data in /etc/libvirt/qemu/<instances>.xml.

Once those old files were removed, nova came up normally. Hope that is helpful to someone fighting this type of issue.
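
Along the same lines, a hypothetical cleanup sketch that undefines the stale domains through libvirt instead of deleting the XML files by hand (STALE_NAMES is an assumption; fill it with the inactive domains found above):

    import libvirt

    # Hypothetical helper, equivalent to `virsh undefine <name>` for
    # each leftover domain.
    STALE_NAMES = ['instance-00000001']  # assumed names, not from this bug

    conn = libvirt.open('qemu:///system')
    for name in STALE_NAMES:
        dom = conn.lookupByName(name)
        if not dom.isActive():  # never undefine a running domain
            dom.undefine()
    conn.close()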
