Comment 0 for bug 1465416

Revision history for this message
Dmitry Guryanov (dguryanov) wrote :

If the instance is in the SHUTOFF state, the volume state is 'in-use', so a volume driver for NAS storage decides to call os-assisted-volume-snapshots:delete.

The only driver that supports this API is libvirt, so we end up in LibvirtDriver.volume_snapshot_delete, which in turn calls

            result = virt_dom.blockRebase(rebase_disk, rebase_base,
                                          rebase_bw, rebase_flags)

This raises an exception if the domain is not running:

2015-06-16 00:58:48.155 DEBUG nova.virt.libvirt.driver [req-8cee70dd-2808-4fa6-88da-7f1bb9e0e370 nova service] volume_snapshot_delete: delete_info: {u'type': u'qcow2', u'merge_target_file': None, u'file_to_merge': None, u'volume_id': u'e650a0cb-abbf-4bb3-843e-9fb762953c7e'} from (pid=20313) _volume_snapshot_delete /opt/stack/nova/nova/virt/libvirt/driver.py:1826
2015-06-16 00:58:48.156 DEBUG nova.virt.libvirt.driver [req-8cee70dd-2808-4fa6-88da-7f1bb9e0e370 nova service] found device at vda from (pid=20313) _volume_snapshot_delete /opt/stack/nova/nova/virt/libvirt/driver.py:1875
2015-06-16 00:58:48.156 DEBUG nova.virt.libvirt.driver [req-8cee70dd-2808-4fa6-88da-7f1bb9e0e370 nova service] disk: vda, base: None, bw: 0, flags: 0 from (pid=20313) _volume_snapshot_delete /opt/stack/nova/nova/virt/libvirt/driver.py:1947
2015-06-16 00:58:48.157 ERROR nova.virt.libvirt.driver [req-8cee70dd-2808-4fa6-88da-7f1bb9e0e370 nova service] Error occurred during volume_snapshot_delete, sending error status to Cinder.
2015-06-16 00:58:48.157 TRACE nova.virt.libvirt.driver Traceback (most recent call last):
2015-06-16 00:58:48.157 TRACE nova.virt.libvirt.driver File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 2020, in volume_snapshot_delete
2015-06-16 00:58:48.157 TRACE nova.virt.libvirt.driver snapshot_id, delete_info=delete_info)
2015-06-16 00:58:48.157 TRACE nova.virt.libvirt.driver File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 1950, in _volume_snapshot_delete
2015-06-16 00:58:48.157 TRACE nova.virt.libvirt.driver rebase_bw, rebase_flags)
2015-06-16 00:58:48.157 TRACE nova.virt.libvirt.driver File "/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 183, in doit
2015-06-16 00:58:48.157 TRACE nova.virt.libvirt.driver result = proxy_call(self._autowrap, f, *args, **kwargs)
2015-06-16 00:58:48.157 TRACE nova.virt.libvirt.driver File "/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 141, in proxy_call
2015-06-16 00:58:48.157 TRACE nova.virt.libvirt.driver rv = execute(f, *args, **kwargs)
2015-06-16 00:58:48.157 TRACE nova.virt.libvirt.driver File "/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 122, in execute
2015-06-16 00:58:48.157 TRACE nova.virt.libvirt.driver six.reraise(c, e, tb)
2015-06-16 00:58:48.157 TRACE nova.virt.libvirt.driver File "/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 80, in tworker
2015-06-16 00:58:48.157 TRACE nova.virt.libvirt.driver rv = meth(*args, **kwargs)
2015-06-16 00:58:48.157 TRACE nova.virt.libvirt.driver File "/usr/lib/python2.7/site-packages/libvirt.py", line 865, in blockRebase
2015-06-16 00:58:48.157 TRACE nova.virt.libvirt.driver if ret == -1: raise libvirtError ('virDomainBlockRebase() failed', dom=self)
2015-06-16 00:58:48.157 TRACE nova.virt.libvirt.driver libvirtError: Requested operation is not valid: domain is not running
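The core issue is that virDomainBlockRebase() only works on a running domain, and the driver calls it unconditionally. A minimal sketch of the kind of guard that would fail early with a clear error (safe_block_rebase and DomainNotRunning are hypothetical names; FakeDomain is a stand-in for a libvirt virDomain, which exposes isActive() and blockRebase()):

```python
class DomainNotRunning(Exception):
    """Raised when an online-only block operation is requested on a stopped domain."""


def safe_block_rebase(dom, disk, base=None, bandwidth=0, flags=0):
    # libvirt's blockRebase() is only valid for running domains; check the
    # state first instead of letting it fail with an opaque libvirtError.
    if not dom.isActive():
        raise DomainNotRunning(
            "cannot rebase %s: domain is not running" % disk)
    return dom.blockRebase(disk, base, bandwidth, flags)


# Minimal stand-in for a libvirt domain object, for illustration only.
class FakeDomain(object):
    def __init__(self, active):
        self._active = active

    def isActive(self):
        return self._active

    def blockRebase(self, disk, base, bandwidth, flags):
        return 0  # libvirt returns 0 on success


print(safe_block_rebase(FakeDomain(active=True), "vda"))   # 0
try:
    safe_block_rebase(FakeDomain(active=False), "vda")
except DomainNotRunning as e:
    print("refused: %s" % e)
```

For a SHUTOFF instance the real fix would need an offline code path (e.g. rebasing the image file directly rather than through the domain), but the guard above at least turns the libvirtError into an actionable failure.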

I'm using devstack, with OpenStack's repos checked out on 2015-06-15.
I'm experiencing the problem with my new volume driver https://review.openstack.org/#/c/188869/8 , but the glusterfs and quobyte volume drivers surely have the same bug.