If the instance is in SHUTOFF state, the volume state is still 'in-use', so a volume driver for NAS storage decides to call os-assisted-volume-snapshots:delete.
The only driver that supports this API is libvirt, so we end up in LibvirtDriver.volume_snapshot_delete, which in turn calls
result = virt_dom.blockRebase(rebase_disk, rebase_base, rebase_bw, rebase_flags)
This raises an exception if the domain is not running:
2015-06-16 00:58:48.155 DEBUG nova.virt.libvirt.driver [req-8cee70dd-2808-4fa6-88da-7f1bb9e0e370 nova service] volume_snapshot_delete: delete_info: {u'type': u'qcow2', u'merge_target_file': None, u'file_to_merge': None, u'volume_id': u'e650a0cb-abbf-4bb3-843e-9fb762953c7e'} from (pid=20313) _volume_snapshot_delete /opt/stack/nova/nova/virt/libvirt/driver.py:1826
2015-06-16 00:58:48.156 DEBUG nova.virt.libvirt.driver [req-8cee70dd-2808-4fa6-88da-7f1bb9e0e370 nova service] found device at vda from (pid=20313) _volume_snapshot_delete /opt/stack/nova/nova/virt/libvirt/driver.py:1875
2015-06-16 00:58:48.156 DEBUG nova.virt.libvirt.driver [req-8cee70dd-2808-4fa6-88da-7f1bb9e0e370 nova service] disk: vda, base: None, bw: 0, flags: 0 from (pid=20313) _volume_snapshot_delete /opt/stack/nova/nova/virt/libvirt/driver.py:1947
2015-06-16 00:58:48.157 ERROR nova.virt.libvirt.driver [req-8cee70dd-2808-4fa6-88da-7f1bb9e0e370 nova service] Error occurred during volume_snapshot_delete, sending error status to Cinder.
2015-06-16 00:58:48.157 TRACE nova.virt.libvirt.driver Traceback (most recent call last):
2015-06-16 00:58:48.157 TRACE nova.virt.libvirt.driver File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 2020, in volume_snapshot_delete
2015-06-16 00:58:48.157 TRACE nova.virt.libvirt.driver snapshot_id, delete_info=delete_info)
2015-06-16 00:58:48.157 TRACE nova.virt.libvirt.driver File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 1950, in _volume_snapshot_delete
2015-06-16 00:58:48.157 TRACE nova.virt.libvirt.driver rebase_bw, rebase_flags)
2015-06-16 00:58:48.157 TRACE nova.virt.libvirt.driver File "/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 183, in doit
2015-06-16 00:58:48.157 TRACE nova.virt.libvirt.driver result = proxy_call(self._autowrap, f, *args, **kwargs)
2015-06-16 00:58:48.157 TRACE nova.virt.libvirt.driver File "/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 141, in proxy_call
2015-06-16 00:58:48.157 TRACE nova.virt.libvirt.driver rv = execute(f, *args, **kwargs)
2015-06-16 00:58:48.157 TRACE nova.virt.libvirt.driver File "/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 122, in execute
2015-06-16 00:58:48.157 TRACE nova.virt.libvirt.driver six.reraise(c, e, tb)
2015-06-16 00:58:48.157 TRACE nova.virt.libvirt.driver File "/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 80, in tworker
2015-06-16 00:58:48.157 TRACE nova.virt.libvirt.driver rv = meth(*args, **kwargs)
2015-06-16 00:58:48.157 TRACE nova.virt.libvirt.driver File "/usr/lib/python2.7/site-packages/libvirt.py", line 865, in blockRebase
2015-06-16 00:58:48.157 TRACE nova.virt.libvirt.driver if ret == -1: raise libvirtError ('virDomainBlockRebase() failed', dom=self)
2015-06-16 00:58:48.157 TRACE nova.virt.libvirt.driver libvirtError: Requested operation is not valid: domain is not running
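For what it's worth, the same failure can be reproduced directly against python-libvirt, outside of Nova. A minimal sketch (the domain name and disk path below are placeholders for an actual SHUTOFF instance and one of its attached qcow2 volumes):

    # Minimal standalone reproduction against python-libvirt, outside Nova.
    # 'instance-00000001' and '/path/to/volume.qcow2' are placeholders for a
    # real shut-off domain and one of its qcow2 disks.
    import libvirt

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByName('instance-00000001')
    print('domain active:', bool(dom.isActive()))  # False for a SHUTOFF instance

    # The same call Nova makes in _volume_snapshot_delete; on a stopped domain
    # libvirt rejects it with "Requested operation is not valid: domain is
    # not running".
    dom.blockRebase('/path/to/volume.qcow2', None, 0, 0)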
I'm using devstack, which checked out OpenStack's repos on 15.06.2015.
I'm experiencing the problem with my new volume driver https://review.openstack.org/#/c/188869/8 , but the GlusterFS and Quobyte volume drivers surely have the same bug.
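Not a proposed patch, but a guard along these lines in _volume_snapshot_delete would at least keep Nova from handing a live block job to a stopped domain. The variable names follow the surrounding driver code; the qemu-img fallback and the use of Nova's utils.execute helper for the offline case are only assumptions about one possible way to handle it:

    # Rough sketch only: check whether the domain is running before asking
    # libvirt for a live block rebase, and fall back to an offline rebase
    # with qemu-img otherwise. The offline branch is an assumption, not the
    # actual Nova fix.
    if virt_dom.isActive():
        result = virt_dom.blockRebase(rebase_disk, rebase_base,
                                      rebase_bw, rebase_flags)
    else:
        # With the guest powered off, the backing chain can be rewritten
        # directly on disk; 'qemu-img rebase' does not need a running QEMU.
        # (A None/empty rebase_base would need extra handling here.)
        utils.execute('qemu-img', 'rebase', '-b', rebase_base, rebase_disk)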