I just spent some time looking at this in a recent nova-ceph-multistore job failure and I think:
1. Sean appears to be correct that this API shouldn't be called in the Ceph case: Ceph disks have source type "network", and the libvirt driver intentionally does not consider them when looking for a disk to snapshot.
2. AFAICT the only Cinder driver that sends 'type' in the delete info is the remotefs driver [1]; none of the others send it, so omitting 'type' appears to be valid (a quick sketch of the failure and a more tolerant check follows below).
[1] https://github.com/openstack/cinder/blob/1ecfffafa6019cf6230a5af675e40c2a985dd6eb/cinder/volume/drivers/remotefs.py#L1833-L1849
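To make point 2 concrete, here is a minimal standalone sketch (assumed payloads and deliberately simplified logic, not the actual Nova code) of why the delete path blows up when 'type' is absent, and what a tolerant check could look like:

# Cinder's assisted-snapshot delete for this volume only sent
# {'volume_id': ...}, so indexing delete_info['type'] raises KeyError.
delete_info = {'volume_id': 'd4a23ed5-c8c0-4fe2-acfa-b2b69db86f21'}

# Behaviour seen in the traceback further down: KeyError: 'type'
try:
    if delete_info['type'] != 'qcow2':
        raise ValueError('Unknown delete_info type %s' % delete_info['type'])
except KeyError as exc:
    print('KeyError: %s' % exc)

# Tolerant variant (an assumption on my part, not an agreed fix): treat a
# missing 'type' like any other unsupported type and fail with a clear message.
if delete_info.get('type') != 'qcow2':
    print('Rejecting delete_info without qcow2 type: %s' % delete_info)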
-----------------------------
Excerpts from the failed job:
Nova receives a request to attach volume d4a23ed5-c8c0-4fe2-acfa-b2b69db86f21:
May 31 14:19:30.665033 np0037636881 nova-compute[102717]: INFO nova.compute.manager [None req-9aadc198-7fa1-4e7e-812f-242d121238dd tempest-VolumesAssistedSnapshotsTest-1161291479 tempest-VolumesAssistedSnapshotsTest-1161291479-project-admin] [instance: 56d9dfed-ad43-4426-b888-de2e4244e25d] Attaching volume d4a23ed5-c8c0-4fe2-acfa-b2b69db86f21 to /dev/vdb
May 31 14:19:30.784610 np0037636881 nova-compute[102717]: DEBUG os_brick.utils [None req-9aadc198-7fa1-4e7e-812f-242d121238dd tempest-VolumesAssistedSnapshotsTest-1161291479 tempest-VolumesAssistedSnapshotsTest-1161291479-project-admin] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '10.210.192.62', 'multipath': False, 'enforce_multipath': True, 'host': 'np0037636881', 'execute': None}" {{(pid=102717) trace_logging_wrapper /opt/stack/data/venv/lib/python3.10/site-packages/os_brick/utils.py:176}}
May 31 14:19:32.490131 np0037636881 nova-compute[102717]: DEBUG os_brick.utils [None req-9aadc198-7fa1-4e7e-812f-242d121238dd tempest-VolumesAssistedSnapshotsTest-1161291479 tempest-VolumesAssistedSnapshotsTest-1161291479-project-admin] <== get_connector_properties: return (1705ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '10.210.192.62', 'host': 'np0037636881', 'multipath': False, 'initiator': 'iqn.2016-04.com.open-iscsi:7ee3f6178548', 'do_local_attach': False, 'uuid': '3fe1b450-6dc2-4e92-9133-7bf0af20715d', 'system uuid': '2afe8293-e7a7-25a4-a5ca-eae3fc88b212', 'nvme_native_multipath': False} {{(pid=102717) trace_logging_wrapper /opt/stack/data/venv/lib/python3.10/site-packages/os_brick/utils.py:203}}
May 31 14:19:32.490566 np0037636881 nova-compute[102717]: DEBUG nova.virt.block_device [None req-9aadc198-7fa1-4e7e-812f-242d121238dd tempest-VolumesAssistedSnapshotsTest-1161291479 tempest-VolumesAssistedSnapshotsTest-1161291479-project-admin] [instance: 56d9dfed-ad43-4426-b888-de2e4244e25d] Updating existing volume attachment record: a603f0d2-3c77-4b62-8a37-1a07beb93e91 {{(pid=102717) _volume_attach /opt/stack/nova/nova/virt/block_device.py:665}}
May 31 14:19:34.185042 np0037636881 nova-compute[102717]: DEBUG nova.virt.libvirt.driver [None req-9aadc198-7fa1-4e7e-812f-242d121238dd tempest-VolumesAssistedSnapshotsTest-1161291479 tempest-VolumesAssistedSnapshotsTest-1161291479-project-admin] [instance: 56d9dfed-ad43-4426-b888-de2e4244e25d] Attempting to attach volume d4a23ed5-c8c0-4fe2-acfa-b2b69db86f21 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. {{(pid=102717) _check_discard_for_attach_volume /opt/stack/nova/nova/virt/libvirt/driver.py:2266}}
May 31 14:19:34.189114 np0037636881 nova-compute[102717]: DEBUG nova.virt.libvirt.guest [None req-9aadc198-7fa1-4e7e-812f-242d121238dd tempest-VolumesAssistedSnapshotsTest-1161291479 tempest-VolumesAssistedSnapshotsTest-1161291479-project-admin] attach device xml:
[The attach device XML body is stripped in this rendering; of its content only the serial element, d4a23ed5-c8c0-4fe2-acfa-b2b69db86f21, survives.]
May 31 14:19:34.189114 np0037636881 nova-compute[102717]: {{(pid=102717) attach_device /opt/stack/nova/nova/virt/libvirt/guest.py:338}}
Nova completes attaching volume d4a23ed5-c8c0-4fe2-acfa-b2b69db86f21.
May 31 14:19:37.166833 np0037636881 nova-compute[102717]: DEBUG oslo_concurrency.lockutils [None req-9aadc198-7fa1-4e7e-812f-242d121238dd tempest-VolumesAssistedSnapshotsTest-1161291479 tempest-VolumesAssistedSnapshotsTest-1161291479-project-admin] Lock "56d9dfed-ad43-4426-b888-de2e4244e25d" "released" by "nova.compute.manager.ComputeManager.attach_volume..do_attach_volume" :: held 6.502s {{(pid=102717) inner /opt/stack/data/venv/lib/python3.10/site-packages/oslo_concurrency/lockutils.py:421}}
Nova receives a request to assisted volume snapshot volume d4a23ed5-c8c0-4fe2-acfa-b2b69db86f21:
May 31 14:19:38.030511 np0037636881 [93038]: DEBUG nova.api.openstack.wsgi [None req-286b54c3-6bb1-418a-a5eb-171c2a57081a tempest-VolumesAssistedSnapshotsTest-897520790 tempest-VolumesAssistedSnapshotsTest-897520790-project] Action: 'create', calling method: <...>, body: {"snapshot": {"volume_id": "d4a23ed5-c8c0-4fe2-acfa-b2b69db86f21", "create_info": {"snapshot_id": "73a4945c-9a11-433f-ba52-04b818420d38", "type": "qcow2", "new_file": "new_file"}}} {{(pid=93038) _process_stack /opt/stack/nova/nova/api/openstack/wsgi.py:518}}
Nova (correctly) fails to create the snapshot, because disks of source type "network" are intentionally not considered valid snapshot targets, and sends an error status to Cinder:
May 31 14:19:38.155679 np0037636881 nova-compute[102717]: DEBUG nova.virt.libvirt.driver [None req-286b54c3-6bb1-418a-a5eb-171c2a57081a tempest-VolumesAssistedSnapshotsTest-897520790 tempest-VolumesAssistedSnapshotsTest-897520790-project] [instance: 56d9dfed-ad43-4426-b888-de2e4244e25d] volume_snapshot_create: create_info: {'snapshot_id': '73a4945c-9a11-433f-ba52-04b818420d38', 'type': 'qcow2', 'new_file': 'new_file'} {{(pid=102717) volume_snapshot_create /opt/stack/nova/nova/virt/libvirt/driver.py:3706}}
May 31 14:19:38.173312 np0037636881 nova-compute[102717]: ERROR nova.virt.libvirt.driver [None req-286b54c3-6bb1-418a-a5eb-171c2a57081a tempest-VolumesAssistedSnapshotsTest-897520790 tempest-VolumesAssistedSnapshotsTest-897520790-project] [instance: 56d9dfed-ad43-4426-b888-de2e4244e25d] Error occurred during volume_snapshot_create, sending error status to Cinder.: nova.exception.InternalError: Found no disk to snapshot.
May 31 14:19:38.173312 np0037636881 nova-compute[102717]: ERROR nova.virt.libvirt.driver [instance: 56d9dfed-ad43-4426-b888-de2e4244e25d] Traceback (most recent call last):
May 31 14:19:38.173312 np0037636881 nova-compute[102717]: ERROR nova.virt.libvirt.driver [instance: 56d9dfed-ad43-4426-b888-de2e4244e25d] File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 3724, in volume_snapshot_create
May 31 14:19:38.173312 np0037636881 nova-compute[102717]: ERROR nova.virt.libvirt.driver [instance: 56d9dfed-ad43-4426-b888-de2e4244e25d] self._volume_snapshot_create(context, instance, guest,
May 31 14:19:38.173312 np0037636881 nova-compute[102717]: ERROR nova.virt.libvirt.driver [instance: 56d9dfed-ad43-4426-b888-de2e4244e25d] File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 3611, in _volume_snapshot_create
May 31 14:19:38.173312 np0037636881 nova-compute[102717]: ERROR nova.virt.libvirt.driver [instance: 56d9dfed-ad43-4426-b888-de2e4244e25d] raise exception.InternalError(msg)
May 31 14:19:38.173312 np0037636881 nova-compute[102717]: ERROR nova.virt.libvirt.driver [instance: 56d9dfed-ad43-4426-b888-de2e4244e25d] nova.exception.InternalError: Found no disk to snapshot.
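To illustrate the "no disk to snapshot" outcome, here is a simplified, hypothetical sketch of the disk filtering (made-up dictionaries standing in for the libvirt guest config objects the real _volume_snapshot_create walks; the exclusion of 'network' source disks is the behaviour described in point 1, not a quotation of the driver code):

# The guest has a root disk plus the Ceph volume attached earlier; the Ceph
# disk appears with source type 'network' (rbd), so it is not a candidate.
guest_disks = [
    {'target_dev': 'vda', 'source_type': 'file', 'serial': None},
    {'target_dev': 'vdb', 'source_type': 'network',
     'serial': 'd4a23ed5-c8c0-4fe2-acfa-b2b69db86f21'},
]

volume_id = 'd4a23ed5-c8c0-4fe2-acfa-b2b69db86f21'
candidates = [d for d in guest_disks
              if d['serial'] == volume_id and d['source_type'] != 'network']
if not candidates:
    # Mirrors the InternalError raised in the log excerpt above.
    raise RuntimeError('Found no disk to snapshot.')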
Nova receives a request to delete the (nonexistent) snapshot:
May 31 14:19:38.133729 np0037636881 [93038]: DEBUG nova.api.openstack.wsgi [None req-8a71d587-20bd-4eb5-b43e-312353ccd434 tempest-VolumesAssistedSnapshotsTest-897520790 tempest-VolumesAssistedSnapshotsTest-897520790-project] Calling method '<...>' {{(pid=93038) _process_stack /opt/stack/nova/nova/api/openstack/wsgi.py:520}}
May 31 14:19:38.273055 np0037636881 nova-compute[102717]: DEBUG nova.virt.libvirt.driver [None req-8a71d587-20bd-4eb5-b43e-312353ccd434 tempest-VolumesAssistedSnapshotsTest-897520790 tempest-VolumesAssistedSnapshotsTest-897520790-project] [instance: 56d9dfed-ad43-4426-b888-de2e4244e25d] volume_snapshot_delete: delete_info: {'volume_id': 'd4a23ed5-c8c0-4fe2-acfa-b2b69db86f21'} {{(pid=102717) _volume_snapshot_delete /opt/stack/nova/nova/virt/libvirt/driver.py:3807}}
And fails with the KeyError, though it would have failed or no-opped anyway because a snapshot was never created:
May 31 14:19:38.273966 np0037636881 nova-compute[102717]: ERROR nova.virt.libvirt.driver [None req-8a71d587-20bd-4eb5-b43e-312353ccd434 tempest-VolumesAssistedSnapshotsTest-897520790 tempest-VolumesAssistedSnapshotsTest-897520790-project] [instance: 56d9dfed-ad43-4426-b888-de2e4244e25d] Error occurred during volume_snapshot_delete, sending error status to Cinder.: KeyError: 'type'
May 31 14:19:38.273966 np0037636881 nova-compute[102717]: ERROR nova.virt.libvirt.driver [instance: 56d9dfed-ad43-4426-b888-de2e4244e25d] Traceback (most recent call last):
May 31 14:19:38.273966 np0037636881 nova-compute[102717]: ERROR nova.virt.libvirt.driver [instance: 56d9dfed-ad43-4426-b888-de2e4244e25d] File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 3980, in volume_snapshot_delete
May 31 14:19:38.273966 np0037636881 nova-compute[102717]: ERROR nova.virt.libvirt.driver [instance: 56d9dfed-ad43-4426-b888-de2e4244e25d] self._volume_snapshot_delete(context, instance, volume_id,
May 31 14:19:38.273966 np0037636881 nova-compute[102717]: ERROR nova.virt.libvirt.driver [instance: 56d9dfed-ad43-4426-b888-de2e4244e25d] File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 3810, in _volume_snapshot_delete
May 31 14:19:38.273966 np0037636881 nova-compute[102717]: ERROR nova.virt.libvirt.driver [instance: 56d9dfed-ad43-4426-b888-de2e4244e25d] if delete_info['type'] != 'qcow2':
May 31 14:19:38.273966 np0037636881 nova-compute[102717]: ERROR nova.virt.libvirt.driver [instance: 56d9dfed-ad43-4426-b888-de2e4244e25d] KeyError: 'type'
May 31 14:19:38.273966 np0037636881 nova-compute[102717]: ERROR nova.virt.libvirt.driver [instance: 56d9dfed-ad43-4426-b888-de2e4244e25d]
May 31 14:19:38.291486 np0037636881 nova-compute[102717]: DEBUG nova.network.neutron [req-03359e28-998b-4504-86cb-e9e62f8762ff req-cad50a6c-bcea-4ad1-b383-77d5809776c8 service nova] [instance: 1a3cb257-0ef1-459c-a69c-897d716662c7] Instance cache missing network info. {{(pid=102717) _get_preexisting_port_ids /opt/stack/nova/nova/network/neutron.py:3323}}
Sending the error status to Cinder (for the create) fails because the snapshot_id is not found, which is expected given that the snapshot create had been rejected and nothing was ever created. Note that this arrives AFTER the delete request:
May 31 14:19:38.488406 np0037636881 nova-compute[102717]: ERROR nova.virt.libvirt.driver [None req-286b54c3-6bb1-418a-a5eb-171c2a57081a tempest-VolumesAssistedSnapshotsTest-897520790 tempest-VolumesAssistedSnapshotsTest-897520790-project] Failed to send updated snapshot status to volume service.: nova.exception.SnapshotNotFound: Snapshot 73a4945c-9a11-433f-ba52-04b818420d38 could not be found.