Swap of volumes tempest test with multiattach fails

Bug #1829881 reported by Rajini Karthik
Affects: OpenStack Compute (nova)
Status: Invalid
Importance: Undecided
Assigned to: Unassigned

Bug Description

The volume swap test case with a multiattach volume fails:

tempest.api.compute.admin.test_volume_swap.TestMultiAttachVolumeSwap.test_volume_swap_with_multiattach [337.211730s] ... FAILED

Error log in nova:
May 07 01:56:28.117664 dellscfc nova-compute[1036]: ERROR nova.virt.block_device [None req-858576cb-bdf3-49c1-b0af-34cb538712b0 tempest-AttachVolumeMultiAttachTest-948792761 tempest-AttachVolumeMultiAttachTest-948792761] [instance: 2d08d9a4-6eab-4869-ba48-802c546c7eb7] Failed to detach volume f33ba2e1-b649-478c-af4c-dba062423ad2 from /dev/vdb: oslo_concurrency.processutils.ProcessExecutionError: Unexpected error while running command.
May 07 01:56:28.117664 dellscfc nova-compute[1036]: Command: /lib/udev/scsi_id --page 0x83 --whitelisted /dev/disk/by-path/pci-0000:04:00.0-fc-0x5000d3100101d530-lun-3
May 07 01:56:28.117664 dellscfc nova-compute[1036]: Exit code: 1
May 07 01:56:28.117664 dellscfc nova-compute[1036]: Stdout: ''
May 07 01:56:28.117664 dellscfc nova-compute[1036]: Stderr: ''
May 07 01:56:28.117664 dellscfc nova-compute[1036]: ERROR nova.virt.block_device [instance: 2d08d9a4-6eab-4869-ba48-802c546c7eb7] Traceback (most recent call last):
May 07 01:56:28.117664 dellscfc nova-compute[1036]: ERROR nova.virt.block_device [instance: 2d08d9a4-6eab-4869-ba48-802c546c7eb7] File "/opt/stack/new/nova/nova/virt/block_device.py", line 326, in driver_detach
May 07 01:56:28.117664 dellscfc nova-compute[1036]: ERROR nova.virt.block_device [instance: 2d08d9a4-6eab-4869-ba48-802c546c7eb7] encryption=encryption)
May 07 01:56:28.117664 dellscfc nova-compute[1036]: ERROR nova.virt.block_device [instance: 2d08d9a4-6eab-4869-ba48-802c546c7eb7] File "/opt/stack/new/nova/nova/virt/libvirt/driver.py", line 1717, in detach_volume
May 07 01:56:28.117664 dellscfc nova-compute[1036]: ERROR nova.virt.block_device [instance: 2d08d9a4-6eab-4869-ba48-802c546c7eb7] encryption=encryption)
May 07 01:56:28.117664 dellscfc nova-compute[1036]: ERROR nova.virt.block_device [instance: 2d08d9a4-6eab-4869-ba48-802c546c7eb7] File "/opt/stack/new/nova/nova/virt/libvirt/driver.py", line 1345, in _disconnect_volume
May 07 01:56:28.117664 dellscfc nova-compute[1036]: ERROR nova.virt.block_device [instance: 2d08d9a4-6eab-4869-ba48-802c546c7eb7] vol_driver.disconnect_volume(connection_info, instance)
May 07 01:56:28.117664 dellscfc nova-compute[1036]: ERROR nova.virt.block_device [instance: 2d08d9a4-6eab-4869-ba48-802c546c7eb7] File "/opt/stack/new/nova/nova/virt/libvirt/volume/fibrechannel.py", line 72, in disconnect_volume
May 07 01:56:28.117664 dellscfc nova-compute[1036]: ERROR nova.virt.block_device [instance: 2d08d9a4-6eab-4869-ba48-802c546c7eb7] connection_info['data'])
May 07 01:56:28.117664 dellscfc nova-compute[1036]: ERROR nova.virt.block_device [instance: 2d08d9a4-6eab-4869-ba48-802c546c7eb7] File "/usr/local/lib/python3.6/dist-packages/os_brick/utils.py", line 150, in trace_logging_wrapper
May 07 01:56:28.117664 dellscfc nova-compute[1036]: ERROR nova.virt.block_device [instance: 2d08d9a4-6eab-4869-ba48-802c546c7eb7] result = f(*args, **kwargs)
May 07 01:56:28.117664 dellscfc nova-compute[1036]: ERROR nova.virt.block_device [instance: 2d08d9a4-6eab-4869-ba48-802c546c7eb7] File "/usr/local/lib/python3.6/dist-packages/oslo_concurrency/lockutils.py", line 328, in inner
May 07 01:56:28.117664 dellscfc nova-compute[1036]: ERROR nova.virt.block_device [instance: 2d08d9a4-6eab-4869-ba48-802c546c7eb7] return f(*args, **kwargs)
May 07 01:56:28.117664 dellscfc nova-compute[1036]: ERROR nova.virt.block_device [instance: 2d08d9a4-6eab-4869-ba48-802c546c7eb7] File "/usr/local/lib/python3.6/dist-packages/os_brick/initiator/connectors/fibre_channel.py", line 336, in disconnect_volume
May 07 01:56:28.117664 dellscfc nova-compute[1036]: ERROR nova.virt.block_device [instance: 2d08d9a4-6eab-4869-ba48-802c546c7eb7] wwn = self._linuxscsi.get_scsi_wwn(path)
May 07 01:56:28.117664 dellscfc nova-compute[1036]: ERROR nova.virt.block_device [instance: 2d08d9a4-6eab-4869-ba48-802c546c7eb7] File "/usr/local/lib/python3.6/dist-packages/os_brick/initiator/linuxscsi.py", line 163, in get_scsi_wwn
May 07 01:56:28.117664 dellscfc nova-compute[1036]: ERROR nova.virt.block_device [instance: 2d08d9a4-6eab-4869-ba48-802c546c7eb7] root_helper=self._root_helper)
May 07 01:56:28.117664 dellscfc nova-compute[1036]: ERROR nova.virt.block_device [instance: 2d08d9a4-6eab-4869-ba48-802c546c7eb7] File "/usr/local/lib/python3.6/dist-packages/os_brick/executor.py", line 52, in _execute
May 07 01:56:28.117664 dellscfc nova-compute[1036]: ERROR nova.virt.block_device [instance: 2d08d9a4-6eab-4869-ba48-802c546c7eb7] result = self.__execute(*args, **kwargs)
May 07 01:56:28.117664 dellscfc nova-compute[1036]: ERROR nova.virt.block_device [instance: 2d08d9a4-6eab-4869-ba48-802c546c7eb7] File "/usr/local/lib/python3.6/dist-packages/os_brick/privileged/rootwrap.py", line 169, in execute
May 07 01:56:28.117664 dellscfc nova-compute[1036]: ERROR nova.virt.block_device [instance: 2d08d9a4-6eab-4869-ba48-802c546c7eb7] return execute_root(*cmd, **kwargs)
May 07 01:56:28.117664 dellscfc nova-compute[1036]: ERROR nova.virt.block_device [instance: 2d08d9a4-6eab-4869-ba48-802c546c7eb7] File "/usr/local/lib/python3.6/dist-packages/oslo_privsep/priv_context.py", line 242, in _wrap
May 07 01:56:28.117664 dellscfc nova-compute[1036]: ERROR nova.virt.block_device [instance: 2d08d9a4-6eab-4869-ba48-802c546c7eb7] return self.channel.remote_call(name, args, kwargs)
May 07 01:56:28.117664 dellscfc nova-compute[1036]: ERROR nova.virt.block_device [instance: 2d08d9a4-6eab-4869-ba48-802c546c7eb7] File "/usr/local/lib/python3.6/dist-packages/oslo_privsep/daemon.py", line 204, in remote_call
May 07 01:56:28.117664 dellscfc nova-compute[1036]: ERROR nova.virt.block_device [instance: 2d08d9a4-6eab-4869-ba48-802c546c7eb7] raise exc_type(*result[2])
May 07 01:56:28.117664 dellscfc nova-compute[1036]: ERROR nova.virt.block_device [instance: 2d08d9a4-6eab-4869-ba48-802c546c7eb7] oslo_concurrency.processutils.ProcessExecutionError: Unexpected error while running command.
May 07 01:56:28.117664 dellscfc nova-compute[1036]: ERROR nova.virt.block_device [instance: 2d08d9a4-6eab-4869-ba48-802c546c7eb7] Command: /lib/udev/scsi_id --page 0x83 --whitelisted /dev/disk/by-path/pci-0000:04:00.0-fc-0x5000d3100101d530-lun-3
May 07 01:56:28.117664 dellscfc nova-compute[1036]: ERROR nova.virt.block_device [instance: 2d08d9a4-6eab-4869-ba48-802c546c7eb7] Exit code: 1
May 07 01:56:28.117664 dellscfc nova-compute[1036]: ERROR nova.virt.block_device [instance: 2d08d9a4-6eab-4869-ba48-802c546c7eb7] Stdout: ''
May 07 01:56:28.117664 dellscfc nova-compute[1036]: ERROR nova.virt.block_device [instance: 2d08d9a4-6eab-4869-ba48-802c546c7eb7] Stderr: ''
May 07 01:56:28.117664 dellscfc nova-compute[1036]: ERROR nova.virt.block_device [instance: 2d08d9a4-6eab-4869-ba48-802c546c7eb7]
May 07 01:56:28.170915 dellscfc nova-compute[1036]: DEBUG oslo_concurrency.lockutils [None req-858576cb-bdf3-49c1-b0af-34cb538712b0 tempest-AttachVolumeMultiAttachTest-948792761 tempest-AttachVolumeMultiAttachTest-948792761] Lock "2d08d9a4-6eab-4869-ba48-802c546c7eb7" released by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 1.992s {{(pid=1036) inner /usr/local/lib/python3.6/dist-packages/oslo_concurrency/lockutils.py:339}}
May 07 01:56:28.227431 dellscfc nova-compute[1036]: ERROR oslo_messaging.rpc.server [None req-858576cb-bdf3-49c1-b0af-34cb538712b0 tempest-AttachVolumeMultiAttachTest-948792761 tempest-AttachVolumeMultiAttachTest-948792761] Exception during message handling: oslo_concurrency.processutils.ProcessExecutionError: Unexpected error while running command.
May 07 01:56:28.227431 dellscfc nova-compute[1036]: Command: /lib/udev/scsi_id --page 0x83 --whitelisted /dev/disk/by-path/pci-0000:04:00.0-fc-0x5000d3100101d530-lun-3
May 07 01:56:28.227431 dellscfc nova-compute[1036]: Exit code: 1
May 07 01:56:28.227431 dellscfc nova-compute[1036]: Stdout: ''
May 07 01:56:28.227431 dellscfc nova-compute[1036]: Stderr: ''

It seems that by the time ‘scsi_id’ runs, the disk is already gone. There are no errors on the cinder side.

Driver: Dell EMC SC - FC Driver
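The failure mode above is a race: the device node under /dev/disk/by-path is removed before scsi_id is invoked against it, so the command exits 1 with empty output. A minimal Python sketch of one possible mitigation, a retry guard around the WWN lookup, is shown below; `DeviceLookupError` and `get_wwn_with_retry` are hypothetical names standing in for os-brick's ProcessExecutionError handling and `get_scsi_wwn`, not actual os-brick API:

```python
import time


class DeviceLookupError(Exception):
    """Stand-in for ProcessExecutionError (scsi_id exit code 1, empty output)."""


def get_wwn_with_retry(lookup, path, attempts=3, delay=0.0):
    """Retry a WWN lookup that can race with device removal.

    `lookup` is any callable mimicking scsi_id: it returns the WWN
    string for `path`, or raises DeviceLookupError when the device
    node has already disappeared.
    """
    last_exc = None
    for _ in range(attempts):
        try:
            return lookup(path)
        except DeviceLookupError as exc:
            last_exc = exc
            time.sleep(delay)  # give udev a chance to settle before retrying
    raise last_exc
```

Note this only papers over the race for transient removals; if the backend has genuinely torn down the attachment (as suspected here), every retry would still fail.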

Revision history for this message
Matt Riedemann (mriedem) wrote :

Note that due to bug 1775418 we're working on blocking swap volume for multiattach volumes with more than one read-write attachment: https://review.opendev.org/#/c/572790/

tags: added: libvirt multiattach swap volumes
Revision history for this message
Matt Riedemann (mriedem) wrote :

Can you explain a bit more about the scenario in which you hit this? It looks like a Dell cinder volume type has to be involved - can you be specific about which volume backend you're using?

And is this just a single instance attached to the multiattach-capable volume, or are multiple instances attached to the volume when the swap happens?

Is this intermittent or 100% recreatable?

Changed in nova:
status: New → Incomplete
Revision history for this message
Matt Riedemann (mriedem) wrote :

Oh sorry I see "Driver: Dell EMC SC - FC Driver" now. But the other questions apply.

Revision history for this message
Rajini Karthik (rajini-karthik) wrote :

We have run several checks on https://review.opendev.org/#/c/656835/4 and there is one consistently failing test case on FC:
tempest.api.compute.admin.test_volume_swap.TestMultiAttachVolumeSwap.test_volume_swap_with_multiattach [337.211730s] ... FAILED

This is the multiattach test case that is failing.

It seems that by the time ‘scsi_id’ runs, the disk is already gone.
This still looks to me like a multiattach issue (perhaps the connection is prematurely terminated when one instance detaches).

Also, there seems to be no manual cinder/nova command to perform such a swap test.
The test case logic looks simple: https://github.com/openstack/tempest/blob/master/tempest/api/compute/admin/test_volume_swap.py#L165
The error happens when tempest calls the nova API to update attachments:
https://github.com/openstack/tempest/blob/master/tempest/api/compute/admin/test_volume_swap.py#L197
self.admin_servers_client.update_attached_volume(
            server1['id'], volume1['id'], volumeId=volume2['id'])

update_attached_volume is defined at https://github.com/openstack/tempest/blob/master/tempest/lib/services/compute/servers_client.py#L432
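For reference, the tempest client above wraps Nova's swap-volume operation, which is a PUT to the volume-attachments resource of the server (the existing volume ID identifies the attachment, and the body names the replacement volume). A small sketch of building that request, with placeholder IDs, assuming the documented Nova compute API shape:

```python
import json


def build_swap_volume_request(server_id, old_volume_id, new_volume_id):
    """Build the URL path and JSON body for Nova's swap-volume call
    (update volume attachment), the request that tempest's
    update_attached_volume() ultimately issues via PUT."""
    path = '/servers/%s/os-volume_attachments/%s' % (server_id, old_volume_id)
    body = json.dumps({'volumeAttachment': {'volumeId': new_volume_id}})
    return path, body
```

This is why there is no single obvious CLI equivalent: the swap is driven through the compute API rather than a cinder command.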

For now, we are going to disable this one test case.

Changed in nova:
status: Incomplete → New
summary: - Swap of volumes with multiattach fails
+ Swap of volumes tempest test with multiattach fails
Revision history for this message
Lee Yarwood (lyarwood) wrote :

Marking this as invalid given bug #1775418 and the fact this appears to be an issue within os-brick anyway.

Changed in nova:
status: New → Invalid