Retyping two of an instance's attached volumes at the same time fails

Bug #1912312 reported by zhaoleilc
This bug affects 2 people

Affects: Cinder
Status: Triaged
Importance: Medium
Assigned to: Unassigned

Bug Description

Description
===========
An instance has a large number of attached volumes, all of the ceph volume
type. If two of them are retyped to the fibre channel volume type at the
same time, one of the retypes fails.

Steps to reproduce
==================
1. Create an instance and attach two volumes of the ceph volume type to it.
(If the instance is booted from a ceph-type volume, you only need to attach
one more volume.)
2. Retype both volumes to the fibre channel volume type at the same time
(a Python sketch of this step follows the list).
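
A minimal Python sketch of step 2, assuming python-cinderclient and
keystoneauth1 are installed; the auth endpoint, credentials, the two volume
UUIDs and the "fc" volume type name are placeholders for your environment:

import threading

from cinderclient import client as cinder_client
from keystoneauth1 import loading, session

# Build an authenticated session (placeholder credentials/endpoint).
loader = loading.get_plugin_loader('password')
auth = loader.load_from_options(
    auth_url='http://controller:5000/v3',
    username='admin', password='secret', project_name='admin',
    user_domain_id='default', project_domain_id='default')
sess = session.Session(auth=auth)
cinder = cinder_client.Client('3', session=sess)

# The two ceph-type volumes attached to the instance (placeholder UUIDs).
VOLUME_IDS = ['<volume-uuid-1>', '<volume-uuid-2>']

def retype(volume_id):
    # The 'on-demand' migration policy allows retyping an in-use volume,
    # which makes Nova perform the swap_volume operation seen in the logs.
    cinder.volumes.retype(volume_id, 'fc', 'on-demand')

# Start both retypes at (roughly) the same time to hit the race.
threads = [threading.Thread(target=retype, args=(vid,)) for vid in VOLUME_IDS]
for t in threads:
    t.start()
for t in threads:
    t.join()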

Expected result
===============
Both volumes are retyped successfully.

Actual result
=============
One of the volumes fails to retype and keeps its old type.

Environment
===========
Rocky version of OpenStack

Logs & Configs
==============
2020-12-29 06:17:28.607 3750030 INFO nova.compute.manager [req-35a3460e-a376-4444-bc0f-28eb0d47fcfb 2cb6355ff195452aacbccec7eb77b8a0 ace558fe93cd4abe91054e99912c2da2 - 38b69a94f057440f93fd6313cab2cd9e 38b69a94f057440f93fd6313cab2cd9e] [instance: 8cb874a6-5e62-48d6-a1fa-6cdfe42f26ea] Swapping volume d75f3989-e75a-4e23-ac6d-c0a6f1884a97 for 59b57a6d-f815-4303-9df5-0b483bd42ffa
2020-12-29 06:17:31.588 3750030 INFO os_brick.initiator.connectors.fibre_channel [-] Fibre Channel volume device not yet found. Will rescan & retry. Try number: 0.
2020-12-29 06:17:33.638 3750030 INFO os_brick.initiator.linuxscsi [req-35a3460e-a376-4444-bc0f-28eb0d47fcfb 2cb6355ff195452aacbccec7eb77b8a0 ace558fe93cd4abe91054e99912c2da2 - 38b69a94f057440f93fd6313cab2cd9e 38b69a94f057440f93fd6313cab2cd9e] Find Multipath device file for volume WWN 3600507670881014de000000000001623
2020-12-29 06:17:37.272 3750030 ERROR nova.compute.manager [req-05e319a0-ab2d-49b8-9ab3-fa14075123c8 2cb6355ff195452aacbccec7eb77b8a0 ace558fe93cd4abe91054e99912c2da2 - 38b69a94f057440f93fd6313cab2cd9e 38b69a94f057440f93fd6313cab2cd9e] [instance: 8cb874a6-5e62-48d6-a1fa-6cdfe42f26ea] Failed to swap volume b54341e0-1154-4a0a-80c9-8b99109e893a for 9e0c0efa-43b0-4ea5-8424-a3997a3eabbc: libvirtError: block copy still active: domain has active block job
2020-12-29 06:17:37.272 3750030 ERROR nova.compute.manager [instance: 8cb874a6-5e62-48d6-a1fa-6cdfe42f26ea] Traceback (most recent call last):
2020-12-29 06:17:37.272 3750030 ERROR nova.compute.manager [instance: 8cb874a6-5e62-48d6-a1fa-6cdfe42f26ea] File "/var/lib/openstack/lib/python2.7/site-packages/nova/compute/manager.py", line 6030, in _swap_volume
2020-12-29 06:17:37.272 3750030 ERROR nova.compute.manager [instance: 8cb874a6-5e62-48d6-a1fa-6cdfe42f26ea] mountpoint, resize_to)
2020-12-29 06:17:37.272 3750030 ERROR nova.compute.manager [instance: 8cb874a6-5e62-48d6-a1fa-6cdfe42f26ea] File "/var/lib/openstack/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 1854, in swap_volume
2020-12-29 06:17:37.272 3750030 ERROR nova.compute.manager [instance: 8cb874a6-5e62-48d6-a1fa-6cdfe42f26ea] old_conn=old_connection_info)
2020-12-29 06:17:37.272 3750030 ERROR nova.compute.manager [instance: 8cb874a6-5e62-48d6-a1fa-6cdfe42f26ea] File "/var/lib/openstack/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 1812, in _swap_volume
2020-12-29 06:17:37.272 3750030 ERROR nova.compute.manager [instance: 8cb874a6-5e62-48d6-a1fa-6cdfe42f26ea] self._host.write_instance_config(xml)
2020-12-29 06:17:37.272 3750030 ERROR nova.compute.manager [instance: 8cb874a6-5e62-48d6-a1fa-6cdfe42f26ea] File "/var/lib/openstack/lib/python2.7/site-packages/nova/virt/libvirt/host.py", line 864, in write_instance_config
2020-12-29 06:17:37.272 3750030 ERROR nova.compute.manager [instance: 8cb874a6-5e62-48d6-a1fa-6cdfe42f26ea] domain = self.get_connection().defineXML(xml)
2020-12-29 06:17:37.272 3750030 ERROR nova.compute.manager [instance: 8cb874a6-5e62-48d6-a1fa-6cdfe42f26ea] File "/var/lib/openstack/lib/python2.7/site-packages/eventlet/tpool.py", line 186, in doit
2020-12-29 06:17:37.272 3750030 ERROR nova.compute.manager [instance: 8cb874a6-5e62-48d6-a1fa-6cdfe42f26ea] result = proxy_call(self._autowrap, f, *args, **kwargs)
2020-12-29 06:17:37.272 3750030 ERROR nova.compute.manager [instance: 8cb874a6-5e62-48d6-a1fa-6cdfe42f26ea] File "/var/lib/openstack/lib/python2.7/site-packages/eventlet/tpool.py", line 144, in proxy_call
2020-12-29 06:17:37.272 3750030 ERROR nova.compute.manager [instance: 8cb874a6-5e62-48d6-a1fa-6cdfe42f26ea] rv = execute(f, *args, **kwargs)
2020-12-29 06:17:37.272 3750030 ERROR nova.compute.manager [instance: 8cb874a6-5e62-48d6-a1fa-6cdfe42f26ea] File "/var/lib/openstack/lib/python2.7/site-packages/eventlet/tpool.py", line 125, in execute
2020-12-29 06:17:37.272 3750030 ERROR nova.compute.manager [instance: 8cb874a6-5e62-48d6-a1fa-6cdfe42f26ea] six.reraise(c, e, tb)
2020-12-29 06:17:37.272 3750030 ERROR nova.compute.manager [instance: 8cb874a6-5e62-48d6-a1fa-6cdfe42f26ea] File "/var/lib/openstack/lib/python2.7/site-packages/eventlet/tpool.py", line 83, in tworker
2020-12-29 06:17:37.272 3750030 ERROR nova.compute.manager [instance: 8cb874a6-5e62-48d6-a1fa-6cdfe42f26ea] rv = meth(*args, **kwargs)
2020-12-29 06:17:37.272 3750030 ERROR nova.compute.manager [instance: 8cb874a6-5e62-48d6-a1fa-6cdfe42f26ea] File "/var/lib/openstack/lib/python2.7/site-packages/libvirt.py", line 3676, in defineXML
2020-12-29 06:17:37.272 3750030 ERROR nova.compute.manager [instance: 8cb874a6-5e62-48d6-a1fa-6cdfe42f26ea] if ret is None:raise libvirtError('virDomainDefineXML() failed', conn=self)
2020-12-29 06:17:37.272 3750030 ERROR nova.compute.manager [instance: 8cb874a6-5e62-48d6-a1fa-6cdfe42f26ea] libvirtError: block copy still active: domain has active block job
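
The traceback shows Nova's _swap_volume failing when it redefines the
persistent domain XML (write_instance_config -> defineXML) while the block
copy started by the other, still-running volume swap is active; libvirt
rejects redefining a domain that has an active block copy on any of its
disks. Below is a minimal sketch of that libvirt behaviour (not Nova's
code), assuming the libvirt-python bindings; the domain UUID and the disk
target names are placeholders:

import time

import libvirt

conn = libvirt.open('qemu:///system')
dom = conn.lookupByUUIDString('8cb874a6-5e62-48d6-a1fa-6cdfe42f26ea')

def active_block_jobs(domain, disks=('vda', 'vdb', 'vdc')):
    # blockJobInfo() returns a non-empty dict while a block copy/rebase
    # job is still running on that disk target.
    return [d for d in disks if domain.blockJobInfo(d, 0)]

# defineXML() only succeeds once no disk of the domain has an active block
# job; calling it while the other swap's block copy is still running raises
# "libvirtError: block copy still active: domain has active block job".
while active_block_jobs(dom):
    time.sleep(1)
new_xml = dom.XMLDesc(0)  # placeholder for the updated domain XML
conn.defineXML(new_xml)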

Tags: retype volume
Changed in cinder:
status: New → Triaged
importance: Undecided → Medium
huangbingyan (bingyanh) wrote :

Hi, did you solve it?
