Unable to detach volume from instance when previously removed from the inactive config

Bug #1887946 reported by Lee Yarwood
This bug affects 2 people
Affects                    Status         Importance   Assigned to    Milestone
OpenStack Compute (nova)   Fix Released   Medium       Lee Yarwood
Queens                     Fix Released   Medium       Unassigned
Rocky                      Fix Released   Medium       Lee Yarwood
Stein                      Fix Released   Medium       Elod Illes
Train                      Fix Released   Medium       Lee Yarwood
Ussuri                     Fix Released   Medium       Unassigned

Bug Description

Description
===========
The failure described in the summary is often encountered when previous attempts to detach a volume have failed because the device was still in use within the guest OS.

The initial attempt removes the device from the inactive config but fails to remove it from the active config. Any subsequent attempt then fails because Nova again tries to remove the device from both the inactive and live configs, even though the device is no longer present in the inactive config.

Prior to libvirt v4.1.0 this raised either a VIR_ERR_INVALID_ARG or a VIR_ERR_OPERATION_FAILED error code, which n-cpu handled by retrying the detach against the live config only.

Since libvirt v4.1.0, however, a VIR_ERR_DEVICE_MISSING error code is raised instead. Nova does not handle this code, so no attempt is made to detach the device from the live config.
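
For context, the handling n-cpu needs here can be sketched with the python-libvirt bindings. This is an illustrative sketch only, not the actual Nova code: the helper name and retry policy are assumptions, while the libvirt constants and calls are real.

import libvirt

# libvirt error codes that have historically meant "device not present in the
# targeted config"; VIR_ERR_DEVICE_MISSING is what libvirt >= v4.1.0 raises
# and is the code Nova currently fails to recognise.
DEVICE_GONE_CODES = tuple(code for code in (
    libvirt.VIR_ERR_INVALID_ARG,
    libvirt.VIR_ERR_OPERATION_FAILED,
    getattr(libvirt, 'VIR_ERR_DEVICE_MISSING', None),  # added in libvirt v4.1.0
) if code is not None)

def detach_disk(dom, device_xml):
    """Detach from both configs, retrying live-only if the device has already
    been removed from the persistent config."""
    both = libvirt.VIR_DOMAIN_AFFECT_CONFIG | libvirt.VIR_DOMAIN_AFFECT_LIVE
    try:
        dom.detachDeviceFlags(device_xml, flags=both)
    except libvirt.libvirtError as exc:
        if exc.get_error_code() not in DEVICE_GONE_CODES:
            raise
        # The device is gone from the inactive config; make sure it is still
        # detached from the running guest.
        dom.detachDeviceFlags(device_xml, flags=libvirt.VIR_DOMAIN_AFFECT_LIVE)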

Steps to reproduce
==================

# Start with a volume attached as vdb (ignore the source ;))

$ sudo virsh domblklist 4b1a0828-8dcc-4b73-a05e-5b50cb62c8f8
 Target Source
------------------------------------------------------------------------------------
 vda /opt/stack/data/nova/instances/4b1a0828-8dcc-4b73-a05e-5b50cb62c8f8/disk
 vdb iqn.2010-10.org.openstack:volume-37cc97fa-9776-4b31-8f3f-cb1f18ff1db6/0

# Detach from the inactive config

$ sudo virsh detach-disk --config 4b1a0828-8dcc-4b73-a05e-5b50cb62c8f8 vdb
Disk detached successfully

# Confirm the device is still listed on the live config

$ sudo virsh domblklist 4b1a0828-8dcc-4b73-a05e-5b50cb62c8f8
 Target Source
------------------------------------------------------------------------------------
 vda /opt/stack/data/nova/instances/4b1a0828-8dcc-4b73-a05e-5b50cb62c8f8/disk
 vdb iqn.2010-10.org.openstack:volume-37cc97fa-9776-4b31-8f3f-cb1f18ff1db6/0

# and removed from the persistent config

$ sudo virsh domblklist --inactive 4b1a0828-8dcc-4b73-a05e-5b50cb62c8f8
 Target Source
------------------------------------------------------------------------------------
 vda /opt/stack/data/nova/instances/4b1a0828-8dcc-4b73-a05e-5b50cb62c8f8/disk

# Attempt to detach the volume

$ openstack server remove volume 4b1a0828-8dcc-4b73-a05e-5b50cb62c8f8 test
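
The same state can also be reproduced directly with the python-libvirt bindings. The snippet below is an editorial sketch, not part of the original report: the minimal disk XML is an assumption (Nova passes the full device XML it generated at attach time), while the UUID and target device match the steps above.

import libvirt

UUID = '4b1a0828-8dcc-4b73-a05e-5b50cb62c8f8'
# Minimal disk XML matching on the target device; for illustration only.
DISK_XML = "<disk type='block' device='disk'><target dev='vdb'/></disk>"

conn = libvirt.open('qemu:///system')
dom = conn.lookupByUUIDString(UUID)

# Equivalent of `virsh detach-disk --config`: remove vdb from the inactive
# (persistent) config only.
dom.detachDeviceFlags(DISK_XML, flags=libvirt.VIR_DOMAIN_AFFECT_CONFIG)

# A later detach against both configs (what Nova attempts) now fails with
# VIR_ERR_DEVICE_MISSING on libvirt >= v4.1.0.
try:
    dom.detachDeviceFlags(
        DISK_XML,
        flags=libvirt.VIR_DOMAIN_AFFECT_LIVE | libvirt.VIR_DOMAIN_AFFECT_CONFIG)
except libvirt.libvirtError as exc:
    print(exc.get_error_code(), exc.get_error_message())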

Expected result
===============
The initial attempt to detach the device fails because the device is no longer present in the inactive config, but n-cpu continues and ensures the device is removed from the live config.

Actual result
=============
n-cpu doesn't handle the initial failure because the raised libvirt error code isn't recognised, so no attempt is made to detach the device from the live config.

Environment
===========
1. Exact version of OpenStack you are running. See the following
  list for all releases: http://docs.openstack.org/releases/

   b7161fe9b92f0045e97c300a80e58d32b6f49be1

2. Which hypervisor did you use?
   (For example: Libvirt + KVM, Libvirt + XEN, Hyper-V, PowerKVM, ...)
   What's the version of that?

   libvirt + KVM

3. Which storage type did you use?
   (For example: Ceph, LVM, GPFS, ...)
   What's the version of that?

   N/A

4. Which networking type did you use?
   (For example: nova-network, Neutron with OpenVSwitch, ...)

   N/A

Logs & Configs
==============

$ openstack server remove volume 4b1a0828-8dcc-4b73-a05e-5b50cb62c8f8 test ; journalctl -u devstack@n-cpu -f
[..]
Jul 16 17:26:53 localhost.localdomain nova-compute[190210]: DEBUG oslo_concurrency.lockutils [None req-16d62ef9-d492-4012-bb6d-37e5611ede50 admin admin] Lock "4b1a0828-8dcc-4b73-a05e-5b50cb62c8f8" released by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.141s {{(pid=190210) inner /usr/local/lib/python3.7/site-packages/oslo_concurrency/lockutils.py:371}}
Jul 16 17:26:53 localhost.localdomain nova-compute[190210]: ERROR oslo_messaging.rpc.server [None req-16d62ef9-d492-4012-bb6d-37e5611ede50 admin admin] Exception during message handling: libvirt.libvirtError: device not found: no target device vdb
Jul 16 17:26:53 localhost.localdomain nova-compute[190210]: ERROR oslo_messaging.rpc.server Traceback (most recent call last):
Jul 16 17:26:53 localhost.localdomain nova-compute[190210]: ERROR oslo_messaging.rpc.server File "/usr/local/lib/python3.7/site-packages/oslo_messaging/rpc/server.py", line 165, in _process_incoming
Jul 16 17:26:53 localhost.localdomain nova-compute[190210]: ERROR oslo_messaging.rpc.server res = self.dispatcher.dispatch(message)
Jul 16 17:26:53 localhost.localdomain nova-compute[190210]: ERROR oslo_messaging.rpc.server File "/usr/local/lib/python3.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 273, in dispatch
Jul 16 17:26:53 localhost.localdomain nova-compute[190210]: ERROR oslo_messaging.rpc.server return self._do_dispatch(endpoint, method, ctxt, args)
Jul 16 17:26:53 localhost.localdomain nova-compute[190210]: ERROR oslo_messaging.rpc.server File "/usr/local/lib/python3.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 193, in _do_dispatch
Jul 16 17:26:53 localhost.localdomain nova-compute[190210]: ERROR oslo_messaging.rpc.server result = func(ctxt, **new_args)
Jul 16 17:26:53 localhost.localdomain nova-compute[190210]: ERROR oslo_messaging.rpc.server File "/opt/stack/nova/nova/exception_wrapper.py", line 78, in wrapped
Jul 16 17:26:53 localhost.localdomain nova-compute[190210]: ERROR oslo_messaging.rpc.server function_name, call_dict, binary)
Jul 16 17:26:53 localhost.localdomain nova-compute[190210]: ERROR oslo_messaging.rpc.server File "/usr/local/lib/python3.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
Jul 16 17:26:53 localhost.localdomain nova-compute[190210]: ERROR oslo_messaging.rpc.server self.force_reraise()
Jul 16 17:26:53 localhost.localdomain nova-compute[190210]: ERROR oslo_messaging.rpc.server File "/usr/local/lib/python3.7/site-packages/oslo_utils/excutils.py", line 196, in force_reraise
Jul 16 17:26:53 localhost.localdomain nova-compute[190210]: ERROR oslo_messaging.rpc.server six.reraise(self.type_, self.value, self.tb)
Jul 16 17:26:53 localhost.localdomain nova-compute[190210]: ERROR oslo_messaging.rpc.server File "/usr/local/lib/python3.7/site-packages/six.py", line 703, in reraise
Jul 16 17:26:53 localhost.localdomain nova-compute[190210]: ERROR oslo_messaging.rpc.server raise value
Jul 16 17:26:53 localhost.localdomain nova-compute[190210]: ERROR oslo_messaging.rpc.server File "/opt/stack/nova/nova/exception_wrapper.py", line 69, in wrapped
Jul 16 17:26:53 localhost.localdomain nova-compute[190210]: ERROR oslo_messaging.rpc.server return f(self, context, *args, **kw)
Jul 16 17:26:53 localhost.localdomain nova-compute[190210]: ERROR oslo_messaging.rpc.server File "/opt/stack/nova/nova/compute/utils.py", line 1440, in decorated_function
Jul 16 17:26:53 localhost.localdomain nova-compute[190210]: ERROR oslo_messaging.rpc.server return function(self, context, *args, **kwargs)
Jul 16 17:26:53 localhost.localdomain nova-compute[190210]: ERROR oslo_messaging.rpc.server File "/opt/stack/nova/nova/compute/manager.py", line 216, in decorated_function
Jul 16 17:26:53 localhost.localdomain nova-compute[190210]: ERROR oslo_messaging.rpc.server kwargs['instance'], e, sys.exc_info())
Jul 16 17:26:53 localhost.localdomain nova-compute[190210]: ERROR oslo_messaging.rpc.server File "/usr/local/lib/python3.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
Jul 16 17:26:53 localhost.localdomain nova-compute[190210]: ERROR oslo_messaging.rpc.server self.force_reraise()
Jul 16 17:26:53 localhost.localdomain nova-compute[190210]: ERROR oslo_messaging.rpc.server File "/usr/local/lib/python3.7/site-packages/oslo_utils/excutils.py", line 196, in force_reraise
Jul 16 17:26:53 localhost.localdomain nova-compute[190210]: ERROR oslo_messaging.rpc.server six.reraise(self.type_, self.value, self.tb)
Jul 16 17:26:53 localhost.localdomain nova-compute[190210]: ERROR oslo_messaging.rpc.server File "/usr/local/lib/python3.7/site-packages/six.py", line 703, in reraise
Jul 16 17:26:53 localhost.localdomain nova-compute[190210]: ERROR oslo_messaging.rpc.server raise value
Jul 16 17:26:53 localhost.localdomain nova-compute[190210]: ERROR oslo_messaging.rpc.server File "/opt/stack/nova/nova/compute/manager.py", line 204, in decorated_function
Jul 16 17:26:53 localhost.localdomain nova-compute[190210]: ERROR oslo_messaging.rpc.server return function(self, context, *args, **kwargs)
Jul 16 17:26:53 localhost.localdomain nova-compute[190210]: ERROR oslo_messaging.rpc.server File "/opt/stack/nova/nova/compute/manager.py", line 7099, in detach_volume
Jul 16 17:26:53 localhost.localdomain nova-compute[190210]: ERROR oslo_messaging.rpc.server do_detach_volume(context, volume_id, instance, attachment_id)
Jul 16 17:26:53 localhost.localdomain nova-compute[190210]: ERROR oslo_messaging.rpc.server File "/usr/local/lib/python3.7/site-packages/oslo_concurrency/lockutils.py", line 360, in inner
Jul 16 17:26:53 localhost.localdomain nova-compute[190210]: ERROR oslo_messaging.rpc.server return f(*args, **kwargs)
Jul 16 17:26:53 localhost.localdomain nova-compute[190210]: ERROR oslo_messaging.rpc.server File "/opt/stack/nova/nova/compute/manager.py", line 7097, in do_detach_volume
Jul 16 17:26:53 localhost.localdomain nova-compute[190210]: ERROR oslo_messaging.rpc.server attachment_id=attachment_id)
Jul 16 17:26:53 localhost.localdomain nova-compute[190210]: ERROR oslo_messaging.rpc.server File "/opt/stack/nova/nova/compute/manager.py", line 7048, in _detach_volume
Jul 16 17:26:53 localhost.localdomain nova-compute[190210]: ERROR oslo_messaging.rpc.server attachment_id=attachment_id, destroy_bdm=destroy_bdm)
Jul 16 17:26:53 localhost.localdomain nova-compute[190210]: ERROR oslo_messaging.rpc.server File "/opt/stack/nova/nova/virt/block_device.py", line 477, in detach
Jul 16 17:26:53 localhost.localdomain nova-compute[190210]: ERROR oslo_messaging.rpc.server attachment_id, destroy_bdm)
Jul 16 17:26:53 localhost.localdomain nova-compute[190210]: ERROR oslo_messaging.rpc.server File "/opt/stack/nova/nova/virt/block_device.py", line 408, in _do_detach
Jul 16 17:26:53 localhost.localdomain nova-compute[190210]: ERROR oslo_messaging.rpc.server self.driver_detach(context, instance, volume_api, virt_driver)
Jul 16 17:26:53 localhost.localdomain nova-compute[190210]: ERROR oslo_messaging.rpc.server File "/opt/stack/nova/nova/virt/block_device.py", line 347, in driver_detach
Jul 16 17:26:53 localhost.localdomain nova-compute[190210]: ERROR oslo_messaging.rpc.server volume_api.roll_detaching(context, volume_id)
Jul 16 17:26:53 localhost.localdomain nova-compute[190210]: ERROR oslo_messaging.rpc.server File "/usr/local/lib/python3.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
Jul 16 17:26:53 localhost.localdomain nova-compute[190210]: ERROR oslo_messaging.rpc.server self.force_reraise()
Jul 16 17:26:53 localhost.localdomain nova-compute[190210]: ERROR oslo_messaging.rpc.server File "/usr/local/lib/python3.7/site-packages/oslo_utils/excutils.py", line 196, in force_reraise
Jul 16 17:26:53 localhost.localdomain nova-compute[190210]: ERROR oslo_messaging.rpc.server six.reraise(self.type_, self.value, self.tb)
Jul 16 17:26:53 localhost.localdomain nova-compute[190210]: ERROR oslo_messaging.rpc.server File "/usr/local/lib/python3.7/site-packages/six.py", line 703, in reraise
Jul 16 17:26:53 localhost.localdomain nova-compute[190210]: ERROR oslo_messaging.rpc.server raise value
Jul 16 17:26:53 localhost.localdomain nova-compute[190210]: ERROR oslo_messaging.rpc.server File "/opt/stack/nova/nova/virt/block_device.py", line 329, in driver_detach
Jul 16 17:26:53 localhost.localdomain nova-compute[190210]: ERROR oslo_messaging.rpc.server encryption=encryption)
Jul 16 17:26:53 localhost.localdomain nova-compute[190210]: ERROR oslo_messaging.rpc.server File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 2019, in detach_volume
Jul 16 17:26:53 localhost.localdomain nova-compute[190210]: ERROR oslo_messaging.rpc.server live=live)
Jul 16 17:26:53 localhost.localdomain nova-compute[190210]: ERROR oslo_messaging.rpc.server File "/opt/stack/nova/nova/virt/libvirt/guest.py", line 425, in detach_device_with_retry
Jul 16 17:26:53 localhost.localdomain nova-compute[190210]: ERROR oslo_messaging.rpc.server _try_detach_device(conf, persistent, live)
Jul 16 17:26:53 localhost.localdomain nova-compute[190210]: ERROR oslo_messaging.rpc.server File "/opt/stack/nova/nova/virt/libvirt/guest.py", line 414, in _try_detach_device
Jul 16 17:26:53 localhost.localdomain nova-compute[190210]: ERROR oslo_messaging.rpc.server ctx.reraise = True
Jul 16 17:26:53 localhost.localdomain nova-compute[190210]: ERROR oslo_messaging.rpc.server File "/usr/local/lib/python3.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
Jul 16 17:26:53 localhost.localdomain nova-compute[190210]: ERROR oslo_messaging.rpc.server self.force_reraise()
Jul 16 17:26:53 localhost.localdomain nova-compute[190210]: ERROR oslo_messaging.rpc.server File "/usr/local/lib/python3.7/site-packages/oslo_utils/excutils.py", line 196, in force_reraise
Jul 16 17:26:53 localhost.localdomain nova-compute[190210]: ERROR oslo_messaging.rpc.server six.reraise(self.type_, self.value, self.tb)
Jul 16 17:26:53 localhost.localdomain nova-compute[190210]: ERROR oslo_messaging.rpc.server File "/usr/local/lib/python3.7/site-packages/six.py", line 703, in reraise
Jul 16 17:26:53 localhost.localdomain nova-compute[190210]: ERROR oslo_messaging.rpc.server raise value
Jul 16 17:26:53 localhost.localdomain nova-compute[190210]: ERROR oslo_messaging.rpc.server File "/opt/stack/nova/nova/virt/libvirt/guest.py", line 387, in _try_detach_device
Jul 16 17:26:53 localhost.localdomain nova-compute[190210]: ERROR oslo_messaging.rpc.server self.detach_device(conf, persistent=persistent, live=live)
Jul 16 17:26:53 localhost.localdomain nova-compute[190210]: ERROR oslo_messaging.rpc.server File "/opt/stack/nova/nova/virt/libvirt/guest.py", line 475, in detach_device
Jul 16 17:26:53 localhost.localdomain nova-compute[190210]: ERROR oslo_messaging.rpc.server self._domain.detachDeviceFlags(device_xml, flags=flags)
Jul 16 17:26:53 localhost.localdomain nova-compute[190210]: ERROR oslo_messaging.rpc.server File "/usr/local/lib/python3.7/site-packages/eventlet/tpool.py", line 190, in doit
Jul 16 17:26:53 localhost.localdomain nova-compute[190210]: ERROR oslo_messaging.rpc.server result = proxy_call(self._autowrap, f, *args, **kwargs)
Jul 16 17:26:53 localhost.localdomain nova-compute[190210]: ERROR oslo_messaging.rpc.server File "/usr/local/lib/python3.7/site-packages/eventlet/tpool.py", line 148, in proxy_call
Jul 16 17:26:53 localhost.localdomain nova-compute[190210]: ERROR oslo_messaging.rpc.server rv = execute(f, *args, **kwargs)
Jul 16 17:26:53 localhost.localdomain nova-compute[190210]: ERROR oslo_messaging.rpc.server File "/usr/local/lib/python3.7/site-packages/eventlet/tpool.py", line 129, in execute
Jul 16 17:26:53 localhost.localdomain nova-compute[190210]: ERROR oslo_messaging.rpc.server six.reraise(c, e, tb)
Jul 16 17:26:53 localhost.localdomain nova-compute[190210]: ERROR oslo_messaging.rpc.server File "/usr/local/lib/python3.7/site-packages/six.py", line 703, in reraise
Jul 16 17:26:53 localhost.localdomain nova-compute[190210]: ERROR oslo_messaging.rpc.server raise value
Jul 16 17:26:53 localhost.localdomain nova-compute[190210]: ERROR oslo_messaging.rpc.server File "/usr/local/lib/python3.7/site-packages/eventlet/tpool.py", line 83, in tworker
Jul 16 17:26:53 localhost.localdomain nova-compute[190210]: ERROR oslo_messaging.rpc.server rv = meth(*args, **kwargs)
Jul 16 17:26:53 localhost.localdomain nova-compute[190210]: ERROR oslo_messaging.rpc.server File "/usr/local/lib64/python3.7/site-packages/libvirt.py", line 1309, in detachDeviceFlags
Jul 16 17:26:53 localhost.localdomain nova-compute[190210]: ERROR oslo_messaging.rpc.server if ret == -1: raise libvirtError ('virDomainDetachDeviceFlags() failed', dom=self)
Jul 16 17:26:53 localhost.localdomain nova-compute[190210]: ERROR oslo_messaging.rpc.server libvirt.libvirtError: device not found: no target device vdb
Jul 16 17:26:53 localhost.localdomain nova-compute[190210]: ERROR oslo_messaging.rpc.server

Changed in nova:
status: New → In Progress

Lee Yarwood (lyarwood)
description: updated

melanie witt (melwitt)
Changed in nova:
importance: Undecided → Medium
assignee: Lee Yarwood (lyarwood) → melanie witt (melwitt)

melanie witt (melwitt)
Changed in nova:
assignee: melanie witt (melwitt) → Lee Yarwood (lyarwood)
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix merged to nova (master)

Reviewed: https://review.opendev.org/741561
Committed: https://git.openstack.org/cgit/openstack/nova/commit/?id=902f09af251d2b2e56fb2f2900a3510baf38a508
Submitter: Zuul
Branch: master

commit 902f09af251d2b2e56fb2f2900a3510baf38a508
Author: Lee Yarwood <email address hidden>
Date: Fri Jul 17 00:45:10 2020 +0100

    libvirt: Handle VIR_ERR_DEVICE_MISSING when detaching devices

    Introduced in libvirt v4.1.0 [1] this error code replaces the previously
    raised VIR_ERR_INVALID_ARG, VIR_ERR_OPERATION_FAILED and
    VIR_ERR_INTERNAL_ERROR codes [2][3].

    VIR_ERR_OPERATION_FAILED was introduced and tested as an
    active/live/hot unplug config device detach error code in
    I131aaf28d2f5d5d964d4045e3d7d62207079cfb0.

    VIR_ERR_INTERNAL_ERROR was introduced and tested as an
    active/live/hot unplug config device detach error code in
    I3055cd7641de92ab188de73733ca9288a9ca730a.

    VIR_ERR_INVALID_ARG was introduced and tested as an
    inactive/persistent/cold unplug config device detach error code in
    I09230fc47b0950aa5a3db839a070613c9c817576.

    This change introduces support for the new VIR_ERR_DEVICE_MISSING error
    code while also retaining coverage for these codes until
    MIN_LIBVIRT_VERSION is bumped past v4.1.0.

    The majority of this change is test code motion with the existing tests
    being modified to run against either the active or inactive versions of
    the above error codes for the time being.

    test_detach_device_with_retry_operation_internal and
    test_detach_device_with_retry_invalid_argument_no_live have been removed
    as they duplicate the logic within the now refactored
    _test_detach_device_with_retry_second_detach_failure.

    [1] https://libvirt.org/git/?p=libvirt.git;a=commit;h=bb189c8e8c93f115c13fa3bfffdf64498f3f0ce1
    [2] https://libvirt.org/git/?p=libvirt.git;a=commit;h=126db34a81bc9f9f9710408f88cceaa1e34bbbd7
    [3] https://libvirt.org/git/?p=libvirt.git;a=commit;h=2f54eab7c7c618811de23c60a51e910274cf30de

    Closes-Bug: #1887946
    Change-Id: I7eb86edc130d186a66c04b229d46347ec5c0b625

Changed in nova:
status: In Progress → Fix Released
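
For readers following the fix: the change above amounts to adding VIR_ERR_DEVICE_MISSING to the sets of libvirt error codes that mean "the device is already absent from this config", alongside the legacy codes that are kept until MIN_LIBVIRT_VERSION moves past v4.1.0. A simplified, hedged sketch of that grouping follows; the names below are illustrative, not Nova's actual identifiers.

import libvirt

# Codes historically raised when the device is absent from the live config,
# plus the v4.1.0+ replacement.
LIVE_DETACH_NOT_FOUND = tuple(c for c in (
    libvirt.VIR_ERR_OPERATION_FAILED,
    libvirt.VIR_ERR_INTERNAL_ERROR,
    getattr(libvirt, 'VIR_ERR_DEVICE_MISSING', None),
) if c is not None)

# Codes historically raised when the device is absent from the persistent
# config, plus the v4.1.0+ replacement.
CONFIG_DETACH_NOT_FOUND = tuple(c for c in (
    libvirt.VIR_ERR_INVALID_ARG,
    getattr(libvirt, 'VIR_ERR_DEVICE_MISSING', None),
) if c is not None)

def device_already_gone(exc, live):
    """True if a failed detach just means the device was already absent from
    the targeted (live or persistent) config."""
    codes = LIVE_DETACH_NOT_FOUND if live else CONFIG_DETACH_NOT_FOUND
    return exc.get_error_code() in codes
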
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix proposed to nova (stable/ussuri)

Fix proposed to branch: stable/ussuri
Review: https://review.opendev.org/742414

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix proposed to nova (stable/train)

Fix proposed to branch: stable/train
Review: https://review.opendev.org/742415

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix proposed to nova (stable/stein)

Fix proposed to branch: stable/stein
Review: https://review.opendev.org/742416

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix proposed to nova (stable/rocky)

Fix proposed to branch: stable/rocky
Review: https://review.opendev.org/742417

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix proposed to nova (stable/queens)

Fix proposed to branch: stable/queens
Review: https://review.opendev.org/742424

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix merged to nova (stable/ussuri)

Reviewed: https://review.opendev.org/742414
Committed: https://git.openstack.org/cgit/openstack/nova/commit/?id=93058ae1b8bc1b1728f08b9e606b68318751fc3b
Submitter: Zuul
Branch: stable/ussuri

commit 93058ae1b8bc1b1728f08b9e606b68318751fc3b
Author: Lee Yarwood <email address hidden>
Date: Fri Jul 17 00:45:10 2020 +0100

    libvirt: Handle VIR_ERR_DEVICE_MISSING when detaching devices

    Introduced in libvirt v4.1.0 [1] this error code replaces the previously
    raised VIR_ERR_INVALID_ARG, VIR_ERR_OPERATION_FAILED and
    VIR_ERR_INTERNAL_ERROR codes [2][3].

    VIR_ERR_OPERATION_FAILED was introduced and tested as an
    active/live/hot unplug config device detach error code in
    I131aaf28d2f5d5d964d4045e3d7d62207079cfb0.

    VIR_ERR_INTERNAL_ERROR was introduced and tested as an
    active/live/hot unplug config device detach error code in
    I3055cd7641de92ab188de73733ca9288a9ca730a.

    VIR_ERR_INVALID_ARG was introduced and tested as an
    inactive/persistent/cold unplug config device detach error code in
    I09230fc47b0950aa5a3db839a070613c9c817576.

    This change introduces support for the new VIR_ERR_DEVICE_MISSING error
    code while also retaining coverage for these codes until
    MIN_LIBVIRT_VERSION is bumped past v4.1.0.

    The majority of this change is test code motion with the existing tests
    being modified to run against either the active or inactive versions of
    the above error codes for the time being.

    test_detach_device_with_retry_operation_internal and
    test_detach_device_with_retry_invalid_argument_no_live have been removed
    as they duplicate the logic within the now refactored
    _test_detach_device_with_retry_second_detach_failure.

    [1] https://libvirt.org/git/?p=libvirt.git;a=commit;h=bb189c8e8c93f115c13fa3bfffdf64498f3f0ce1
    [2] https://libvirt.org/git/?p=libvirt.git;a=commit;h=126db34a81bc9f9f9710408f88cceaa1e34bbbd7
    [3] https://libvirt.org/git/?p=libvirt.git;a=commit;h=2f54eab7c7c618811de23c60a51e910274cf30de

    Closes-Bug: #1887946
    Change-Id: I7eb86edc130d186a66c04b229d46347ec5c0b625
    (cherry picked from commit 902f09af251d2b2e56fb2f2900a3510baf38a508)

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix merged to nova (stable/train)

Reviewed: https://review.opendev.org/742415
Committed: https://git.openstack.org/cgit/openstack/nova/commit/?id=863d6ef7601302901fa3368ea8457b3564eeb501
Submitter: Zuul
Branch: stable/train

commit 863d6ef7601302901fa3368ea8457b3564eeb501
Author: Lee Yarwood <email address hidden>
Date: Fri Jul 17 00:45:10 2020 +0100

    libvirt: Handle VIR_ERR_DEVICE_MISSING when detaching devices

    Introduced in libvirt v4.1.0 [1] this error code replaces the previously
    raised VIR_ERR_INVALID_ARG, VIR_ERR_OPERATION_FAILED and
    VIR_ERR_INTERNAL_ERROR codes [2][3].

    VIR_ERR_OPERATION_FAILED was introduced and tested as an
    active/live/hot unplug config device detach error code in
    I131aaf28d2f5d5d964d4045e3d7d62207079cfb0.

    VIR_ERR_INTERNAL_ERROR was introduced and tested as an
    active/live/hot unplug config device detach error code in
    I3055cd7641de92ab188de73733ca9288a9ca730a.

    VIR_ERR_INVALID_ARG was introduced and tested as an
    inactive/persistent/cold unplug config device detach error code in
    I09230fc47b0950aa5a3db839a070613c9c817576.

    This change introduces support for the new VIR_ERR_DEVICE_MISSING error
    code while also retaining coverage for these codes until
    MIN_LIBVIRT_VERSION is bumped past v4.1.0.

    The majority of this change is test code motion with the existing tests
    being modified to run against either the active or inactive versions of
    the above error codes for the time being.

    test_detach_device_with_retry_operation_internal and
    test_detach_device_with_retry_invalid_argument_no_live have been removed
    as they duplicate the logic within the now refactored
    _test_detach_device_with_retry_second_detach_failure.

    [1] https://libvirt.org/git/?p=libvirt.git;a=commit;h=bb189c8e8c93f115c13fa3bfffdf64498f3f0ce1
    [2] https://libvirt.org/git/?p=libvirt.git;a=commit;h=126db34a81bc9f9f9710408f88cceaa1e34bbbd7
    [3] https://libvirt.org/git/?p=libvirt.git;a=commit;h=2f54eab7c7c618811de23c60a51e910274cf30de

    Closes-Bug: #1887946
    Change-Id: I7eb86edc130d186a66c04b229d46347ec5c0b625
    (cherry picked from commit 902f09af251d2b2e56fb2f2900a3510baf38a508)
    (cherry picked from commit 93058ae1b8bc1b1728f08b9e606b68318751fc3b)

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix merged to nova (stable/stein)

Reviewed: https://review.opendev.org/742416
Committed: https://git.openstack.org/cgit/openstack/nova/commit/?id=76428c1a6a7796391957a3e83207f85cfe924505
Submitter: Zuul
Branch: stable/stein

commit 76428c1a6a7796391957a3e83207f85cfe924505
Author: Lee Yarwood <email address hidden>
Date: Fri Jul 17 00:45:10 2020 +0100

    libvirt: Handle VIR_ERR_DEVICE_MISSING when detaching devices

    Introduced in libvirt v4.1.0 [1] this error code replaces the previously
    raised VIR_ERR_INVALID_ARG, VIR_ERR_OPERATION_FAILED and
    VIR_ERR_INTERNAL_ERROR codes [2][3].

    VIR_ERR_OPERATION_FAILED was introduced and tested as an
    active/live/hot unplug config device detach error code in
    I131aaf28d2f5d5d964d4045e3d7d62207079cfb0.

    VIR_ERR_INTERNAL_ERROR was introduced and tested as an
    active/live/hot unplug config device detach error code in
    I3055cd7641de92ab188de73733ca9288a9ca730a.

    VIR_ERR_INVALID_ARG was introduced and tested as an
    inactive/persistent/cold unplug config device detach error code in
    I09230fc47b0950aa5a3db839a070613c9c817576.

    This change introduces support for the new VIR_ERR_DEVICE_MISSING error
    code while also retaining coverage for these codes until
    MIN_LIBVIRT_VERSION is bumped past v4.1.0.

    The majority of this change is test code motion with the existing tests
    being modified to run against either the active or inactive versions of
    the above error codes for the time being.

    test_detach_device_with_retry_operation_internal and
    test_detach_device_with_retry_invalid_argument_no_live have been removed
    as they duplicate the logic within the now refactored
    _test_detach_device_with_retry_second_detach_failure.

    [1] https://libvirt.org/git/?p=libvirt.git;a=commit;h=bb189c8e8c93f115c13fa3bfffdf64498f3f0ce1
    [2] https://libvirt.org/git/?p=libvirt.git;a=commit;h=126db34a81bc9f9f9710408f88cceaa1e34bbbd7
    [3] https://libvirt.org/git/?p=libvirt.git;a=commit;h=2f54eab7c7c618811de23c60a51e910274cf30de

    Closes-Bug: #1887946
    Change-Id: I7eb86edc130d186a66c04b229d46347ec5c0b625
    (cherry picked from commit 902f09af251d2b2e56fb2f2900a3510baf38a508)
    (cherry picked from commit 93058ae1b8bc1b1728f08b9e606b68318751fc3b)
    (cherry picked from commit 863d6ef7601302901fa3368ea8457b3564eeb501)

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix merged to nova (stable/rocky)

Reviewed: https://review.opendev.org/742417
Committed: https://git.openstack.org/cgit/openstack/nova/commit/?id=74b053f47a659a0250d051020d6c8b4e3c256e7d
Submitter: Zuul
Branch: stable/rocky

commit 74b053f47a659a0250d051020d6c8b4e3c256e7d
Author: Lee Yarwood <email address hidden>
Date: Fri Jul 17 00:45:10 2020 +0100

    libvirt: Handle VIR_ERR_DEVICE_MISSING when detaching devices

    Introduced in libvirt v4.1.0 [1] this error code replaces the previously
    raised VIR_ERR_INVALID_ARG, VIR_ERR_OPERATION_FAILED and
    VIR_ERR_INTERNAL_ERROR codes [2][3].

    VIR_ERR_OPERATION_FAILED was introduced and tested as an
    active/live/hot unplug config device detach error code in
    I131aaf28d2f5d5d964d4045e3d7d62207079cfb0.

    VIR_ERR_INTERNAL_ERROR was introduced and tested as an
    active/live/hot unplug config device detach error code in
    I3055cd7641de92ab188de73733ca9288a9ca730a.

    VIR_ERR_INVALID_ARG was introduced and tested as an
    inactive/persistent/cold unplug config device detach error code in
    I09230fc47b0950aa5a3db839a070613c9c817576.

    This change introduces support for the new VIR_ERR_DEVICE_MISSING error
    code while also retaining coverage for these codes until
    MIN_LIBVIRT_VERSION is bumped past v4.1.0.

    The majority of this change is test code motion with the existing tests
    being modified to run against either the active or inactive versions of
    the above error codes for the time being.

    test_detach_device_with_retry_operation_internal and
    test_detach_device_with_retry_invalid_argument_no_live have been removed
    as they duplicate the logic within the now refactored
    _test_detach_device_with_retry_second_detach_failure.

    [1] https://libvirt.org/git/?p=libvirt.git;a=commit;h=bb189c8e8c93f115c13fa3bfffdf64498f3f0ce1
    [2] https://libvirt.org/git/?p=libvirt.git;a=commit;h=126db34a81bc9f9f9710408f88cceaa1e34bbbd7
    [3] https://libvirt.org/git/?p=libvirt.git;a=commit;h=2f54eab7c7c618811de23c60a51e910274cf30de

    Closes-Bug: #1887946
    Change-Id: I7eb86edc130d186a66c04b229d46347ec5c0b625
    (cherry picked from commit 902f09af251d2b2e56fb2f2900a3510baf38a508)
    (cherry picked from commit 93058ae1b8bc1b1728f08b9e606b68318751fc3b)
    (cherry picked from commit 863d6ef7601302901fa3368ea8457b3564eeb501)
    (cherry picked from commit 76428c1a6a7796391957a3e83207f85cfe924505)

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix included in openstack/nova queens-eol

This issue was fixed in the openstack/nova queens-eol release.

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix included in openstack/nova rocky-eol

This issue was fixed in the openstack/nova rocky-eol release.
