Unable to remove snapshots after an instance is unshelved when using the rbd imagebackend

Bug #1653953 reported by Lee Yarwood on 2017-01-04
This bug affects 6 people
Affects                   Importance  Assigned to
OpenStack Compute (nova)  Medium      Lee Yarwood
Queens                    Medium      Lee Yarwood
Rocky                     Medium      Lee Yarwood
Stein                     Medium      Lee Yarwood

Bug Description

Description
===========

I'm not entirely convinced that this is a bug but wanted to document and discuss this upstream.

When using the rbd imagebackend, the snapshot used to shelve an instance cannot be removed after unshelving: the recreated instance disk is cloned from that snapshot and therefore keeps it as a parent.

This is in line with the behaviour of the imagebackend when initially spawning an instance from an image, but it has caused confusion for operators downstream who assume that the snapshot can be removed once the instance has been unshelved.

We could flatten the instance disk when spawning during an unshelve, but doing so would mean extending the imagebackend to handle yet another corner case for rbd.
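As background, rbd clones are copy-on-write children of protected snapshots, and a snapshot cannot be deleted while clones still reference it. A toy Python model of that constraint (purely illustrative, not the rbd API):

```python
class Snapshot:
    """Toy model of a protected rbd snapshot (illustrative only)."""
    def __init__(self, name):
        self.name = name
        self.children = []

    def delete(self):
        # rbd refuses to remove a snapshot that still has clones.
        if self.children:
            raise RuntimeError("snapshot %s is in use by %d clone(s)"
                               % (self.name, len(self.children)))


class Clone:
    """Toy model of a copy-on-write clone (e.g. an instance disk)."""
    def __init__(self, parent):
        self.parent = parent
        parent.children.append(self)

    def flatten(self):
        # Flattening copies all parent data into the clone and drops
        # the parent reference, so the snapshot becomes deletable.
        self.parent.children.remove(self)
        self.parent = None


snap = Snapshot("images/<glance-image-id>@snap")
disk = Clone(snap)   # what spawn() during unshelve effectively does
try:
    snap.delete()    # fails while the clone still references it
except RuntimeError as exc:
    print(exc)
disk.flatten()
snap.delete()        # now succeeds
```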

Steps to reproduce
==================

$ nova boot --image cirros-raw --flavor 1 test-shelve
[..]
$ nova shelve test-shelve
[..]
$ nova unshelve test-shelve
[..]
$ sudo rbd -p vms ls -l
NAME                                       SIZE   PARENT                                            FMT PROT LOCK
4c843671-879d-4ba6-b4e8-8eefdced5393_disk  1024M  images/df96af36-5a97-4f47-a79f-f3f3c85a21d9@snap  2
$ glance image-delete df96af36-5a97-4f47-a79f-f3f3c85a21d9
Unable to delete image 'df96af36-5a97-4f47-a79f-f3f3c85a21d9' because it is in use.

We can easily work around this by manually flattening the instance disk:

$ nova stop test-shelve
$ sudo rbd -p vms flatten 4c843671-879d-4ba6-b4e8-8eefdced5393_disk
Image flatten: 100% complete...done.
$ nova start test-shelve
$ glance image-delete df96af36-5a97-4f47-a79f-f3f3c85a21d9

Expected result
===============
Able to remove the shelved snapshot from Glance after unshelve.

Actual result
=============
Unable to remove the shelved snapshot from Glance after unshelve.

Environment
===========
1. Exact version of OpenStack you are running. See the following
   list for all releases: http://docs.openstack.org/releases/

   $ pwd
   /opt/stack/nova
   $ git rev-parse HEAD
   d768bfa2c2fb774154a5268f58b28537f7b39f69

2. Which hypervisor did you use?
   (For example: Libvirt + KVM, Libvirt + XEN, Hyper-V, PowerKVM, ...)
   What's the version of that?

   libvirt + kvm

3. Which storage type did you use?
   (For example: Ceph, LVM, GPFS, ...)
   What's the version of that?

   ceph

4. Which networking type did you use?
   (For example: nova-network, Neutron with OpenVSwitch, ...)

   N/A

Logs & Configs
==============

melanie witt (melwitt) wrote :

My first thought on this is that the cloned image is flattened as part of direct_snapshot, so I'm interested in finding out why that's not sufficient for the image-delete to work.

To investigate, I tried to reproduce this myself in devstack and am hitting multiple other bugs. I can't get the snapshot to complete successfully, and the instance does not get shelved.

The first trace I get is:

2017-01-05 01:14:50.499 ERROR nova.virt.libvirt.driver [req-a00f11f9-344f-40b7-b2ef-27b46825d9d6 admin admin] Failed to snapshot image
2017-01-05 01:14:50.499 TRACE nova.virt.libvirt.driver Traceback (most recent call last):
2017-01-05 01:14:50.499 TRACE nova.virt.libvirt.driver File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 1559, in snapshot
2017-01-05 01:14:50.499 TRACE nova.virt.libvirt.driver instance.image_ref)
2017-01-05 01:14:50.499 TRACE nova.virt.libvirt.driver File "/opt/stack/nova/nova/virt/libvirt/imagebackend.py", line 990, in direct_snapshot
2017-01-05 01:14:50.499 TRACE nova.virt.libvirt.driver self.driver.clone(location, image_id, dest_pool=parent_pool)
2017-01-05 01:14:50.499 TRACE nova.virt.libvirt.driver File "/opt/stack/nova/nova/virt/libvirt/storage/rbd_utils.py", line 234, in clone
2017-01-05 01:14:50.499 TRACE nova.virt.libvirt.driver with RADOSClient(self, dest_pool) as dest_client:
2017-01-05 01:14:50.499 TRACE nova.virt.libvirt.driver File "/opt/stack/nova/nova/virt/libvirt/storage/rbd_utils.py", line 105, in __init__
2017-01-05 01:14:50.499 TRACE nova.virt.libvirt.driver self.cluster, self.ioctx = driver._connect_to_rados(pool)
2017-01-05 01:14:50.499 TRACE nova.virt.libvirt.driver File "/opt/stack/nova/nova/virt/libvirt/storage/rbd_utils.py", line 138, in _connect_to_rados
2017-01-05 01:14:50.499 TRACE nova.virt.libvirt.driver ioctx = client.open_ioctx(pool_to_open)
2017-01-05 01:14:50.499 TRACE nova.virt.libvirt.driver File "/usr/lib/python2.7/dist-packages/rados.py", line 662, in open_ioctx
2017-01-05 01:14:50.499 TRACE nova.virt.libvirt.driver raise TypeError('the name of the pool must be a string')
2017-01-05 01:14:50.499 TRACE nova.virt.libvirt.driver TypeError: the name of the pool must be a string

which I remedied locally by s/dest_pool/str(dest_pool)/ in the clone function in rbd_utils.py. In my environment, dest_pool is of type unicode.

After that, I hit the next trace:

2017-01-05 01:20:31.675 ERROR nova.virt.libvirt.driver [req-152c9895-39cc-464a-ae38-0e6f613278b5 admin admin] Failed to snapshot image
2017-01-05 01:20:31.675 TRACE nova.virt.libvirt.driver Traceback (most recent call last):
2017-01-05 01:20:31.675 TRACE nova.virt.libvirt.driver File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 1559, in snapshot
2017-01-05 01:20:31.675 TRACE nova.virt.libvirt.driver instance.image_ref)
2017-01-05 01:20:31.675 TRACE nova.virt.libvirt.driver File "/opt/stack/nova/nova/virt/libvirt/imagebackend.py", line 995, in direct_snapshot
2017-01-05 01:20:31.675 TRACE nova.virt.libvirt.driver self.cleanup_direct_snapshot(location)
2017-01-05 01:20:31.675 TRACE nova.virt.libvirt.driver File "/opt/stack/nova/nova/virt/libvirt/imagebackend.py", line 1017, in cleanup_d...


Lee Yarwood (lyarwood) on 2017-01-05
tags: added: ceph
Lee Yarwood (lyarwood) wrote :

> My first thought on this is that the cloned image is flattened as part
> of direct_snapshot, so I'm interested in finding out why that's not
> sufficient for the image-delete to work.

That's the case when we shelve the instance, but during unshelve we just call spawn, which in turn asks the imagebackend for a clone, exactly as when we initially create an instance from an image.

> To investigate, I tried to repro this myself in devstack and am hitting
> multiple other bugs. I can't get it to complete the snapshot successfully
> and the instance does not get shelved.

Odd, I didn't see either of these in my F24-based env; I'll retry again today with the ref you used.

Lee Yarwood (lyarwood) wrote :

Just to be complete, this still reproduces for me without any issues on F24 and 1bcf3b55:

$ cat /etc/redhat-release
Fedora release 24 (Twenty Four)
$ cat local.conf
[[local|localrc]]
LOGDAYS=1
LOGFILE=$DEST/logs/stack.sh.log
SCREEN_LOGDIR=$DEST/logs/screen
ADMIN_PASSWORD=redhat
DATABASE_PASSWORD=$ADMIN_PASSWORD
RABBIT_PASSWORD=$ADMIN_PASSWORD
SERVICE_PASSWORD=$ADMIN_PASSWORD
SERVICE_TOKEN=6192fc3b-8156-4871-8979-8a8482d6c7d1

disable_service horizon
enable_plugin devstack-plugin-ceph git://git.openstack.org/openstack/devstack-plugin-ceph

$ cd /opt/stack/nova
$ git rev-parse HEAD
7463e1eec8f1d4b486703fbfd8d1eb755e0c5b0c
$ git branch --contains 1bcf3b553ae8665dc6308d1bd11898efb75d3a41
* master

$ wget http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img
$ qemu-img convert -f qcow2 -O raw cirros-0.3.4-x86_64-disk.img cirros-0.3.4-x86_64-disk.img.raw
$ glance image-create --name cirros-raw --disk-format raw --container-format bare --file cirros-0.3.4-x86_64-disk.img.raw
$ nova boot --image cirros-raw --flavor 1 test-shelve
$ nova shelve test-shelve
$ nova unshelve test-shelve
$ glance image-delete 6bd46bd6-d85d-4f9d-b9b9-7a7a3ae7ed41
Unable to delete image '6bd46bd6-d85d-4f9d-b9b9-7a7a3ae7ed41' because it is in use.
$ sudo rbd -p vms ls -l
NAME                                       SIZE   PARENT                                            FMT PROT LOCK
0cab4b03-0d53-404c-b097-892e0d35bfc1_disk  1024M  images/6bd46bd6-d85d-4f9d-b9b9-7a7a3ae7ed41@snap  2

melanie witt (melwitt) wrote :

Thanks for all that info. I'm using ubuntu trusty, so maybe there are some issues there. Will try again with fedora.

melanie witt (melwitt) wrote :

I first tried with CentOS 7, but stack.sh failed during the install of the ceph-* packages. I then tried a third time with Fedora 24 and was able to get shelve/unshelve to work. However, I wasn't able to reproduce the bug: the shelved image gets removed from glance and the backend when I unshelve.

$ git rev-parse HEAD
966446553bc8aea79692d0bc0bacae81d6a201df

After shelving, I can see the shelved image in glance:

$ glance image-list
+--------------------------------------+---------------------------------+
| ID                                   | Name                            |
+--------------------------------------+---------------------------------+
| 806c183b-80ce-4792-b19f-32e97b8b180b | cirros-0.3.4-x86_64-uec         |
| 9d594345-c0a5-450a-9cc2-2c05a2950fcf | cirros-0.3.4-x86_64-uec-kernel  |
| 07d825e6-b022-4eb2-8c56-73d75427541c | cirros-0.3.4-x86_64-uec-ramdisk |
| 9b23e7b7-3262-4f4b-9da0-3185e4d85d44 | hi-shelved                      |
+--------------------------------------+---------------------------------+

$ sudo rbd -p images ls -l
NAME                                       SIZE    PARENT  FMT  PROT  LOCK
07d825e6-b022-4eb2-8c56-73d75427541c       3652k           2
07d825e6-b022-4eb2-8c56-73d75427541c@snap  3652k           2    yes
806c183b-80ce-4792-b19f-32e97b8b180b       24576k          2
806c183b-80ce-4792-b19f-32e97b8b180b@snap  24576k          2    yes
8fd1626b-562b-43cc-a5b3-53a3a2cfa3a9       24576k          2
8fd1626b-562b-43cc-a5b3-53a3a2cfa3a9@snap  24576k          2    yes
9d594345-c0a5-450a-9cc2-2c05a2950fcf       4862k           2
9d594345-c0a5-450a-9cc2-2c05a2950fcf@snap  4862k           2    yes

But after unshelving, the shelved image is gone from glance:

$ glance image-list
+--------------------------------------+---------------------------------+
| ID                                   | Name                            |
+--------------------------------------+---------------------------------+
| 806c183b-80ce-4792-b19f-32e97b8b180b | cirros-0.3.4-x86_64-uec         |
| 9d594345-c0a5-450a-9cc2-2c05a2950fcf | cirros-0.3.4-x86_64-uec-kernel  |
| 07d825e6-b022-4eb2-8c56-73d75427541c | cirros-0.3.4-x86_64-uec-ramdisk |
+--------------------------------------+---------------------------------+

$ sudo rbd -p images ls -l
NAME                                       SIZE    PARENT  FMT  PROT  LOCK
07d825e6-b022-4eb2-8c56-73d75427541c       3652k           2
07d825e6-b022-4eb2-8c56-73d75427541c@snap  3652k           2    yes
806c183b-80ce-4792-b19f-32e97b8b180b       24576k          2
806c183b-80ce-4792-b19f-32e97b8b180b@snap  24576k          2    yes
9d594345-c0a5-450a-9cc2-2c05a2950fcf       4862k           2
9d594345-c0a5-450a-9cc2-2c05a2950fcf@snap  4862k           2    yes

Lee Yarwood (lyarwood) wrote :

Yeah, apologies, I didn't explicitly call it out in the description, but this requires RAW images to be used; otherwise we fall back to libvirt_utils.fetch_image.

https://github.com/openstack/nova/blob/77b9e288a1dcd6b967b53aab818fcbac7d9105f6/nova/virt/libvirt/driver.py#L3153-L3159
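The branch being referenced amounts to a format check; a minimal illustrative sketch of the decision (hypothetical function name, not the actual driver.py code):

```python
def pick_snapshot_path(disk_format):
    """Hypothetical sketch of nova's choice between an rbd direct
    snapshot and the generic fetch/upload path."""
    if disk_format == 'raw':
        # rbd can snapshot and clone the backing image in place.
        return 'direct_snapshot'
    # Anything else is handled via libvirt_utils.fetch_image and a
    # conventional upload to glance, which is why the qcow2-based
    # reproduction attempts above did not hit the bug.
    return 'fetch_image'


print(pick_snapshot_path('raw'))    # -> direct_snapshot
print(pick_snapshot_path('qcow2'))  # -> fetch_image
```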

melanie witt (melwitt) wrote :

Argh, sorry, that was my bad. Tried again with the RAW image and I see the same problem now, with this trace in n-cpu.log:

Something wrong happened when trying to delete snapshot
from shelved instance.
Traceback (most recent call last):
  File "/opt/stack/nova/nova/compute/manager.py", line 2509, in _delete_snapshot_of_shelved_instance
    self.image_api.delete(context, snapshot_id)
  File "/opt/stack/nova/nova/image/api.py", line 141, in delete
    return session.delete(context, image_id)
  File "/opt/stack/nova/nova/image/glance.py", line 763, in delete
    self._client.call(context, 2, 'delete', image_id)
  File "/opt/stack/nova/nova/image/glance.py", line 168, in call
    result = getattr(controller, method)(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/glanceclient/v2/images.py", line 222, in delete
    self.http_client.delete(url)
  File "/usr/lib/python2.7/site-packages/glanceclient/common/http.py", line 299, in delete
    return self._request('DELETE', url, **kwargs)
  File "/usr/lib/python2.7/site-packages/glanceclient/common/http.py", line 279, in _request
    resp, body_iter = self._handle_response(resp)
  File "/usr/lib/python2.7/site-packages/glanceclient/common/http.py", line 107, in _handle_response
    raise exc.from_response(resp, resp.content)
HTTPConflict: 409 Conflict
Image 3c8ce6fe-269f-4e95-bf09-1f6173aaaf9a could not be deleted because it is in use: The image cannot be deleted because it is in use through the backend store outside of Glance.
    (HTTP 409)

I think this is a bug: nova is supposed to delete the snapshot during unshelve, fails when it tries, and thus leaks images.

Lee Yarwood (lyarwood) wrote :

I wouldn't say that we leak images here; we still have references to the snapshot in Glance and can remove it later once the instance disk is flattened. We also end up in this situation when initially booting an instance from an rbd image, hence my reluctance to call this a genuine bug:

$ nova boot --image cirros-raw --flavor 1 test
[..]
$ sudo rbd -p vms ls -l
NAME                                       SIZE   PARENT                                            FMT PROT LOCK
1987d04b-10ce-4724-80bc-3ba699a45ded_disk  1024M  images/a906bdce-47e2-4eb1-abeb-5f390fc8493d@snap  2
$ glance image-delete a906bdce-47e2-4eb1-abeb-5f390fc8493d
Unable to delete image 'a906bdce-47e2-4eb1-abeb-5f390fc8493d' because it is in use.

Ritesh Paiboina (rsritesh) wrote :

I tried to reproduce this issue today but was not able to.

Is this issue dependent on Ceph RBD? The following is my system configuration; can anyone guide me on how to reproduce this issue?

Devstack based setup

$ cat /etc/lsb-release
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=14.04
DISTRIB_CODENAME=trusty
DISTRIB_DESCRIPTION="Ubuntu 14.04.5 LTS"

$ cat local.conf
[[local|localrc]]
ADMIN_PASSWORD=secret
DATABASE_PASSWORD=openstack
RABBIT_PASSWORD=openstack
SERVICE_PASSWORD=openstack

enable_plugin designate https://git.openstack.org/openstack/designate

enable_service designate,designate-central,designate-api,designate-pool-manager,designate-zone-manager,designate-mdns

enable_service tempest

$ git rev-parse HEAD
ab44e75fb387847d3c679979007e7c6202dd8afd

$ nova boot --image cirros-0.3.5-x86_64-disk --flavor 1 test-shelve
+--------------------------------------+-------------+--------+------------+-------------+---------------------------------+
| ID                                   | Name        | Status | Task State | Power State | Networks                        |
+--------------------------------------+-------------+--------+------------+-------------+---------------------------------+
| 1435f84a-bb87-47c1-ae94-1aa9864d4460 | test-shelve | BUILD  | spawning   | NOSTATE     | public=2001:db8::3, 172.24.4.11 |
+--------------------------------------+-------------+--------+------------+-------------+---------------------------------+

$ nova shelve test-shelve

$ glance image-list
+--------------------------------------+--------------------------+
| ID                                   | Name                     |
+--------------------------------------+--------------------------+
| 241f16d5-42fb-4c75-8537-82a5fd5a0d83 | cirros-0.3.5-x86_64-disk |
| adf2c16c-fb4e-4648-8959-a771bb179e6b | test-shelve-shelved      |
+--------------------------------------+--------------------------+

$ nova unshelve test-shelve

$ glance image-list
+--------------------------------------+--------------------------+
| ID                                   | Name                     |
+--------------------------------------+--------------------------+
| 241f16d5-42fb-4c75-8537-82a5fd5a0d83 | cirros-0.3.5-x86_64-disk |
+--------------------------------------+--------------------------+

Changed in nova:
assignee: nobody → Ritesh (rsritesh)
Ritesh Paiboina (rsritesh) wrote :

Yes, it is reproducible with Ceph RBD only.

Changed in nova:
importance: Undecided → Medium
Changed in nova:
status: New → In Progress
Changed in nova:
status: In Progress → Fix Committed
Lee Yarwood (lyarwood) wrote :

Changed in nova:
status: In Progress → Fix Committed

I think you're confusing the nova bugfix with the tempest workaround here.

Changed in nova:
status: Fix Committed → In Progress
Ritesh Paiboina (rsritesh) wrote :

There was no tempest workaround here.
The nova fix is this: https://review.openstack.org/#/c/457886/

Changed in nova:
assignee: Ritesh (rsritesh) → Vladyslav Drok (vdrok)
Changed in nova:
assignee: Vladyslav Drok (vdrok) → melanie witt (melwitt)
Changed in nova:
assignee: melanie witt (melwitt) → Lee Yarwood (lyarwood)
Changed in nova:
assignee: Lee Yarwood (lyarwood) → Vladyslav Drok (vdrok)
Changed in nova:
assignee: Vladyslav Drok (vdrok) → Lee Yarwood (lyarwood)
Matt Riedemann (mriedem) on 2019-06-27
no longer affects: nova/pike

Reviewed: https://review.opendev.org/457886
Committed: https://git.openstack.org/cgit/openstack/nova/commit/?id=d89e7d7857e0ab56c3b088338272c24d0618c07f
Submitter: Zuul
Branch: master

commit d89e7d7857e0ab56c3b088338272c24d0618c07f
Author: rsritesh <email address hidden>
Date: Wed Apr 19 12:02:30 2017 +0530

    libvirt: flatten rbd images when unshelving an instance

    Previously attempts to remove the shelved snapshot of an unshelved
    instance when using the rbd backends for both Nova and Glance would
    fail. This was due to the instance disk being cloned from and still
    referencing the shelved snapshot image in Glance, blocking any attempt
    to remove this image later in the unshelve process.

    After much debate this change attempts to fix this issue by flattening
    the instance disk while the instance is being spawned as part of an
    unshelve. For the rbd imagebackend this removes any reference to the
    shelved snapshot in Glance allowing this image to be removed. For all
    other imagebackends the call to flatten the image is currently a no-op.

    Co-Authored-By: Lee Yarwood <email address hidden>
    Co-Authored-By: Vladyslav Drok <email address hidden>

    Closes-Bug: #1653953
    Change-Id: If3c9d1de3ce0fe394405bd1e1f0fa08ce2baeda8
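The shape of the fix described in the commit message can be sketched as follows (illustrative class and attribute names only; the real code lives in nova/virt/libvirt/imagebackend.py and rbd_utils.py):

```python
class Image(object):
    """Illustrative base imagebackend: flatten() is a no-op by default,
    since most backends have no parent/child relationship to break."""
    def flatten(self):
        pass


class Rbd(Image):
    """Illustrative rbd backend: flattening copies the parent data into
    the clone and drops the reference to the glance snapshot."""
    def __init__(self, name):
        self.name = name
        # The instance disk starts life as a clone of the glance image.
        self.parent = 'images/<glance-image-id>@snap'

    def flatten(self):
        # In nova this calls into rbd_utils, the equivalent of running
        # `rbd flatten <pool>/<image>` against the instance disk.
        self.parent = None


disk = Rbd('instance-uuid_disk')
disk.flatten()                 # called during spawn() on unshelve
assert disk.parent is None     # the glance snapshot is now deletable
```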

Changed in nova:
status: In Progress → Fix Released

Reviewed: https://review.opendev.org/668118
Committed: https://git.openstack.org/cgit/openstack/nova/commit/?id=e802ede4b30b21c7590620abc142300a57bcf349
Submitter: Zuul
Branch: stable/stein

commit e802ede4b30b21c7590620abc142300a57bcf349
Author: rsritesh <email address hidden>
Date: Wed Apr 19 12:02:30 2017 +0530

    libvirt: flatten rbd images when unshelving an instance

    Previously attempts to remove the shelved snapshot of an unshelved
    instance when using the rbd backends for both Nova and Glance would
    fail. This was due to the instance disk being cloned from and still
    referencing the shelved snapshot image in Glance, blocking any attempt
    to remove this image later in the unshelve process.

    After much debate this change attempts to fix this issue by flattening
    the instance disk while the instance is being spawned as part of an
    unshelve. For the rbd imagebackend this removes any reference to the
    shelved snapshot in Glance allowing this image to be removed. For all
    other imagebackends the call to flatten the image is currently a no-op.

    Co-Authored-By: Lee Yarwood <email address hidden>
    Co-Authored-By: Vladyslav Drok <email address hidden>

    Closes-Bug: #1653953
    Change-Id: If3c9d1de3ce0fe394405bd1e1f0fa08ce2baeda8
    (cherry picked from commit d89e7d7857e0ab56c3b088338272c24d0618c07f)

Reviewed: https://review.opendev.org/668119
Committed: https://git.openstack.org/cgit/openstack/nova/commit/?id=e93bc57a73d8642012f759a4ffbe5289112ba490
Submitter: Zuul
Branch: stable/rocky

commit e93bc57a73d8642012f759a4ffbe5289112ba490
Author: rsritesh <email address hidden>
Date: Wed Apr 19 12:02:30 2017 +0530

    libvirt: flatten rbd images when unshelving an instance

    Previously attempts to remove the shelved snapshot of an unshelved
    instance when using the rbd backends for both Nova and Glance would
    fail. This was due to the instance disk being cloned from and still
    referencing the shelved snapshot image in Glance, blocking any attempt
    to remove this image later in the unshelve process.

    After much debate this change attempts to fix this issue by flattening
    the instance disk while the instance is being spawned as part of an
    unshelve. For the rbd imagebackend this removes any reference to the
    shelved snapshot in Glance allowing this image to be removed. For all
    other imagebackends the call to flatten the image is currently a no-op.

    Co-Authored-By: Lee Yarwood <email address hidden>
    Co-Authored-By: Vladyslav Drok <email address hidden>

    Closes-Bug: #1653953
    Change-Id: If3c9d1de3ce0fe394405bd1e1f0fa08ce2baeda8
    (cherry picked from commit d89e7d7857e0ab56c3b088338272c24d0618c07f)
    (cherry picked from commit e802ede4b30b21c7590620abc142300a57bcf349)

Reviewed: https://review.opendev.org/668123
Committed: https://git.openstack.org/cgit/openstack/nova/commit/?id=d2f91755ab74372a966b0ebae710beccabef0821
Submitter: Zuul
Branch: stable/queens

commit d2f91755ab74372a966b0ebae710beccabef0821
Author: rsritesh <email address hidden>
Date: Wed Apr 19 12:02:30 2017 +0530

    libvirt: flatten rbd images when unshelving an instance

    Previously attempts to remove the shelved snapshot of an unshelved
    instance when using the rbd backends for both Nova and Glance would
    fail. This was due to the instance disk being cloned from and still
    referencing the shelved snapshot image in Glance, blocking any attempt
    to remove this image later in the unshelve process.

    After much debate this change attempts to fix this issue by flattening
    the instance disk while the instance is being spawned as part of an
    unshelve. For the rbd imagebackend this removes any reference to the
    shelved snapshot in Glance allowing this image to be removed. For all
    other imagebackends the call to flatten the image is currently a no-op.

    Co-Authored-By: Lee Yarwood <email address hidden>
    Co-Authored-By: Vladyslav Drok <email address hidden>

    NOTE(lyarwood): Test conflicts due to Ie3130e104d7ca80289f1bd9f0fee9a7a198c263c
    and I407034374fe17c4795762aa32575ba72d3a46fe8 not being present in
    stable/queens. Note that the latter was backported but then reverted via
    Ibf2b5eeafd962e93ae4ab6290015d58c33024132 resulting in this conflict.

    Conflicts:
        nova/tests/unit/virt/libvirt/test_driver.py

    Closes-Bug: #1653953
    Change-Id: If3c9d1de3ce0fe394405bd1e1f0fa08ce2baeda8
    (cherry picked from commit d89e7d7857e0ab56c3b088338272c24d0618c07f)
    (cherry picked from commit e802ede4b30b21c7590620abc142300a57bcf349)
    (cherry picked from commit e93bc57a73d8642012f759a4ffbe5289112ba490)

This issue was fixed in the openstack/nova 19.0.2 release.

This issue was fixed in the openstack/nova 18.2.2 release.

This issue was fixed in the openstack/nova 17.0.12 release.
