Resize fails to create ephemeral drive if /dev/vdb used by volume

Bug #1823594 reported by Denis Lujanski
This bug affects 2 people
Affects: OpenStack Compute (nova)
Status: Confirmed
Importance: Medium
Assigned to: Nalini Varshney

Bug Description

Description:

When resizing an instance from a flavour with no ephemeral drive to a flavour with an ephemeral drive, the ephemeral drive is never created if /dev/vdb is occupied by a volume attachment at the time of the resize. No errors are produced during this process, and the instance will hard/soft reboot without triggering an error. If the volume is later detached from /dev/vdb, the instance will fail to reboot because it expects an ephemeral drive that was never created.

Steps to reproduce:

Launch an instance using a flavour with 0 ephemeral storage
Create a Cinder volume
Attach the Cinder volume to the instance as /dev/vdb (which happens by default)
Resize the instance to a flavour with > 0 ephemeral storage (see the CLI sketch below)
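
A rough CLI sketch of these steps follows; the flavour, image, network, and volume identifiers are placeholders for illustration (not taken from this report), and exact client syntax may vary by release.

# Launch an instance from a flavour that has 0 GB of ephemeral storage
nova boot --flavor m1.small --image cirros --nic net-id=<net-uuid> test-resize

# Create a Cinder volume and attach it; it is mapped to /dev/vdb by default
openstack volume create --size 1 test-vol
nova volume-attach test-resize <volume-id>

# Resize to a flavour that defines > 0 GB of ephemeral storage and,
# once the server reaches VERIFY_RESIZE, confirm the resize
nova resize test-resize m1.small.eph
nova resize-confirm test-resize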

Expected result:

Option A. The volume is detached, an ephemeral drive is created and mapped to /dev/vdb, then a volume is re-attached as /dev/vdc

Option B. An ephemeral drive is created and is mapped to /dev/vdc

Actual result:

An ephemeral drive is never created. It is not visible inside the instance or from the hypervisor. The volume remains attached and mapped to /dev/vdb.
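
One way to confirm the missing ephemeral disk from the hypervisor is to inspect the guest on the compute node; the libvirt domain name and instances path below are placeholders and assume the default local-storage layout.

# List the guest's block devices: only the root disk and the Cinder
# volume appear; no ephemeral disk is present
virsh domblklist instance-0000000a

# The instance directory contains no ephemeral backing file either
# (default instances_path shown; adjust to your deployment)
ls -l /var/lib/nova/instances/<instance-uuid>/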

Workaround:

Detach the volume from /dev/vdb
Hard reboot the instance; this fails with an error, but the error message reveals the name of the ephemeral drive the instance expects
Manually create the ephemeral drive in the backend
Hard reboot the instance again
Re-attach the volume as /dev/vdc (see the CLI sketch below)
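
A rough CLI sketch of the workaround, reusing the placeholder names from the reproduction sketch above; the backing-file path, name, format and size depend on the deployment's instances_path, images_type and the target flavour, so treat this as an illustration rather than exact commands.

# 1. Detach the volume from /dev/vdb
nova volume-detach test-resize <volume-id>

# 2. Hard reboot; this fails, but the error names the ephemeral disk
#    file the instance expects (typically disk.eph0 for local storage)
nova reboot --hard test-resize

# 3. On the compute node, create the ephemeral backing file by hand,
#    matching the name from the error and the size from the flavour
qemu-img create -f raw /var/lib/nova/instances/<instance-uuid>/disk.eph0 20G

# 4. Hard reboot again; the instance should now come up
nova reboot --hard test-resize

# 5. Re-attach the volume; it is now mapped to /dev/vdc
nova volume-attach test-resize <volume-id>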

Environment:

Nova packages:

nova-common=2:17.0.7.81.gfa00aa3+xenial-1
nova-compute=2:17.0.7.81.gfa00aa3+xenial-1
nova-compute-kvm=2:17.0.7.81.gfa00aa3+xenial-1
nova-compute-libvirt=2:17.0.7.81.gfa00aa3+xenial-1
python-nova=2:17.0.7.81.gfa00aa3+xenial-1

Hypervisor:

Libvirt-KVM

libvirt-bin=4.0.0-1ubuntu8.8~cloud0
libvirt-clients=4.0.0-1ubuntu8.8~cloud0
libvirt-daemon=4.0.0-1ubuntu8.8~cloud0
libvirt-daemon-system=4.0.0-1ubuntu8.8~cloud0
libvirt0:amd64=4.0.0-1ubuntu8.8~cloud0
python-libvirt=4.0.0-1ubuntu8.8~cloud0
nova-compute-libvirt=2:17.0.7.81.gfa00aa3+xenial-1
qemu-kvm=1:2.11+dfsg-1ubuntu7.10~cloud0
nova-compute-kvm=2:17.0.7.81.gfa00aa3+xenial-1

Storage type:

Local storage for root and ephemeral drives
Ceph storage for volumes

Ceph server version: 13.2.5 Mimic Stable
Ceph client version: 12.2.11 Luminous Stable

Tags: resize
Revision history for this message
Denis Lujanski (dlujanski) wrote :

Adding some more information in case it is helpful...

Revision history for this message
Balazs Gibizer (balazs-gibizer) wrote :

I can reproduce the problem on the current master in devstack.

Changed in nova:
status: New → Confirmed
importance: Undecided → Medium
Changed in nova:
assignee: nobody → Nalini Varshney (varshneyg)
Revision history for this message
Nalini Varshney (varshneyg) wrote :

I have tried to reproduce this issue, but at the time of resizing the instance status is still RESIZE and the task state is resize_prep, which means the resize has not completed successfully (a sketch of checking and confirming the resize follows the output).

Please find the output below:

[root@osc ~(keystone_admin)]# nova list
+--------------------------------------+-------------+--------+-------------+-------------+-------------------+
| ID                                   | Name        | Status | Task State  | Power State | Networks          |
+--------------------------------------+-------------+--------+-------------+-------------+-------------------+
| c33db3e7-1fb3-4efb-9c14-76a31f84c44c | test_resize | RESIZE | resize_prep | Running     | vlan1510=15.0.0.7 |
+--------------------------------------+-------------+--------+-------------+-------------+-------------------+
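
In case it is useful, a rough sketch of checking the resize progress and confirming it once the server reaches VERIFY_RESIZE; the server name is taken from the listing above, and the commands may vary by client version.

# Check the server status and task state
nova show test_resize | grep -iE 'status|task_state'

# Once the status is VERIFY_RESIZE, confirm the resize
nova resize-confirm test_resize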
