Resize fails to create ephemeral drive if /dev/vdb used by volume
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
OpenStack Compute (nova) | Confirmed | Medium | Nalini Varshney |
Bug Description
Description:
When resizing an instance from a flavour with no ephemeral drive to a flavour with an ephemeral drive, the ephemeral drive is never created if /dev/vdb is occupied by a volume attachment at the time of the resize. No errors are produced during this process, and the instance will hard/soft reboot without triggering an error. However, if the volume is later detached from /dev/vdb, the instance will fail to reboot because it expects an ephemeral drive to be present.
Steps to reproduce:
1. Launch an instance using a flavour with 0 ephemeral storage
2. Create a Cinder volume
3. Attach the Cinder volume to the instance as /dev/vdb (which happens by default)
4. Resize the instance to a flavour with > 0 ephemeral storage
Expected result:
Option A: The volume is detached, an ephemeral drive is created and mapped to /dev/vdb, then the volume is re-attached as /dev/vdc
Option B: An ephemeral drive is created and mapped to /dev/vdc
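For illustration, Option B corresponds to the naming behaviour one would expect: pick the lowest free /dev/vdX slot instead of silently skipping ephemeral-drive creation. A minimal sketch of that selection logic (a hypothetical helper for clarity, not Nova's actual block-device-mapping code):

```python
import string

def next_free_device(attached):
    """Return the lowest /dev/vdX name not already in use.

    `attached` is an iterable of device paths currently attached,
    e.g. {"/dev/vda", "/dev/vdb"}. Hypothetical helper for
    illustration only; Nova's real device-naming code differs.
    """
    used = set(attached)
    for letter in string.ascii_lowercase:
        candidate = f"/dev/vd{letter}"
        if candidate not in used:
            return candidate
    raise RuntimeError("no free virtio device names")
```

With the root disk on /dev/vda and the volume on /dev/vdb, `next_free_device({"/dev/vda", "/dev/vdb"})` returns "/dev/vdc", which is where the new ephemeral drive would be expected to land.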
Actual result:
An ephemeral drive is never created. It is not visible inside the instance or from the hypervisor. The volume remains attached and mapped to /dev/vdb.
Workaround:
1. Detach the volume from /dev/vdb
2. Hard reboot the instance; this fails with an error, but the error reveals the name of the ephemeral drive the instance expects
3. Manually create the ephemeral drive in the backend
4. Hard reboot the instance
5. Re-attach the volume as /dev/vdc
Environment:
Nova packages:
nova-common=
nova-compute=
nova-compute-
python-
Hypervisor:
Libvirt-KVM
libvirt-
libvirt0:
qemu-kvm=
Storage type:
Local storage for root/ephemeral drives
Ceph storage for volumes
Ceph server version: 13.2.5 Mimic Stable
Ceph client version: 12.2.11 Luminous Stable
Changed in nova:
assignee: nobody → Nalini Varshney (varshneyg)
Adding some more information in case it is helpful...