boot from wrong volume
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
OpenStack Compute (nova) | Invalid | Undecided | Unassigned |
Bug Description
In Kilo (and possibly other releases), there are times when an instance with an attached volume is rebooted (or stopped and started) and it boots from vdb (the attached volume) instead of vda (the ephemeral boot volume).
It doesn't appear to matter whether Cinder has the attached volume marked bootable or not. Moreover, it doesn't matter whether the partition itself is marked bootable.
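The bootable flag is a Cinder-side attribute; which disk the guest BIOS actually tries first is driven by the libvirt domain XML that nova generates. One way to sanity-check an affected instance is to look at the disk definitions in the output of `virsh dumpxml <instance>`. The snippet below is a minimal sketch: the `DOMAIN_XML` sample is hypothetical (a real dump for a Ceph-backed instance will contain more detail), but the parsing logic applies to any domain XML.

```python
import xml.etree.ElementTree as ET

# Hypothetical, trimmed-down domain XML as `virsh dumpxml <instance>`
# might print it for an instance with a Ceph ephemeral disk (vda) and
# an attached Cinder volume (vdb). A real dump will differ in detail.
DOMAIN_XML = """
<domain type='kvm'>
  <os>
    <type>hvm</type>
    <boot dev='hd'/>
  </os>
  <devices>
    <disk type='network' device='disk'>
      <source protocol='rbd' name='vms/instance_disk'/>
      <target dev='vda' bus='virtio'/>
    </disk>
    <disk type='network' device='disk'>
      <source protocol='rbd' name='volumes/volume-1234'/>
      <target dev='vdb' bus='virtio'/>
    </disk>
  </devices>
</domain>
"""

def disk_order(xml_text):
    """Return the disk target names in the order they are defined.

    With a domain-level <boot dev='hd'/> and no per-disk
    <boot order='N'/> elements, the BIOS picks among the disks itself,
    so the definition order shown here is only a hint, not a guarantee,
    of which disk the guest will boot from.
    """
    root = ET.fromstring(xml_text)
    return [d.find('target').get('dev')
            for d in root.findall('./devices/disk')]

print(disk_order(DOMAIN_XML))
```

If the dump shows explicit per-disk `<boot order='N'/>` elements, those take precedence and would pin the boot disk deterministically; their absence leaves the choice to the BIOS, which is consistent with the intermittent behaviour described above.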
Details:
Kilo Nova, Kilo Cinder, Ceph-based ephemeral storage, Ceph-based Cinder.
Repro:
1. Create an instance.
2. Create a volume from an existing instance (snapshot and then volume create).
3. Attach that volume. It will attach as vdb by default (and that remains consistent; the volume order doesn't change).
4. Reboot the instance.
SOMETIMES, or more accurately SOME INSTANCES, now boot from vdb1 as /. Once it happens, it seems consistent. The only way I've been able to force booting back to vda is to detach the volume (which I can safely re-attach once the instance has booted).
I've found no other workaround.
Real details:
nova-compute 1:2015.
cinder-common 1:2015.
python-cinder 1:2015.
python-cinderclient 1:1.1.1-
qemu 1:2.2+dfsg-
qemu-system 1:2.2+dfsg-
libvirt-bin 1.2.12-
libvirt-dev 1.2.12-
libvirt0 1.2.12-
I have not tried to repro in DevStack or with Kilo.2.
There could be a relationship to bug 1440762, which fixed a bug in Kilo (2015.1.2).