Active VMs with No Bootable Device error

Bug #1491952 reported by Timur Nurlygayanov
Affects: Mirantis OpenStack
Status: Invalid
Importance: Undecided
Assigned to: MOS Nova

Bug Description

Note: Reproduced on MOS 7.0 ISO #265.

Steps To Reproduce:
1. Deploy a cluster with 5 controller nodes, 3 separate Ceph nodes, and 13 compute nodes (without Ceph storage on them).
2. Upload an Ubuntu 14.04 image into Glance.
3. Boot a VM from the Cirros image and check its VNC console.
4. Boot a VM from the Ubuntu image and check its VNC console.

Observed Result:
The VM is created successfully and its state becomes Active, but the VNC console of the VM shows the following error: No Bootable Device Found.

Ceph cluster is online:
root@node-3:~# ceph osd tree
# id weight type name up/down reweight
-1 2.73 root default
-2 0.91 host node-8
0 0.91 osd.0 up 1
-3 0.91 host node-6
1 0.91 osd.1 up 1
-4 0.91 host node-7
2 0.91 osd.2 up 1

We can't see any errors in the Nova logs on the controller and compute nodes, but on a compute node nova-all.log contains:
<183>Sep 3 16:53:55 node-16 nova-compute 2015-09-03 16:53:55.706 8974 DEBUG nova.virt.libvirt.driver [req-e777b263-0964-45a3-af8d-9bd62b778185 - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info /usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py:6101
<183>Sep 3 16:53:55 node-16 nova-compute 2015-09-03 16:53:55.707 8974 DEBUG nova.virt.libvirt.driver [req-e777b263-0964-45a3-af8d-9bd62b778185 - - - - -] skipping disk for instance-00000014 as it does not have a path _get_instance_disk_info /usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py:6101
<183>Sep 3 16:53:55 node-16 nova-compute 2015-09-03 16:53:55.708 8974 DEBUG nova.virt.libvirt.driver [req-e777b263-0964-45a3-af8d-9bd62b778185 - - - - -] skipping disk for instance-0000000a as it does not have a path _get_instance_disk_info /usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py:6101

Changed in mos:
assignee: nobody → MOS Nova (mos-nova)
importance: Undecided → Critical
milestone: none → 7.0
status: New → Confirmed
Changed in mos:
status: Confirmed → New
importance: Critical → Undecided
description: updated
Revision history for this message
Roman Podoliaka (rpodolyaka) wrote :

Please provide information about the MOS ISO. We'll try to reproduce on the latest one in the meantime.

Changed in mos:
status: New → Incomplete
Revision history for this message
Timur Nurlygayanov (tnurlygayanov) wrote :

Hi Roman, the issue was reproduced on MOS 7.0 ISO #265.

Changed in mos:
status: Incomplete → Confirmed
description: updated
Revision history for this message
Timur Nurlygayanov (tnurlygayanov) wrote :

An environment with the issue was prepared for the MOS Nova team; Roman Podoliaka is looking into it right now.

Revision history for this message
Roman Podoliaka (rpodolyaka) wrote :

The Cirros image works like a charm; ubuntu14.04 does not. Maybe it's a problem with the image itself. I'm downloading it to my machine now to try it locally.

Changed in mos:
status: Confirmed → New
status: New → Incomplete
Revision history for this message
Timur Nurlygayanov (tnurlygayanov) wrote :

Roman, this is because Cirros doesn't use a disk: we can create VMs from Cirros with a flavor that has a 0 GB disk, and it will work.

You can upload any images to my cloud. I have tried two cloud Ubuntu 14.04 LTS images, and these images don't work.

Of course, we may also have some issues with the Glance storage, but it looks like we have some issues with the VM configuration in Nova too.

Revision history for this message
Roman Podoliaka (rpodolyaka) wrote :

Heh, it turned out the ubuntu14.04 image was uploaded with an incorrect disk format set (RAW instead of QCOW2), so qemu can't find a bootable device because it interprets the disk as a different format:

http://paste.openstack.org/show/444593/

After re-uploading with the correct disk format, everything works as expected:

http://paste.openstack.org/show/444595/
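This kind of raw-vs-qcow2 mixup can be caught before the image ever reaches Glance by peeking at the file header: QCOW2 images start with the 4-byte magic "QFI\xfb" plus a version number, while raw images have no header. A minimal, hypothetical sketch (not part of Nova or Glance; in practice `qemu-img info` gives the same answer):

```python
import struct

# QCOW images begin with the magic bytes "QFI\xfb" followed by a
# 4-byte big-endian version number (2 or 3 for qcow2).
QCOW_MAGIC = b"QFI\xfb"

def detect_disk_format(path):
    """Return 'qcow2' if the file carries a QCOW2 header, else 'raw'.

    Heuristic only: a file without the magic could be some other
    format entirely (vmdk, vdi, ...), but this is enough to catch
    the raw-vs-qcow2 mixup described in this bug.
    """
    with open(path, "rb") as f:
        header = f.read(8)
    if len(header) == 8 and header[:4] == QCOW_MAGIC:
        version = struct.unpack(">I", header[4:8])[0]
        if version in (2, 3):
            return "qcow2"
    return "raw"
```

Running such a check before `glance image-create` and passing its result as `--disk-format` would have avoided the "No Bootable Device" symptom here.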

Changed in mos:
status: Incomplete → Invalid