libvirt selects wrong root device name

Bug #1560965 reported by eblock@nde.ag on 2016-03-23
This bug affects 21 people
Affects:
- Cinder (Importance: Undecided, Assigned to: Unassigned)
- OpenStack Compute (nova) (Importance: Low, Assigned to: Unassigned)
- OpenStack Dashboard (Horizon) (Importance: Low, Assigned to: Eric Desrochers)

Bug Description

This refers to Liberty; Compute runs with the Xen hypervisor:

When trying to boot an instance from volume via Horizon, the VM fails to spawn because of an invalid block device mapping. I found a cause for that in a default initial "device_name=vda" in the file /srv/www/openstack-dashboard/openstack_dashboard/dashboards/project/instances/workflows/create_instance.py.
The log file nova-compute.log reports

"Ignoring supplied device name: /dev/vda"

but in the next step it uses it anyway and says

"Booting with blank volume at /dev/vda".

To test my assumption, I blanked the device_name and edited the array dev_mapping_2 to only append device_name if it's not empty. That works perfectly for booting from Horizon and could be one way to fix this.
But if you use the nova boot command, you can still provide (multiple) device names, for example if you launch an instance and attach a blank volume.

It seems that libvirt is indeed ignoring the supplied device name, but only if it's not the root device. If I understand correctly, a user-supplied device_name should also be nulled out for root_device_name and picked by libvirt if it's not valid. The default value for device_name in the Horizon dashboard should also be None; if one is supplied, it should be processed and probably validated by libvirt.
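For illustration, the suggested behavior (omit device_name and let nova/libvirt pick) corresponds to a block_device_mapping_v2 entry without the device_name key. The helper below is a hypothetical sketch, not code from Horizon or nova; it only shows the shape of the entry, with the key included solely when the user supplied a name:

```javascript
// Hypothetical sketch: build a boot-from-image BDM v2 entry and include
// 'device_name' only when the user explicitly supplied one. Omitting the
// key (rather than sending null) lets nova pick the root device name,
// e.g. /dev/xvda on Xen.
function bootVolumeMapping(imageId, sizeGb, deviceName) {
  var mapping = {
    'source_type': 'image',
    'destination_type': 'volume',
    'uuid': imageId,
    'boot_index': '0',
    'volume_size': sizeGb,
    'delete_on_termination': true
  };
  if (deviceName) {
    mapping.device_name = deviceName;  // explicit user override, e.g. 'vda'
  }
  return mapping;
}
```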

Steps to reproduce from Horizon:
Use Xen as hypervisor

1. Go to the Horizon dashboard and launch an instance
2. Select "Boot from image (creates a new volume)" as Instance Boot Source

Expected result:
Instance starts with /dev/xvda as root device.

Actual result:
Build of instance fails, nova-compute.log reports
"BuildAbortException: Build of instance c15f3344-f9e3-4853-9c18-ea8741563205 aborted: Block Device Mapping is Invalid"

Steps to reproduce from nova cli:

1. Launch an instance from command line via
nova boot --flavor 1 --block-device source=image,id=IMAGE_ID,dest=volume,size=1,shutdown=remove,bootindex=0,device=vda --block-device source=blank,dest=volume,size=1,shutdown=remove,device=vdb VM

Expected result:
Instance starts with /dev/xvda as root device.

Actual result:
Build of instance fails; the device name for vdb is ignored and replaced correctly, but the root device name is not.

eblock@nde.ag (eblock) wrote :

Attached is a diff of my changes for Horizon. This worked for me, but I'm not sure about any side effects or whether this is the way to go here.

Changed in nova:
status: New → Confirmed
importance: Undecided → Low
tags: added: libvirt
Paul Bourke (pauldbourke) wrote :

Hi eblock, I have just come across the same problem, thanks for your useful analysis. Some questions:

1) Should this bug not be logged against Horizon rather than Nova?
2) Is setting the initial device_name in Horizon to blank not enough to make libvirt autoselect the boot device? Or do you explicitly need to remove it from the request dict as you have done in your patch?

Paul Bourke (pauldbourke) wrote :

3) You mentioned in your mailing list post that you set the "Device Name" field in Horizon to visible - could you let me know how you did this?

eblock@nde.ag (eblock) wrote :

Hi Paul,

> 1) Should this bug not be logged against Horizon rather than Nova?

In my opinion, the faulty behavior in Horizon is only a result of the wrong device name selection by libvirt. If Horizon provides a wrong device name, it should be checked within libvirt (nova), which should then pick a correct name. But I'm hoping the professionals here will correct me, as I'm quite new to OpenStack.

> 2) Is setting the initial device_name in Horizon to blank not enough to make libvirt autoselect the boot device?

By default, you can't provide your own device name, as the respective field is hidden. So if you launch a VM from volume via Horizon, you always have the default device_name "vda" passed to nova, which leads to the termination of that VM. I only made that field visible for testing purposes, because I don't want users to have to bother with device names. That's why I set the default device_name to blank.

> 3) you set the "Device Name" field in Horizon to visible - could you let me know how you did this?

Sure:
control1:~ # diff -u /srv/www/openstack-dashboard/openstack_dashboard/local/local_settings.py /srv/www/openstack-dashboard/openstack_dashboard/local/local_settings.py.dist
--- /srv/www/openstack-dashboard/openstack_dashboard/local/local_settings.py 2016-03-29 14:49:07.583429716 +0200
+++ /srv/www/openstack-dashboard/openstack_dashboard/local/local_settings.py.dist 2016-03-22 11:50:52.832377033 +0100
@@ -203,7 +203,7 @@
 # can_set_mount_point to True will add the option to set the mount point
 # from the UI.
 OPENSTACK_HYPERVISOR_FEATURES = {
- 'can_set_mount_point': False,
+ 'can_set_mount_point': True,
     'can_set_password': False,
     'requires_keypair': False,
 }

eblock@nde.ag (eblock) wrote :

I have to add another note on your second question:

If you don't remove device_name from the dict if it's empty, apache will report an error like this:

   "Invalid input for field/attribute device_name. Value: None. None is not of type 'string'"

That's why I removed it.

eblock@nde.ag (eblock) wrote :

I upgraded my cloud env to Mitaka and was facing the same issue as described. Unfortunately, my first patch mentioned here had no effect. I started digging once again and noticed that the respective code in create_instance.py is not executed anymore. It seems that the code in /srv/www/openstack-dashboard/static/dashboard/project/workflow/launch-instance/launch-instance-model.service.js is now responsible for the faulty device names. Here's what I did to let nova choose the device name:

---cut here---
control1:~ # diff -u /srv/www/openstack-dashboard/static/dashboard/project/workflow/launch-instance/launch-instance-model.service.js.dist /srv/www/openstack-dashboard/static/dashboard/project/workflow/launch-instance/launch-instance-model.service.js
--- /srv/www/openstack-dashboard/static/dashboard/project/workflow/launch-instance/launch-instance-model.service.js.dist 2016-04-24 00:57:26.000000000 +0200
+++ /srv/www/openstack-dashboard/static/dashboard/project/workflow/launch-instance/launch-instance-model.service.js 2016-05-13 16:12:39.915755258 +0200
@@ -164,8 +164,8 @@
         source: [],
         // REQUIRED for JS logic
         vol_create: false,
- // May be null
- vol_device_name: 'vda',
+ // Not set by default to let nova choose autom.
+ vol_device_name: 'NOTSET',
         vol_delete_on_instance_delete: false,
         vol_size: 1
       };
@@ -546,17 +546,32 @@
         // Specify null to get Autoselection (not empty string)
         var deviceName = finalSpec.vol_device_name ? finalSpec.vol_device_name : null;
         finalSpec.block_device_mapping_v2 = [];
- finalSpec.block_device_mapping_v2.push(
- {
- 'device_name': deviceName,
- 'source_type': SOURCE_TYPE_IMAGE,
- 'destination_type': SOURCE_TYPE_VOLUME,
- 'delete_on_termination': finalSpec.vol_delete_on_instance_delete,
- 'uuid': finalSpec.source_id,
- 'boot_index': '0',
- 'volume_size': finalSpec.vol_size
- }
- );
+ if (finalSpec.vol_device_name=='NOTSET') {
+ finalSpec.block_device_mapping_v2.push(
+ {
+// 'device_name': deviceName,
+ 'source_type': SOURCE_TYPE_IMAGE,
+ 'destination_type': SOURCE_TYPE_VOLUME,
+ 'delete_on_termination': finalSpec.vol_delete_on_instance_delete,
+ 'uuid': finalSpec.source_id,
+ 'boot_index': '0',
+ 'volume_size': finalSpec.vol_size
+ }
+ );
+ }
+ else {
+ finalSpec.block_device_mapping_v2.push(
+ {
+ 'device_name': deviceName,
+ 'source_type': SOURCE_TYPE_IMAGE,
+ 'destination_type': SOURCE_TYPE_VOLUME,
+ 'delete_on_termination': finalSpec.vol_delete_on_instance_delete,
+ 'uuid': finalSpec.source_id,
+ 'boot_index': '0',
+ 'volume_size': finalSpec.vol_size
+ }
+ );
+ }
         finalSpec.source_id = null;
       }
     }
---cut here---
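The two push branches in the diff above differ only in whether 'device_name' is present; the duplication could be avoided by building the entry once and deleting the key. A sketch under the same assumptions as the patch (the 'NOTSET' sentinel and field names come from it; the constants are stubbed here for illustration):

```javascript
// Equivalent, non-duplicated form of the patched block (sketch only).
var SOURCE_TYPE_IMAGE = 'image';    // stub for illustration
var SOURCE_TYPE_VOLUME = 'volume';  // stub for illustration

function pushBootMapping(finalSpec) {
  var deviceName = finalSpec.vol_device_name ? finalSpec.vol_device_name : null;
  var entry = {
    'device_name': deviceName,
    'source_type': SOURCE_TYPE_IMAGE,
    'destination_type': SOURCE_TYPE_VOLUME,
    'delete_on_termination': finalSpec.vol_delete_on_instance_delete,
    'uuid': finalSpec.source_id,
    'boot_index': '0',
    'volume_size': finalSpec.vol_size
  };
  if (finalSpec.vol_device_name === 'NOTSET') {
    delete entry.device_name;  // drop the key so nova chooses automatically
  }
  finalSpec.block_device_mapping_v2.push(entry);
}
```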

I also added horizon to the affected projects of this bug, I'm not s...


I think bug 1217874 is an older bug which describes the same issue.

eblock@nde.ag (eblock) wrote :

Actually, I don't see that as a duplicate.
What I'm describing is the failure to launch an instance at all if you choose to boot it from volume on a Xen hypervisor. Attaching volumes to running instances works fine, so it's not related to cinder.
The cause is the default device name 'vda' in the file launch-instance-model.service.js. To let nova choose the device name, it should be empty by default, which works, as you can see from my tests. I will remove the duplicate mark from this bug.

Tomasz Głuch (tomekg) wrote :

This bug is also visible on Mitaka with a KVM hypervisor.
When booting an instance from a volume created from an image with "hw_scsi_model=virtio-scsi" metadata, device names should be in /dev/sd* format. Horizon enforces /dev/vda for the first disk, so it is named /dev/vda in the cinder & nova metadata, but /dev/sda in libvirt and the guest system.

A further problem caused by this issue is that it becomes impossible to attach another volume, as nova volume-attach tries to use the seemingly free name /dev/sda for the next device. This causes a conflict when building the XML in libvirt:
2016-10-20 17:38:41.780 13818 ERROR oslo_messaging.rpc.dispatcher libvirtError: internal error: unable to execute QEMU command 'device_add': Duplicate ID 'scsi0-0-0-0' for device

eblock@nde.ag (eblock) wrote :

This issue is developing. Just to summarize:
first it was the python file /srv/www/openstack-dashboard/openstack_dashboard/dashboards/project/instances/workflows/create_instance.py that had the wrong default device name "vda". I had my workaround, but then the workflow changed to use the javascript file /srv/www/openstack-dashboard/static/dashboard/project/workflow/launch-instance/launch-instance-model.service.js, and my workaround became unusable. So I made changes to that file, which I had to re-apply every time I updated my control node. And recently I found that this patch doesn't work anymore either. I searched again and found that Horizon now seems to be using

/srv/www/openstack-dashboard/openstack_dashboard/dashboards/project/static/dashboard/project/workflow/launch-instance/launch-instance-model.service.js

instead of

/srv/www/openstack-dashboard/static/dashboard/project/workflow/launch-instance/launch-instance-model.service.js.

These files are identical, so I applied the same patch to that file and now it works again with the block devices.

I hope someone will fix this, it's kind of annoying for users (and admins).

@eblock - can you please paste a diff of your create_instance.py file?

eblock@nde.ag (eblock) wrote :

I can't provide a diff from that file; it's the distributed file I'm using, since my changes from March didn't have any effect anymore. The only differences regarding Horizon are in the files /srv/www/openstack-dashboard/openstack_dashboard/dashboards/project/static/dashboard/project/workflow/launch-instance/launch-instance-model.service.js and
/srv/www/openstack-dashboard/static/dashboard/project/workflow/launch-instance/launch-instance-model.service.js, see comment #6 for the diff.

Changed in horizon:
status: New → Confirmed

Can confirm that what you provided in #6 fixes the issue (with Mitaka; it looks the same for Newton), and it makes complete sense to let the image metadata or the hypervisor software (i.e. Xen/KVM) dictate what the primary disk is going to be labeled.

I also had the same issue: we had set "disk_prefix = sd" in nova.conf, along with "hw_scsi_model=virtio-scsi" and "hw_disk_bus=scsi" on the images (as we're using Ceph), however the primary disk would appear as /dev/vda in Horizon and /dev/sda in the VM.

Subsequent disks added from there would appear as /dev/sd[b/d/c] in Horizon, however they would not mount due to the device being in use, causing all sorts of issues from there (to the point that the VM will not boot if restarted).

After applying the patch (from #6), disks appear as expected in order (eg. /dev/sd[a/b/c]).

Fix proposed to branch: master
Review: https://review.openstack.org/400463

Changed in horizon:
assignee: nobody → Matthew Taylor (matthew-taylor-f)
status: Confirmed → In Progress

Needs more work for Newton. This'll be a WIP for me for a while.

Paul Bourke (pauldbourke) wrote :

Testing on Mitaka and confirming this is still very messy. I'd be interested to know why the relevant code seems to have moved from create_instance.py to JavaScript, as eblock has pointed out.

For some reason I haven't figured out yet, eblock's latest patch is not working for me, though a similar patch to the function below (setFinalSpecBootFromVolumeDevice) works for the volume->volume case (attached).

Here is some relevant info while it's fresh in my mind:

* Both image->volume and volume->volume appear to work via the API but not Horizon, so this is absolutely a Horizon problem imo.

* Reading the docs on Nova BDM, specifying the device name as Horizon currently does (vda) is not recommended, so eblock's patch should be the correct approach.

* setFinalSpecBootFromVolumeDevice seems to be hardcoded to use BDM v1, I'm not sure if there's a reason for this, but updating it to use v2 as in my patch seems to work ok for both KVM and Xen.

Change abandoned by David Lyle (<email address hidden>) on branch: master
Review: https://review.openstack.org/400463
Reason: This review is > 4 weeks without comment, and failed Jenkins the last time it was checked. We are abandoning this for now. Feel free to reactivate the review by pressing the restore button and leaving a 'recheck' comment to get fresh test results.

Changed in horizon:
assignee: Matthew Taylor (matthew-taylor-f) → nobody
status: In Progress → New
status: New → Confirmed
Sean Dague (sdague) wrote :

Automatically discovered version liberty in description. If this is incorrect, please update the description to include 'nova version: ...'

tags: added: openstack-version.liberty
eblock@nde.ag (eblock) wrote :

I don't know if I was supposed to do that, but since this description also applies to Mitaka, I added the respective tag.

tags: added: openstack-version.mitaka
Stefanos Koffas (skoffas) wrote :

Hello,

I had the same problem in OpenStack Newton, exactly as Tomasz Głuch (tomekg) described in #9. I applied the patch proposed in #6, but when I tried to create a volume from the image and then boot a VM from that volume, the problem still existed. As Paul Bourke suggested in #16, I changed setFinalSpecBootFromVolumeDevice, but in a different way. Now everything works as expected. My patch is attached; I hope it helps.

Maximiliano (massimo-6) wrote :

Hi Guys,

Any news about this bug? It still happens in Ocata.
If you use nova along with "hw_scsi_model=virtio-scsi" and "hw_disk_bus=scsi" to launch your instances, you can't attach a new cinder volume, because of the error "libvirtError: internal error: unable to execute QEMU command 'device_add': Duplicate ID 'scsi0-0-0-0' for device", and as a result the / volume turns read-only.
Nova uses the same name for both the / volume and the new cinder one; both are /dev/sda.

Stefanos Koffas (skoffas) wrote :

Hi Maximiliano,

Have you tried the patch that I attached in my comment above (#20)? I think it solves the problem. You can find the file that is used by the OpenStack dashboard (horizon) in /var/lib/openstack-dashboard/static/dashboard/js/. Please give it a try and tell me if you run into any problems.

PS: I know that if I believe my patch solves the problem I should submit a proper PR; I will give it a try when I find some free time :)

Annie Melen (anniemelen) wrote :

Hi, guys!

I've just encountered the same bug as you in the Pike release. Stefanos's patch helped me solve the problem with the wrong device mapping, but even after that my instances won't start, because of a 'No bootable device' error.
Dumpxml output:

<devices>
    <emulator>/usr/bin/kvm-spice</emulator>
    <disk type='network' device='disk'>
      <driver name='qemu' type='raw' cache='writeback' discard='unmap'/>
      <auth username='cinder_std1'>
        <secret type='ceph' uuid='0a2cc1e3-ff05-4be7-8701-162015231f3b'/>
      </auth>
      <source protocol='rbd' name='cinder_pool/volume-5014d7c3-d60d-4b6d-850f-3823cea46557'>
        <host name='10.0.230.101' port='6789'/>
        <host name='10.0.230.107' port='6789'/>
        <host name='10.0.230.113' port='6789'/>
      </source>
      <backingStore/>
      <target dev='sda' bus='scsi'/>
      <serial>5014d7c3-d60d-4b6d-850f-3823cea46557</serial>
      <alias name='scsi0-0-0-1'/>
      <address type='drive' controller='0' bus='0' target='0' unit='1'/>
    </disk>
    <disk type='network' device='disk'>
      <driver name='qemu' type='raw' cache='writeback' discard='unmap'/>
      <auth username='cinder_std1'>
        <secret type='ceph' uuid='0a2cc1e3-ff05-4be7-8701-162015231f3b'/>
      </auth>
      <source protocol='rbd' name='nova_pool/c3813586-84fa-438d-8f7f-64f6cb188d05_disk.swap'>
        <host name='10.0.230.101' port='6789'/>
        <host name='10.0.230.107' port='6789'/>
        <host name='10.0.230.113' port='6789'/>
      </source>
      <backingStore/>
      <target dev='sdb' bus='scsi'/>
      <alias name='scsi0-0-0-0'/>
      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
    </disk>
    <controller type='scsi' index='0' model='virtio-scsi'>
      <alias name='scsi0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </controller>

I've tried images like Ubuntu-Xenial-Cloudimg_amd64, CentOS-7-x86_64-GenericCloud-180, Fedora-Cloud-Base-27-1.6-x86_64...
Anyone could help me please?

eblock@nde.ag (eblock) wrote :

Hi Annie,

have you been able to launch any instances in your cloud yet?
There's one thing I check before I upload any images to glance: I make sure that the image has a partition table. You can find more details in [0].

Depending on your hypervisor and the images it's possible that the image is missing kernel modules. I had to deal with that when trying to migrate Xen instances to KVM [1].

You could also create your own image by installing from an iso and see if that image works [2].

There's also a mailing list for these kinds of questions [3]; I believe the chances of getting help are better there.

And last but not least visit [4], also very helpful.

[0] http://heiterbiswolkig.blogs.nde.ag/2018/02/06/obstacles-for-openstack/
[1] http://heiterbiswolkig.blogs.nde.ag/2017/08/10/migrate-from-xen-to-kvm/
[2] https://docs.openstack.org/mitaka/user-guide/cli_nova_launch_instance_using_ISO_image.html
[3] http://lists.openstack.org/pipermail/openstack/
[4] https://ask.openstack.org/en/questions/scope:all/sort:activity-desc/page:1/

Annie Melen (anniemelen) wrote :

Thanks for response!

Of course, I'm able to launch instances from the same images when the bus is virtio; that works fine. The virtio_scsi kernel modules are also present in all images I use for testing.
What is really interesting: when I defined a new instance from the same XML file but manually changed scsi to virtio, the instance booted normally... As far as I can see, there is a problem in the hypervisor configuration (I use nova-compute-kvm), but obviously I'm missing something important.

eblock@nde.ag (eblock) wrote :

Have you removed/added the respective image-properties?

openstack image set --property hw_scsi_model=virtio-scsi --property hw_disk_bus=scsi --property hw_qemu_guest_agent=yes --property os_require_quiesce=yes 8a143d74-565f-47d8-b10d-0dd5842976aa

or

openstack image set --remove-property hw_scsi_model --remove-property hw_disk_bus --remove-property hw_qemu_guest_agent --remove-property os_require_quiesce 8a143d74-565f-47d8-b10d-0dd5842976aa

Annie Melen (anniemelen) wrote :

Actually, I've created two identical images from one source and updated properties on one of them:

# openstack image show -f value -c properties 404dc2d6-9f7e-4888-914f-12c77bea3b41
direct_url='rbd://d2baf031-486e-4a8d-a6ed-9ac4438ddb99/vm_images_pool/404dc2d6-9f7e-4888-914f-12c77bea3b41/snap', hw_disk_bus='scsi', hw_qemu_guest_agent='yes', hw_scsi_model='virtio-scsi', os_require_quiesce='yes'

Instances from this image don't work properly ('No bootable device' in console output).

Another image has no properties except direct_url:

# openstack image show -f value -c properties b9549d48-8dc9-4400-86d9-4b8b11d3a1e9
direct_url='rbd://d2baf031-486e-4a8d-a6ed-9ac4438ddb99/vm_images_pool/b9549d48-8dc9-4400-86d9-4b8b11d3a1e9/snap'

Instances from this image are booting fine.

eblock@nde.ag (eblock) wrote :

I still think the image is missing some kernel modules, could you paste the output of

find /lib/modules/<KERNEL>/ -name '*scsi*'

Annie Melen (anniemelen) wrote :

Here, take a look
root@test:~# find /lib/modules/ -name *scsi*
/lib/modules/4.4.0-109-generic/kernel/drivers/message/fusion/mptscsih.ko
/lib/modules/4.4.0-109-generic/kernel/drivers/scsi
/lib/modules/4.4.0-109-generic/kernel/drivers/scsi/device_handler/scsi_dh_alua.ko
/lib/modules/4.4.0-109-generic/kernel/drivers/scsi/device_handler/scsi_dh_emc.ko
/lib/modules/4.4.0-109-generic/kernel/drivers/scsi/device_handler/scsi_dh_rdac.ko
/lib/modules/4.4.0-109-generic/kernel/drivers/scsi/device_handler/scsi_dh_hp_sw.ko
/lib/modules/4.4.0-109-generic/kernel/drivers/scsi/virtio_scsi.ko
/lib/modules/4.4.0-109-generic/kernel/drivers/scsi/iscsi_boot_sysfs.ko
/lib/modules/4.4.0-109-generic/kernel/drivers/scsi/scsi_transport_spi.ko
/lib/modules/4.4.0-109-generic/kernel/drivers/scsi/scsi_transport_fc.ko
/lib/modules/4.4.0-109-generic/kernel/drivers/scsi/scsi_transport_iscsi.ko
/lib/modules/4.4.0-109-generic/kernel/drivers/scsi/libiscsi_tcp.ko
/lib/modules/4.4.0-109-generic/kernel/drivers/scsi/iscsi_tcp.ko
/lib/modules/4.4.0-109-generic/kernel/drivers/scsi/libiscsi.ko
/lib/modules/4.4.0-109-generic/kernel/drivers/scsi/scsi_transport_sas.ko
/lib/modules/4.4.0-109-generic/kernel/drivers/scsi/vmw_pvscsi.ko
/lib/modules/4.4.0-109-generic/kernel/drivers/xen/xen-scsiback.ko
/lib/modules/4.4.0-109-generic/kernel/drivers/target/target_core_pscsi.ko
/lib/modules/4.4.0-109-generic/kernel/drivers/target/iscsi
/lib/modules/4.4.0-109-generic/kernel/drivers/target/iscsi/iscsi_target_mod.ko
/lib/modules/4.4.0-109-generic/kernel/drivers/vhost/vhost_scsi.ko
/lib/modules/4.4.0-109-generic/kernel/drivers/firmware/iscsi_ibft.ko

Deepa (dpaclt) wrote :

The same issue persists on the Ocata version too; we are using a Dell Unity 300F.

>>>>>>>>>>>>>>>>

WARNING nova.virt.libvirt.driver [req-aefe05a7-7166-4079-bd88-7aa798443886 8e116148546e419299be6beaaab14b2c 466f7fb19ab44416b0
973e83ccaabe71 - - -] [instance: 64cc8284-c182-4a9c-9cbb-ff9f822af71c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
2018-02-22 06:23:31.411 3937535 INFO nova.virt.block_device [req-aefe05a7-7166-4079-bd88-7aa798443886 8e116148546e419299be6beaaab14b2c 466f7fb19ab44416b0973e8
3ccaabe71 - - -] [instance: 64cc8284-c182-4a9c-9cbb-ff9f822af71c] Booting with blank volume at /dev/vda

>>>>>>>>>>>>>>>>>>>>>>>>>

Akihiro Motoki (amotoki) wrote :

Wearing my horizon team hat, I would like to clarify the current situation. This bug is now associated with horizon, nova and cinder.

Regarding boot from volume, horizon talks only to the Nova API, so the problem can be split into two parts:
(a) the create-server parameters passed from horizon to nova
(b) the nova and cinder behavior around create-server with boot-from-volume

The following are questions from horizon side:

(1) Is the problem only (a) above?
(2) If (1) is yes: apart from the test code, does the patch attached in #20 solve the problem?
(3) If (1) is no: can we use this bug to focus on (a) and split (b) off into a separate bug?

We don't have Xen hypervisors, so we cannot actually test this. I think that is the main reason the horizon team has failed to address this for so long. Hopefully someone who can test on Xen hypervisors can propose a fix.

Akihiro Motoki (amotoki) on 2018-02-23
Changed in horizon:
importance: Undecided → High
milestone: none → rocky-1
eblock@nde.ag (eblock) wrote :

We switched from Xen to KVM a couple of months ago, so I won't be able to test any changes, at least not easily. I can check if we have a spare compute node for testing purposes.

Someone already confirmed this for Ocata too, but I just ran another test in our environment. I don't know if I can really clarify anything, because the source code in question has changed over the last two years; in the meantime my own patch doesn't work anymore, and the patch from #20 also fails.

I have a base image with these properties:
'hw_scsi_model': 'virtio-scsi', 'hw_disk_bus': 'scsi'

I launch an instance from that image to a volume and the instance gets it right:

boot-from-vol:~ # lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 11G 0 disk
├─sda1 8:1 0 501M 0 part /boot
└─sda2 8:2 0 9,5G 0 part
  ├─vg0-usr 254:0 0 3G 0 lvm /usr
  ├─vg0-root 254:1 0 1,4G 0 lvm /
  ├─vg0-tmp 254:2 0 1G 0 lvm /tmp
  └─vg0-var 254:3 0 1,6G 0 lvm /var

But Horizon shows /dev/vda as the attached volume.
Attaching a second (empty) volume is correctly processed as sdb within the VM:

boot-from-vol:~ # lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 11G 0 disk
├─sda1 8:1 0 501M 0 part /boot
└─sda2 8:2 0 9,5G 0 part
  ├─vg0-usr 254:0 0 3G 0 lvm /usr
  ├─vg0-root 254:1 0 1,4G 0 lvm /
  ├─vg0-tmp 254:2 0 1G 0 lvm /tmp
  └─vg0-var 254:3 0 1,6G 0 lvm /var
sdb 8:16 0 1G 0 disk

but from Cinder/Horizon perspective it's attached as /dev/sda. A reboot of the VM succeeds without failures.

The patch doesn't work anymore (openSUSE Leap 42.3). I'm not able to tell whether a split of the issue(s) is recommended; I would leave that to the developers to decide.
I guess I didn't really clear anything up, but maybe this will get fixed anyway.

eblock@nde.ag (eblock) wrote :

Another comment for Annie:

I believe it's the same user, but there are two issues reported on ask.openstack.org:

https://ask.openstack.org/en/question/112521/how-to-launch-a-vm-that-use-devvda-and-busvirtio-using-nova/

https://ask.openstack.org/en/question/112488/libvirt-not-allocating-cpu-and-disk-to-vms-after-the-os-update/

It seems to be related to libvirt updates, but it's not resolved yet.

Annie Melen (anniemelen) wrote :

Akihiro,

I believe this bug no longer applies to horizon after applying the patch from comment #20. The nova-compute log shows me the correct volume names when the image properties above are applied:

2018-02-26 16:12:19.403 40927 INFO nova.compute.claims [req-70006123-feaa-4f3e-9605-ad28fdbe60cd 33fd8dd8d96f4d17ac5db0c3f600a51e df71265b6e0444a1837f51a14d4c19a8 - default default] [instance: 86c27905-a993-4e21-a1a5-50928922c21e] Claim successful on node compute01.demo.infra.cloud.loc
2018-02-26 16:12:19.774 40927 WARNING nova.virt.libvirt.driver [req-70006123-feaa-4f3e-9605-ad28fdbe60cd 33fd8dd8d96f4d17ac5db0c3f600a51e df71265b6e0444a1837f51a14d4c19a8 - default default] [instance: 86c27905-a993-4e21-a1a5-50928922c21e] Ignoring supplied device name: /dev/sda. Libvirt can't honour user-supplied dev names
2018-02-26 16:12:19.892 40927 INFO nova.virt.block_device [req-70006123-feaa-4f3e-9605-ad28fdbe60cd 33fd8dd8d96f4d17ac5db0c3f600a51e df71265b6e0444a1837f51a14d4c19a8 - default default] [instance: 86c27905-a993-4e21-a1a5-50928922c21e] Booting with blank volume at /dev/sda

So if we split the bug into two parts, (a) and (b), we fall into (b) and conclude that the bug lies somewhere in nova or libvirt.

Annie Melen (anniemelen) wrote :

Eblock,

can you tell me which versions of KVM, libvirt and nova you're running?

eblock@nde.ag (eblock) wrote :

Sure:

- qemu-kvm-2.9.0-30.1.x86_64
- libvirt-3.3.0-4.11.x86_64
- openstack-nova-15.0.8~dev8-1.2.noarch

Annie Melen (anniemelen) wrote :

Well, I didn't find any differences between the releases that might be critical.

 - nova-compute 16.0.4-0ubuntu1~cloud0
 - libvirt-daemon 3.6.0-1ubuntu6.2~cloud0
 - qemu-kvm 1:2.10+dfsg-0ubuntu3.4~cloud0

And still I don't get what's going wrong... Maybe I missed something; could you check my config, please?

*compute* nova.conf
[DEFAULT]
instance_usage_audit_period = hour
vif_plugging_is_fatal = False
vif_plugging_timeout = 0
force_raw_images = true
instances_path = /var/lib/nova/instances
instance_usage_audit = True
block_device_allocate_retries = 200
block_device_allocate_retries_interval = 3
my_ip = 172.31.206.25
transport_url = rabbit://openstack:6bba27c4ae7411e7a3ca833899985ce3@control01:5672,openstack:6bba27c4ae7411e7a3ca833899985ce3@control02:5672
[api]
auth_strategy = keystone
[api_database]
[barbican]
[cache]
memcache_servers = control01:11211,control02:11211
[cells]
[cinder]
os_region_name = RegionOne
[compute]
[conductor]
[console]
[consoleauth]
[cors]
[crypto]
[database]
[ephemeral_storage_encryption]
[filter_scheduler]
[glance]
api_servers = http://control:9292
[guestfs]
[healthcheck]
[hyperv]
[ironic]
[key_manager]
[keystone_authtoken]
auth_uri = http://control:5000
region_name = RegionOne
memcached_servers = control01:11211,control02:11211
auth_url = http://control:35357
project_domain_name = default
user_domain_name = default
project_name = service
auth_type = password
username = nova
password = 4b0ade06afeb11e792290be7cec06e49
[libvirt]
live_migration_inbound_addr = 172.31.208.25
disk_cachemodes = "network=writeback"
images_type = rbd
images_rbd_pool = nova_pool
images_rbd_ceph_conf = /etc/ceph/ceph_std1.conf
hw_disk_discard = unmap
rbd_user = cinder_std1
rbd_secret_uuid = 0db560bd-cb7d-495d-8759-e7212942e8cb
[matchmaker_redis]
[metrics]
[mks]
[neutron]
url = http://control:9696
region_name = RegionOne
service_metadata_proxy = true
metadata_proxy_shared_secret = d282e034b4ca11e7b694af91a5b5367f
auth_type = password
auth_url = http://control:35357
project_name = service
project_domain_name = default
username = neutron
user_domain_name = default
password = 5e590ddcb4c511e7a7765b246550c69a
[notifications]
notify_on_state_change = vm_and_task_state
[osapi_v21]
[oslo_concurrency]
lock_path = /var/lib/nova/tmp
[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
driver = messagingv2
topics = notifications,watcher_notifications
retry = -1
[oslo_messaging_rabbit]
rabbit_retry_interval = 1
rabbit_retry_backoff = 2
rabbit_ha_queues = true
[oslo_messaging_zmq]
[oslo_middleware]
[oslo_policy]
[pci]
[placement]
os_region_name = RegionOne
auth_type = password
auth_url = http://control:35357/v3
project_name = service
project_domain_name = default
username = placement
user_domain_name = default
password = e7d9d0b8afee11e7a118779924174357
[quota]
[rdp]
[remote_debug]
[scheduler]
[serial_console]
[service_user]
[spice]
[trusted_computing]
[upgrade_levels]
[vendordata_dynamic_auth]
[vmware]
vnc_port = 5900
[vnc]
enabled = true
vncserver_listen = 0.0.0.0
vncserver_proxyclient_address = compute01
novncproxy_base_url = http://control:6080/vnc_auto.html
novncproxy_host = compute01
[workarounds]
[wsgi]
[xenserver]
[xvp...


eblock@nde.ag (eblock) wrote :

I have to revoke my statement about the failing patch: it does work for image -> volume. Unfortunately, none of the suggested patches (in #16, #20) for volume -> volume work for me. I have tried them in different ways and also in combination; nothing works, and nova-api reports an error like this:

HTTP exception thrown: Can not find requested image

My current setFinalSpecBootFromVolumeDevice:

---cut here---
    function setFinalSpecBootFromVolumeDevice(finalSpec, sourceType) {
      if (finalSpec.vol_create) {
        // Specify null to get Autoselection (not empty string)
        var deviceName = finalSpec.vol_device_name ? finalSpec.vol_device_name : null;
        finalSpec.block_device_mapping_v2 = [];
        if (finalSpec.vol_device_name==='NOTSET'){
          finalSpec.block_device_mapping_v2.push(
            {
              'source_type': bootSourceTypes.VOLUME,
              'destination_type': bootSourceTypes.VOLUME,
              'delete_on_termination': finalSpec.vol_delete_on_instance_delete,
              'uuid': finalSpec.source_id,
              'boot_index': '0',
              'volume_size': finalSpec.vol_size
            }
          );
        }
        else {
          finalSpec.block_device_mapping_v2.push(
            {
              'device_name': deviceName,
              'source_type': bootSourceTypes.VOLUME,
              'destination_type': bootSourceTypes.VOLUME,
              'delete_on_termination': finalSpec.vol_delete_on_instance_delete,
              'uuid': finalSpec.source_id,
              'boot_index': '0',
              'volume_size': finalSpec.vol_size
            }
          );
        }
        finalSpec.source_id = null;
      }
    }
---cut here---

I'm not sure what I have to change here to make it work. At least image -> volume works again.

eblock@nde.ag (eblock) wrote :

Ok, after some trial & error it seems like this works for volume -> volume:

---cut here---
    function setFinalSpecBootFromVolumeDevice(finalSpec, sourceType) {
      var deviceName = finalSpec.vol_device_name ? finalSpec.vol_device_name : null;
      if (finalSpec.vol_device_name==='NOTSET'){
        finalSpec.block_device_mapping_v2 = [];
        finalSpec.block_device_mapping_v2.push(
          {
            'source_type': bootSourceTypes.VOLUME,
            'destination_type': bootSourceTypes.VOLUME,
            'delete_on_termination': finalSpec.vol_delete_on_instance_delete,
            'uuid': finalSpec.source_id,
            'boot_index': '0'
          }
        );
      }
      else {
        finalSpec.block_device_mapping = {};
        finalSpec.block_device_mapping[finalSpec.vol_device_name] = [
          finalSpec.source_id,
          ':',
          sourceType,
          '::',
          finalSpec.vol_delete_on_instance_delete
        ].join('');
      }
      finalSpec.source_id = '';
    }
---cut here---

If a device name is provided, it will use the legacy bdm; but if the default is "NOTSET", it will create a new bdm_v2 entry. I'm not sure about the legacy part and what consequences there could be; I just tried it without changing that part.
Maybe it should be changed to bdm_v2, too. Anyway, with these changes I could launch an instance from volume with a non-default device name (/dev/sda); attaching another volume also results in a correct device name in horizon/cinder and from within the VM (/dev/sdb).
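For context, a minimal sketch (with hypothetical values) contrasting the legacy BDM v1 string assembled in the else branch above with the equivalent BDM v2 entry. The v1 value packs four colon-separated fields, <id>:<type>:<size>:<delete-on-termination>; an empty <size> produces the '::' seen in the join:

```javascript
// Hypothetical values, for illustration only.
const sourceId = '39ffefeb-d993-46b4-a829-a5e9852629ce';
const sourceType = 'volume';
const deleteOnTermination = false;

// BDM v1 (legacy): an object keyed by device name, value is the packed string.
const bdmV1 = {
  '/dev/sda': [sourceId, ':', sourceType, '::', deleteOnTermination].join('')
};

// BDM v2: an explicit list of objects. Note that device_name can simply be
// omitted, which lets nova pick the name.
const bdmV2 = [{
  source_type: sourceType,
  destination_type: 'volume',
  uuid: sourceId,
  boot_index: '0',
  delete_on_termination: deleteOnTermination
}];
```

The key difference is that v1 forces a device name into the key, while a v2 entry carries it as an optional field.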

David Worth (throwsb) wrote :

Hi,

I am a newbie to OpenStack and I am setting up a lab environment using openstack-ansible (OSA) to deploy Pike. I am not sure whether what I am running into is related to this bug. I originally thought I was hitting an iSCSI issue, but as I went through this thread, it seems to be related.

On the initial deploy, I was able to spin up new instances from an image. However, things then stopped working. The only thing I can think of is that something was updated during an OSA redeployment.

Some of what I am seeing in the nova-compute log is as follows:

Mar 2 23:29:33 helos-lab02 nova-compute: 2018-03-02 23:29:28.441 21738 WARNING nova.virt.libvirt.driver [req-1242c3fb-0d01-47a3-b7e8-8771844b2563 579c0308bcae43c0a4ca35d6d74bfd83 d861d0fa304647a083da0ae95c2241fe - default default] [instance: 3d457dd7-2992-4d0f-9fbd-2806c71cb654] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Mar 2 23:29:33 helos-lab02 nova-compute: 2018-03-02 23:29:28.592 21738 INFO nova.virt.block_device [req-1242c3fb-0d01-47a3-b7e8-8771844b2563 579c0308bcae43c0a4ca35d6d74bfd83 d861d0fa304647a083da0ae95c2241fe - default default] [instance: 3d457dd7-2992-4d0f-9fbd-2806c71cb654] Booting with blank volume at /dev/vda
Mar 2 23:29:33 helos-lab02 nova-compute: 2018-03-02 23:29:30.990 21738 INFO nova.virt.libvirt.driver [req-1242c3fb-0d01-47a3-b7e8-8771844b2563 579c0308bcae43c0a4ca35d6d74bfd83 d861d0fa304647a083da0ae95c2241fe - default default] [instance: 3d457dd7-2992-4d0f-9fbd-2806c71cb654] Creating image
Mar 2 23:29:33 helos-lab02 nova-compute: 2018-03-02 23:29:31.001 21738 INFO os_brick.initiator.connectors.iscsi [req-1242c3fb-0d01-47a3-b7e8-8771844b2563 579c0308bcae43c0a4ca35d6d74bfd83 d861d0fa304647a083da0ae95c2241fe - default default] Trying to connect to iSCSI portal 192.168.60.12:3260
Mar 2 23:29:33 helos-lab02 nova-compute: 2018-03-02 23:29:31.055 21738 WARNING os_brick.initiator.connectors.iscsi [req-1242c3fb-0d01-47a3-b7e8-8771844b2563 579c0308bcae43c0a4ca35d6d74bfd83 d861d0fa304647a083da0ae95c2241fe - default default] Couldn't find iscsi sessions because iscsiadm err: iscsiadm: No active sessions.
Mar 2 23:29:53 helos-lab02 nova-compute: 2018-03-02 23:29:44.941 21738 INFO os_brick.initiator.connectors.iscsi [req-1242c3fb-0d01-47a3-b7e8-8771844b2563 579c0308bcae43c0a4ca35d6d74bfd83 d861d0fa304647a083da0ae95c2241fe - default default] Trying to connect to iSCSI portal 192.168.60.12:3260
Mar 2 23:29:53 helos-lab02 nova-compute: 2018-03-02 23:29:44.993 21738 WARNING os_brick.initiator.connectors.iscsi [req-1242c3fb-0d01-47a3-b7e8-8771844b2563 579c0308bcae43c0a4ca35d6d74bfd83 d861d0fa304647a083da0ae95c2241fe - default default] Couldn't find iscsi sessions because iscsiadm err: iscsiadm: No active sessions.
Mar 2 23:29:53 helos-lab02 nova-compute: : VolumeDeviceNotFound: Volume device not found at .

The task would die and I would get a "host not found" error in horizon. I tried using the patch in #20 as well as eblock's update above, and neither worked. I saw the js file in two locations for horizon:

/openstack/venvs/horizon-16.0.8/lib/python2.7/site-packages/openstack_dashboard/dashboards/project/static/dashboard/project/workflow/launch-in...


Akihiro Motoki (amotoki) on 2018-04-22
Changed in horizon:
milestone: rocky-1 → rocky-2
Urbansh Xie (urbanshxie) wrote :

Hi, I also met this bug under Pike (Ocata doesn't seem to have this problem). I choose an image under "Select Boot Source" and can create an instance normally if I choose "No" for "Create New Volume". However, if I choose "Yes" for "Create New Volume" and enter a size, say 10, I get a "Boot failed: not a bootable disk" error in the console, and the log shows an error similar to #40. I'm sure cinder is correctly configured, because the 10G volume is created and shows as in-use with that instance.
Also, if I first create an instance and then create a volume in the dashboard or via ssh, I can attach the volume to the instance successfully.

lixiaolong (kong62) wrote :

Hi, I also met this bug under Queens.

When I use "nova reboot --hard <uuid>", the instance can be rebooted, but nova-compute.log shows these:

2018-09-29 17:21:56.348 6074 INFO nova.compute.manager [req-c1fb79c2-d47a-4b0b-b9e7-0745518deb9b 4cc1c945837c45ed995f54922fdaf9bd 989c7f5435af432dabd4de1e59d42c65 - default default] [instance: 9ce3016b-3df2-43c7-aa78-6442feb8187b] Rebooting instance
2018-09-29 17:21:57.407 6074 WARNING nova.compute.manager [req-c1fb79c2-d47a-4b0b-b9e7-0745518deb9b 4cc1c945837c45ed995f54922fdaf9bd 989c7f5435af432dabd4de1e59d42c65 - default default] [instance: 9ce3016b-3df2-43c7-aa78-6442feb8187b] trying to reboot a non-running instance: (state: 0 expected: 1)
2018-09-29 17:21:57.512 6074 INFO nova.virt.libvirt.driver [req-c1fb79c2-d47a-4b0b-b9e7-0745518deb9b 4cc1c945837c45ed995f54922fdaf9bd 989c7f5435af432dabd4de1e59d42c65 - default default] [instance: 9ce3016b-3df2-43c7-aa78-6442feb8187b] Instance destroyed successfully.
2018-09-29 17:21:57.513 6074 INFO os_vif [req-c1fb79c2-d47a-4b0b-b9e7-0745518deb9b 4cc1c945837c45ed995f54922fdaf9bd 989c7f5435af432dabd4de1e59d42c65 - default default] Successfully unplugged vif VIFBridge(active=True,address=fa:16:3e:44:c5:42,bridge_name='brq435fb16f-77',has_traffic_filtering=True,id=8a1c11a1-3822-4c92-a677-6fdafd402f97,network=Network(435fb16f-77de-4aa7-a987-c78976458515),plugin='linux_bridge',port_profile=<?>,preserve_on_delete=True,vif_name='tap8a1c11a1-38')
2018-09-29 17:21:57.515 6074 INFO oslo.privsep.daemon [req-c1fb79c2-d47a-4b0b-b9e7-0745518deb9b 4cc1c945837c45ed995f54922fdaf9bd 989c7f5435af432dabd4de1e59d42c65 - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--privsep_context', 'os_brick.privileged.default', '--privsep_sock_path', '/tmp/tmp1iYREs/privsep.sock']
2018-09-29 17:21:58.033 6074 INFO oslo.privsep.daemon [req-c1fb79c2-d47a-4b0b-b9e7-0745518deb9b 4cc1c945837c45ed995f54922fdaf9bd 989c7f5435af432dabd4de1e59d42c65 - default default] Spawned new privsep daemon via rootwrap
2018-09-29 17:21:57.993 30402 INFO oslo.privsep.daemon [req-c1fb79c2-d47a-4b0b-b9e7-0745518deb9b 4cc1c945837c45ed995f54922fdaf9bd 989c7f5435af432dabd4de1e59d42c65 - default default] privsep daemon starting
2018-09-29 17:21:57.996 30402 INFO oslo.privsep.daemon [req-c1fb79c2-d47a-4b0b-b9e7-0745518deb9b 4cc1c945837c45ed995f54922fdaf9bd 989c7f5435af432dabd4de1e59d42c65 - default default] privsep process running with uid/gid: 0/0
2018-09-29 17:21:57.998 30402 INFO oslo.privsep.daemon [req-c1fb79c2-d47a-4b0b-b9e7-0745518deb9b 4cc1c945837c45ed995f54922fdaf9bd 989c7f5435af432dabd4de1e59d42c65 - default default] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none
2018-09-29 17:21:57.998 30402 INFO oslo.privsep.daemon [req-c1fb79c2-d47a-4b0b-b9e7-0745518deb9b 4cc1c945837c45ed995f54922fdaf9bd 989c7f5435af432dabd4de1e59d42c65 - default default] privsep daemon running as pid 30402
2018-09-29 17:21:58.091 6074 WARNING os_brick.initiator.connectors.iscsi [req-c1fb79c2-d47a-4b0b-b9e7-0745518deb9b 4cc1c945837c45ed995f54922fdaf9bd 989c7f5...


Eric Desrochers (slashd) wrote :

Any update on this bug? I saw a commit which seems to be 'abandoned'.

Akihiro Motoki (amotoki) wrote :

Removing milestone target in horizon and lowering the priority to Low (same as in Nova).

Changed in horizon:
milestone: rocky-2 → none
importance: High → Low
Eric Desrochers (slashd) wrote :

Akihiro Motoki (amotoki)

Do you know what happened with this bug? I saw an abandoned fix (which I tried on top of 'Rocky') and it didn't work... is there any other progress I haven't seen?

Eric Desrochers (slashd) wrote :

I tried the abandoned proposed patch on top of 'Rocky' (which still exhibits the issue) and it totally broke 'launch instance' creation... it doesn't seem to work anymore with the more recent OpenStack releases.

I'll try to dig and see what I can come up with.

Kai Wembacher (ktwe) wrote :

I'm facing the same issue in Rocky when launching images with hw_disk_bus=scsi and hw_scsi_model=virtio-scsi using a flavor that provides a swap disk. In this case the device_name should be sda, but Horizon sets it to vda instead. As a result, nova attaches the root disk as vda and the swap disk as sda, which destroys the boot order.

I did some work to port the whole block device mapping part to v2, removing the vda default. This currently requires can_set_mount_point=False (which I guess is the default), because otherwise vol_device_name will be mapped to a text field in the instance creation workflow, which still has to be handled in some way. Also, the fix lacks an update to the spec file.

You can find the current state of my change here:
https://github.com/ktwe/horizon/commit/ef2428e504cec67a177d9c4815e733b9bdba7db2

I will try to complete my work and submit it to the project, but if someone wants to build on my work in the meantime, please feel free to do so.

Eric Desrochers (slashd) wrote :

Kai Wembacher (ktwe)

I was able to test your proposed fix on top of Rocky with an image with the following properties:
hw_disk_bus='scsi', hw_scsi_model='virtio-scsi'

$ openstack volume list --long
+--------------------------------------+------+-----------+------+------+----------+-----------------------------------+--------------------+
| ID | Name | Status | Size | Type | Bootable | Attached to | Properties |
+--------------------------------------+------+-----------+------+------+----------+-----------------------------------+--------------------+
| 39ffefeb-d993-46b4-a829-a5e9852629ce | | in-use | 20 | None | true | Attached to testfix1 on /dev/sda | attached_mode='rw' |

Horizon no longer displays /dev/vda but /dev/sda, as it should when the above properties are set.

Any plan to submit the change to the project anytime soon?

Thanks
Eric

Eric Desrochers (slashd) wrote :

Kai Wembacher (ktwe)

Any progress on submitting the patch upstream to the horizon project?

Kai Wembacher (ktwe) wrote :

Hi Eric,

sorry for the delay. Unfortunately there is still some work to do, as my solution currently works only with can_set_mount_point=False. I will do my best to get it submitted asap.

Best regards,
Kai

Eric Desrochers (slashd) wrote :

Thanks Kai. Please keep me posted of any progress you make and if you need any help.

- Eric

Eric Desrochers (slashd) wrote :

Hi Kai,

Any progress? Is there anything I can help you with?

Eric Desrochers (slashd) wrote :

Hi Kai,

Me again. Are you close to submitting the patch upstream?

Eric Desrochers (slashd) wrote :

I have instrumented horizon and compared the calls made when creating a new instance with a boot volume and when simply attaching a second volume.

Attaching a second volume doesn't pass any device value, which makes me think the decision on the volume name is left to cinder.

But when we launch a VM via horizon with a boot volume, because of this hardcoded 'vda' value, it sends the device value 'vda' to cinder.

So setting vol_device_name to 'null' is IMHO the right thing to do, as it replicates what adding an additional volume already does.

Eric Desrochers (slashd) wrote :

'''
@@ -205,7 +205,6 @@
// REQUIRED for JS logic
vol_create: false,
// May be null
- vol_device_name: 'vda',
vol_delete_on_instance_delete: false,
vol_size: 1
};
@@ -747,11 +746,9 @@
function setFinalSpecBootImageToVolume(finalSpec) {
if (finalSpec.vol_create) {
// Specify null to get Autoselection (not empty string)
- var deviceName = finalSpec.vol_device_name ? finalSpec.vol_device_name : null;
finalSpec.block_device_mapping_v2 = [];
finalSpec.block_device_mapping_v2.push(
{
- 'device_name': deviceName,
'source_type': bootSourceTypes.IMAGE,
'destination_type': bootSourceTypes.VOLUME,
'delete_on_termination': finalSpec.vol_delete_on_instance_delete,
'''

Forcing horizon not to push device_name seems to let cinder make the decision, just like when adding an additional volume to an existing VM, where horizon doesn't pass a device name and leaves cinder to do the math instead.

With the above I was able to:

* VM boot volume without scsi meta data decoration:
Volumes Attached
Attached To
0a0cd660-7ce3-4033-9983-2e1099edc5f0 on /dev/vda

* VM boot volume with scsi meta data decoration:
Volumes Attached
Attached To
91f50dbc-8bdf-4293-84ea-fc5df27b5ee4 on /dev/sda
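A minimal sketch (with a hypothetical image id) of what the finalSpec fragment looks like after the patch above: with no 'device_name' key in the BDM v2 entry, the device name is chosen on the nova/cinder side, so a virtio-scsi image lands on /dev/sda instead of a hardcoded /dev/vda.

```javascript
// Hypothetical spec, mirroring the shape used in setFinalSpecBootImageToVolume().
const finalSpec = {
  vol_create: true,
  vol_delete_on_instance_delete: false,
  source_id: 'aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee'  // hypothetical image id
};

// Build the BDM v2 entry exactly as the patched code does: no device_name.
finalSpec.block_device_mapping_v2 = [{
  source_type: 'image',
  destination_type: 'volume',
  delete_on_termination: finalSpec.vol_delete_on_instance_delete,
  uuid: finalSpec.source_id,
  boot_index: '0',
  volume_size: 20
}];

// Verify that no entry carries a device_name.
const hasDeviceName = finalSpec.block_device_mapping_v2.some(
  (bdm) => 'device_name' in bdm
);
```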

Eric Desrochers (slashd) wrote :

Seems like it might not be necessary for Horizon to pick a device_name if cinder takes over when device_name is absent ^

Thoughts?

Fix proposed to branch: master
Review: https://review.openstack.org/644982

Changed in horizon:
assignee: nobody → Eric Desrochers (slashd)
status: Confirmed → In Progress
Eric Desrochers (slashd) wrote :

As agreed with amotoki,

We will fix this in two phases,

Phase 1: My current fix addresses instance creation using an image with a new volume, in the function setFinalSpecBootImageToVolume(), by leaving it to nova to determine the device name.

Phase 2: Address instance creation using an existing volume or snapshot (which still defaults to 'vda'), handled by another function, setFinalSpecBootFromVolumeDevice(), which uses BDMv1 (legacy) instead of BDMv2. As we fix that part in a second phase, we should consider taking the time to upgrade the code to BDMv2.

So this is the plan right now. Phase 1 is covered here: https://review.openstack.org/644982 and is waiting for Code-Review.

- Eric
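A hypothetical sketch of the phase-2 migration described above: unpacking a legacy BDM v1 mapping value ('<id>:<type>:<size>:<delete-on-termination>', keyed by device name) into a BDM v2 entry, deliberately dropping the device name so nova decides. The field layout and the 'snap' type marker follow the legacy nova mapping-string convention; the helper name is invented for illustration.

```javascript
// Hypothetical helper: convert one legacy (v1) mapping to a v2 entry.
function legacyToV2(deviceName, packed) {
  // packed: '<id>:<type>:<size>:<delete>'; an empty <size> means "unset".
  const [id, type, size, del] = packed.split(':');
  const entry = {
    source_type: type === 'snap' ? 'snapshot' : 'volume',
    destination_type: 'volume',
    uuid: id,
    boot_index: '0',
    delete_on_termination: del === 'true' || del === '1'
  };
  if (size) {
    entry.volume_size = Number(size);
  }
  // deviceName is intentionally not copied into the entry: omitting
  // device_name leaves the naming decision to nova.
  return entry;
}
```

With such a conversion in place, setFinalSpecBootFromVolumeDevice() could always emit block_device_mapping_v2 instead of falling back to the legacy format.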

Changed in nova:
status: Confirmed → Invalid
Changed in cinder:
status: New → Invalid
Akihiro Motoki (amotoki) wrote :

(comment on horizon)
As of the stein/train releases, this issue does not affect VM spawning. It only affects the volume information on the instance detail page (and the output of "nova volume-attachments <server-id>"). Eric's fix in horizon will address it. Even if "nova volume-attachments <server-id>" and the instance detail page say /dev/vda, the volume is attached as /dev/sda inside the VM and the VM launches successfully (I tested with the cirros image from devstack). I guess this is because "as of the 12.0.0 Liberty release, the Nova libvirt driver no longer honors a user-supplied device name" (as described in the Nova API reference [1]).

[1] https://developer.openstack.org/api-ref/compute/?expanded=create-server-detail

Akihiro Motoki (amotoki) wrote :

A similar visible issue in the volume information on the instance detail page happens when creating a new server boot-from-volume with an existing volume or an existing volume snapshot. For example, when a volume is created from an image (with hw_disk_bus=scsi and hw_scsi_model=virtio-scsi), this information is stored in the cinder volume metadata. When a new server is created from this cinder volume, that metadata is honored. Horizon specified 'vda' as device_name, so the VM itself works well (with /dev/sda) for the same reason described in the previous comment (#59), but the instance detail page says the volume is attached as '/dev/vda'. I will send a fix for this case separately as a follow-up patch to Eric's fix.

Fix proposed to branch: master
Review: https://review.openstack.org/648328

Changed in horizon:
assignee: Eric Desrochers (slashd) → Akihiro Motoki (amotoki)
Eric Desrochers (slashd) on 2019-03-28
Changed in horizon:
assignee: Akihiro Motoki (amotoki) → Eric Desrochers (slashd)
Eric Desrochers (slashd) on 2019-03-28
Changed in horizon:
status: In Progress → Fix Released

Change abandoned by Eric Desrochers (<email address hidden>) on branch: stable/rocky
Review: https://review.openstack.org/649028
Reason: Not Cherry Picked from master like I did for others. Will re-submit like I did for others.

Change abandoned by Eric Desrochers (<email address hidden>) on branch: stable/rocky
Review: https://review.openstack.org/649028
Reason: Need to be cherry pick from branch/stein -> branch/rocky

Ivan Kolodyazhny (e0ne) on 2019-04-03
Changed in horizon:
milestone: none → stein-rc2

Reviewed: https://review.openstack.org/649026
Committed: https://git.openstack.org/cgit/openstack/horizon/commit/?id=96ad636f868ddd6350d452e0f0212abcc505e67f
Submitter: Zuul
Branch: stable/stein

commit 96ad636f868ddd6350d452e0f0212abcc505e67f
Author: Eric Desrochers <email address hidden>
Date: Wed Mar 20 14:56:02 2019 -0400

    Not implicitly set vol_device_name to vda

    Using a scsi decorated image with:
    hw_disk_bus='scsi'
    hw_scsi_model='virtio-scsi'

    This solve the case where an instance is
    launched with 'image' selected as boot
    source with a new volume. This will result
    in /dev/vda instead of /dev/sda as it
    should.

    Not specifying device name in
    setFinalSpecBootImageToVolume() leaves the
    decision to nova to determine it.

    Example:
    -------
    VM boot volume without scsi meta data decoration:
    Attached To
    0a0cd660-7ce3-4033-9983-2e1099edc5f0 on /dev/vda

    VM boot volume with scsi meta data decoration:
    Attached To
    91f50dbc-8bdf-4293-84ea-fc5df27b5ee4 on /dev/sda
    --------

    Note: This commit doesn't address cases for
    where instances are launched using existing
    volume and snapshot, this will involve more
    work to migrate the code from BDMv1 to BDMv2.

    Closes-Bug #1560965

    Change-Id: I9d114c2c2e6736a8f1a8092afa568f930b656f09
    (cherry picked from commit 4788c4d2f59b8aa08e5f599a6d2c327b6004dc0c)

tags: added: in-stable-stein

Reviewed: https://review.openstack.org/649028
Committed: https://git.openstack.org/cgit/openstack/horizon/commit/?id=79a4c3bd1aac1473f0d524d98c4cb96475cde498
Submitter: Zuul
Branch: stable/rocky

commit 79a4c3bd1aac1473f0d524d98c4cb96475cde498
Author: Eric Desrochers <email address hidden>
Date: Wed Mar 20 14:56:02 2019 -0400

    Not implicitly set vol_device_name to vda

    Using a scsi decorated image with:
    hw_disk_bus='scsi'
    hw_scsi_model='virtio-scsi'

    This solve the case where an instance is
    launched with 'image' selected as boot
    source with a new volume. This will result
    in /dev/vda instead of /dev/sda as it
    should.

    Not specifying device name in
    setFinalSpecBootImageToVolume() leaves the
    decision to nova to determine it.

    Example:
    -------
    VM boot volume without scsi meta data decoration:
    Attached To
    0a0cd660-7ce3-4033-9983-2e1099edc5f0 on /dev/vda

    VM boot volume with scsi meta data decoration:
    Attached To
    91f50dbc-8bdf-4293-84ea-fc5df27b5ee4 on /dev/sda
    --------

    Note: This commit doesn't address cases for
    where instances are launched using existing
    volume and snapshot, this will involve more
    work to migrate the code from BDMv1 to BDMv2.

    Closes-Bug #1560965

    Change-Id: I9d114c2c2e6736a8f1a8092afa568f930b656f09
    (cherry picked from commit 4788c4d2f59b8aa08e5f599a6d2c327b6004dc0c)

tags: added: in-stable-rocky

Reviewed: https://review.openstack.org/649027
Committed: https://git.openstack.org/cgit/openstack/horizon/commit/?id=0df60f4dba64e45486d9ef4ea5a24e0ee4f05355
Submitter: Zuul
Branch: stable/queens

commit 0df60f4dba64e45486d9ef4ea5a24e0ee4f05355
Author: Eric Desrochers <email address hidden>
Date: Wed Mar 20 14:56:02 2019 -0400

    Not implicitly set vol_device_name to vda

    Using a scsi decorated image with:
    hw_disk_bus='scsi'
    hw_scsi_model='virtio-scsi'

    This solve the case where an instance is
    launched with 'image' selected as boot
    source with a new volume. This will result
    in /dev/vda instead of /dev/sda as it
    should.

    Not specifying device name in
    setFinalSpecBootImageToVolume() leaves the
    decision to nova to determine it.

    Example:
    -------
    VM boot volume without scsi meta data decoration:
    Attached To
    0a0cd660-7ce3-4033-9983-2e1099edc5f0 on /dev/vda

    VM boot volume with scsi meta data decoration:
    Attached To
    91f50dbc-8bdf-4293-84ea-fc5df27b5ee4 on /dev/sda
    --------

    Note: This commit doesn't address cases for
    where instances are launched using existing
    volume and snapshot, this will involve more
    work to migrate the code from BDMv1 to BDMv2.

    Closes-Bug #1560965

    Change-Id: I9d114c2c2e6736a8f1a8092afa568f930b656f09
    (cherry picked from commit 79a4c3bd1aac1473f0d524d98c4cb96475cde498)

tags: added: in-stable-queens

Reviewed: https://review.openstack.org/649029
Committed: https://git.openstack.org/cgit/openstack/horizon/commit/?id=eba87483eaea45905b4f260130f319404aa17fa5
Submitter: Zuul
Branch: stable/pike

commit eba87483eaea45905b4f260130f319404aa17fa5
Author: Eric Desrochers <email address hidden>
Date: Wed Mar 20 14:56:02 2019 -0400

    Not implicitly set vol_device_name to vda

    Using a scsi decorated image with:
    hw_disk_bus='scsi'
    hw_scsi_model='virtio-scsi'

    This solve the case where an instance is
    launched with 'image' selected as boot
    source with a new volume. This will result
    in /dev/vda instead of /dev/sda as it
    should.

    Not specifying device name in
    setFinalSpecBootImageToVolume() leaves the
    decision to nova to determine it.

    Example:
    -------
    VM boot volume without scsi meta data decoration:
    Attached To
    0a0cd660-7ce3-4033-9983-2e1099edc5f0 on /dev/vda

    VM boot volume with scsi meta data decoration:
    Attached To
    91f50dbc-8bdf-4293-84ea-fc5df27b5ee4 on /dev/sda
    --------

    Note: This commit doesn't address cases for
    where instances are launched using existing
    volume and snapshot, this will involve more
    work to migrate the code from BDMv1 to BDMv2.

    Closes-Bug #1560965

    Change-Id: I9d114c2c2e6736a8f1a8092afa568f930b656f09
    (cherry picked from commit 0df60f4dba64e45486d9ef4ea5a24e0ee4f05355)

tags: added: in-stable-pike

Reviewed: https://review.opendev.org/648328
Committed: https://git.openstack.org/cgit/openstack/horizon/commit/?id=5ab15c49da8f85f5855f949ad9b1a4c95a50e714
Submitter: Zuul
Branch: master

commit 5ab15c49da8f85f5855f949ad9b1a4c95a50e714
Author: Akihiro Motoki <email address hidden>
Date: Thu Mar 28 15:33:51 2019 +0900

    Do not specify device_name when creating server with BFV

    When creating a server using an existing volume or volume snapshot,
    horizon specified a device_name 'vda'.
    However the device name depends on bus type or something and
    this causes incorrect volume information in the instance detail page
    (and nova volume-attachments CLI output).
    For detail background, see bug 1560965.

    block_device_mapping v1 required device_name but it is no longer
    required to specify device_name 'vda' in block_device_mapping v2
    in Nova create-server API.

    This commit changes to use block_device_mapping v2 so that
    we can avoid specifying device_name.

    Change-Id: Ice1a6bb2dcddab0a89c99f345d1f2cd101955b28
    Partial-Bug: #1560965
