libvirt selects wrong root device name
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
Cinder | Invalid | Undecided | Unassigned |
OpenStack Compute (nova) | Invalid | Low | Unassigned |
OpenStack Dashboard (Horizon) | Fix Released | Low | Eric Desrochers |
Bug Description
This refers to Liberty; Compute runs with the Xen hypervisor.
When trying to boot an instance from volume via Horizon, the VM fails to spawn because of an invalid block device mapping. I found the cause in the default initial "device_name=vda" in the file /srv/www/
The log file nova-compute.log reports "Ignoring supplied device name: /dev/vda", but in the next step it uses the name anyway and says "Booting with blank volume at /dev/vda".
To test my assumption, I blanked the device_name and edited the array dev_mapping_2 to only append device_name if it's not empty. That works perfectly for booting from Horizon and could be one way to fix this.
But if you use nova boot command, you can still provide (multiple) device names, for example if you launch an instance and attach a blank volume.
It seems that libvirt is indeed ignoring the supplied device name, but only if it's not the root device. If I understand correctly, a user-supplied device_name should also be nulled out for root_device_name and picked by libvirt if it's not valid. Also, the default value for device_name in the Horizon dashboard should be None; if one is supplied, it should be processed and probably validated by libvirt.
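The fix described above can be sketched as a small helper that only includes device_name in a block-device-mapping entry when one was actually supplied, leaving the choice to the hypervisor otherwise. This is purely illustrative; the function name and exact field set are hypothetical, not Horizon's actual code:

```javascript
// Build a block-device-mapping-v2 entry for a boot volume.
// If deviceName is empty or null, the key is omitted entirely so the
// compute driver (libvirt) auto-selects the device (e.g. xvda on Xen).
function buildBootVolumeMapping(sourceId, sizeGb, deviceName) {
  var mapping = {
    source_type: 'image',
    destination_type: 'volume',
    uuid: sourceId,
    boot_index: '0',
    volume_size: sizeGb
  };
  if (deviceName) {
    // Only include a name the user explicitly provided.
    mapping.device_name = deviceName;
  }
  return mapping;
}
```

With an empty device name, the resulting entry carries no device_name key at all, which is what lets the hypervisor pick the correct root device.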
Steps to reproduce from Horizon:
Use Xen as hypervisor
1. Go to the Horizon dashboard and launch an instance
2. Select "Boot from image (creates a new volume)" as Instance Boot Source
Expected result:
Instance starts with /dev/xvda as root device.
Actual result:
Build of instance fails, nova-compute.log reports
"BuildAbortExce
Steps to reproduce from nova cli:
1. Launch an instance from command line via
nova boot --flavor 1 --block-device source=
Expected result:
Instance starts with /dev/xvda as root device.
Actual result:
Build of the instance fails; the device name for vdb is ignored and replaced correctly, but the root device is not.
eblock@nde.ag (eblock) wrote : | #1 |
Changed in nova: | |
status: | New → Confirmed |
importance: | Undecided → Low |
tags: | added: libvirt |
Paul Bourke (pauldbourke) wrote : | #2 |
Hi eblock, I have just come across the same problem, thanks for your useful analysis. Some questions:
1) Should this bug not be logged against Horizon rather than Nova?
2) Is setting the initial device_name in Horizon to blank not enough to make libvirt autoselect the boot device? Or do you explicitly need to remove it from the request dict as you have done in your patch?
Paul Bourke (pauldbourke) wrote : | #3 |
3) You mentioned in your mailing list post that you set the "Device Name" field in Horizon to visible - could you let me know how you did this?
eblock@nde.ag (eblock) wrote : | #4 |
Hi Paul,
> 1) Should this bug not be logged against Horizon rather than Nova?
In my opinion, the faulty behavior in Horizon is only a result of the wrong device name selection by libvirt. If Horizon provides a wrong device name, it should be checked within libvirt (nova), which should then pick a correct name. But I'm hoping for the professionals here to correct me, as I'm quite new to OpenStack.
> 2) Is setting the initial device_name in Horizon to blank not enough to make libvirt autoselect the boot device?
By default, you can't provide your own device name, as the respective field is hidden. So if you launched a VM from volume via Horizon, you would always have the default device_name "vda" passed to nova, which would lead to the termination of that VM. I only made that field visible for testing purposes, because I don't want users to bother with device names. That's why I set the default device_name to blank.
> 3) you set the "Device Name" field in Horizon to visible - could you let me know how you did this?
Sure:
control1:~ # diff -u /srv/www/
--- /srv/www/
+++ /srv/www/
@@ -203,7 +203,7 @@
# can_set_mount_point to True will add the option to set the mount point
# from the UI.
OPENSTACK_
- 'can_set_
+ 'can_set_
'can_
'requires_
}
eblock@nde.ag (eblock) wrote : | #5 |
I have to add another note on your second question:
If you don't remove device_name from the dict when it's empty, apache will report an error like this:
"Invalid input for field/attribute device_name. Value: None. None is not of type 'string'"
That's why I removed it.
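The error above is the Nova API's schema rejecting a null device_name: the key has to be absent, not null. A minimal sketch (hypothetical helper name) of stripping it before the request is sent:

```javascript
// Nova's API requires device_name, when present, to be a string,
// so an empty or null value must be deleted from each mapping entry
// rather than passed through as None/null.
function stripEmptyDeviceName(blockDeviceMappings) {
  blockDeviceMappings.forEach(function (entry) {
    if (!entry.device_name) {
      delete entry.device_name;
    }
  });
  return blockDeviceMappings;
}
```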
eblock@nde.ag (eblock) wrote : | #6 |
I upgraded my cloud environment to Mitaka and was facing the same issue as described. Unfortunately, my first patch mentioned here no longer had any effect. I started digging once again and noticed that the respective code in create_instance.py is no longer executed. So it seems that now it's the code in /srv/www/
---cut here---
control1:~ # diff -u /srv/www/
--- /srv/www/
+++ /srv/www/
@@ -164,8 +164,8 @@
source: [],
// REQUIRED for JS logic
- // May be null
- vol_device_name: 'vda',
+ // Not set by default to let nova choose autom.
+ vol_device_name: 'NOTSET',
vol_size: 1
};
@@ -546,17 +546,32 @@
// Specify null to get Autoselection (not empty string)
var deviceName = finalSpec.
- finalSpec.
- {
- 'device_name': deviceName,
- 'source_type': SOURCE_TYPE_IMAGE,
- 'destination_type': SOURCE_TYPE_VOLUME,
- 'delete_
- 'uuid': finalSpec.
- 'boot_index': '0',
- 'volume_size': finalSpec.vol_size
- }
- );
+ if (finalSpec.
+ finalSpec.
+ {
+// 'device_name': deviceName,
+ 'source_type': SOURCE_TYPE_IMAGE,
+ 'destination_type': SOURCE_TYPE_VOLUME,
+ 'delete_
+ 'uuid': finalSpec.
+ 'boot_index': '0',
+ 'volume_size': finalSpec.vol_size
+ }
+ );
+ }
+ else {
+ finalSpec.
+ {
+ 'device_name': deviceName,
+ 'source_type': SOURCE_TYPE_IMAGE,
+ 'destination_type': SOURCE_TYPE_VOLUME,
+ 'delete_
+ 'uuid': finalSpec.
+ 'boot_index': '0',
+ 'volume_size': finalSpec.vol_size
+ }
+ );
+ }
}
}
---cut here---
I also added horizon to the affected projects of this bug, I'm not s...
Markus Zoeller (markus_z) (mzoeller) wrote : | #7 |
I think bug 1217874 is an older bug which describes the same issue.
eblock@nde.ag (eblock) wrote : | #8 |
Actually, I don't see that as a duplicate.
What I'm describing is the failure to launch an instance at all if you choose to boot it from volume on Xen hypervisor. Attaching volumes to running instances works fine, so it's not related to cinder.
The cause is a default device name 'vda' in the file launch-
Tomasz Głuch (tomekg) wrote : | #9 |
This bug is also visible on Mitaka with the KVM hypervisor.
When booting an instance from a volume created from an image with "hw_scsi_
A further bug caused by this issue is the impossibility of attaching another volume, as nova volume-attach tries to use the seemingly free name /dev/sda for the next device. This causes a conflict when building the XML in libvirt:
2016-10-20 17:38:41.780 13818 ERROR oslo_messaging.
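The attach conflict happens because the name allocator only sees the names previously recorded, not what the guest actually uses: once 'vda' is recorded for a disk the guest mounted as 'sda', 'sda' looks free and gets handed out twice. A toy next-free-name picker (purely illustrative, not nova's real allocator) shows the mechanism:

```javascript
// Return the next device name with the given prefix that is not
// in usedNames. If the recorded names are wrong (e.g. 'vda' recorded
// for a disk the guest actually sees as 'sda'), this hands out a
// duplicate name and libvirt's XML build fails.
function nextFreeDevice(prefix, usedNames) {
  var letters = 'abcdefghijklmnopqrstuvwxyz';
  for (var i = 0; i < letters.length; i++) {
    var name = prefix + letters[i];
    if (usedNames.indexOf(name) === -1) {
      return name;
    }
  }
  return null; // all 26 names taken
}
```

With usedNames = ['vda'] (the wrongly recorded root disk), nextFreeDevice('sd', usedNames) returns 'sda', exactly the duplicate described above.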
eblock@nde.ag (eblock) wrote : | #10 |
This issue is developing. Just to summarize:
First it was the Python file /srv/www/
/srv/www/
instead of
/srv/www/
These files are identical, so I applied the same patch to that file and now it works again with the block devices.
I hope someone will fix this, it's kind of annoying for users (and admins).
Matthew Taylor (matthew-taylor-f) wrote : | #11 |
@eblock - can you please paste a diff of your create_instance.py file?
eblock@nde.ag (eblock) wrote : | #12 |
I can't provide a diff of that file; it's the distributed file I'm using, since my changes from March no longer had any effect. The only differences regarding Horizon are in the files /srv/www/
/srv/www/
Changed in horizon: | |
status: | New → Confirmed |
Matthew Taylor (matthew-taylor-f) wrote : | #13 |
I can confirm that what you provided in #6 fixes the issue (with Mitaka; it looks the same for Newton), and it makes complete sense to let the image metadata or the hypervisor software (i.e. xen/kvm) dictate what the primary disk is going to be labeled.
I also had the same issue, where we had set "disk_prefix = sd" in nova.conf, along with "hw_scsi_
Subsequent disks being added from there would appear as /dev/sd[b/d/c] in horizon; however, they would not mount due to the device being in use, causing all sorts of issues from there (to the point that the VM will not boot if restarted).
After applying the patch (from #6), disks appear as expected in order (eg. /dev/sd[a/b/c]).
OpenStack Infra (hudson-openstack) wrote : Fix proposed to horizon (master) | #14 |
Fix proposed to branch: master
Review: https:/
Changed in horizon: | |
assignee: | nobody → Matthew Taylor (matthew-taylor-f) |
status: | Confirmed → In Progress |
Matthew Taylor (matthew-taylor-f) wrote : | #15 |
Needs more work for Newton. This'll be a WIP for me for a while.
Paul Bourke (pauldbourke) wrote : | #16 |
- 0001-Use-block-device-mapping-v2-for-boot-from-vol.patch Edit (2.7 KiB, text/plain)
Testing on Mitaka and confirming this is still very messy. I'd be interested to know why the relevant code seems to have moved from create_instance.py to JavaScript, as eblock has pointed out.
For some reason I haven't figured out yet, eblock's latest patch is not working for me, though a similar patch to the function below (setFinalSpecBo
Here is some relevant info while it's fresh in my mind:
* Both image->volume and volume->volume appear to work via the API but not Horizon, so this is absolutely a Horizon problem imo.
* Reading the docs on Nova BDM, specifying the device name as Horizon currently does (vda) is not recommended, so eblock's patch should be the correct approach.
* setFinalSpecBoo
OpenStack Infra (hudson-openstack) wrote : Change abandoned on horizon (master) | #17 |
Change abandoned by David Lyle (<email address hidden>) on branch: master
Review: https:/
Reason: This review is > 4 weeks without comment, and failed Jenkins the last time it was checked. We are abandoning this for now. Feel free to reactivate the review by pressing the restore button and leaving a 'recheck' comment to get fresh test results.
Changed in horizon: | |
assignee: | Matthew Taylor (matthew-taylor-f) → nobody |
status: | In Progress → New |
status: | New → Confirmed |
Sean Dague (sdague) wrote : | #18 |
Automatically discovered version liberty in description. If this is incorrect, please update the description to include 'nova version: ...'
tags: | added: openstack-version.liberty |
eblock@nde.ag (eblock) wrote : | #19 |
I don't know if I was supposed to do that, but since this description also applies to Mitaka, I added the respective tag.
tags: | added: openstack-version.mitaka |
Stefanos Koffas (skoffas) wrote : | #20 |
- 1560965.patch Edit (3.9 KiB, text/plain)
Hello,
I had the same problem in OpenStack Newton, exactly as Tomasz Głuch (tomekg) described in #9. I applied the patch proposed in #6, but when I tried to create a volume from an image and then boot a VM from that volume, the problem still existed. As Paul Bourke suggested in #16, I changed (setFinalSpecBo
Maximiliano (massimo-6) wrote : | #21 |
Hi Guys,
Any news about this bug? It still happens in Ocata.
If you use nova along with "hw_scsi_
Nova uses the same name for both the / volume and the new cinder one, both are /dev/sda.
Stefanos Koffas (skoffas) wrote : | #22 |
Hi Maximiliano,
Have you tried the patch that I attached in my comment above (#20)? I think that it
solves the problem. You can find the file that is used by the OpenStack dashboard
(horizon) in /var/lib/
and if you have any problems, tell me.
PS: I know that if I believe that my patch solves the problem I should make a proper PR,
and I will give it a try when I find some free time :)
Annie Melen (anniemelen) wrote : | #23 |
Hi, guys!
I've just encountered the same bug as you in the Pike release. Stefanos's patch helped me to solve the problem with the wrong device mapping, but even after that my instances won't start because of a 'No bootable device' error.
Dumpxml output:
<devices>
<emulator>
<disk type='network' device='disk'>
<driver name='qemu' type='raw' cache='writeback' discard='unmap'/>
<auth username=
<secret type='ceph' uuid='0a2cc1e3-
</auth>
<source protocol='rbd' name='cinder_
<host name='10.0.230.101' port='6789'/>
<host name='10.0.230.107' port='6789'/>
<host name='10.0.230.113' port='6789'/>
</source>
<
<target dev='sda' bus='scsi'/>
<
<alias name='scsi0-
<address type='drive' controller='0' bus='0' target='0' unit='1'/>
</disk>
<disk type='network' device='disk'>
<driver name='qemu' type='raw' cache='writeback' discard='unmap'/>
<auth username=
<secret type='ceph' uuid='0a2cc1e3-
</auth>
<source protocol='rbd' name='nova_
<host name='10.0.230.101' port='6789'/>
<host name='10.0.230.107' port='6789'/>
<host name='10.0.230.113' port='6789'/>
</source>
<
<target dev='sdb' bus='scsi'/>
<alias name='scsi0-
<address type='drive' controller='0' bus='0' target='0' unit='0'/>
</disk>
<controller type='scsi' index='0' model='
<alias name='scsi0'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
</controller>
I've tried images like Ubuntu-
Could anyone help me, please?
eblock@nde.ag (eblock) wrote : | #24 |
Hi Annie,
have you been able to launch any instances in your cloud yet?
There's one thing I check before I upload any images to glance: I make sure that the image has a partition table. You can find more details in [0].
Depending on your hypervisor and the images, it's possible that the image is missing kernel modules. I had to deal with that when trying to migrate Xen instances to KVM [1].
You could also create your own image by installing from an ISO and see if that image works [2].
There's also a mailing list for these kinds of questions [3]; I believe the chances of getting help are better there.
And last but not least, visit [4], also very helpful.
[0] http://
[1] http://
[2] https:/
[3] http://
[4] https:/
Annie Melen (anniemelen) wrote : | #25 |
Thanks for response!
Of course, I'm able to launch instances from the same images when the bus is virtio; that works fine. Also, the virtio_scsi kernel modules are present in all images I use for testing.
What is really interesting: when I tried to define a new instance from the same XML file but manually changed scsi to virtio, the instance booted normally... As far as I can see, there is a problem in the hypervisor configuration (I use nova-compute-kvm), but obviously I'm missing something important.
eblock@nde.ag (eblock) wrote : | #26 |
Have you removed/added the respective image properties?
openstack image set --property hw_scsi_
or
openstack image set --remove-property hw_scsi_model --remove-property hw_disk_bus --remove-property hw_qemu_guest_agent --remove-property os_require_quiesce 8a143d74-
Annie Melen (anniemelen) wrote : | #27 |
Actually, I've created two identical images from one source and updated properties on one of them:
# openstack image show -f value -c properties 404dc2d6-
direct_
Instances from this image don't work properly ('No bootable device' in console output).
Another image has no properties except direct_url:
# openstack image show -f value -c properties b9549d48-
direct_
Instances from this image are booting fine.
eblock@nde.ag (eblock) wrote : | #28 |
I still think the image is missing some kernel modules. Could you paste the output of
find /lib/modules/
Annie Melen (anniemelen) wrote : | #29 |
Here, take a look
root@test:~# find /lib/modules/ -name *scsi*
/lib/modules/
/lib/modules/
/lib/modules/
/lib/modules/
/lib/modules/
/lib/modules/
/lib/modules/
/lib/modules/
/lib/modules/
/lib/modules/
/lib/modules/
/lib/modules/
/lib/modules/
/lib/modules/
/lib/modules/
/lib/modules/
/lib/modules/
/lib/modules/
/lib/modules/
/lib/modules/
/lib/modules/
/lib/modules/
Deepa (dpaclt) wrote : | #30 |
The same issue persists on the Ocata version too; we are using Dell Unity 300F.
>>>>>>>>>>>>>>>>
WARNING nova.virt.
973e83ccaabe71 - - -] [instance: 64cc8284-
2018-02-22 06:23:31.411 3937535 INFO nova.virt.
3ccaabe71 - - -] [instance: 64cc8284-
>>>>>>>
Akihiro Motoki (amotoki) wrote : | #31 |
Wearing my horizon team hat, I would like to clarify the current situation. This bug is now associated with horizon, nova and cinder.
Regarding boot from volume, horizon talks only to the Nova API, so the problem can be split into two parts:
(a) the create-server parameters passed from horizon to nova
(b) nova and cinder behavior around create-server with boot-from-volume
The following are questions from horizon side:
(1) Is the problem only (a) above?
(2) If (1) is yes, apart from test code, does the patch attached in #20 solve the problem?
(3) If (1) is no, can we use this bug to focus on (a) and split (b) into a separate bug?
We don't have Xen hypervisors, so we cannot actually test it. I think this is the main reason horizon team failed to address this for a long time. Hopefully, someone who can test Xen hypervisors can propose a fix.
Changed in horizon: | |
importance: | Undecided → High |
milestone: | none → rocky-1 |
eblock@nde.ag (eblock) wrote : | #32 |
We switched from Xen to KVM a couple of months ago, so I won't be able to test any changes, at least not easily. I can check if we have a spare compute node for testing purposes.
Someone already confirmed this for Ocata too, but I just ran another test in our environment. I don't know if I can really clarify anything, because the source code in use has changed over the last two years; in the meantime my own patch doesn't work anymore, and the patch from #20 also fails.
I have a base image with these properties:
'hw_scsi_model': 'virtio-scsi', 'hw_disk_bus': 'scsi'
I launch an instance from that image to a volume and the instance gets it right:
boot-from-vol:~ # lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 11G 0 disk
├─sda1 8:1 0 501M 0 part /boot
└─sda2 8:2 0 9,5G 0 part
├─vg0-usr 254:0 0 3G 0 lvm /usr
├─vg0-root 254:1 0 1,4G 0 lvm /
├─vg0-tmp 254:2 0 1G 0 lvm /tmp
└─vg0-var 254:3 0 1,6G 0 lvm /var
But Horizon shows /dev/vda as the attached volume.
Attaching a second (empty) volume is correctly processed as sdb within the VM:
boot-from-vol:~ # lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 11G 0 disk
├─sda1 8:1 0 501M 0 part /boot
└─sda2 8:2 0 9,5G 0 part
├─vg0-usr 254:0 0 3G 0 lvm /usr
├─vg0-root 254:1 0 1,4G 0 lvm /
├─vg0-tmp 254:2 0 1G 0 lvm /tmp
└─vg0-var 254:3 0 1,6G 0 lvm /var
sdb 8:16 0 1G 0 disk
but from the Cinder/Horizon perspective it's attached as /dev/sda. A reboot of the VM succeeds without failures.
The patch doesn't work anymore (openSUSE Leap 42.3). I'm not able to tell whether a split of the issue(s) is recommended; I would leave that for the developers to decide.
I guess I didn't really clear anything up, but maybe this will get fixed anyway.
eblock@nde.ag (eblock) wrote : | #33 |
Another comment for Annie:
I believe it's the same user, but there are two issues reported on ask.openstack.org:
It seems to be related to libvirt updates, but it's not resolved yet.
Annie Melen (anniemelen) wrote : | #34 |
Akihiro,
I believe this bug no longer applies to horizon after applying the patch from comment #20. The nova-compute log shows me correct volume names when the image properties above are applied:
2018-02-26 16:12:19.403 40927 INFO nova.compute.claims [req-70006123-
2018-02-26 16:12:19.774 40927 WARNING nova.virt.
2018-02-26 16:12:19.892 40927 INFO nova.virt.
So if we split the bug into two parts, (a) and (b), we fall into (b) and conclude that the bug lies somewhere in nova or libvirt.
Annie Melen (anniemelen) wrote : | #35 |
Eblock,
can you tell me your versions of KVM, libvirt and nova?
eblock@nde.ag (eblock) wrote : | #36 |
Sure:
- qemu-kvm-
- libvirt-
- openstack-
Annie Melen (anniemelen) wrote : | #37 |
Well, I didn't find any differences between the releases that might be critical.
- nova-compute 16.0.4-
- libvirt-daemon 3.6.0-1ubuntu6.
- qemu-kvm 1:2.10+
And still I'm not getting what's going wrong... Maybe I missed something; could you check my config, please?
*compute* nova.conf
[DEFAULT]
instance_
vif_plugging_
vif_plugging_
force_raw_images = true
instances_path = /var/lib/
instance_
block_device_
block_device_
my_ip = 172.31.206.25
transport_url = rabbit:
[api]
auth_strategy = keystone
[api_database]
[barbican]
[cache]
memcache_servers = control01:
[cells]
[cinder]
os_region_name = RegionOne
[compute]
[conductor]
[console]
[consoleauth]
[cors]
[crypto]
[database]
[ephemeral_
[filter_scheduler]
[glance]
api_servers = http://
[guestfs]
[healthcheck]
[hyperv]
[ironic]
[key_manager]
[keystone_
auth_uri = http://
region_name = RegionOne
memcached_servers = control01:
auth_url = http://
project_domain_name = default
user_domain_name = default
project_name = service
auth_type = password
username = nova
password = 4b0ade06afeb11e
[libvirt]
live_migration_
disk_cachemodes = "network=writeback"
images_type = rbd
images_rbd_pool = nova_pool
images_
hw_disk_discard = unmap
rbd_user = cinder_std1
rbd_secret_uuid = 0db560bd-
[matchmaker_redis]
[metrics]
[mks]
[neutron]
url = http://
region_name = RegionOne
service_
metadata_
auth_type = password
auth_url = http://
project_name = service
project_domain_name = default
username = neutron
user_domain_name = default
password = 5e590ddcb4c511e
[notifications]
notify_
[osapi_v21]
[oslo_concurrency]
lock_path = /var/lib/nova/tmp
[oslo_messaging
[oslo_messaging
[oslo_messaging
driver = messagingv2
topics = notifications,
retry =-1
[oslo_messaging
rabbit_
rabbit_
rabbit_ha_queues = true
[oslo_messaging
[oslo_middleware]
[oslo_policy]
[pci]
[placement]
os_region_name = RegionOne
auth_type = password
auth_url = http://
project_name = service
project_domain_name = default
username = placement
user_domain_name = default
password = e7d9d0b8afee11e
[quota]
[rdp]
[remote_debug]
[scheduler]
[serial_console]
[service_user]
[spice]
[trusted_computing]
[upgrade_levels]
[vendordata_
[vmware]
vnc_port = 5900
[vnc]
enabled = true
vncserver_listen = 0.0.0.0
vncserver_
novncproxy_base_url = http://
novncproxy_host = compute01
[workarounds]
[wsgi]
[xenserver]
[xvp...
eblock@nde.ag (eblock) wrote : | #38 |
I have to retract my statement about the failing patch: it does work for image -> volume. Unfortunately, none of the suggested patches (in #16, #20) for volume -> volume works for me. I have tried them in different ways and also in combination; nothing works, and nova-api reports an error like this:
HTTP exception thrown: Can not find requested image
My current setFinalSpecBoo
---cut here---
function setFinalSpecBoo
if (finalSpec.
// Specify null to get Autoselection (not empty string)
var deviceName = finalSpec.
if (finalSpec.
{
}
);
}
else {
{
}
);
}
finalSpec
}
}
---cut here---
I'm not sure what I have to change here to make it work. At least image -> volume works again.
eblock@nde.ag (eblock) wrote : | #39 |
Ok, after some trial & error it seems like this works for volume -> volume:
---cut here---
function setFinalSpecBoo
var deviceName = finalSpec.
if (finalSpec.
{
'uuid': finalSpec.
}
);
}
else {
':',
'::',
].join('');
}
finalSpec
}
---cut here---
If a device name is provided it will use the legacy bdm, but if the default is "NOTSET" it will create a new bdm. I'm not sure about the legacy part and what consequences there could be; I just tried it without changing that part.
Maybe it should be changed to bdm_v2 too. Anyway, with these changes I could launch an instance from volume with a non-default device name (/dev/sda); attaching another volume also results in a correct device name in horizon/cinder and from within the VM (/dev/sdb).
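The branching described in that comment can be sketched as: keep the legacy block_device_mapping string when a concrete device name is set, and fall back to a v2 mapping without a device_name when the 'NOTSET' placeholder is found. A simplified illustration (the legacy string format is an assumption based on the classic <dev>=<id>:<type>:<size>:<delete-on-terminate> syntax, not taken from the truncated patch above):

```javascript
var SOURCE_TYPE_VOLUME = 'volume';

// Decide between legacy BDM (explicit device name supplied) and BDM v2
// (no device_name, so the compute driver auto-selects the device).
function chooseBootMapping(volumeId, deviceName) {
  if (deviceName && deviceName !== 'NOTSET') {
    // Legacy format: "<device>=<volume-id>:::<delete-on-terminate>"
    return { block_device_mapping: deviceName + '=' + volumeId + ':::0' };
  }
  return {
    block_device_mapping_v2: [{
      source_type: SOURCE_TYPE_VOLUME,
      destination_type: SOURCE_TYPE_VOLUME,
      uuid: volumeId,
      boot_index: '0'
    }]
  };
}
```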
David Worth (throwsb) wrote : | #40 |
Hi,
I am a newbie to OpenStack and I am setting up a lab environment using OpenStack-Ansible (OSA) for a deployment of Pike. I am not sure if what I am running into is related to this bug. I originally thought I was hitting an iSCSI issue, but as I went through this thread, it seems to be related.
On the initial deploy, I was able to spin up new instances from images. However, things stopped working. The only thing I can think of is that something was updated during an OSA redeployment.
Some of what I am seeing in the nova-compute log is as follows:
Mar 2 23:29:33 helos-lab02 nova-compute: 2018-03-02 23:29:28.441 21738 WARNING nova.virt.
Mar 2 23:29:33 helos-lab02 nova-compute: 2018-03-02 23:29:28.592 21738 INFO nova.virt.
Mar 2 23:29:33 helos-lab02 nova-compute: 2018-03-02 23:29:30.990 21738 INFO nova.virt.
Mar 2 23:29:33 helos-lab02 nova-compute: 2018-03-02 23:29:31.001 21738 INFO os_brick.
Mar 2 23:29:33 helos-lab02 nova-compute: 2018-03-02 23:29:31.055 21738 WARNING os_brick.
Mar 2 23:29:53 helos-lab02 nova-compute: 2018-03-02 23:29:44.941 21738 INFO os_brick.
Mar 2 23:29:53 helos-lab02 nova-compute: 2018-03-02 23:29:44.993 21738 WARNING os_brick.
Mar 2 23:29:53 helos-lab02 nova-compute: : VolumeDeviceNot
The task would die and I would get a host-not-found error in horizon. I tried using the patch in #20 as well as eblock's update above, and neither worked. I saw the js file at 2 locations for horizon:
/openstack/
Changed in horizon: | |
milestone: | rocky-1 → rocky-2 |
Urbansh Xie (urbanshxie) wrote : | #41 |
Hi, I also hit this bug under Pike (Ocata doesn't seem to have this problem). If I choose an image under "Select Boot Source" and set "Create New Volume" to no, I can create an instance normally. However, if I set "Create New Volume" to yes and type in a size, say 10, I get the error "Boot failed: not a bootable disk" in the console and can see an error in the log similar to #40. Also, I'm sure cinder is correctly configured, because this 10 GB volume is created and shows as in-use with that instance.
Also, if I first create an instance and then create a volume in the dashboard or via SSH, I can attach this volume to the instance successfully.
lixiaolong (kong62) wrote : | #42 |
Hi, I also hit this bug under Queens.
When I use nova reboot --hard <uuid>, the instance can be rebooted, but nova-compute.log shows these:
2018-09-29 17:21:56.348 6074 INFO nova.compute.
2018-09-29 17:21:57.407 6074 WARNING nova.compute.
2018-09-29 17:21:57.512 6074 INFO nova.virt.
2018-09-29 17:21:57.513 6074 INFO os_vif [req-c1fb79c2-
2018-09-29 17:21:57.515 6074 INFO oslo.privsep.daemon [req-c1fb79c2-
2018-09-29 17:21:58.033 6074 INFO oslo.privsep.daemon [req-c1fb79c2-
2018-09-29 17:21:57.993 30402 INFO oslo.privsep.daemon [req-c1fb79c2-
2018-09-29 17:21:57.996 30402 INFO oslo.privsep.daemon [req-c1fb79c2-
2018-09-29 17:21:57.998 30402 INFO oslo.privsep.daemon [req-c1fb79c2-
2018-09-29 17:21:57.998 30402 INFO oslo.privsep.daemon [req-c1fb79c2-
2018-09-29 17:21:58.091 6074 WARNING os_brick.
Eric Desrochers (slashd) wrote : | #43 |
Any update on this bug? I saw a commit which seems to be 'abandoned'.
Akihiro Motoki (amotoki) wrote : | #44 |
Removing milestone target in horizon and lowering the priority to Low (same as in Nova).
Changed in horizon: | |
milestone: | rocky-2 → none |
importance: | High → Low |
Eric Desrochers (slashd) wrote : | #45 |
Akihiro Motoki (amotoki)
Do you know what happened with that bug? I saw an abandoned fix (which I tried on top of 'Rocky') and it didn't work... is there any other progress I haven't seen?
Eric Desrochers (slashd) wrote : | #46 |
I tried the proposed 'abandoned' patch on top of 'Rocky' (which still exhibits the issue) and it totally broke the 'launch instance' creation... it doesn't seem to work anymore with the more recent OpenStack releases.
I'll try to dig and see what I can come up with.
Kai Wembacher (ktwe) wrote : | #47 |
I'm facing the same issue in Rocky when launching images with hw_disk_bus=scsi and hw_scsi_
I did some work to port the whole block device mapping part to v2, removing the vda default. This currently requires can_set_
You can find the current state of my change here:
https:/
I will try to complete my work and submit it to the project, but if someone wants to build on my work in the meantime, please feel free to do so.
Eric Desrochers (slashd) wrote : | #48 |
Kai Wembacher (ktwe)
I was able to test your proposal fix on top of Rocky with an image with the following properties:
hw_disk_bus='scsi', hw_scsi_
$ openstack volume list --long
+------
| ID | Name | Status | Size | Type | Bootable | Attached to | Properties |
+------
| 39ffefeb-
Horizon no longer displays /dev/vda but /dev/sda, as it should when the above properties are set.
Any plans to submit the change to the project anytime soon?
Thanks
Eric
Eric Desrochers (slashd) wrote : | #49 |
Kai Wembacher (ktwe)
Any progress on submitting the patch upstream to the horizon project?
Kai Wembacher (ktwe) wrote : | #50 |
Hi Eric,
sorry for the delay. Unfortunately there is still some work to do, as my solution won't work only with can_set_
Best regards,
Kai
Eric Desrochers (slashd) wrote : | #51 |
Thanks Kai. Please keep me posted of any progress you make and if you need any help.
- Eric
Eric Desrochers (slashd) wrote : | #52 |
Hi Kai,
Any progress ? Anything I can help you with ?
Eric Desrochers (slashd) wrote : | #53 |
Hi Kai,
Me again. Are you close to submitting the patch upstream?
Eric Desrochers (slashd) wrote : | #54 |
I have instrumented horizon and compared the calls when creating a new instance with a boot volume and when simply attaching a second volume.
Attaching a second volume doesn't pass any device value, which makes me think the decision of naming the volume is left to cinder.
But when we launch a VM via horizon with a boot volume, because of this 'vda' value being hardcoded, it sends the device value 'vda' to cinder.
So setting vol_device_name to 'null' is IMHO the right thing to do, as it replicates what adding an additional volume already does.
Eric Desrochers (slashd) wrote : | #55 |
'''
@@ -205,7 +205,6 @@
// REQUIRED for JS logic
vol_create: false,
// May be null
- vol_device_name: 'vda',
vol_delete_
vol_size: 1
};
@@ -747,11 +746,9 @@
function setFinalSpecBoo
if (finalSpec.
// Specify null to get Autoselection (not empty string)
- var deviceName = finalSpec.
finalSpec.
finalSpec.
{
- 'device_name': deviceName,
'source_type': bootSourceTypes
'destination_type': bootSourceTypes
'delete_
'''
Forcing horizon not to push device_name seems to make cinder take the decision, just like when adding an additional volume to an existing VM, where horizon doesn't pass a device name and leaves cinder to do the math instead.
With the above I was able to :
* VM boot volume without scsi meta data decoration:
Volumes Attached
Attached To
0a0cd660-
* VM boot volume with scsi meta data decoration:
Volumes Attached
Attached To
91f50dbc-
Eric Desrochers (slashd) wrote : | #56 |
It seems it might not be necessary for Horizon to pick a device_name at all, if cinder takes over when device_name is absent ^
Thoughts?
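For comparison, the two request shapes instrumented here might look roughly like this (the field names are assumptions based on the nova API, not captured payloads; only the presence or absence of device_name matters):

```javascript
// Attaching a second volume: Horizon sends no device, so the backend
// assigns the next free name itself.
var attachVolumeRequest = {
  volumeAttachment: { volumeId: 'VOLUME_UUID' }
};

// Boot-from-volume before the fix: 'vda' was hardcoded, which is wrong
// for virtio-scsi images whose root disk is really sda.
var bootFromVolumeBefore = {
  device_name: 'vda',
  source_type: 'image',
  destination_type: 'volume',
  boot_index: '0',
  volume_size: 10
};

// Boot-from-volume after the fix: device_name omitted, mirroring the
// attach case and letting nova/cinder decide.
var bootFromVolumeAfter = {
  source_type: 'image',
  destination_type: 'volume',
  boot_index: '0',
  volume_size: 10
};
```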
OpenStack Infra (hudson-openstack) wrote : Fix proposed to horizon (master) | #57 |
Fix proposed to branch: master
Review: https:/
Changed in horizon: | |
assignee: | nobody → Eric Desrochers (slashd) |
status: | Confirmed → In Progress |
Eric Desrochers (slashd) wrote : | #58 |
As agreed with amotoki,
We will fix this in two phases,
Phase 1: My current fix addresses instance creation using an image with a new volume, in the function setFinalSpecBoo
Phase 2: Address instance creation using an existing volume or snapshot (which still defaults to 'vda'), using another function setFinalSpecBoo
So this is the plan right now. Phase 1 is cover here: https:/
- Eric
Changed in nova: | |
status: | Confirmed → Invalid |
Changed in cinder: | |
status: | New → Invalid |
Akihiro Motoki (amotoki) wrote : | #59 |
(comment on horizon)
As of stein/train release, this issue does not affect VM spawning. It just affects volume information of the instance detail page (and the output of "nova volume-attachments <server-id>"). Eric's fix in horizon will address it. Even if "nova volume-attachments <server-id>" and the instance detail page say /dev/vda, a volume is attached as /dev/sda inside a VM and the VM successfully launched (I tested with cirros image from devstack). I guess this is because of the fact "Note that as of the 12.0.0 Liberty release, the Nova libvirt driver no longer honors a user-supplied device name" (as described in the Nova API reference [1]).
[1] https:/
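The mismatch amotoki describes (API reports /dev/vda, guest sees /dev/sda) can be illustrated with a small sketch. Since Liberty, the libvirt driver derives the guest device name from the disk bus rather than honouring a user-supplied name; the mapping table and function below are illustrative assumptions, not Nova's actual code.

```python
# Hedged sketch of bus-dependent device naming: libvirt exposes a disk
# under a prefix determined by the bus, regardless of any device_name
# supplied in the API request. This table is illustrative only.

BUS_PREFIX = {
    "virtio": "vd",   # /dev/vda
    "scsi": "sd",     # /dev/sda (e.g. images with hw_disk_bus=scsi)
    "ide": "hd",      # /dev/hda
    "xen": "xvd",     # /dev/xvda on Xen
}

def guest_device_name(disk_bus, index=0):
    """Name the guest actually sees, independent of the supplied device_name."""
    return "/dev/%s%s" % (BUS_PREFIX[disk_bus], chr(ord("a") + index))

# Horizon hard-coded "vda", but with a scsi-decorated image the guest sees:
print(guest_device_name("scsi"))  # /dev/sda
# and on Xen (the original reporter's setup):
print(guest_device_name("xen"))   # /dev/xvda
```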
Akihiro Motoki (amotoki) wrote : | #60 |
A similar visible issue in the volume information on the instance detail page happens when creating a new server using boot-from-volume with an existing volume or an existing volume snapshot. For example, when a volume is created based on an image (with hw_disk_bus=scsi and hw_scsi_
OpenStack Infra (hudson-openstack) wrote : | #61 |
Fix proposed to branch: master
Review: https:/
Changed in horizon: | |
assignee: | Eric Desrochers (slashd) → Akihiro Motoki (amotoki) |
Changed in horizon: | |
assignee: | Akihiro Motoki (amotoki) → Eric Desrochers (slashd) |
Changed in horizon: | |
status: | In Progress → Fix Released |
OpenStack Infra (hudson-openstack) wrote : Fix proposed to horizon (stable/stein) | #62 |
Fix proposed to branch: stable/stein
Review: https:/
OpenStack Infra (hudson-openstack) wrote : Fix proposed to horizon (stable/queens) | #63 |
Fix proposed to branch: stable/queens
Review: https:/
OpenStack Infra (hudson-openstack) wrote : Fix proposed to horizon (stable/rocky) | #64 |
Fix proposed to branch: stable/rocky
Review: https:/
OpenStack Infra (hudson-openstack) wrote : Fix proposed to horizon (stable/pike) | #65 |
Fix proposed to branch: stable/pike
Review: https:/
OpenStack Infra (hudson-openstack) wrote : Change abandoned on horizon (stable/rocky) | #66 |
Change abandoned by Eric Desrochers (<email address hidden>) on branch: stable/rocky
Review: https:/
Reason: Not Cherry Picked from master like I did for others. Will re-submit like I did for others.
OpenStack Infra (hudson-openstack) wrote : | #67 |
Change abandoned by Eric Desrochers (<email address hidden>) on branch: stable/rocky
Review: https:/
Reason: Need to be cherry pick from branch/stein -> branch/rocky
Changed in horizon: | |
milestone: | none → stein-rc2 |
OpenStack Infra (hudson-openstack) wrote : Fix merged to horizon (stable/stein) | #68 |
Reviewed: https:/
Committed: https:/
Submitter: Zuul
Branch: stable/stein
commit 96ad636f868ddd6
Author: Eric Desrochers <email address hidden>
Date: Wed Mar 20 14:56:02 2019 -0400
Not implicitly set vol_device_name to vda
Using a scsi decorated image with:
hw_
hw_
This solve the case where an instance is
launched with 'image' selected as boot
source with a new volume. This will result
in /dev/vda instead of /dev/sda as it
should.
Not specifying device name in
setFinalSpe
decision to nova to determine it.
Example:
-------
VM boot volume without scsi meta data decoration:
Attached To
0a0cd660-
VM boot volume with scsi meta data decoration:
Attached To
91f50dbc-
--------
Note: This commit doesn't address cases for
where instances are launched using existing
volume and snapshot, this will involve more
work to migrate the code from BDMv1 to BDMv2.
Closes-Bug #1560965
Change-Id: I9d114c2c2e6736
(cherry picked from commit 4788c4d2f59b8aa
tags: | added: in-stable-stein |
OpenStack Infra (hudson-openstack) wrote : Fix merged to horizon (stable/rocky) | #69 |
Reviewed: https:/
Committed: https:/
Submitter: Zuul
Branch: stable/rocky
commit 79a4c3bd1aac147
Author: Eric Desrochers <email address hidden>
Date: Wed Mar 20 14:56:02 2019 -0400
Not implicitly set vol_device_name to vda
Using a scsi decorated image with:
hw_
hw_
This solve the case where an instance is
launched with 'image' selected as boot
source with a new volume. This will result
in /dev/vda instead of /dev/sda as it
should.
Not specifying device name in
setFinalSpe
decision to nova to determine it.
Example:
-------
VM boot volume without scsi meta data decoration:
Attached To
0a0cd660-
VM boot volume with scsi meta data decoration:
Attached To
91f50dbc-
--------
Note: This commit doesn't address cases for
where instances are launched using existing
volume and snapshot, this will involve more
work to migrate the code from BDMv1 to BDMv2.
Closes-Bug #1560965
Change-Id: I9d114c2c2e6736
(cherry picked from commit 4788c4d2f59b8aa
tags: | added: in-stable-rocky |
OpenStack Infra (hudson-openstack) wrote : Fix merged to horizon (stable/queens) | #70 |
Reviewed: https:/
Committed: https:/
Submitter: Zuul
Branch: stable/queens
commit 0df60f4dba64e45
Author: Eric Desrochers <email address hidden>
Date: Wed Mar 20 14:56:02 2019 -0400
Not implicitly set vol_device_name to vda
Using a scsi decorated image with:
hw_
hw_
This solve the case where an instance is
launched with 'image' selected as boot
source with a new volume. This will result
in /dev/vda instead of /dev/sda as it
should.
Not specifying device name in
setFinalSpe
decision to nova to determine it.
Example:
-------
VM boot volume without scsi meta data decoration:
Attached To
0a0cd660-
VM boot volume with scsi meta data decoration:
Attached To
91f50dbc-
--------
Note: This commit doesn't address cases for
where instances are launched using existing
volume and snapshot, this will involve more
work to migrate the code from BDMv1 to BDMv2.
Closes-Bug #1560965
Change-Id: I9d114c2c2e6736
(cherry picked from commit 79a4c3bd1aac147
tags: | added: in-stable-queens |
OpenStack Infra (hudson-openstack) wrote : Fix merged to horizon (stable/pike) | #71 |
Reviewed: https:/
Committed: https:/
Submitter: Zuul
Branch: stable/pike
commit eba87483eaea459
Author: Eric Desrochers <email address hidden>
Date: Wed Mar 20 14:56:02 2019 -0400
Not implicitly set vol_device_name to vda
Using a scsi decorated image with:
hw_
hw_
This solve the case where an instance is
launched with 'image' selected as boot
source with a new volume. This will result
in /dev/vda instead of /dev/sda as it
should.
Not specifying device name in
setFinalSpe
decision to nova to determine it.
Example:
-------
VM boot volume without scsi meta data decoration:
Attached To
0a0cd660-
VM boot volume with scsi meta data decoration:
Attached To
91f50dbc-
--------
Note: This commit doesn't address cases for
where instances are launched using existing
volume and snapshot, this will involve more
work to migrate the code from BDMv1 to BDMv2.
Closes-Bug #1560965
Change-Id: I9d114c2c2e6736
(cherry picked from commit 0df60f4dba64e45
tags: | added: in-stable-pike |
OpenStack Infra (hudson-openstack) wrote : Fix merged to horizon (master) | #72 |
Reviewed: https:/
Committed: https:/
Submitter: Zuul
Branch: master
commit 5ab15c49da8f85f
Author: Akihiro Motoki <email address hidden>
Date: Thu Mar 28 15:33:51 2019 +0900
Do not specify device_name when creating server with BFV
When creating a server using an existing volume or volume snapshot,
horizon specified a device_name 'vda'.
However the device name depends on bus type or something and
this causes incorrect volume information in the instance detail page
(and nova volume-attachments CLI output).
For detail background, see bug 1560965.
block_
required to specify device_name 'vda' in block_device_
in Nova create-server API.
This commit changes to use block_device_
we can avoid specifying device_name.
Change-Id: Ice1a6bb2dcddab
Partial-Bug: #1560965
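The v1-to-v2 switch this commit describes can be sketched as follows. The key names follow the Nova create-server API; the placeholder volume ID is invented and Horizon's internal helpers are not reproduced here.

```python
# Illustrative contrast between the two block device mapping formats.

# Legacy block_device_mapping (v1): in practice a device name had to be
# supplied, which is why Horizon hard-coded 'vda'.
bdm_v1 = {
    "volume_id": "VOLUME_UUID",
    "device_name": "vda",
    "delete_on_termination": False,
}

# block_device_mapping_v2: device_name is optional, so it can simply be
# omitted and Nova/libvirt will pick the correct bus-dependent name.
bdm_v2 = {
    "uuid": "VOLUME_UUID",
    "source_type": "volume",
    "destination_type": "volume",
    "boot_index": 0,
    "delete_on_termination": False,
    # no "device_name" key: Nova decides
}

print("device_name" in bdm_v1)  # True
print("device_name" in bdm_v2)  # False
```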
Find attached a diff of my changes for Horizon. This worked for me, but I'm not sure about any side effects or whether this is the way to go here.