libvirt: user specified volume device names are not ignored

Bug #1486204 reported by Maxim Nestratov on 2015-08-18
This bug affects 15 people
Affects: OpenStack Compute (nova)
Importance: Medium
Assigned to: Maxim Nestratov

Bug Description

After change I3ce12563846b2c34ac778d44e7582eef930ae4b0 was merged, change I76a7cfd995db6c04f7af48ff8c9acdd55750ed76 ceased to work for root volume device name.

Now we have a situation when we try to boot an instance from volume with specified device name e.g. like this:

nova boot --block-device device=vdc,source=volume,dest=volume,size=1,bootindex=0,id=433756c2-d4dc-4560-b247-e5cadb79a505 --flavor m1.tiny instance-volume
(with any virt_type)

  or like this

 nova boot ct-3 --flavor m1.small --block-device-mapping vda=b2ac7e52-6ad3-4c11-9178-c3bf52fd373f:::0
(for virt_type=parallels, which doesn't support virtio bus)

With the libvirt driver, it will try to clear the user-specified device name and use the driver's default device name instead, and we will actually see a log warning for this:

2015-08-18 15:05:53.892 WARNING nova.virt.libvirt.driver [req-09b86d01-be81-42ac-aabb-71c427e80909 admin demo] [instance: 6943e226-8c23-4137-a897-bdbbcebcd31d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names

but nothing actually happens. This is because instance.root_device_name is set from the user-specified device name in nova.compute.manager._default_block_device_names before the driver's default_device_names_for_instance is called and the user input is cleared, while the BDMs in nova.virt.libvirt.blockinfo.get_disk_mapping are updated taking instance.root_device_name into account.

A further problem is that the code generating device names for secondary devices depends on the specified root_device_name, which, according to the ignoring logic, should not be taken into account.
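The ordering problem described above can be illustrated with a minimal sketch. The class and function names below are hypothetical stand-ins for the real nova code paths, chosen only to show the sequence of events: the user-supplied name is recorded as root_device_name before the driver clears user input, so the later disk-mapping step still honours the supposedly ignored name.

```python
class Instance:
    """Hypothetical stand-in for a nova Instance object."""
    def __init__(self):
        self.root_device_name = None


def default_block_device_names(instance, bdms):
    # Step 1 (compute manager): root_device_name is recorded from the
    # user-supplied BDM *before* the driver gets a chance to ignore it.
    instance.root_device_name = bdms[0]["device_name"]
    # Step 2 (libvirt driver): user-supplied names are "ignored" (cleared),
    # with the warning seen in the log above.
    for bdm in bdms:
        bdm["device_name"] = None


def get_disk_mapping(instance, bdms):
    # Step 3: the mapping is rebuilt from instance.root_device_name, which
    # still holds the user-supplied value, so the cleared name leaks back in.
    return {"root": instance.root_device_name or "/dev/sda"}


instance = Instance()
bdms = [{"device_name": "/dev/vda", "boot_index": 0}]
default_block_device_names(instance, bdms)
mapping = get_disk_mapping(instance, bdms)
print(mapping["root"])  # prints /dev/vda: the "ignored" name was honoured
```

The fix proposed in the review linked below reorders this so that clearing user input happens before root_device_name is derived.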

affects: cinder → nova

Fix proposed to branch: master
Review: https://review.openstack.org/214314

Changed in nova:
assignee: nobody → Maxim Nestratov (mnestratov)
status: New → In Progress

Does the second case (with legacy mapping) fail? It uses vda as a device name, and Nova selects /dev/vda for root_device_name, and assigns /dev/vda for the volume. So what's wrong with that?

Feodor Tersin (ftersin) wrote :

IIANM it occurs only with the root device. Names of other devices are ignored.
And with a 'vda' root device name everything works properly.
So the only broken case is when a user specifies a non-default root device name.

Feodor Tersin (ftersin) wrote :

See previous comments

Changed in nova:
status: In Progress → Incomplete
summary: - User specified volume device names are not ignored
+ libvirt: user specified volume device names are not ignored
Maxim Nestratov (mnestratov) wrote :

The broken default scenario is with the parallels virt_type, because it doesn't support the virtio bus and uses sata instead, which corresponds to sdX device names. It also breaks scenarios where you specify any bus other than virtio (e.g. sata, scsi, ide) together with a device name that doesn't correspond to the selected bus.
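The bus/name mismatch above comes down to each disk bus implying a device-name prefix. The helper below is a hypothetical illustration, not nova code, showing why a user-supplied /dev/vdX name is inconsistent with a non-virtio bus such as sata:

```python
# Each disk bus implies a device-name prefix: virtio disks appear as vdX,
# sata/scsi disks as sdX, ide disks as hdX.
DISK_PREFIXES = {"virtio": "vd", "sata": "sd", "scsi": "sd", "ide": "hd"}


def name_matches_bus(device_name, bus):
    """Check whether a device name is consistent with the chosen bus."""
    leaf = device_name.split("/")[-1]  # "/dev/vda" -> "vda"
    return leaf.startswith(DISK_PREFIXES[bus])


print(name_matches_bus("/dev/vda", "virtio"))  # True
print(name_matches_bus("/dev/vda", "sata"))    # False: sata expects sdX
```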

description: updated
description: updated
Changed in nova:
status: Incomplete → In Progress
Martins Jakubovics (martins-k) wrote :

Have same issue, when try to boot from new volume and image which have hw_disk_bus = scsi property. Patch https://review.openstack.org/#/c/214314/4/nova/virt/libvirt/driver.py helps me.

Sean Dague (sdague) on 2016-02-17
Changed in nova:
importance: Undecided → Medium
Changed in nova:
assignee: Maxim Nestratov (mnestratov) → Mikhail Feoktistov (mfeoktistov)
Changed in nova:
assignee: Mikhail Feoktistov (mfeoktistov) → Maxim Nestratov (mnestratov)

Change abandoned by Michael Still (<email address hidden>) on branch: master
Review: https://review.openstack.org/214314
Reason: This patch has been sitting unchanged for more than 12 weeks. I am therefore going to abandon it to keep the review queue sane. Please feel free to restore the change if you're still working on it.

Albert Mikaelyan (tahvok) wrote :

While creating a new instance, the "Create New Volume" checkbox in Horizon now defaults to 'yes', so this bug appears as soon as you create a new OpenStack infrastructure. That makes it pretty serious, as you never get a working environment from the start. Will this ever be fixed?

Change abandoned by Stephen Finucane (<email address hidden>) on branch: master
Review: https://review.opendev.org/214314
