mountpoint doesn't work when a volume is attached to an instance

Bug #1004328 reported by Vincent Hou
This bug affects 10 people
Affects: OpenStack Compute (nova)
Status: Fix Released
Importance: High
Assigned to: Vish Ishaya
Milestone: 2012.2

Bug Description

OS: Ubuntu 12.04, 64-bit.
libvirt_type=qemu
I installed OpenStack via DevStack.
When I attach a volume to a running instance, the device always appears in the guest starting from /dev/vdb, even though I can specify a mountpoint, e.g. /dev/vdd.
The mountpoints appear to be assigned in a fixed order: /dev/vdb, then /dev/vdc, and so on up to /dev/vdz. The mountpoint specified in the command has no effect.

Revision history for this message
Chuck Short (zulcss) wrote :

Can you tell us how to reproduce this step by step?

Thanks,
Chuck

Changed in nova:
status: New → Incomplete
Revision history for this message
Vish Ishaya (vishvananda) wrote :

This is an issue that will require a fix in libvirt and perhaps the virtio driver. Libvirt/virtio doesn't pass enough information via the attach for the guest OS to figure out where the device should show up in the guest.
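
[Editorial illustration, not part of the original thread: a minimal Python sketch of the libvirt <disk> element generated for a virtio attach. For virtio disks, libvirt treats the target dev name only as an ordering hint; the guest kernel assigns /dev/vdX names itself in probe order, which is why the requested mountpoint is not honored. The helper name and source path are hypothetical.]

    # Sketch only; nova's real code builds this XML differently.
    import xml.etree.ElementTree as ET

    def disk_xml(source_dev, target_dev):
        disk = ET.Element('disk', type='block', device='disk')
        ET.SubElement(disk, 'driver', name='qemu', type='raw')
        ET.SubElement(disk, 'source', dev=source_dev)
        # For bus='virtio', dev= is only a hint to libvirt/qemu;
        # the guest picks its own /dev/vdX name.
        ET.SubElement(disk, 'target', dev=target_dev, bus='virtio')
        return ET.tostring(disk, encoding='unicode')

    # Even with 'vdd' requested here, the guest may still see /dev/vdb.
    print(disk_xml('/dev/sdb', 'vdd'))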

Revision history for this message
Vincent Hou (houshengbo) wrote :

Thank you for the explanations.

To reproduce:
Simply attach a volume to a VM.
Try a command like:
nova volume-attach $VM_ID $VOL_ID /dev/vdc
Or attach a volume to a VM from the Horizon UI, setting the mountpoint to /dev/vdc.

Expected result:
/dev/vdc should be found in the VM.

Actual result:
/dev/vdb is found in the VM. No /dev/vdc is there.

I have checked the log. Can anyone help me analyze it a bit?

===========Log of nova-compute========================
2012-06-08 10:36:04 DEBUG nova.rpc.amqp [-] received {u'_context_roles': [u'admin'], u'_context_request_id': u'req-36c08116-7cfc-4254-b896-44a1b6f058fb', u'_context_quota_class': None, u'args': {u'instance_uuid': u'ec4de940-0524-411c-859d-b8b2a12964a1', u'mountpoint': u'/dev/vdc', u'volume_id': u'7e7a36a2-8b51-43b7-b8b9-130e702fdb7a'}, u'_context_auth_token': '<SANITIZED>', u'_context_is_admin': True, u'version': u'1.0', u'_context_project_id': u'ce27e4b763244d54b665315121ad89f5', u'_context_timestamp': u'2012-06-08T02:36:04.307835', u'_context_read_deleted': u'no', u'_context_user_id': u'c2c604090e664f068bd9a1031a6ae4e9', u'method': u'attach_volume', u'_context_remote_address': u'9.119.148.201'} from (pid=4955) _safe_log /opt/stack/nova/nova/rpc/common.py:198
2012-06-08 10:36:04 DEBUG nova.rpc.amqp [-] unpacked context: {'user_id': u'c2c604090e664f068bd9a1031a6ae4e9', 'roles': [u'admin'], 'timestamp': u'2012-06-08T02:36:04.307835', 'auth_token': '<SANITIZED>', 'remote_address': u'9.119.148.201', 'quota_class': None, 'is_admin': True, 'request_id': u'req-36c08116-7cfc-4254-b896-44a1b6f058fb', 'project_id': u'ce27e4b763244d54b665315121ad89f5', 'read_deleted': u'no'} from (pid=4955) _safe_log /opt/stack/nova/nova/rpc/common.py:198
2012-06-08 10:36:04 INFO nova.compute.manager [req-36c08116-7cfc-4254-b896-44a1b6f058fb c2c604090e664f068bd9a1031a6ae4e9 ce27e4b763244d54b665315121ad89f5] [instance: ec4de940-0524-411c-859d-b8b2a12964a1] check_instance_lock: decorating: |<function attach_volume at 0x188faa0>|
2012-06-08 10:36:04 INFO nova.compute.manager [req-36c08116-7cfc-4254-b896-44a1b6f058fb c2c604090e664f068bd9a1031a6ae4e9 ce27e4b763244d54b665315121ad89f5] [instance: ec4de940-0524-411c-859d-b8b2a12964a1] check_instance_lock: arguments: |<nova.compute.manager.ComputeManager object at 0xd6ad10>| |<nova.rpc.amqp.RpcContext object at 0x35909d0>|
2012-06-08 10:36:04 DEBUG nova.compute.manager [req-36c08116-7cfc-4254-b896-44a1b6f058fb c2c604090e664f068bd9a1031a6ae4e9 ce27e4b763244d54b665315121ad89f5] [instance: ec4de940-0524-411c-859d-b8b2a12964a1] Getting locked state from (pid=4955) get_lock /opt/stack/nova/nova/compute/manager.py:1677
2012-06-08 10:36:04 INFO nova.compute.manager [req-36c08116-7cfc-4254-b896-44a1b6f058fb c2c604090e664f068bd9a1031a6ae4e9 ce27e4b763244d54b665315121ad89f5] [instance: ec4de940-0524-411c-859d-b8b2a12964a1] check_instance_lock: locked: |False|
2012-06-08 10:36:04 INFO nova.compute.manager [req-36c08116-7cfc-4254-b896-44a1b6f058fb c2c604090e664f068bd9a1031a6ae4e9 ce27e4b763244d54b665315121ad89f5] [instance: ec4de940-0524-411c-859d-b8b2a12964a1] check_instance_lock: admin: |True|
2012-06-08 10:36:04 INFO nova.compute.manager [req-36c08116-7cfc-4254-b896...


Revision history for this message
Vincent Hou (houshengbo) wrote :

How to reproduce:
1. Start a VM $VM_ID from an image.
2. Create a volume with $VOL_ID.
3. Try a command like: nova volume-attach $VM_ID $VOL_ID /dev/vdc
Or attach a volume to a VM from the Horizon UI, setting the mountpoint to /dev/vdc.

Expected result:
/dev/vdc should be found in the VM. This can be checked after SSHing into the VM.

Actual result:
/dev/vdb is found in the VM. No /dev/vdc is there.

Thierry Carrez (ttx)
Changed in nova:
importance: Undecided → High
status: Incomplete → Confirmed
Revision history for this message
John Griffith (john-griffith) wrote :

This is something that, from what I gather, is NOT likely to change in libvirt. Wondering if it would be possible to switch to using /dev/disk/by-id?

Then we eliminate the confusion we have in this case, and we can automatically derive the mountpoint via the volume, now that we've switched to UUIDs?
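
[Editorial illustration of this suggestion, not from the thread: once each disk carries its volume UUID as the virtio serial, a guest-side script can resolve the actual device node instead of trusting /dev/vdX. Note that the virtio-blk serial field holds at most 20 characters, so only a prefix of the UUID appears (visible in the truncated symlinks later in this report). The function name is hypothetical.]

    # Sketch, assuming disk serials are set to the volume UUID.
    import os

    def find_volume_device(volume_uuid, byid_dir='/dev/disk/by-id'):
        # virtio-blk truncates the serial to 20 characters.
        path = os.path.join(byid_dir, 'virtio-' + volume_uuid[:20])
        if os.path.exists(path):
            return os.path.realpath(path)  # e.g. '/dev/vdb'
        return None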

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix proposed to nova (master)

Fix proposed to branch: master
Review: https://review.openstack.org/10908

Changed in nova:
assignee: nobody → Vish Ishaya (vishvananda)
status: Confirmed → In Progress
Changed in nova:
assignee: Vish Ishaya (vishvananda) → Anne Gentle (annegentle)
assignee: Anne Gentle (annegentle) → Vish Ishaya (vishvananda)
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix merged to nova (master)

Reviewed: https://review.openstack.org/10908
Committed: http://github.com/openstack/nova/commit/e44751162b09c5b57557b89db27656b5bd23341c
Submitter: Jenkins
Branch: master

commit e44751162b09c5b57557b89db27656b5bd23341c
Author: Vishvananda Ishaya <email address hidden>
Date: Mon Aug 6 12:17:43 2012 -0700

    Allow nova to guess device if not passed to attach

    partial fix for bug 1004328

    Only the xen hypervisor actually respects the device name that
    is passed in attach_volume. For other hypervisors it makes much
    more sense to automatically generate a unique name.

    This patch generates a non-conflicting device name if one is not
    passed to attach_volume. It also validates the passed in volume
    name to make sure another device isn't already attached there.

    A corresponding change to novaclient and horizon will greatly
    improve the user experience of attaching a volume.

    It moves some common code out of metadata/base so that it can
    be used to get a list of block devices. The code was functionally
    tested as well and block device name generation works properly.

    This adds a new method to the rpcapi to validate a device name. It
    also adds server_id to the volumes extension, since it was omitted
    by mistake.

    The next step is to modify the libvirt driver to match the serial
    number of the device to the volume uuid so that the volume can
    always be found at /dev/disk/by-id/virtio-<uuid>.

    DocImpact

    Change-Id: I0b9454fc50a5c93b4aea38545dcee98f68d7e511

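[Editorial sketch of the naming scheme the commit above describes; the names here are illustrative, not nova's actual code: pick the first unused /dev/vdX when no device is passed, and reject a requested name that is malformed or already taken.]

    # Sketch only; the real logic lives in nova's compute manager/rpcapi.
    import re
    import string

    def used_names(bdms):
        # bdms: the instance's block device mappings,
        # e.g. [{'device_name': u'/dev/vdb'}, ...]
        return {bdm['device_name'] for bdm in bdms}

    def next_device_name(bdms, prefix='/dev/vd'):
        used = used_names(bdms)
        for letter in string.ascii_lowercase[1:]:  # skip vda, the root disk
            candidate = prefix + letter
            if candidate not in used:
                return candidate
        raise RuntimeError('no free device names left')

    def validate_device_name(device, bdms):
        if not re.match(r'^/dev/(x?v|s)d[a-z]+$', device):
            raise ValueError('invalid device name: %s' % device)
        if device in used_names(bdms):
            raise ValueError('%s is already in use' % device)
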
Changed in nova:
status: In Progress → Fix Committed
Thierry Carrez (ttx)
Changed in nova:
milestone: none → folsom-3
status: Fix Committed → Fix Released
Changed in nova:
status: Fix Released → In Progress
milestone: folsom-3 → folsom-rc1
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix proposed to nova (master)

Fix proposed to branch: master
Review: https://review.openstack.org/11492

Revision history for this message
OpenStack Infra (hudson-openstack) wrote :

Fix proposed to branch: master
Review: https://review.openstack.org/11493

Revision history for this message
Vincent Hou (houshengbo) wrote :

So many patches for this bug? Which one is THE ONE to be checked?

Revision history for this message
John Griffith (john-griffith) wrote :

This fix will most likely be isolated to the Nova compute/libvirt code only. Wait until this closes on the Nova side to determine whether a patch is needed for Cinder.

Changed in cinder:
assignee: nobody → John Griffith (john-griffith)
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix merged to nova (master)

Reviewed: https://review.openstack.org/11492
Committed: http://github.com/openstack/nova/commit/1e7769cf5587c1ce92f206b39fe646975b19fc95
Submitter: Jenkins
Branch: master

commit 1e7769cf5587c1ce92f206b39fe646975b19fc95
Author: Vishvananda Ishaya <email address hidden>
Date: Thu Aug 16 10:58:33 2012 -0700

    Adds support for serial to libvirt config disks.

    In order for users to find a volume that they have attached to
    a vm, it is valuable to be able to find it in a consistent
    location. A following patch will accomplish this by setting
    the serial number of the device to the uuid of the volume.

    This patch prepares for that change by allowing serial numbers
    to be set in the libvirt config disk object.

    Prepares to fix bug 1004328

    Change-Id: Iecdfc17b45e1c38df50f844f127c0e95558ab22c

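[Editorial illustration: the element follows libvirt's domain XML schema, and the UUID shown is the volume from the log above. With a serial set on the disk config object, the generated XML gains a <serial> child, which qemu exposes to the guest as the virtio-blk serial.]

    # Sketch of the resulting libvirt disk XML once a serial is set.
    disk_xml_with_serial = """
    <disk type='block' device='disk'>
      <driver name='qemu' type='raw'/>
      <source dev='/dev/sdb'/>
      <target dev='vdb' bus='virtio'/>
      <serial>7e7a36a2-8b51-43b7-b8b9-130e702fdb7a</serial>
    </disk>
    """
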
Changed in nova:
status: In Progress → Fix Committed
Revision history for this message
OpenStack Infra (hudson-openstack) wrote :

Reviewed: https://review.openstack.org/11493
Committed: http://github.com/openstack/nova/commit/3a47c02c58cefed0e230190b4bcef14527c82709
Submitter: Jenkins
Branch: master

commit 3a47c02c58cefed0e230190b4bcef14527c82709
Author: Vishvananda Ishaya <email address hidden>
Date: Thu Aug 16 11:28:27 2012 -0700

    Allows libvirt to set a serial number for a volume

    The serial number defaults to the volume_id of the volume being
    attached. We may expose a method in the future to set a different
    serial number when creating or attaching a volume.

    The purpose of this change is to give users a consistent place
    they can find their volume. It should show up now in most flavors
    of Linux under /dev/disk/by-id/virtio-<volume_uuid>

    Fixes bug 1004328

    Change-Id: Id1c56b5b23d799deb7da2d39ae57ecb48965c55f

no longer affects: cinder
Thierry Carrez (ttx)
Changed in nova:
status: Fix Committed → Fix Released
Thierry Carrez (ttx)
Changed in nova:
milestone: folsom-rc1 → 2012.2
Revision history for this message
Kevin Fox (kevin-fox-y) wrote :

This workaround does not work with Heat templates that assume the volumes they create can be attached with the device names they have assigned. This means some working EC2 templates will need to be substantially reworked to work on OpenStack.

Revision history for this message
Kevin Fox (kevin-fox-y) wrote :

In fact, this workaround doesn't seem to work for me in Grizzly. I attached four volumes and only two of them got virtio serial numbers:

[root@mongodbreplicasetmembertest3 ~]# ls -l /dev/vd*
brw-rw----. 1 root disk 252, 0 Sep 26 16:00 /dev/vda
brw-rw----. 1 root disk 252, 1 Sep 26 16:00 /dev/vda1
brw-rw----. 1 root disk 252, 2 Sep 26 16:00 /dev/vda2
brw-rw----. 1 root disk 252, 16 Sep 26 16:00 /dev/vdb
brw-rw----. 1 root disk 252, 32 Sep 26 16:00 /dev/vdc
brw-rw----. 1 root disk 252, 48 Sep 26 16:00 /dev/vdd
brw-rw----. 1 root disk 252, 64 Sep 26 16:00 /dev/vde
[root@mongodbreplicasetmembertest3 ~]# ls -l /dev/disk/by-id/virtio-*
lrwxrwxrwx. 1 root root 9 Sep 26 16:00 /dev/disk/by-id/virtio-cef82203-43de-4915-8 -> ../../vdb
lrwxrwxrwx. 1 root root 9 Sep 26 16:00 /dev/disk/by-id/virtio-def5e6dd-c536-4667-9 -> ../../vdc
