stable/xena unable to attach multi-attach volume via "openstack server add volume"

Bug #1982040 reported by Davide De Pasquale
Affects: OpenStack Compute (nova)
Status: Invalid
Importance: Undecided
Assigned to: Unassigned

Bug Description

Dear all

I have encountered an error with OpenStack Heat, but I believe it is related to (or even caused by) a possible issue with nova.

Here is my post for the Heat team:
https://storyboard.openstack.org/#!/story/2010157

I have installed stable/xena using openstack-ansible and I am operating several test VMs in my ecosystem before moving to production. I am currently using Cinder integrated with Ceph, in case that is relevant.

To summarize a possible test procedure (for convenience I use the CLI here; with Horizon there is another workaround, which I described in the storyboard post above):
- create a multi-attach volume
- create two instances that will share the volume
- attach the volume to the first instance [this works fine!]
- attach the volume to the second instance... you should receive the following error:

openstack server add volume <serverID> <volumeID> --device /dev/vdb
Multiattach volumes are only supported starting with compute API version 2.60. (HTTP 400) (Request-ID: req-5a60e72a-b7e2-49d6-81b2-c15753e054a2)
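For reference, the full reproduction can be sketched as follows (all names, sizes, and flavors are illustrative, and this assumes a Cinder backend that supports multi-attach, such as the Ceph RBD driver):

```shell
# Create a volume type with the multiattach property, then a volume of that type.
openstack volume type create --property multiattach="<is> True" multiattach
openstack volume create --type multiattach --size 10 shared-vol

# Boot two test instances (image, flavor, and network names are placeholders).
openstack server create --image cirros --flavor m1.small --network private vm1
openstack server create --image cirros --flavor m1.small --network private vm2

# The first attach succeeds; the second fails with HTTP 400 unless a
# sufficiently recent compute API microversion is requested.
openstack server add volume vm1 shared-vol --device /dev/vdb
openstack server add volume vm2 shared-vol --device /dev/vdb
```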

BUT
If I use the following command on the second attach:

nova volume-attach <serverID> <volumeID> auto
+-----------------------+--------------------------------------+
| Property | Value |
+-----------------------+--------------------------------------+
| delete_on_termination | False |
| device | /dev/vdb |
| id | c292cdd5-7dc5-4714-bdb3-a309e8416232 |
| serverId | 7b6a0651-f4cd-4018-a042-58f695ab73aa |
| tag | - |
| volumeId | c292cdd5-7dc5-4714-bdb3-a309e8416232 |
+-----------------------+--------------------------------------+

it works properly!
Is this expected behaviour due to some implementation detail, or is it a potential bug?

I am currently able to create and use multi-attach volumes, but only by making the attachment manually with the commands reported above.

Thanks for your kind attention, and looking forward to hearing from you.
Best regards,
Davide

Revision history for this message
Uggla (rene-ribaud) wrote :

You probably need to pass the api version with --os-compute-api-version 2.60 to the openstack server add volume <serverID> <volumeID> --device /dev/vdb command.

Heat is probably using an older microversion as well. Please check whether you can specify the API version to use.

However, from a nova perspective, this does not appear to be a bug, since it works with the nova client.

Feel free to comment and change this bug to new again if you disagree or want to provide more info.

Changed in nova:
status: New → Incomplete
Revision history for this message
Davide De Pasquale (davidedepasquale) wrote :

Dear Uggla,

I can confirm that the following command works fine!
It would be useful to document this command wherever the multi-attach feature is presented.

For future readers, here is a working example:

id_shared_volume=c292cdd5-7dc5-4714-bdb3-a309e8416232
id_server1=43e808ec-635b-473c-8059-a37ecfac8929
id_server2=4b0ef6bb-9d61-46d9-b2f6-1aca43818669

openstack server add volume $id_server1 $id_shared_volume --os-compute-api-version 2.60 --device /dev/vdb
openstack server add volume $id_server2 $id_shared_volume --os-compute-api-version 2.60 --device /dev/vdb
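To confirm that both attachments are in place, the volume's attachment list can be inspected (a sketch using the commands above; the `attachments` field should list both server IDs, and `multiattach` should be True):

```shell
# Both servers should appear in the attachments field of the shared volume.
openstack volume show $id_shared_volume -c attachments -c multiattach -c status
```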

We can simply close this bug as "invalid".
Thanks for your kind help; I hope this will also help other people.
I will certainly point the Heat team to this thread.

Best regards,
Davide

Revision history for this message
Rajesh Tailor (ratailor) wrote :

Marking this bug Invalid, as it was resolved by providing the microversion with the command.

Changed in nova:
status: Incomplete → Invalid