volume_attach action registers volume attachment even on failure

Bug #1398588 reported by Patrick Crews
This bug affects 1 person
Affects                     Status    Importance   Assigned to   Milestone
Cinder                      Invalid   Undecided    krishna       -
OpenStack Compute (nova)    Invalid   Undecided    Unassigned    -

Bug Description

When attaching volumes to instances, if the volume attachment fails, it is still recorded as successful by the system in some cases.
This is reflected in the information returned when requesting the details of a server's volume attachments:
http://developer.openstack.org/api-ref-compute-v2-ext.html
/v2/{tenant_id}/servers/{server_id}/os-volume_attachments
Show volume attachment details

In the example, I have 2 test servers and 1 test volume.
I attach the volume to test_server1 and it is successful (though please see: https://bugs.launchpad.net/cinder/+bug/1398583)
Next, I try to attach the same volume to test_server2.
This call fails as expected, but the mountpoint / attachment is still registered.

To demonstrate, I repeat the previous call. It fails again, but this time because the requested mountpoint is reported as in use, rather than because the volume is already attached.

I next make a call to list the volume attachments for test_server2. It lists a volume attachment even though none exists, and the Cinder API server does not register any such attachment.
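The discrepancy can be checked by comparing Nova's attachment list against Cinder's view of the volume. A minimal sketch, assuming a valid auth token in $OS_TOKEN and using $TENANT_ID / $SERVER_ID / $VOLUME_ID as illustrative placeholders for the IDs shown below:

# Nova's view: attachments recorded for the server
curl -s -H "X-Auth-Token: $OS_TOKEN" -H "Accept: application/json" \
  http://192.168.0.5:8774/v2/$TENANT_ID/servers/$SERVER_ID/os-volume_attachments

# Cinder's view: the volume's status and attachment information
cinder show $VOLUME_ID

When the bug is hit, the Nova call returns an attachment record for the server whose attach request failed, while Cinder reports no such attachment.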

Revision history for this message
Patrick Crews (patrick-crews) wrote :

# Listing our test servers
 nova --os-auth-url=http://192.168.0.5:5000/v2.0 list
+--------------------------------------+--------------+--------+------------+-------------+------------------+
| ID | Name | Status | Task State | Power State | Networks |
+--------------------------------------+--------------+--------+------------+-------------+------------------+
| 9991ead8-8a88-45c3-83d0-a2ac5e7fb232 | test_server1 | ACTIVE | - | Running | private=10.0.0.2 |
| 44d13e4b-a218-46b8-a714-c96e9cff2066 | test_server2 | ACTIVE | - | Running | private=10.0.0.3 |
+--------------------------------------+--------------+--------+------------+-------------+------------------+

# Listing our test volume
pcrews@erlking-dev:~/git/rannsaka$ cinder list
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
| ID | Status | Display Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
| 41dcbd0b-ac6b-477e-84f9-bbe62da29a6a | available | test_volume1 | 1 | lvmdriver-1 | false | |
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+

# Attaching volume to test_server1 on /dev/vdc
pcrews@erlking-dev:~/git/rannsaka$ nova --os-auth-url=http://192.168.0.5:5000/v2.0 volume-attach 9991ead8-8a88-45c3-83d0-a2ac5e7fb232 41dcbd0b-ac6b-477e-84f9-bbe62da29a6a /dev/vdc
+----------+--------------------------------------+
| Property | Value |
+----------+--------------------------------------+
| device | /dev/vdc |
| id | 41dcbd0b-ac6b-477e-84f9-bbe62da29a6a |
| serverId | 9991ead8-8a88-45c3-83d0-a2ac5e7fb232 |
| volumeId | 41dcbd0b-ac6b-477e-84f9-bbe62da29a6a |
+----------+--------------------------------------+

# Attaching same volume to test_server2 on /dev/vdb
pcrews@erlking-dev:~/git/rannsaka$ nova --os-auth-url=http://192.168.0.5:5000/v2.0 volume-attach 44d13e4b-a218-46b8-a714-c96e9cff2066 41dcbd0b-ac6b-477e-84f9-bbe62da29a6a /dev/vdb
ERROR (BadRequest): Invalid volume: status must be 'available' (HTTP 400) (Request-ID: req-0eed3b96-e920-425d-9c7a-e1b6b78b3eea)

# Second request repeating the previous one - NOTE the device path is now in-use:
pcrews@erlking-dev:~/git/rannsaka$ nova --os-auth-url=http://192.168.0.5:5000/v2.0 volume-attach 44d13e4b-a218-46b8-a714-c96e9cff2066 41dcbd0b-ac6b-477e-84f9-bbe62da29a6a /dev/vdb
ERROR (Conflict): The supplied device path (/dev/vdb) is in use. (HTTP 409) (Request-ID: req-708dcc24-5fa8-4f3d-8880-ea399aecbfd1)

# Making a curl call to get the volume attachment details for test_server2, to demonstrate that the system now considers the volume to be attached, even though Cinder does not track the same information
pcrews@erlking-dev:~/git/rannsaka$ curl -i 'http://192.168.0.5:8774/v2/c8d54cca25a9496ab264be0c2d96e567/servers/44d13e4b-a218-46b8-a714-c96e9cff2066/os-volume_attachments' -X GET -H "Accept: application/json" -H...
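(The command above is truncated in this report. A minimal equivalent request, with a hypothetical $OS_TOKEN standing in for the authentication header, would look something like the following; on an affected system it returns an attachment entry for test_server2 even though the attach request failed.)

curl -s 'http://192.168.0.5:8774/v2/c8d54cca25a9496ab264be0c2d96e567/servers/44d13e4b-a218-46b8-a714-c96e9cff2066/os-volume_attachments' \
  -X GET -H "Accept: application/json" -H "X-Auth-Token: $OS_TOKEN"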


description: updated
Changed in cinder:
status: New → Confirmed
Changed in cinder:
status: Confirmed → New
Changed in cinder:
status: New → Confirmed
Revision history for this message
jichenjc (jichenjc) wrote :

Hi, could you please let me know which version you are using? I have an environment that was set up about a month ago.
I tried but can't reproduce the issue you are having.

+--------------------------------------+---------+---------+------------+-------------+-------------------+
| ID | Name | Status | Task State | Power State | Networks |
+--------------------------------------+---------+---------+------------+-------------+-------------------+
| 7a2242e6-95f6-481c-927c-c944d9b7e02c | j5 | ACTIVE | - | Shutdown | private=10.0.0.14 |
| 36d101fd-6f43-4ad6-840f-e1208c0dfc7c | jichen4 | ACTIVE | - | Running | private=10.0.0.13 |
+--------------------------------------+---------+---------+------------+-------------+-------------------+

+--------------------------------------+-----------+------+------+-------------+----------+--------------------------------------+
| ID | Status | Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+------+------+-------------+----------+--------------------------------------+
| 33327b41-7f89-4c33-a80d-4fd345bf8901 | available | None | 1 | lvmdriver-1 | false | |

[jichen@compute1 ~]$ nova volume-attach 7a2242e6-95f6-481c-927c-c944d9b7e02c 33327b41-7f89-4c33-a80d-4fd345bf8901 /dev/vdc
ERROR (Conflict): The supplied device path (/dev/vdc) is in use. (HTTP 409) (Request-ID: req-6cbd65d8-90fa-4dd7-a410-a0d8ee805a22)
[jichen@compute1 ~]$ nova volume-attach 7a2242e6-95f6-481c-927c-c944d9b7e02c 33327b41-7f89-4c33-a80d-4fd345bf8901 /dev/vdd
ERROR (BadRequest): Invalid volume: status must be 'available' (HTTP 400) (Request-ID: req-95482b33-51d6-4bc3-9479-c8825b047ed9)
[jichen@compute1 ~]$ nova volume-attach 7a2242e6-95f6-481c-927c-c944d9b7e02c 33327b41-7f89-4c33-a80d-4fd345bf8901 /dev/vdd
ERROR (BadRequest): Invalid volume: status must be 'available' (HTTP 400) (Request-ID: req-369dccca-ecb3-4dc6-84e7-a10a6292f909)

Revision history for this message
Patrick Crews (patrick-crews) wrote :

I would say that you did hit the bug; otherwise, how do you explain this output:
[jichen@compute1 ~]$ nova volume-attach 7a2242e6-95f6-481c-927c-c944d9b7e02c 33327b41-7f89-4c33-a80d-4fd345bf8901 /dev/vdc
ERROR (Conflict): The supplied device path (/dev/vdc) is in use. (HTTP 409) (Request-ID: req-6cbd65d8-90fa-4dd7-a410-a0d8ee805a22)
[jichen@compute1 ~]$ nova volume-attach 7a2242e6-95f6-481c-927c-c944d9b7e02c 33327b41-7f89-4c33-a80d-4fd345bf8901 /dev/vdd
ERROR (BadRequest): Invalid volume: status must be 'available' (HTTP 400) (Request-ID: req-95482b33-51d6-4bc3-9479-c8825b047ed9)

The first request failed, and the second request failed because the volume was no longer 'available'; yet how did the volume become unavailable after a failed attach request?

Additionally, you should list the volume attachments for the server with id: 7a2242e6-95f6-481c-927c-c944d9b7e02c. The issue is that while Cinder does not register the volume attachment, the volume is still listed in the output from this call:
http://developer.openstack.org/api-ref-compute-v2-ext.html
/v2/{tenant_id}/servers/{server_id}/os-volume_attachments
List volume attachments

Lists the volume attachments for a specified server.

Revision history for this message
Mike Perez (thingee) wrote :

Reporter didn't provide a version. Using the latest from master, I was not able to reproduce this issue (see below). Since the reporter mentioned that Cinder does not know about this false attachment, but Nova does, I would bet something is being set on the Nova side.

ubuntu@mount-issue:~/devstack$ nova list
+--------------------------------------+---------+--------+------------+-------------+------------------+
| ID | Name | Status | Task State | Power State | Networks |
+--------------------------------------+---------+--------+------------+-------------+------------------+
| 57304c45-101d-4ce0-8f4b-6b7ad853d135 | server1 | ACTIVE | - | Running | private=10.0.0.2 |
| 1cb270bd-2131-42fa-9f99-ed95cd077cde | server2 | ACTIVE | - | Running | private=10.0.0.3 |
+--------------------------------------+---------+--------+------------+-------------+------------------+
ubuntu@mount-issue:~/devstack$ cinder list
+--------------------------------------+-----------+------+------+-------------+----------+-------------+
| ID | Status | Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+------+------+-------------+----------+-------------+
| 2a57f161-0828-4b68-8f93-cd4493ff725b | available | None | 1 | lvmdriver-1 | false | |
+--------------------------------------+-----------+------+------+-------------+----------+-------------+
ubuntu@mount-issue:~/devstack$ nova volume-attach server1 2a57f161-0828-4b68-8f93-cd4493ff725b
+----------+--------------------------------------+
| Property | Value |
+----------+--------------------------------------+
| device | /dev/vdb |
| id | 2a57f161-0828-4b68-8f93-cd4493ff725b |
| serverId | 57304c45-101d-4ce0-8f4b-6b7ad853d135 |
| volumeId | 2a57f161-0828-4b68-8f93-cd4493ff725b |
+----------+--------------------------------------+
ubuntu@mount-issue:~/devstack$ nova volume-attach server2 2a57f161-0828-4b68-8f93-cd4493ff725b
ERROR (BadRequest): Invalid volume: volume '2a57f161-0828-4b68-8f93-cd4493ff725b' status must be 'available'. Currently in 'in-use' (HTTP 400) (Request-ID: req-bfa40f00-56f9-4535-a4d1-23a8cb69b908)
ubuntu@mount-issue:~/devstack$ nova volume-attach server2 2a57f161-0828-4b68-8f93-cd4493ff725b
ERROR (BadRequest): Invalid volume: volume '2a57f161-0828-4b68-8f93-cd4493ff725b' status must be 'available'. Currently in 'in-use' (HTTP 400) (Request-ID: req-83736950-ff20-4d93-b512-ab7ffc25c7bc)
ubuntu@mount-issue:~/devstack$ cinder list
+--------------------------------------+--------+------+------+-------------+----------+--------------------------------------+
| ID | Status | Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+--------+------+------+-------------+----------+--------------------------------------+
| 2a57f161-0828-4b68-8f93-cd4493ff725b | in-use | None | 1 | lvmdriver-1 | false | 57304c45-101d-4ce0-8f4b-6b7ad853d135 |
+-------------------------...

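If the stale record really is being set on the Nova side, one place to look is Nova's block_device_mapping table. A sketch, assuming a default devstack MySQL setup and the standard block_device_mapping columns (instance_uuid, volume_id, device_name, deleted):

mysql -u root -p -e "SELECT instance_uuid, volume_id, device_name, deleted FROM nova.block_device_mapping WHERE volume_id = '2a57f161-0828-4b68-8f93-cd4493ff725b';"

On an affected system this would show a row for server2 with deleted = 0 even though the attach request failed; in the reproduction attempt above, only the row for server1 should be present.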

Changed in cinder:
status: Confirmed → Invalid
status: Invalid → Incomplete
Revision history for this message
Patrick Crews (patrick-crews) wrote :

Can confirm that this can no longer be triggered with the latest from master.
Also, ++ on the notion that this is tied to Nova rather than Cinder; I simply filed it under Cinder in haste.

Changed in nova:
assignee: nobody → Srikar Deshmukh (srikardeshmukh)
assignee: Srikar Deshmukh (srikardeshmukh) → nobody
krishna (leburu-reddy)
Changed in cinder:
assignee: nobody → krishna (leburu-reddy)
Revision history for this message
Davanum Srinivas (DIMS) (dims-v) wrote :

Sorry, can someone please explain what is broken or needs to be fixed on the Nova side?

Changed in nova:
status: New → Incomplete
Revision history for this message
Patrick Crews (patrick-crews) wrote :

As previously noted, I was unable to duplicate this with the latest Cinder / Nova / etc. the last time I tested for it.
Will attempt to re-test next week, but I suspect this is no longer an issue.

Revision history for this message
Robert Collins (lifeless) wrote :

Closing (it's two weeks later and Patrick hasn't reported a reproduction). Please re-open if it is in fact still an issue.

Changed in nova:
status: Incomplete → Invalid
Changed in cinder:
status: Incomplete → Invalid