Boot from image and create a new volume ignores availability zone

Bug #1380780 reported by Oleksii Aleksieiev
This bug affects 3 people
Affects: OpenStack Compute (nova)
Status: Fix Released
Importance: Low
Assigned to: Andriy Kurilin
Milestone: 2015.1.0

Bug Description

Boot from image with creation of a new volume does not pass the instance's availability zone to Cinder when the volume is created.

Here is a failure scenario.

Configure Cinder to run the volume service in multiple availability zones, with cross-zone volume attachment disabled (cinder_cross_az_attach=false in nova.conf).
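
For reference, a minimal nova.conf sketch of that setting (shown under [DEFAULT], as in the reporter's release; later releases rename it to cross_az_attach in the [cinder] group, per the deprecation commit quoted at the end of this report):

[DEFAULT]
# Refuse to attach a volume whose availability zone differs from the instance's
cinder_cross_az_attach = false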

[root@node-7 ~]# cinder service-list
+------------------+--------------------+----------+---------+-------+----------------------------+
|      Binary      |        Host        |   Zone   | Status  | State |         Updated_at         |
+------------------+--------------------+----------+---------+-------+----------------------------+
| cinder-scheduler | node-10.domain.tld | internal | enabled |  up   | 2014-10-13T20:12:18.000000 |
| cinder-scheduler | node-7.domain.tld  | internal | enabled |  up   | 2014-10-13T20:12:15.000000 |
| cinder-scheduler | node-8.domain.tld  | internal | enabled |  up   | 2014-10-13T20:12:18.000000 |
|  cinder-volume   |   node-10.reg1a    |  reg1a   | enabled |  up   | 2014-10-13T20:12:14.000000 |
|  cinder-volume   |   node-10.reg1b    |  reg1b   | enabled |  up   | 2014-10-13T20:12:14.000000 |
|  cinder-volume   |    node-7.reg1a    |  reg1a   | enabled |  up   | 2014-10-13T20:12:18.000000 |
|  cinder-volume   |    node-7.reg1b    |  reg1b   | enabled |  up   | 2014-10-13T20:12:14.000000 |
|  cinder-volume   |    node-8.reg1a    |  reg1a   | enabled |  up   | 2014-10-13T20:12:21.000000 |
|  cinder-volume   |    node-8.reg1b    |  reg1b   | enabled |  up   | 2014-10-13T20:12:21.000000 |
+------------------+--------------------+----------+---------+-------+----------------------------+

Run the CLI command below to create a volume from an existing image and use it to boot an instance.

nova boot test --flavor 1 --image 32705323-4bfb-4cd7-9711-f5459fd236d8 --nic net-id=ca3b4232-405c-4225-9724-0f0dde69c1d5 --availability-zone=reg1a --block-device "source=image,id=32705323-4bfb-4cd7-9711-f5459fd236d8,dest=volume,size=10,bootindex=1"

This attempts to create the volume in the internal (default) availability zone, but creation fails because there is no volume service in the internal availability zone.

+--------------------------------+--------------------------------------+
|            Property            |                Value                 |
+--------------------------------+--------------------------------------+
|          attachments           |                  []                  |
|       availability_zone        |               internal               |
|            bootable            |                false                 |
|           created_at           |      2014-10-13T19:45:24.000000      |
|      display_description       |                                      |
|          display_name          |                                      |
|           encrypted            |                False                 |
|               id               | e358b519-4287-45a5-85cc-1e6a0d371fb1 |
|            metadata            |                  {}                  |
|     os-vol-host-attr:host      |                 None                 |
| os-vol-mig-status-attr:migstat |                 None                 |
| os-vol-mig-status-attr:name_id |                 None                 |
|  os-vol-tenant-attr:tenant_id  |   1cd2c85585ed42dcaf266b57c22c86ef   |
|              size              |                  10                  |
|          snapshot_id           |                 None                 |
|          source_volid          |                 None                 |
|             status             |                error                 |
+--------------------------------+--------------------------------------+

The instance boot fails with the error: "InvalidVolume: Invalid volume: status must be 'available'".
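
Until a fix lands, a possible workaround (a sketch reusing the IDs above; the volume display name and the <volume-id> placeholder are illustrative) is to create the bootable volume explicitly in the target zone, then boot from it:

# Create the volume in reg1a from the image, then boot the instance from that volume
cinder create --availability-zone reg1a --image-id 32705323-4bfb-4cd7-9711-f5459fd236d8 --display-name test-vol 10
nova boot test --flavor 1 --nic net-id=ca3b4232-405c-4225-9724-0f0dde69c1d5 --availability-zone=reg1a --block-device "source=volume,id=<volume-id>,dest=volume,bootindex=0"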

Tags: volumes
Revision history for this message
Joe Gordon (jogo) wrote :

What version of nova is this using?

Changed in nova:
status: New → Incomplete
Revision history for this message
Oleksii Aleksieiev (alexzzman) wrote :

I'm using 2014.1.1

Tom Fifield (fifieldt)
Changed in nova:
status: Incomplete → New
Revision history for this message
Trung Trinh (trung-t-trinh) wrote :

Hi all,
I'd like to help fix this bug.
If possible, please provide more information for debugging.
Thanks

Changed in nova:
assignee: nobody → Trung Trinh (trung-t-trinh)
status: New → In Progress
Revision history for this message
Trung Trinh (trung-t-trinh) wrote :

I use DevStack to deploy OpenStack on a VM (i.e. single-node deployment).
On my local OpenStack, I can create a new VM instance (booted from volume) in an availability zone that is different from the availability zone of the volume used for booting.
I also tried the provided CLI command (above), and it works fine.

My local OpenStack version is: stable/juno

Therefore, I wonder whether this problem occurs only on multi-node deployments.

Jay Pipes (jaypipes)
Changed in nova:
assignee: Trung Trinh (trung-t-trinh) → nobody
description: updated
Changed in nova:
assignee: nobody → Andrey Kurilin (akurilin)
Changed in nova:
importance: Undecided → Low
Revision history for this message
Andriy Kurilin (andreykurilin) wrote :

Steps to reproduce this bug:
1. Set "cinder_cross_az_attach=false" in nova.conf
2. Set "storage_availability_zone=test-az" in cinder.conf
3. Restart nova and cinder services
4. Create a host aggregate that is exposed as the "test-az" availability zone
5. Add a host to the aggregate
6. Try to boot an instance in the non-default availability zone
(more details about steps 4-6 can be found at
http://blog.russellbryant.net/2013/05/21/availability-zones-and-host-aggregates-in-openstack-compute-nova/
and a condensed sketch of these steps follows below)
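
A condensed shell sketch of steps 4-6 (the aggregate, host, and image identifiers are placeholders):

# 4. Create a host aggregate exposed as the "test-az" availability zone
nova aggregate-create test-aggregate test-az
# 5. Add a compute host to the aggregate
nova aggregate-add-host test-aggregate <compute-host>
# 6. Boot an instance into the non-default zone, creating a new volume from an image
nova boot test-vm --flavor 1 --availability-zone test-az --block-device "source=image,id=<image-id>,dest=volume,size=10,bootindex=0"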

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix merged to nova (master)

Reviewed: https://review.openstack.org/157041
Committed: https://git.openstack.org/cgit/openstack/nova/commit/?id=6060888b58db42eea826939852838f9e1c204d2c
Submitter: Jenkins
Branch: master

commit 6060888b58db42eea826939852838f9e1c204d2c
Author: Andrey Kurilin <email address hidden>
Date: Wed Feb 18 17:33:22 2015 +0200

    Create volume in the same availability zone as instance

    Volume should be created in the same availability zone as an instance to
    prevent:
     > InvalidVolume: Instance and volume not in same availability_zone

    Change-Id: I121cbdb68ec06d9b358a12c857dc24b75d7973e4
    Closes-Bug: #1380780
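
In effect, the volume create request that nova issues for boot-from-image now carries the instance's availability zone. As a CLI analogy only (a sketch, not the actual code path), the request for the scenario in the description becomes equivalent to:

cinder create --availability-zone reg1a --image-id 32705323-4bfb-4cd7-9711-f5459fd236d8 10

instead of omitting --availability-zone and landing in the default zone.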

Changed in nova:
status: In Progress → Fix Committed
Thierry Carrez (ttx)
Changed in nova:
milestone: none → kilo-3
status: Fix Committed → Fix Released
Thierry Carrez (ttx)
Changed in nova:
milestone: kilo-3 → 2015.1.0
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Change abandoned on nova (stable/juno)

Change abandoned by Madhu Mohan (<email address hidden>) on branch: stable/juno
Review: https://review.openstack.org/216666
Reason: Another person is working on this change.

Matt Riedemann (mriedem)
tags: added: volumes
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Related fix proposed to nova (master)

Related fix proposed to branch: master
Review: https://review.openstack.org/226977

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Related fix merged to nova (master)

Reviewed: https://review.openstack.org/226977
Committed: https://git.openstack.org/cgit/openstack/nova/commit/?id=d8d2f02f3014e8d7e0ef9a61723f5a10f050cc81
Submitter: Jenkins
Branch: master

commit d8d2f02f3014e8d7e0ef9a61723f5a10f050cc81
Author: Matt Riedemann <email address hidden>
Date: Wed Sep 23 13:21:06 2015 -0700

    Deprecate cinder.cross_az_attach option

    Introduced in 1504cbc5d4a27695fa663f0b0f3f7b48745bdb45 in Grizzly, the
    use cases around this option are unclear but there have been several
    bugs related to it.

    For example, 6060888b58db42eea826939852838f9e1c204d2c changed boot from
    volume to pass the server instance availability zone to cinder when
    creating the volume. However, if that same AZ does not exist in Cinder,
    the volume create request will fail - which is bug 1496235.

    Cinder works around this with b85d2812a8256ff82934d150dbc4909e041d8b31
    using a config option to allow falling back to a default Cinder AZ if
    the one requested does not exist. By default this fallback is disabled
    though.

    It also sounds like availability zones in Cinder were an artifact from
    when Cinder was split out from the nova-volume service, but Cinder AZ
    use cases are also unclear. And it's also unclear what the relationship
    is between AZs in Nova and Cinder.

    So given the problems here and the lack of a clear use case to justify
    having this configuration option in tree, along with its added
    complexity, deprecate the option for removal.

    Related-Bug: #1496235
    Related-Bug: #1380780
    Related-Bug: #1489575
    Related-Bug: #1497253

    Change-Id: I52cd3d8867d3b35f5caba377302bfc52c112f1d6
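
For reference, the Cinder-side fallback mentioned above is controlled by a cinder.conf option; a sketch, assuming the option name introduced by the referenced Cinder commit (allow_availability_zone_fallback, disabled by default):

[DEFAULT]
# If the requested availability zone does not exist, fall back to default_availability_zone
allow_availability_zone_fallback = true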
