creating a volume from a snapshot doesn't respect which az the volume & snapshot are in

Bug #1202648 reported by Michael Kerrin on 2013-07-18
Affects: Cinder
Importance: Undecided
Assigned to: Michael Kerrin

Bug Description

Cinder currently allows users to change the availability zone in the cinder database when creating a volume from a snapshot. And if a user doesn't specify an AZ, cinder automatically picks the default AZ for the volume, even though the snapshot and the volume this snapshot was created from might be in a different AZ.

To reproduce you can run the following:

cinder create --availability-zone non-default-az --display-name vol1
cinder snapshot-create <ID for vol1>
cinder create --snapshot-id <ID for snapshot>

The second volume created here will be in the default AZ, even though the original volume and its snapshot are in the non-default-az.

This may work for some deployments, but it doesn't work for all of them. If your deployment doesn't support attaching volumes across AZs, allowing this can cause a whole host of problems, with volumes stuck in the 'creating' state. This becomes especially irksome when you are dealing with bootable volumes, as instances then end up stuck at weird points.

Changed in cinder:
assignee: nobody → Michael Kerrin (michael-kerrin-w)
Mike Perez (thingee) wrote :

There was a patch that came in for this, but there were disagreements:

https://review.openstack.org/#/c/25888/

John Griffith (john-griffith) wrote :

I guess I don't understand how a snapshot can be created in a different AZ than its parent volume?

Creating a volume from a snapshot is possible because the operation is an iSCSI attach and a dd of the contents from the snap to the new volume. Snapshots, on the other hand, are back-end device internals, so I'm not quite following.

Avishay Traeger (avishay-il) wrote :

In cinder/volume/api.py, we have this code in create():
        if availability_zone is None:
            availability_zone = CONF.storage_availability_zone
        else:
            self._check_availabilty_zone(availability_zone)

I think Michael's suggestion is that if availability_zone is None and we are creating from snapshot, we should use the snapshot's availability zone, not what's in CONF. I would extend this to clone operations as well.
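The fallback Avishay describes can be sketched roughly as follows. This is an illustrative standalone sketch, not Cinder's actual code: the function name, the dict-shaped snapshot/volume objects, and the DEFAULT_AZ constant (standing in for CONF.storage_availability_zone) are all assumptions for the example.

```python
# Illustrative sketch: prefer the source's AZ over the configured default
# when the caller did not request an AZ explicitly. Not Cinder's real API.

DEFAULT_AZ = "nova"  # stand-in for CONF.storage_availability_zone


def resolve_availability_zone(requested_az, snapshot=None, source_volume=None):
    """Pick the AZ for a new volume, inheriting from its source if any."""
    if requested_az is not None:
        return requested_az          # an explicit request wins (validated upstream)
    if snapshot is not None:
        return snapshot["availability_zone"]       # creating from snapshot
    if source_volume is not None:
        return source_volume["availability_zone"]  # cloning an existing volume
    return DEFAULT_AZ                # no source: fall back to the configured default
```

This covers both the snapshot case and the clone case Avishay mentions, since both have a source whose AZ can be inherited.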

John: I have no issue with the snapshots. The issue is with creating a volume from a snapshot under certain circumstances.

Creating a volume from a snapshot is also a back-end device internal. As you point out, the default driver supports creating a volume from a snapshot across AZs, but it is not necessarily true that all drivers support this, now or in the future. Our custom driver currently doesn't, and it is not necessarily true that all upstream drivers will have to (I'm not sure if they all do now). If your driver doesn't support this feature, then depending on your setup you can get yourself into a whole host of problems, with volumes and instances in weird states and no simple, obvious solution.

Avishay: that is what I am planning on doing. Thanks for pointing out the clone operation as well.

Fix proposed to branch: master
Review: https://review.openstack.org/38272

Changed in cinder:
status: New → In Progress

Reviewed: https://review.openstack.org/38272
Committed: http://github.com/openstack/cinder/commit/bec0c8c4a06ffc33ce7914a5f017b981d086435c
Submitter: Jenkins
Branch: master

commit bec0c8c4a06ffc33ce7914a5f017b981d086435c
Author: Michael Kerrin <email address hidden>
Date: Tue Jul 23 09:49:57 2013 +0000

    Create volume from snapshot must be in the same AZ as snapshot

    This issue and patch also apply to cloning volumes.

    When creating a volume from a snapshot we need to pick the
    availability zone of the snapshot's source volume. This patch
    goes further and enforces that the new volume must be in the same
    AZ as the snapshot. It raises a user error if the user tries
    to create a volume in a different AZ from the snapshot.

    This is enforced across all drivers because creating a volume from
    a snapshot is implemented in the drivers, and not all drivers are
    guaranteed to support creating a volume from a snapshot in a foreign
    AZ. More to the point, if a driver doesn't support creating a volume
    like this and we allow it anyway, you can create volumes and
    instances that get stuck in weird states that require a support
    call to fix.

    If you do support cross-AZ functionality, then you can override
    the enforcement that cloned volumes must be in the same AZ
    as their source via the 'cloned_volume_same_az' option.

    Change-Id: Iafc8f35ecc6a6b51dbe6df8bf44eaa3e79c3bd01
    Fixes: bug #1202648
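
The enforcement the commit describes can be sketched like this. This is a simplified standalone illustration, not the merged patch: the InvalidInput class and check_same_az function are stand-ins, and only the 'cloned_volume_same_az' option name comes from the commit message.

```python
# Illustrative sketch of the commit's enforcement: reject a cross-AZ
# create-from-snapshot/clone up front with a user-facing error, rather
# than letting the volume get stuck in 'creating'. Names are stand-ins
# for Cinder's real exception and internal check.

class InvalidInput(Exception):
    """Stand-in for Cinder's user-input exception."""


def check_same_az(requested_az, source_az, cloned_volume_same_az=True):
    """Raise if the new volume's AZ differs from its source's.

    Deployments that do support cross-AZ clones can disable the check
    via the 'cloned_volume_same_az' option, as the commit describes.
    """
    if cloned_volume_same_az and requested_az != source_az:
        raise InvalidInput(
            "Volume must be in the same availability zone as its source "
            "(requested %r, source is in %r)" % (requested_az, source_az)
        )
```

With the option left at its default, the mismatch is reported immediately as a user error; setting it to False restores the old permissive behavior for back ends that can copy across AZs.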

Changed in cinder:
status: In Progress → Fix Committed
Thierry Carrez (ttx) on 2013-09-05
Changed in cinder:
milestone: none → havana-3
status: Fix Committed → Fix Released
Thierry Carrez (ttx) on 2013-10-17
Changed in cinder:
milestone: havana-3 → 2013.2