creating a volume from a snapshot doesn't respect which az the volume & snapshot are in
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
Cinder | Fix Released | Undecided | Michael Kerrin | 2013.2
Bug Description
Cinder currently allows users to change the availability zone in the Cinder database when creating a volume from a snapshot. If the user doesn't specify an AZ, Cinder automatically picks the default AZ for the new volume, even though the snapshot and the volume it was created from might be in a different AZ.
To reproduce, you can run the following:
cinder create --availability-zone non-default-az --display-name vol1 1
cinder snapshot-create <ID for vol1>
cinder create --snapshot-id <ID for snapshot>
The second volume created here will be in the default AZ even though the original volume & snapshot are in non-default-az.
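For illustration, here is a minimal, self-contained Python sketch of the selection behaviour described above (not Cinder's actual code; DEFAULT_AZ and the dict-based snapshot record are stand-ins for the configured storage_availability_zone and the real DB objects):

DEFAULT_AZ = "nova"  # stand-in for the configured storage_availability_zone

def pick_availability_zone(requested_az, snapshot=None):
    # Current behaviour: an explicit AZ or the configured default always
    # wins; the snapshot's AZ is never consulted.
    if requested_az is not None:
        return requested_az
    return DEFAULT_AZ

snapshot = {"id": "snap-1", "availability_zone": "non-default-az"}
print(pick_availability_zone(None, snapshot))  # prints 'nova', not 'non-default-az'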
This may work for some deployments, but not for all. If your deployment doesn't support attaching volumes across different AZs, then allowing this can cause a whole host of problems, with volumes getting stuck in the creating state. This becomes especially irksome when you are dealing with bootable volumes, as instances then end up stuck at weird points.
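One way a deployment-aware fix could look (again only a sketch with assumed names, not the patch that was actually proposed or merged): fall back to the snapshot's AZ before the configured default, and reject an explicit AZ that contradicts it when cross-AZ attach isn't supported.

class InvalidAvailabilityZone(Exception):
    pass

def pick_availability_zone_fixed(requested_az, snapshot=None,
                                 default_az="nova", allow_cross_az=False):
    # Prefer the snapshot's AZ when the caller didn't ask for one, and
    # refuse a conflicting explicit AZ unless cross-AZ attach is allowed.
    snapshot_az = snapshot.get("availability_zone") if snapshot else None
    if requested_az is None:
        return snapshot_az or default_az
    if snapshot_az and requested_az != snapshot_az and not allow_cross_az:
        raise InvalidAvailabilityZone(
            "requested AZ %r does not match snapshot AZ %r"
            % (requested_az, snapshot_az))
    return requested_az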
Changed in cinder:
assignee: nobody → Michael Kerrin (michael-kerrin-w)

Changed in cinder:
milestone: none → havana-3
status: Fix Committed → Fix Released

Changed in cinder:
milestone: havana-3 → 2013.2
There was a patch that came in for this, but there were disagreements:
https://review.openstack.org/#/c/25888/