OpenStack provisioner should have "volume-zones" constraints as "zones" for Cinder volumes, including with root-disk=volume constraint
Bug #1844099 reported by Pedro Guimarães
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
Canonical Juju | Triaged | Low | Unassigned |
Bug Description
Generally, OpenStack is deployed with Ceph. That means Cinder has little to worry about regarding volume HA, since Ceph runs cross-zone replication.
However, if the Cinder storage backend does not support that type of replication, then we need to be mindful of where virtual disks are placed. The simplest example is probably the LVM storage backend.
Therefore, besides the "zones" constraint, which applies to VMs, we should either apply the same "zones" constraint when allocating Cinder volumes or define a "volume-zones" constraint specific to Cinder, since Cinder's and Nova's AZ configurations may result in different names and layouts.
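As a sketch of what the request could look like on the CLI (the "volume-zones" constraint below is hypothetical, it does not exist in Juju today; "zones" and the root-disk=volume usage are as described in this report):

```shell
# Today: "zones" only constrains where the Nova VM lands;
# the Cinder volume backing the root disk is scheduled independently.
juju deploy ubuntu --constraints "zones=az1 root-disk=volume"

# Proposed (hypothetical): also pin the Cinder volume's availability zone,
# which may have a different name/layout than the Nova AZ.
juju deploy ubuntu --constraints "zones=az1 volume-zones=cinder-az1 root-disk=volume"
```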
summary:
- OpenStack provisioner should have "volume-zones" constraints as "zones" for Cinder volumes
+ OpenStack provisioner should have "volume-zones" constraints as "zones" for Cinder volumes, including with root-disk=volume constraint
Changed in juju:
status: Expired → New
Changed in juju:
status: New → Triaged
I've run a lab with the following OpenStack bundle: https://pastebin.canonical.com/p/4jm8DmSPj4/
Counting with 2 Cinder AZs, both connected to Ceph (to make sure the services stay up):

$ cinder service-list
+------------------+-------------------------+------+---------+-------+----------------------------+-----------------+
| Binary           | Host                    | Zone | Status  | State | Updated_at                 | Disabled Reason |
+------------------+-------------------------+------+---------+-------+----------------------------+-----------------+
| cinder-scheduler | cinder-az1              | az1  | enabled | up    | 2019-09-16T12:15:22.000000 | -               |
| cinder-scheduler | juju-d23cf7-1-lxd-1     | az1  | enabled | down  | 2019-09-14T15:29:36.000000 | -               |
| cinder-volume    | cinder-az1@cinder-ceph  | az1  | enabled | up    | 2019-09-16T12:15:22.000000 | -               |
| cinder-volume    | cinder-az2@cinder-ceph  | az2  | enabled | up    | 2019-09-16T12:15:27.000000 | -               |
| cinder-volume    | juju-d23cf7-1-lxd-1@LVM | az1  | enabled | down  | 2019-09-14T15:29:28.000000 | -               |
| cinder-volume    | juju-d23cf7-1-lxd-2@LVM | az2  | enabled | down  | 2019-09-14T15:28:59.000000 | -               |
+------------------+-------------------------+------+---------+-------+----------------------------+-----------------+
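For reference, a per-backend AZ split like the one above comes from Cinder's configuration. A minimal sketch of what such a cinder.conf could look like (the section and backend names are assumptions matching this lab, not taken from it; `storage_availability_zone` and the per-backend `backend_availability_zone` are standard cinder.conf options):

```ini
# /etc/cinder/cinder.conf (sketch; names are illustrative)
[DEFAULT]
# Fallback AZ for backends that do not declare their own
storage_availability_zone = az1
enabled_backends = cinder-ceph

[cinder-ceph]
volume_backend_name = cinder-ceph
# Per-backend AZ override (available since the Pike release)
backend_availability_zone = az2
```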
I've booted 7 cs:ubuntu VMs on this OpenStack with the following bundle: https://pastebin.canonical.com/p/VnSPZY4FSm/
Notice that I am using the root-disk=volume constraint.
I can see that all volumes map to "az1":
$ for i in $(openstack volume list | tail -n +4 | head -n -1 | awk '{print $2}'); do echo $i; openstack volume show $i | grep "availability_zone"; done
f69c2274-134c-4cab-b948-276b84aa18f2
| availability_zone | az1 |
4a099ccc-e150-4b42-9622-0200e68e442a
| availability_zone | az1 |
84504bc3-75f6-4fa4-a547-03e8e7807e51
| availability_zone | az1 |
...
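By contrast, when a volume is created directly through the OpenStack CLI, the AZ can be requested explicitly, which is the behaviour this bug asks Juju's provisioner to expose (standard `openstack volume` commands; the volume name is arbitrary):

```shell
# Create a volume pinned to az2, then verify where the scheduler placed it
openstack volume create --size 1 --availability-zone az2 az2-test
openstack volume show az2-test -c availability_zone
```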