I've run a lab with the following OpenStack bundle: https://pastebin.canonical.com/p/4jm8DmSPj4/

It relies on 2 Cinder AZs, both backed by Ceph (to make sure the cinder-volume services will be up):

$ cinder service-list
+------------------+-------------------------------+------+---------+-------+----------------------------+-----------------+
| Binary           | Host                          | Zone | Status  | State | Updated_at                 | Disabled Reason |
+------------------+-------------------------------+------+---------+-------+----------------------------+-----------------+
| cinder-scheduler | cinder-az1                    | az1  | enabled | up    | 2019-09-16T12:15:22.000000 | -               |
| cinder-scheduler | juju-d23cf7-1-lxd-1           | az1  | enabled | down  | 2019-09-14T15:29:36.000000 | -               |
| cinder-volume    | cinder-az1@cinder-ceph        | az1  | enabled | up    | 2019-09-16T12:15:22.000000 | -               |
| cinder-volume    | cinder-volume-az2@cinder-ceph | az2  | enabled | up    | 2019-09-16T12:15:27.000000 | -               |
| cinder-volume    | juju-d23cf7-1-lxd-1@LVM       | az1  | enabled | down  | 2019-09-14T15:29:28.000000 | -               |
| cinder-volume    | juju-d23cf7-1-lxd-2@LVM       | az2  | enabled | down  | 2019-09-14T15:28:59.000000 | -               |
+------------------+-------------------------------+------+---------+-------+----------------------------+-----------------+

I've booted 7 cs:ubuntu VMs on this OpenStack with the following bundle: https://pastebin.canonical.com/p/VnSPZY4FSm/

Notice that I am using the root-disk=volume constraint. I can see that all volumes map to "az1":

$ for i in $(openstack volume list | tail -n +4 | head -n -1 | awk '{print $2}'); do echo $i; openstack volume show $i | grep "availability_zone"; done
f69c2274-134c-4cab-b948-276b84aa18f2
| availability_zone | az1 |
4a099ccc-e150-4b42-9622-0200e68e442a
| availability_zone | az1 |
84504bc3-75f6-4fa4-a547-03e8e7807e51
| availability_zone | az1 |
8717692b-ec30-41ac-8289-dc706df1b48e
| availability_zone | az1 |
beedf30c-1125-4c10-87c4-d1077c0b21e6
| availability_zone | az1 |
7a9f8f75-e4c0-4baf-b39f-f57b274e5ead
| availability_zone | az1 |
258563e3-16aa-4576-9cc3-8b39aa826c8d
| availability_zone | az1 |

We can see that an out-of-the-box OpenStack deployment will behave the same way whenever the root-disk=volume constraint is used: all volumes end up in the default storage AZ (az1 here), despite az2 being up.

Also, to add AZs to Cinder, one needs to use the config-flags option on the standard cinder charm, such as:

  cinder-az1:
    annotations:
      gui-x: '750'
      gui-y: '0'
    charm: cs:cinder
    num_units: 1
    options:
      block-device: None
      glance-api-version: 2
      worker-multiplier: *worker-multiplier
      openstack-origin: *openstack-origin
      config-flags: storage_availability_zone=az1,default_storage_availability_zone=az1
    to:
    - 'lxd:1'
  cinder-volume-az2:
    annotations:
      gui-x: '750'
      gui-y: '0'
    charm: cs:cinder
    num_units: 1
    options:
      enabled-services: volume
      block-device: None
      glance-api-version: 2
      worker-multiplier: *worker-multiplier
      openstack-origin: *openstack-origin
      config-flags: "storage_availability_zone=az2"
    to:
    - lxd:1
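
To sanity-check that the config-flags above actually land on the units, the rendered cinder.conf can be grepped through Juju. This is just a sketch; the application names match my bundle above, adjust to yours:

$ juju run --application cinder-az1 'grep storage_availability_zone /etc/cinder/cinder.conf'
$ juju run --application cinder-volume-az2 'grep storage_availability_zone /etc/cinder/cinder.conf'

The first should show both the storage_availability_zone and default_storage_availability_zone settings for az1; the second only storage_availability_zone for az2.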
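
To confirm that the az2 backend itself is usable (i.e. the issue is only where the boot-from-volume root disks get scheduled, not the backend), a volume can be created in az2 explicitly, for example (the volume name here is arbitrary):

$ openstack volume create --availability-zone az2 --size 1 az2-test
$ openstack volume show az2-test -f value -c availability_zone

The second command should print "az2" if the az2 cinder-volume service is healthy.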
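
As an aside, the tail/head/awk juggling in the AZ-check loop above can be avoided with the openstack CLI's own formatter flags (-f value -c COLUMN), which print one bare value per line:

$ for i in $(openstack volume list -f value -c ID); do echo "$i $(openstack volume show $i -f value -c availability_zone)"; done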