"0" is explicitly set to the glance charm for example.
$ juju config -m openstack glance ceph-osd-replication-count -> 0
It likely comes from this logic: https://github.com/canonical/snap-openstack/blob/599e01aa263729d8f411241531bc424934b9ce05/sunbeam-python/sunbeam/commands/openstack.py#L139-L153
I have at least one OSD, although I'm still in the bootstrap phase. I will dig further into why the count is considered 0.
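From a quick read, the linked code appears to derive the replication count from the number of OSDs known at configuration time. A minimal sketch of that kind of logic (the function and parameter names here are hypothetical illustrations, not the actual sunbeam code):

```python
def replication_count(osd_count: int, default: int = 3) -> int:
    """Hypothetical sketch: cap the replica count at the number of
    OSDs reported when the model is configured. If OSD enumeration
    happens before any OSD has registered (e.g. mid-bootstrap),
    osd_count is 0 and the pool ends up with no replicas."""
    return min(osd_count, default)

# If the cluster reports 0 OSDs at that moment, the charm config
# would be set to 0, matching what `juju config` shows above.
print(replication_count(0))  # -> 0
```

If that is roughly what the linked code does, the question becomes why OSD enumeration returned 0 while `ceph status` reports 1 OSD up and in.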
$ sudo ceph status
  cluster:
    id:     78bb216d-fa9a-4938-b32b-6ac1f43448e9
    health: HEALTH_WARN
            1 pool(s) have no replicas configured

  services:
    mon: 1 daemons, quorum sunbeam-1 (age 2h)
    mgr: sunbeam-1(active, since 2h)
    osd: 1 osds: 1 up (since 2h), 1 in (since 2h)

  data:
    pools:   1 pools, 1 pgs
    objects: 2 objects, 449 KiB
    usage:   27 MiB used, 16 GiB / 16 GiB avail
    pgs:     1 active+clean
"0" is explicitly set to the glance charm for example.
$ juju config -m openstack glance ceph-osd- replication- count
-> 0
And it's likely from this logic. /github. com/canonical/ snap-openstack/ blob/599e01aa26 3729d8f41124153 1bc424934b9ce05 /sunbeam- python/ sunbeam/ commands/ openstack. py#L139- L153
https:/
I have one OSD at least since I'm in the bootstrap phase. I will dig further why it's considered as 0.
$ sudo ceph status fa9a-4938- b32b-6ac1f43448 e9
cluster:
id: 78bb216d-
health: HEALTH_WARN
1 pool(s) have no replicas configured
services:
mon: 1 daemons, quorum sunbeam-1 (age 2h)
mgr: sunbeam-1(active, since 2h)
osd: 1 osds: 1 up (since 2h), 1 in (since 2h)
data:
pools: 1 pools, 1 pgs
objects: 2 objects, 449 KiB
usage: 27 MiB used, 16 GiB / 16 GiB avail
pgs: 1 active+clean