I have a clean deployment of OpenStack using Autopilot. I've deployed using Ceph for both object and block storage, and this bug rears its head. I have a 3-storage-node stack with 2 usable Ceph drives per node. How might I temporarily work around this warning by raising the maximum PG count at which it is flagged? This is mentioned in post #3; however, I'm unable to set it using Juju, and placing it directly in ceph.conf doesn't work because the setting won't persist.
'mon pg warn max per osd = 500'
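A sketch of two ways this threshold is commonly raised (assumptions: the option name matches this Ceph release, and the Juju ceph charm exposes a `config-flags` option — verify both against your deployed versions):

```shell
# Runtime-only: inject the raised threshold into the running monitors.
# This takes effect immediately but is lost when the mons restart.
sudo ceph tell mon.* injectargs '--mon-pg-warn-max-per-osd 500'

# Charm-managed: set it through Juju so the charm writes it into ceph.conf
# itself, instead of hand-editing a file the charm will overwrite.
# (Assumes the ceph charm has a config-flags option; on newer Juju the
# command is `juju config` rather than `juju set`.)
juju set ceph config-flags='{"mon": {"mon pg warn max per osd": 500}}'
```

Hand-edits to ceph.conf don't persist precisely because the charm regenerates that file, which is why routing the value through the charm's own config is the usual fix.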
ubuntu@juju-machine-0-lxc-1:~$ sudo ceph status
sudo: unable to resolve host juju-machine-0-lxc-1
cluster 6639db42-5c26-45b8-b658-ed4193b01c88
health HEALTH_WARN
too many PGs per OSD (462 > max 300)
monmap e2: 3 mons at {juju-machine-0-lxc-1=10.14.0.77:6789/0,juju-machine-1-lxc-1=10.14.0.69:6789/0,juju-machine-2-lxc-3=10.14.0.54:6789/0} election epoch 8, quorum 0,1,2 juju-machine-2-lxc-3,juju-machine-1-lxc-1,juju-machine-0-lxc-1
osdmap e58: 6 osds: 6 up, 6 in
pgmap v108: 924 pgs, 14 pools, 500 MB data, 118 objects
3012 MB used, 2501 GB / 2504 GB avail
924 active+clean
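The 462 figure in the warning follows directly from the pgmap above: every PG is stored on multiple OSDs, so the per-OSD count is total PGs times the replication factor divided by the OSD count. A quick check (assuming the default pool size of 3, which the status output doesn't show):

```python
# Reproduce the "462 PGs per OSD" figure from the `ceph status` output.
total_pgs = 924    # pgmap: 924 pgs across 14 pools
replication = 3    # assumed default pool size (replica count)
osds = 6           # 3 storage nodes x 2 usable drives

pgs_per_osd = total_pgs * replication // osds
print(pgs_per_osd)  # 462, matching "too many PGs per OSD (462 > max 300)"
```

So with 14 pools sharing only 6 OSDs, even modest per-pool `pg_num` values add up past the default warning threshold of 300.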