Comment 8 for bug 1492742

Brian Collins (bcollins-b) wrote:

Makes sense; I read the Ceph documentation detailing the issues with changing the max PG limit. However, since it is a key/value accepted by ceph.conf, I think it should be possible to alter it, even temporarily. At one point the max PG was set to 500, so a tweak to 350 doesn't seem overly outrageous.
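
Something like this in ceph.conf is what I had in mind (if I have the option name right, this warning comes from mon_pg_warn_max_per_osd; 350 is just the illustrative value from above):

    [mon]
    mon pg warn max per osd = 350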

Obviously tuning would be the preferred method.

At the moment, I can't deploy OpenStack using Autopilot because of the limit. I've added another physical node with 3 additional drives and redeployed Autopilot, with the same result. It wasn't until now that I realised that increasing the number of nodes also increased the number of placement groups. So while I'm closer to the max PG limit, I'm still over it.
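
If I have the maths right (assuming the 3x replication Autopilot defaults to), the warning in the status output below works out as:

    1124 PGs * 3 replicas = 3372 PG copies
    3372 PG copies / 10 OSDs = ~337 PGs per OSD  (vs the 300 threshold)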

ubuntu@juju-machine-1-lxc-2:~$ sudo ceph status
sudo: unable to resolve host juju-machine-1-lxc-2
    cluster 38fdd52e-3995-4ce8-977f-285b4b685378
     health HEALTH_WARN
            too many PGs per OSD (337 > max 300)
     monmap e2: 3 mons at {juju-machine-1-lxc-2=10.14.0.44:6789/0,juju-machine-2-lxc-0=10.14.0.46:6789/0,juju-machine-3-lxc-2=10.14.0.41:6789/0}
            election epoch 6, quorum 0,1,2 juju-machine-3-lxc-2,juju-machine-1-lxc-2,juju-machine-2-lxc-0
     osdmap e78: 10 osds: 10 up, 10 in
      pgmap v149: 1124 pgs, 14 pools, 500 MB data, 119 objects
            1887 MB used, 3470 GB / 3472 GB avail
                1124 active+clean
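
If only a temporary change is needed, something along these lines should also let the threshold be raised at runtime, without restarting the mons (again assuming the mon_pg_warn_max_per_osd option name):

    sudo ceph tell mon.* injectargs '--mon_pg_warn_max_per_osd 350'

Going by the same maths, the alternative would be more disks: with 3372 PG copies it would take at least 12 OSDs (3372 / 300 = 11.24) to drop under the default threshold, although as noted above the PG count itself seems to grow as nodes are added.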