Comment 2 for bug 1492742

Edward Hope-Morley (hopem) wrote : Re: too many PGs per OSD

Nobuto, this is more a problem of not having enough OSDs to support the number of pools your environment needs. When the charms deploy, they are not aware of how many pools will be created in total; each charm only knows about the pools it creates itself, so it cannot adjust its pg_num accordingly. If we were to cap the pg_num set in charms, we would be setting a suboptimal value according to the Ceph documentation, which would work adversely for environments with fewer pools. Currently we use the following formula to calculate pg_num:

    pg_num = num_osds * 100 / replicas
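
As an illustration, here is a minimal Python sketch of that per-pool calculation; the function name and the example values are hypothetical, not the charm's actual code:

    # Hypothetical sketch of the pg_num rule described above;
    # the function name and example values are illustrative only.
    def calculate_pg_num(num_osds, replicas):
        """Per-pool PG count from the num_osds * 100 / replicas rule."""
        return (num_osds * 100) // replicas

    # e.g. 3 OSDs with 3 replicas gives 100 PGs for *every* pool created,
    # so many pools quickly push the per-OSD PG total past the mon warning.
    print(calculate_pg_num(3, 3))  # 100

Because each charm applies this formula independently per pool, the per-OSD PG total grows with the number of Ceph-backed services, which is what triggers the "too many PGs per OSD" warning on small clusters.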

So I think your only options are either to add more OSDs to your Ceph cluster or to decrease the number of services that depend on Ceph.