Comment 1 for bug 1492742

Nobuto Murata (nobuto) wrote: Re: too many PGs per OSD

This issue can be reproduced easily when the nova, glance, and cinder-ceph pools are created in one Ceph cluster: create_pool() sizes each pool as if it were the only pool in the cluster, so every additional pool adds roughly another 100 PG replicas per OSD, and the total quickly exceeds the upstream recommendation that the formula in the code is based on.

[hooks/charmhelpers/contrib/storage/linux/ceph.py]
# Relevant imports from the top of the module; pool_exists() and
# get_osds() are defined earlier in this same file.
from subprocess import check_call

from charmhelpers.core.hookenv import log, WARNING

def create_pool(service, name, replicas=3):
    """Create a new RADOS pool."""
    if pool_exists(service, name):
        log("Ceph pool {} already exists, skipping creation".format(name),
            level=WARNING)
        return

    # Calculate the number of placement groups based
    # on upstream recommended best practices.
    # NOTE: this sizing assumes the pool is the only one in the
    # cluster; each pool created this way contributes ~100 PG
    # replicas per OSD, which is what triggers the warning once
    # several pools exist.
    osds = get_osds(service)
    if osds:
        pgnum = (len(osds) * 100 // replicas)
    else:
        # NOTE(james-page): Default to 200 for older ceph versions
        # which don't support OSD query from cli
        pgnum = 200

    cmd = ['ceph', '--id', service, 'osd', 'pool', 'create', name, str(pgnum)]
    check_call(cmd)
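
For concreteness, here is a minimal sketch of the arithmetic; the 3-OSD / 3-replica figures are illustrative assumptions, not taken from a real deployment. Each pool sized by the formula above lands at ~100 PG replicas per OSD, so the three pools named above push the cluster to ~300, around the default warning threshold (mon_pg_warn_max_per_osd is 300 on recent Ceph releases, if I recall correctly):

# Illustrative numbers only: a small 3-OSD cluster with the
# default replica count, and the three pools mentioned above.
osd_count = 3
replicas = 3
pools = ['nova', 'glance', 'cinder-ceph']

# Same formula as create_pool() above, applied once per pool.
pgs_per_pool = (osd_count * 100 // replicas)   # 100 PGs per pool

# Ceph counts PG *replicas* per OSD for the health warning.
total_pg_replicas = pgs_per_pool * replicas * len(pools)
pgs_per_osd = total_pg_replicas // osd_count

print(pgs_per_osd)  # 300 -- ~100 per OSD per pool, tripled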