Ceph: too many PGs per OSD (320 > max 300)

Bug #1629338 reported by serlex
Affects: kolla
Status: Won't Fix
Importance: Wishlist
Assigned to: Michał Jastrzębski
Milestone: (none)

Bug Description

Hi,

When I run ceph status on a control node:

[stack@control1 ~]$ sudo docker exec -it ceph_mon ceph status
    cluster 21e54d28-9c5b-4a82-b755-d1ff18f0da4e
     health HEALTH_WARN
            too many PGs per OSD (320 > max 300)
     monmap e1: 3 mons at {192.168.122.19=192.168.122.19:6789/0,192.168.122.3=192.168.122.3:6789/0,192.168.122.4=192.168.122.4:6789/0}
            election epoch 40, quorum 0,1,2 192.168.122.3,192.168.122.4,192.168.122.19
     osdmap e107: 3 osds: 3 up, 3 in
            flags sortbitwise
      pgmap v975: 320 pgs, 3 pools, 236 MB data, 36 objects
            834 MB used, 45212 MB / 46046 MB avail
                 320 active+clean

The Ceph Storage Cluster has a default maximum value of 300 placement groups
per OSD.
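
This limit is the monitor option mon_pg_warn_max_per_osd (default 300 in this Ceph release). One way to confirm the running value is via the mon admin socket; the mon id "control1" below is an assumption based on the hostname, as Kolla names mons after the node:

[stack@control1 ~]$ sudo docker exec -it ceph_mon ceph daemon mon.control1 config get mon_pg_warn_max_per_osd
{
    "mon_pg_warn_max_per_osd": "300"
}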

[stack@control1 ~]$ sudo docker exec -it ceph_mon ceph osd pool get images pg_num
pg_num: 128
[stack@control1 ~]$ sudo docker exec -it ceph_mon ceph osd pool get vms pg_num
pg_num: 64
[stack@control1 ~]$ sudo docker exec -it ceph_mon ceph osd pool get rbd pg_num
pg_num: 128
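
The warning is based on the average number of PG replicas per OSD: the sum over all pools of pg_num times the pool's replica size, divided by the OSD count. Assuming all three pools use the default replica size of 3, the numbers above reproduce the warning exactly:

[stack@control1 ~]$ echo $(( (128 + 64 + 128) * 3 / 3 ))
320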

Possible solutions:
britthouser2: calculate the total PG budget, use per-pool percentages of it as defaults, and let an operator override the percentages if they so choose (see the sketch below).
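
As a rough illustration of that idea, here is a minimal sketch; the pool names match this deployment, but the percentages and the 100-PGs-per-OSD target are made-up defaults, not values from the proposed fix:

#!/bin/bash
# Sketch: split a cluster-wide pg_num budget across pools by percentage.
OSDS=3              # OSDs in the cluster
SIZE=3              # replica count per pool
TARGET_PER_OSD=100  # desired PG replicas per OSD, well under the 300 warning

# Total pg_num budget shared by all pools.
TOTAL=$(( OSDS * TARGET_PER_OSD / SIZE ))

# Default share of the budget per pool; an operator could override these.
declare -A PCT=( [images]=40 [vms]=20 [rbd]=40 )

for pool in "${!PCT[@]}"; do
    echo "pool=$pool pg_num=$(( TOTAL * PCT[$pool] / 100 ))"
done

A real implementation would also round each pg_num to a power of two, which Ceph recommends.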

Regards,

Changed in kolla:
importance: Undecided → Wishlist
Changed in kolla:
status: New → Confirmed
assignee: nobody → Michał Jastrzębski (inc007)
OpenStack Infra (hudson-openstack) wrote: Fix proposed to kolla (master)

Fix proposed to branch: master
Review: https://review.openstack.org/383837

Changed in kolla:
status: Confirmed → In Progress
Ryan Wallner (wallnerryan) wrote:

Anyone actively working on this? The last update was in October; happy to help out. This is something we change in our environments after a Kolla deployment, and it would be nice to have these values configurable/overridable.
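
(For context, a common post-deployment workaround, though not necessarily what is done in Ryan's environments, is to raise the warning threshold on the monitors; the 400 below is an arbitrary value, and injectargs does not persist across restarts unless the option is also set in ceph.conf:)

[stack@control1 ~]$ sudo docker exec -it ceph_mon ceph tell mon.* injectargs '--mon_pg_warn_max_per_osd 400'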

serlex (serlex) wrote:

Hi Ryan

I've only recently deployed Kolla Ocata with Ceph and am still seeing the "too many PGs per OSD" warning. Can you share how you resolved it in your environment? Were there any drawbacks regarding performance?

Regards,

OpenStack Infra (hudson-openstack) wrote: Change abandoned on kolla (master)

Change abandoned by Mark Goddard (<email address hidden>) on branch: master
Review: https://review.opendev.org/383837
Reason: Very old

Mark Goddard (mgoddard)
Changed in kolla:
status: In Progress → Won't Fix