Ceph: too many PGs per OSD (320 > max 300)
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
kolla | Won't Fix | Wishlist | Michał Jastrzębski |
Bug Description
Hi,
When I run ceph status on a control node:
[stack@control1 ~]$ sudo docker exec -it ceph_mon ceph status
    cluster 21e54d28-
     health HEALTH_WARN
            too many PGs per OSD (320 > max 300)
     monmap e1: 3 mons at {192.168. 68.122. 192.168.
     osdmap e107: 3 osds: 3 up, 3 in
            flags sortbitwise
      pgmap v975: 320 pgs, 3 pools, 236 MB data, 36 objects
            834 MB used, 45212 MB / 46046 MB avail
The Ceph Storage Cluster has a default maximum of 300 placement groups per OSD; the limit comes from the mon_pg_warn_max_per_osd option, which defaults to 300 on this release.
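If the existing PG counts are intentional, one workaround (it does not fix the sizing itself) is to raise that warning threshold on the monitors. A minimal sketch, assuming the Jewel-era option name mon_pg_warn_max_per_osd and an illustrative value of 400:

    # Raise the per-OSD PG warning threshold at runtime (400 is illustrative).
    # To make this persistent, set the same option in ceph.conf on the monitors.
    sudo docker exec -it ceph_mon ceph tell mon.* injectargs '--mon_pg_warn_max_per_osd 400'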
[stack@control1 ~]$ sudo docker exec -it ceph_mon ceph osd pool get images pg_num
pg_num: 128
[stack@control1 ~]$ sudo docker exec -it ceph_mon ceph osd pool get vms pg_num
pg_num: 64
[stack@control1 ~]$ sudo docker exec -it ceph_mon ceph osd pool get rbd pg_num
pg_num: 128
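Those numbers are consistent with the warning: the per-OSD count multiplies each pool's pg_num by its replica size and divides by the number of OSDs. Assuming the usual replica size of 3 on all three pools (an assumption; verify per pool as below):

    # PGs per OSD = sum(pg_num * replica_size) / num_osds
    # Replica size 3 is assumed here; check each pool, e.g.:
    sudo docker exec -it ceph_mon ceph osd pool get rbd size
    # (128 + 64 + 128) * 3 / 3 = 960 / 3 = 320 PGs per OSD, above the 300 threshold.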
Possible solutions:
britthouser2: calculate the total number of PGs for the cluster, give each pool a percentage of that total as its default, and let an operator override the percentages if they so choose (see the sketch below).
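A minimal sketch of that percentage-based sizing, assuming a rule-of-thumb target of 100 PGs per OSD and illustrative per-pool percentages; none of the names or numbers below come from kolla itself:

    # Hypothetical sketch: derive per-pool pg_num from a cluster-wide total.
    OSDS=3
    REPLICAS=3
    TARGET_PGS_PER_OSD=100                              # common rule of thumb
    TOTAL=$(( OSDS * TARGET_PGS_PER_OSD / REPLICAS ))   # total PGs to spread across pools

    # Default percentages an operator could override (illustrative values):
    for entry in images:40 vms:20 rbd:40; do
        pool=${entry%%:*}
        pct=${entry##*:}
        raw=$(( TOTAL * pct / 100 ))
        # pg_num is conventionally a power of two; round down to stay under target.
        pg=1
        while [ $(( pg * 2 )) -le "$raw" ]; do pg=$(( pg * 2 )); done
        echo "$pool pg_num=$pg"
    done

With these inputs TOTAL is 100, so the loop prints images pg_num=32, vms pg_num=16, rbd pg_num=32, keeping the cluster comfortably under the per-OSD warning threshold.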
Regards,
Changed in kolla:
importance: Undecided → Wishlist

Changed in kolla:
status: New → Confirmed
assignee: nobody → Michał Jastrzębski (inc007)

Changed in kolla:
status: In Progress → Won't Fix
Fix proposed to branch: master
Review: https://review.openstack.org/383837