ceph deployment fails: pg_num exceeds mon_max_pg_per_osd limit
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
kolla-ansible | Fix Released | Critical | Unassigned |
Bug Description
TASK [cinder : Creating ceph pool] *******
fatal: [10.1.2.3]: FAILED! => {"_ansible_parsed": true, "stderr_lines": ["Error ERANGE: pg_num 128 size 3 would mean 768 total pgs, which exceeds max 600 (mon_max_pg_per_osd 200 * num_in_osds 3)"], "changed": false, "end": "2018-04-12 12:42:29.120658", "_ansible_no_log": false, "_ansible_
I have 3 controllers where monitors run and 3 osd nodes with 1 ceph disk per node.
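The ERANGE refusal above is simple arithmetic: the monitor rejects pool creation when the total number of PG replicas across all pools would exceed mon_max_pg_per_osd × num_in_osds. A minimal sketch of that check with the numbers from this report (an illustration, not Ceph's actual code; 768 − 128 × 3 = 384 PG replicas must already exist from earlier pools):

```python
# Illustrative reconstruction of the monitor's ERANGE check (not Ceph's code).
MON_MAX_PG_PER_OSD = 200   # default since Luminous 12.2.1
NUM_IN_OSDS = 3            # 3 OSD nodes, 1 ceph disk each

def pg_limit_exceeded(existing_pg_replicas: int, new_pg_num: int, size: int) -> bool:
    """True when creating the new pool would trip the mon limit."""
    total = existing_pg_replicas + new_pg_num * size
    return total > MON_MAX_PG_PER_OSD * NUM_IN_OSDS

# Numbers from the failure above: 384 existing + 128 * 3 = 768 total > 600 max.
print(pg_limit_exceeded(384, 128, 3))  # True
```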
Refer to https:/
This will happen once the Ceph packages installed in the images move to 12.2.1 (currently 12.2.0).
From https://ceph.com/releases/v12-2-1-luminous-released/ :
The maximum number of PGs per OSD before the monitor issues a warning has been reduced from 300 to 200 PGs. 200 is still twice the generally recommended target of 100 PGs per OSD. This limit can be adjusted via the mon_max_pg_per_osd option on the monitors. The older mon_pg_warn_max_per_osd option has been removed.
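Per the release note above, the limit is tunable on the monitors. As a stopgap on an affected deployment (not a Kolla fix), the threshold could be raised in ceph.conf on the mon nodes; a sketch of such a fragment, where the value 300 (the pre-12.2.1 default) is illustrative:

```ini
# Stopgap only: restore the pre-12.2.1 per-OSD PG ceiling on the monitors.
[global]
mon_max_pg_per_osd = 300
```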
Need to decide how best to address this in Kolla.
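One possible direction, sketched only as an illustration of the sizing math (not the fix Kolla chose): derive pg_num from the cluster instead of hardcoding 128, targeting the recommended ~100 PGs per OSD, divided across the pools being created and rounded down to a power of two. The helper name and the pool count are assumptions:

```python
def suggest_pg_num(num_osds: int, size: int, num_pools: int,
                   target_per_osd: int = 100) -> int:
    """Largest power of two keeping total PG replicas within
    target_per_osd * num_osds when spread evenly over num_pools pools."""
    budget = (target_per_osd * num_osds) // (size * num_pools)
    pg_num = 1
    while pg_num * 2 <= budget:
        pg_num *= 2
    return pg_num

# This report's cluster: 3 OSDs, size 3, two pools ->
# per-pool budget of 50 PGs, rounded down to 32.
print(suggest_pg_num(3, 3, 2))  # 32
```

With these values the total stays at 32 × 3 × 2 = 192 PG replicas, well under the 600 ceiling that the hardcoded pg_num of 128 exceeded.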