Ceph health is too many PGs per OSD (320 > max 300) after trying to delete ceph osds
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
Fuel for OpenStack | Invalid | High | MOS Ceph |
8.0.x | Invalid | High | Egor Kotko |
Mitaka | Invalid | High | MOS Ceph |
Bug Description
Steps to reproduce:
1. Create and deploy the following cluster - Neutron VLAN, Ceph for volumes/
2. After deployment, add one Ceph node and re-deploy (not necessary to reproduce)
3. After the re-deploy, start preparing a Ceph node for deletion (using this guide - https:/
4. Execute the following commands (on node-2 in my case; the full removal sequence they belong to is sketched after this list):
- ceph osd out 1
- ceph osd out 3
5. Wait for 'ceph -s' to show an OK status
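For context, here is a minimal sketch of the standard OSD removal sequence that the commands in step 4 belong to. This is an assumption about what the (truncated) guide prescribes, not a quote from it; the OSD ids 1 and 3 come from step 4, and the service-stop line assumes Upstart as on Ubuntu 14.04:

```
# Sketch of a standard Ceph OSD removal; only the first command per OSD
# is actually executed in this bug report.
for id in 1 3; do
    ceph osd out ${id}               # stop mapping new data to the OSD
    # wait for rebalancing to finish ('ceph -s' back to HEALTH_OK), then,
    # on the node hosting the OSD (Upstart syntax, an assumption):
    #   stop ceph-osd id=${id}
    ceph osd crush remove osd.${id}  # remove it from the CRUSH map
    ceph auth del osd.${id}          # delete its cephx key
    ceph osd rm ${id}                # remove it from the osdmap
done
```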
Actual result: after an hour of waiting (a test cluster with no data on the Ceph nodes), the status is:

```
ceph -s
    cluster c3c93807-
     health HEALTH_WARN
            too many PGs per OSD (320 > max 300)
     monmap e3: 3 mons at {node-3=
     osdmap e65: 8 osds: 8 up, 6 in
      pgmap v194: 640 pgs, 10 pools, 12977 kB data, 51 objects
            12566 MB used, 284 GB / 296 GB avail
```
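The warning follows directly from the numbers above: with 10 pools totalling 640 PGs and a replica size of 3, the cluster carries 640 × 3 = 1920 PG replicas; after the two OSDs were marked out, only 6 OSDs are "in", so the average is 1920 / 6 = 320 PGs per OSD, above the monitor's default mon_pg_warn_max_per_osd of 300. The replica size of 3 is an assumption that should be confirmed per pool, e.g.:

```
# Verify the arithmetic behind "320 > max 300"; assumes replicated
# size 3 on every pool, which the first command lets you confirm.
ceph osd dump | grep pool                          # pg_num and size per pool
# expected: sum(pg_num) = 640, size = 3
#           640 PGs * 3 replicas / 6 "in" OSDs = 320 > 300
ceph --show-config | grep mon_pg_warn_max_per_osd  # default threshold (300)
```

Note that pg_num cannot be decreased on an existing pool, so with the cluster shrunk from 8 to 6 "in" OSDs the warning is expected to persist; it would clear only by bringing OSDs back in, deleting pools, or raising the threshold (e.g. via injectargs), which may explain why 'ceph -s' never returns to OK here.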
Fuel ISO: 478
Logs are attached.
Changed in fuel:
status: Incomplete → Confirmed
description: updated
description: updated
Changed in fuel:
assignee: Fuel QA Team (fuel-qa) → MOS Ceph (mos-ceph)
summary: - add_delete_ceph test timed out waiting ceph health to be ok
+ Ceph health is too many PGs per OSD (320 > max 300) after trying to delete ceph osds
tags: removed: area-qa
tags: added: area-ceph
Please provide the Fuel ISO version.