Ceph health status isn't OK after scaling the environment
Bug #1705710 reported by Ilya Bumarskov
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
Fuel for OpenStack | Invalid | High | MOS Maintenance | 8.0-updates
Bug Description
Steps to reproduce:
1. Create cluster
2. Add 3 controller nodes, 1 compute node, and 3 Ceph nodes
3. Deploy the cluster
4. Add 1 Ceph node
5. Deploy changes
6. Verify network
7. Run OSTF
8. Add 1 Ceph node and delete one already-deployed Ceph node
9. Deploy changes
10. Check Ceph health (see the polling sketch after this list)
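
Peering PGs are often transient right after an OSD is added or removed, so step 10 is more reliable as a poll than as a single sample. Below is a minimal sketch of such a check, assuming the `ceph` CLI is reachable from a controller node (as in the `root@node-1` session below); the timeout and poll interval are illustrative assumptions, not values from this environment.

```python
# A minimal sketch for step 10: poll `ceph health` until the cluster
# settles, instead of sampling once. Timeout/interval are assumptions.
import subprocess
import time

def wait_for_ceph_health(timeout=900, interval=15):
    """Poll `ceph health` until it reports HEALTH_OK or the timeout expires.

    Polling distinguishes a slow-but-progressing rebalance from PGs
    that are genuinely stuck peering/inactive/unclean.
    """
    status = "unknown"
    deadline = time.time() + timeout
    while time.time() < deadline:
        status = subprocess.check_output(
            ["ceph", "health"], universal_newlines=True).strip()
        if status.startswith("HEALTH_OK"):
            return status
        time.sleep(interval)
    raise RuntimeError("Ceph did not reach HEALTH_OK: " + status)

if __name__ == "__main__":
    print(wait_for_ceph_health())
```

If the status never leaves HEALTH_WARN within the timeout, the warning text itself (as in the output below) usually points at the cause.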
Observed behaviour:
root@node-1:~# ceph health
HEALTH_WARN 431 pgs peering; 431 pgs stuck inactive; 431 pgs stuck unclean; 177 requests are blocked > 32 sec; too many PGs per OSD (640 > max 300)
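
The last clause of the warning is a static sizing check rather than transient peering: Ceph counts every placement-group replica against the OSDs that hold it and warns when the ratio exceeds `mon_pg_warn_max_per_osd` (the "max 300" in the output). A minimal sketch of that arithmetic follows; the pool layout and OSD count are illustrative assumptions, not values read from this cluster.

```python
# A sketch of the ratio behind "too many PGs per OSD (640 > max 300)".
# The pool layout and OSD count are hypothetical examples.

def pgs_per_osd(pools, num_osds):
    """pools is a list of (pg_num, replica_size); each PG counts once
    per replica, so the ratio is sum(pg_num * size) / number of OSDs."""
    total_pg_replicas = sum(pg_num * size for pg_num, size in pools)
    return float(total_pg_replicas) / num_osds

# Hypothetical layout: three pools spread over only 4 OSDs already
# exceeds the warning threshold.
pools = [(256, 3), (256, 3), (256, 2)]
print(pgs_per_osd(pools, 4))  # 512.0 -> well above the 300 limit
```

Deleting a Ceph node (step 8) shrinks the denominator while each pool's pg_num stays fixed (pg_num could not be decreased in Ceph releases of this era), which is why this ratio can cross the threshold only after scaling down.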
Changed in fuel:
importance: Undecided → High
assignee: nobody → MOS Maintenance (mos-maintenance)
milestone: none → 8.0-updates
It seems this issue appears after reverting the environment.