Lost data in Ceph during failure adding new Ceph nodes.
Bug #1445296 reported by Denis Ipatov
This bug affects 1 person
| Affects | Status | Importance | Assigned to | Milestone |
|---|---|---|---|---|
| Fuel for OpenStack | Fix Committed | Critical | Stanislav Makar | |
| 6.0.x | In Progress | Critical | Stanislav Makar | |
Bug Description
How to reproduce it:
1. Create a new cluster. All data must be located in Ceph.
2. Add some data to the working cloud, for example several images.
3. Add new Ceph OSD nodes. Ceph starts rebalancing after the OS and Ceph are installed but before the deployment finishes.
4. If any deployment error occurs, the cluster is marked as 'Error'.
5. Delete the Ceph nodes that were marked as "Error".
6. The amount of lost data depends on how much data was rebalanced.
How to avoid it:
1. Execute the command `ceph osd set noout` to stop data rebalancing before adding Ceph OSD nodes, and `ceph osd unset noout` after the deployment has finished successfully.
This affects all versions of MOS.
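The workaround above can be sketched as the following command sequence. This is a minimal illustration, not the committed fix; it assumes you have admin access to the cluster via the `ceph` CLI, and the `grep` check is just one way to confirm the flag is active:

```shell
# Set the noout flag so OSDs that go down are not marked "out",
# which prevents CRUSH from starting a rebalance mid-deployment.
ceph osd set noout

# Confirm the flag is active before adding nodes
# (the cluster flags appear in the osd map dump).
ceph osd dump | grep flags

# ... add the new Ceph OSD nodes and wait for the deployment to finish ...

# Once the deployment has completed successfully, clear the flag
# so normal recovery and rebalancing behavior resumes.
ceph osd unset noout
```

Note that `noout` only suppresses the down-to-out transition; data placement for the newly added OSDs still happens once the flag is cleared, but at that point the deployment is complete and failed nodes are no longer at risk of being deleted mid-rebalance.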
description: updated
summary: - Lost data in Ceph during fail adding new Ceph nodes. → + Lost data in Ceph during failure adding new Ceph nodes.
tags: added: customer-found
description: updated
Changed in fuel:
  milestone: none → 6.1
  assignee: nobody → Fuel Library Team (fuel-library)
  importance: Undecided → High
Changed in fuel:
  status: Fix Committed → Incomplete
tags: added: on-verification
tags: removed: on-verification
Changed in fuel:
  status: Fix Committed → Fix Released
Changed in fuel:
  status: Fix Released → In Progress
It would be fine to know the ISO version?
We have already merged the patch https://github.com/stackforge/fuel-library/commit/c52d4fc377efe1134e8be81a18560c0a6e0138c3 which should help with it