Deployment of a cluster with Ceph fails with error: "change from notrun to 0 failed: ceph-deploy --overwrite-conf config pull node-8 returned 1 instead of one of [0]"
| Affects | Status | Importance | Assigned to | Milestone |
|---|---|---|---|---|
| Fuel for OpenStack | Invalid | Critical | Alexey Stupnikov | |
Bug Description
Detailed bug description:
Deployment of clusters with Ceph nodes fails.
Steps to reproduce:
1. Create the following clusters:

   Cluster 1: 1 controller node + 3 ceph-osd nodes
   Storage backends:
   - Ceph RBD for volumes (Cinder)
   - Ceph RadosGW for objects (Swift API)
   - Ceph RBD for ephemeral volumes (Nova)
   - Ceph RBD for images (Glance)

   Cluster 2: 1 controller node + 2 compute+
   Storage backends:
   - Cinder LVM over iSCSI for volumes
   - Ceph RadosGW for objects (Swift API)
   - Ceph RBD for ephemeral volumes (Nova)

2. Deploy the clusters.
Actual results:
Deployment fails for both clusters, with similar errors:
From the Astute log on the master node:
2017-02-15 09:30:01 ERROR [391] Task '{"priority"=>1200, "type"=>"puppet", "id"=>"
From the puppet log on the failed Ceph node:
2017-02-15 09:30:01 ERR (/Stage[
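For reference, a minimal diagnostic sketch (not part of the original report) that reruns the ceph-deploy command quoted in the error and captures its exit code and full output. The command and the node name node-8 are taken verbatim from the error message; it is assumed to be run on the node that executes ceph-deploy, from its working directory:

```python
# Minimal diagnostic sketch: rerun the failing command from the error message
# and capture its exit status and combined output. Assumptions: executed on the
# node that runs ceph-deploy, from its working directory, with node-8 reachable.
import subprocess

cmd = ["ceph-deploy", "--overwrite-conf", "config", "pull", "node-8"]
proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
out, _ = proc.communicate()

print("exit code: %d" % proc.returncode)  # the report says 1 is returned instead of 0
print(out.decode("utf-8", "replace"))
```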
Expected results:
Deployment should finish successfully.
Reproducibility:
Always
VERSION:
MOS 8.0 + MU4 updates
Additional information:
The issue is not reproducible on a clean 8.0 installation without the MU4 updates.
Changed in fuel:
milestone: none → 8.0-mu-4
importance: Undecided → Critical
assignee: nobody → MOS Maintenance (mos-maintenance)
Note:
Not all clusters with Ceph are affected by this bug.
For example, the following cluster with Ceph deploys successfully on 8.0 + MU4:
- 1 controller + ceph-osd
- 1 compute + ceph-osd
- Ceph object replication factor 2
Storage backends:
- Ceph RBD for volumes (Cinder)
- Ceph RadosGW for objects (Swift API)
- Ceph RBD for ephemeral volumes (Nova)
- Ceph RBD for images (Glance)