Puppet manifests should not fail if a user adds cinder nodes to an environment with Ceph RBD
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
Fuel for OpenStack | Invalid | Low | Fuel Library (Deprecated) |
7.0.x | Invalid | Low | Fuel Library (Deprecated) |
Bug Description
"build_id": "2014-09-
"ostf_sha": "64cb59c681658a
"build_number": "11",
"auth_required": true,
"api": "1.0",
"nailgun_sha": "eb8f2b358ea4bb
"production": "docker",
"fuelmain_sha": "8ef433e939425e
"astute_sha": "f5fbd89d1e0e1f
"feature_groups": ["mirantis"],
"release": "5.1",
"release_versions": {"2014.1.1-5.1": {"VERSION": {"build_id": "2014-09-
"fuellib_sha": "d9b16846e54f76
My test case:
1. Create new environment (CentOS, HA mode)
2. Choose nova-network, vlan manager
3. Choose both Ceph options
4. Choose Sahara
5. Add 3 controller+
6. Configure interfaces (see screen) and untag management network
7. Start deployment. It completed successfully
8. However, there were many errors in the cinder logs and the puppet log (see this bug https:/
These errors appear because adding cinder nodes to an environment that uses Ceph RBD as the backend for volumes (instead of the Cinder LVM backend) is not a valid configuration.
Puppet manifests should not fail when a user attempts this.
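A minimal sketch of the requested behavior, assuming a hypothetical profile class and parameter names (these are illustrative only, not the actual Fuel library manifests): rather than raising a hard failure, the manifest could emit a warning and skip the LVM-specific resources when Ceph RBD backs volumes, so the deployment still succeeds.

```puppet
# Illustrative sketch only: class and parameter names are hypothetical
# and do not correspond to the real Fuel manifests.
class cinder_volume_profile (
  $volume_backend = 'lvm',
) {
  if $volume_backend == 'rbd' {
    # Ceph RBD already serves volumes; dedicated cinder (LVM) nodes add
    # nothing here. Warn and skip instead of calling fail(), so the
    # deployment does not abort on an otherwise harmless role assignment.
    warning('Cinder nodes are unnecessary when Ceph RBD backs volumes; skipping LVM backend setup.')
  } else {
    # Normal path: configure the LVM/iSCSI volume backend.
    class { 'cinder::volume::iscsi':
      iscsi_ip_address => $::ipaddress,
    }
  }
}
```

The design choice here is warn-and-skip over fail: an invalid but harmless role combination degrades to a logged warning instead of breaking the whole deployment.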
Changed in fuel: | |
importance: | Medium → Low |
status: | New → Confirmed |
Changed in fuel: | |
milestone: | 6.0 → 6.1 |
Changed in fuel: | |
status: | Confirmed → Won't Fix |
Current role and settings constraints make it impossible to reproduce such a setup. (More specifically, you cannot assign controller+ceph+cinder roles together.)