1. Create new environment (CentOS, HA mode)
2. Choose nova-network, vlan manager
3. Choose both Ceph options (Ceph RBD for volumes and for images)
4. Choose Sahara
5. Add 3 controller+ceph+cinder, 2 compute+ceph, 1 cinder
6. Configure interfaces (see screen) and untag management network
7. Start deployment. It completed successfully
8. Run OSTF tests. They passed
9. However, there is an error on the first controller (node-1) with Ceph in puppet.log:
2014-09-19 12:31:24 ERR
(/Stage[main]/Ceph::Osd/Exec[ceph-deploy osd activate]/returns) change from notrun to 0 failed: ceph-deploy osd activate node-1:/dev/sdb4 node-1:/dev/sdc4 returned 1 instead of one of [0]
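For context on the Puppet message itself: "returned 1 instead of one of [0]" only means the ceph-deploy command exited nonzero, and Puppet's Exec resource rejects any exit status outside its allowed list. A rough sketch of that semantics (not Puppet's actual code):

```python
import subprocess

def run_exec(command, returns=(0,)):
    """Rough sketch of Puppet Exec semantics: run a shell command and
    fail unless its exit status is in the allowed `returns` list."""
    result = subprocess.run(command, shell=True)
    if result.returncode not in returns:
        raise RuntimeError(
            "%r returned %d instead of one of %s"
            % (command, result.returncode, list(returns))
        )
    return result.returncode
```

So the log line only tells us the activation step failed; the actual cause has to come from ceph-deploy's own output on node-1.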
10. Also, there are errors on the Cinder node (node-6):
2014-09-19 12:35:09 WARNING
cinder.volume.manager [req-2dad97a0-b509-46d5-a239-06fa1344f2bf - - - - -] Unable to update stats, RBDDriver -1.1.0 driver is uninitialized.
2014-09-19 12:34:19 ERROR
cinder.volume.manager [req-8a4e3758-7d8f-4bfd-876a-d913163aaf83 - - - - -] Bad or unexpected response from the storage volume backend API: error connecting to ceph cluster
2014-09-19 11:34:19.495 26002 TRACE cinder.volume.manager Traceback (most recent call last):
2014-09-19 11:34:19.495 26002 TRACE cinder.volume.manager File "/usr/lib/python2.6/site-packages/cinder/volume/manager.py", line 243, in init_host
2014-09-19 11:34:19.495 26002 TRACE cinder.volume.manager self.driver.check_for_setup_error()
2014-09-19 11:34:19.495 26002 TRACE cinder.volume.manager File "/usr/lib/python2.6/site-packages/cinder/volume/drivers/rbd.py", line 268, in check_for_setup_error
2014-09-19 11:34:19.495 26002 TRACE cinder.volume.manager raise exception.VolumeBackendAPIException(data=msg)
2014-09-19 11:34:19.495 26002 TRACE cinder.volume.manager VolumeBackendAPIException: Bad or unexpected response from the storage volume backend API: error connecting to ceph cluster
2014-09-19 11:34:19.495 26002 TRACE cinder.volume.manager
2014-09-19 12:34:19 ERROR
cinder.volume.manager [req-8a4e3758-7d8f-4bfd-876a-d913163aaf83 - - - - -] Error encountered during initialization of driver: RBDDriver
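The "error connecting to ceph cluster" here, together with the "ObjectNotFound: error calling connect" that rados.connect() raises elsewhere in the log, is consistent with the Cinder host missing its Ceph client configuration or keyring. A hypothetical preflight check along those lines (paths are common defaults, not values confirmed for this environment; Cinder often uses a dedicated client.volumes keyring rather than the admin one):

```python
import os

# Common default locations for Ceph client files. These are assumptions,
# not values taken from this environment; the RBD driver may instead use
# a dedicated keyring such as ceph.client.volumes.keyring.
REQUIRED = [
    "/etc/ceph/ceph.conf",
    "/etc/ceph/ceph.client.admin.keyring",
]

def missing_ceph_files(root="/"):
    """Return the required Ceph client files absent under `root`.

    rados.connect() failing with ObjectNotFound is consistent with
    these files being missing or unreadable on the client host.
    """
    return [p for p in REQUIRED
            if not os.path.exists(os.path.join(root, p.lstrip("/")))]
```

Checking these files on node-6 (and comparing against a working cinder node, if any) would be a quick way to confirm or rule this out.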
Reproduced on ISO #11 for 5.1
{"ostf_sha": "64cb59c681658a7a55cc2c09d079072a41beb346",
 "build_number": "11",
 "auth_required": true,
 "api": "1.0",
 "nailgun_sha": "eb8f2b358ea4bb7eb0b2a0075e7ad3d3a905db0d",
 "production": "docker",
 "fuelmain_sha": "8ef433e939425eabd1034c0b70e90bdf888b69fd",
 "astute_sha": "f5fbd89d1e0e1f22ef9ab2af26da5ffbfbf24b13",
 "feature_groups": ["mirantis"],
 "release": "5.1",
 "release_versions": {"2014.1.1-5.1": {"VERSION": {"build_id": "2014-09-17_21-40-34", "ostf_sha": "64cb59c681658a7a55cc2c09d079072a41beb346", "build_number": "11", "api": "1.0", "nailgun_sha": "eb8f2b358ea4bb7eb0b2a0075e7ad3d3a905db0d", "production": "docker", "fuelmain_sha": "8ef433e939425eabd1034c0b70e90bdf888b69fd", "astute_sha": "f5fbd89d1e0e1f22ef9ab2af26da5ffbfbf24b13", "feature_groups": ["mirantis"], "release": "5.1", "fuellib_sha": "d9b16846e54f76c8ebe7764d2b5b8231d6b25079"}}},
 "fuellib_sha": "d9b16846e54f76c8ebe7764d2b5b8231d6b25079"}
2014-09-19 12:34:19 ERROR
cinder.volume.drivers.rbd [req-8a4e3758-7d8f-4bfd-876a-d913163aaf83 - - - - -] error connecting to ceph cluster
2014-09-19 11:34:19.468 26002 TRACE cinder.volume.drivers.rbd Traceback (most recent call last):
2014-09-19 11:34:19.468 26002 TRACE cinder.volume.drivers.rbd File "/usr/lib/python2.6/site-packages/cinder/volume/drivers/rbd.py", line 263, in check_for_setup_error
2014-09-19 11:34:19.468 26002 TRACE cinder.volume.drivers.rbd with RADOSClient(self):
2014-09-19 11:34:19.468 26002 TRACE cinder.volume.drivers.rbd File "/usr/lib/python2.6/site-packages/cinder/volume/drivers/rbd.py", line 235, in __init__
2014-09-19 11:34:19.468 26002 TRACE cinder.volume.drivers.rbd self.cluster, self.ioctx = driver._connect_to_rados(pool)
2014-09-19 11:34:19.468 26002 TRACE cinder.volume.drivers.rbd File "/usr/lib/python2.6/site-packages/cinder/volume/drivers/rbd.py", line 283, in _connect_to_rados
2014-09-19 11:34:19.468 26002 TRACE cinder.volume.drivers.rbd client.connect()
2014-09-19 11:34:19.468 26002 TRACE cinder.volume.drivers.rbd File "/usr/lib/python2.6/site-packages/rados.py", line 419, in connect
2014-09-19 11:34:19.468 26002 TRACE cinder.volume.drivers.rbd raise make_ex(ret, "error calling connect")
2014-09-19 11:34:19.468 26002 TRACE cinder.volume.drivers.rbd ObjectNotFound: error calling connect
2014-09-19 11:34:19.468 26002 TRACE cinder.volume.drivers.rbd