To test the patches that fix this bug, I used the following bundles and scripts:
- gen.py http://paste.ubuntu.com/24671884/
- standalone-ceph.yaml http://paste.ubuntu.com/24671885/
- openstack-with-ceph-proxy.yaml.tpl http://paste.ubuntu.com/24671887/ (its ceph-proxy section is sketched below)
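For context, the ceph-proxy section of the template presumably looks something like this (a minimal sketch; the exact layout and the __ADMIN_KEY__/__MONITOR_HOSTS__ placeholder tokens are my own illustration, only the admin-key and monitor-hosts option names come from the ceph-proxy charm):

    ceph-proxy:
      charm: ./ceph-proxy                 # local copy with the proposed backport
      num_units: 1
      options:
        admin-key: __ADMIN_KEY__          # filled in by gen.py
        monitor-hosts: __MONITOR_HOSTS__  # filled in by gen.py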
Usage:
0) get copies of the charms with the proposed backport:
- https://review.openstack.org/#/c/468569/
- https://review.openstack.org/#/c/468570/
- https://review.openstack.org/#/c/468571/
- https://review.openstack.org/#/c/468572/
- https://review.openstack.org/#/c/468573/
1) juju deploy ./standalone-ceph.yaml
2) wait until the ceph deployment is done
3) ./gen.py
- This script will read openstack-with-ceph-proxy.yaml.tpl, populate the admin-key and monitor-hosts fields for the ceph-proxy charm, and write openstack-with-ceph-proxy.yaml (a rough sketch of this follows after step 4)
4) juju deploy ./openstack-with-ceph-proxy.yaml
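The real gen.py is in the paste above (http://paste.ubuntu.com/24671884/); a minimal sketch of what it does might look like the following, where the placeholder tokens and the commands used to extract the values are my own assumptions:

    #!/usr/bin/env python3
    # Sketch of gen.py: pull the admin key and monitor hosts out of the
    # standalone ceph deployment and substitute them into the template.
    import subprocess

    def unit_cmd(unit, cmd):
        # Run a command on a juju unit and return its stdout as text.
        return subprocess.check_output(
            ['juju', 'ssh', unit, cmd]).decode().strip()

    admin_key = unit_cmd('ceph/0', 'sudo ceph auth print-key client.admin')
    mon_hosts = unit_cmd(
        'ceph/0',
        "sudo awk -F' = ' '/^mon host/ {print $2}' /etc/ceph/ceph.conf")

    with open('openstack-with-ceph-proxy.yaml.tpl') as f:
        bundle = f.read()
    bundle = bundle.replace('__ADMIN_KEY__', admin_key)
    bundle = bundle.replace('__MONITOR_HOSTS__', mon_hosts)
    with open('openstack-with-ceph-proxy.yaml', 'w') as f:
        f.write(bundle)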
Verification:
- juju ssh ceph/0 sudo rados lspools # pools for cinder, glance, rgw and rbd need to be created
- mon hosts is properly configured in the related units:
juju run --application ceph-proxy "sudo cat /etc/ceph/ceph.conf"
juju run --application nova-compute "sudo cat /etc/ceph/ceph.conf"
juju run --application cinder-ceph "sudo cat /etc/ceph/ceph.conf"
juju run --application ceph-radosgw "sudo cat /etc/ceph/ceph.conf"
juju run --application glance "sudo cat /etc/ceph/ceph.conf"
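If everything is wired up correctly, each of those ceph.conf files should carry a mon host entry under [global] pointing at the standalone ceph monitors, along these lines (addresses made up for illustration):

    [global]
    mon host = 10.0.0.11 10.0.0.12 10.0.0.13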