Comment 19 for bug 1642430

Felipe Reyes (freyes) wrote :

To test the patches that fix this bug, I used the following bundles and scripts:

- gen.py http://paste.ubuntu.com/24671884/
- standalone-ceph.yaml http://paste.ubuntu.com/24671885/
- openstack-with-ceph-proxy.yaml.tpl http://paste.ubuntu.com/24671887/

Usage:

0) get copies of the charms with the proposed backports:
  - https://review.openstack.org/#/c/468569/
  - https://review.openstack.org/#/c/468570/
  - https://review.openstack.org/#/c/468571/
  - https://review.openstack.org/#/c/468572/
  - https://review.openstack.org/#/c/468573/
1) juju deploy ./standalone-ceph.yaml
2) wait until the ceph deployment settles
3) ./gen.py
  - This script reads openstack-with-ceph-proxy.yaml.tpl, populates the admin-key and monitor-hosts options for the ceph-proxy charm, and writes openstack-with-ceph-proxy.yaml (see the sketch after this list)
4) juju deploy ./openstack-with-ceph-proxy.yaml
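
For reference, here is a minimal sketch of what gen.py roughly does, based only on the description in step 3. It is assumed to shell out to juju to collect the client.admin key and monitor addresses from the standalone ceph deployment and to substitute placeholders in the template; the __ADMIN_KEY__ and __MONITOR_HOSTS__ markers are hypothetical names, the real script is the one at the paste link above.

#!/usr/bin/env python3
"""Illustrative sketch of gen.py (assumptions, not the real script)."""

import json
import subprocess

TEMPLATE = 'openstack-with-ceph-proxy.yaml.tpl'
OUTPUT = 'openstack-with-ceph-proxy.yaml'


def juju(*args):
    """Run a juju command and return its stdout as text."""
    return subprocess.check_output(('juju',) + args).decode().strip()


def admin_key():
    """Ask the standalone ceph cluster for the client.admin key."""
    return juju('ssh', 'ceph/0', 'sudo', 'ceph', 'auth', 'get-key',
                'client.admin')


def monitor_hosts():
    """Collect the addresses of the ceph monitor units from juju status."""
    status = json.loads(juju('status', '--format=json'))
    units = status['applications']['ceph']['units']
    return ' '.join(u['public-address'] for u in units.values())


def main():
    with open(TEMPLATE) as f:
        bundle = f.read()
    # The template is assumed to carry placeholders for the two
    # ceph-proxy options populated here.
    bundle = bundle.replace('__ADMIN_KEY__', admin_key())
    bundle = bundle.replace('__MONITOR_HOSTS__', monitor_hosts())
    with open(OUTPUT, 'w') as f:
        f.write(bundle)


if __name__ == '__main__':
    main()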

Verification:

- juju ssh ceph/0 sudo rados lspools # pools for cinder, glance, rgw and rbd should have been created
- mon host is properly configured in the related units:
  juju run --application ceph-proxy "sudo cat /etc/ceph/ceph.conf"
  juju run --application nova-compute "sudo cat /etc/ceph/ceph.conf"
  juju run --application cinder-ceph "sudo cat /etc/ceph/ceph.conf"
  juju run --application ceph-radosgw "sudo cat /etc/ceph/ceph.conf"
  juju run --application glance "sudo cat /etc/ceph/ceph.conf"
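
The checks above can also be scripted. The following sketch simply wraps the same juju commands and asserts that each related application ends up with a "mon host" entry in /etc/ceph/ceph.conf; the application names are taken from the list above, while the helper itself and the assertion are illustrative assumptions rather than part of the original test run.

#!/usr/bin/env python3
"""Illustrative wrapper around the verification commands above."""

import subprocess

APPLICATIONS = ('ceph-proxy', 'nova-compute', 'cinder-ceph',
                'ceph-radosgw', 'glance')


def run(cmd):
    """Run a command, echo it, and return its output."""
    print('$ ' + ' '.join(cmd))
    out = subprocess.check_output(cmd).decode()
    print(out)
    return out


def main():
    # Pools for cinder, glance, rgw and rbd should show up here.
    run(['juju', 'ssh', 'ceph/0', 'sudo', 'rados', 'lspools'])

    # Every related unit should have mon host pointing at the
    # standalone ceph monitors.
    for app in APPLICATIONS:
        conf = run(['juju', 'run', '--application', app,
                    'sudo cat /etc/ceph/ceph.conf'])
        assert 'mon host' in conf, '%s: mon host not configured' % app

    print('mon host present in all %d applications' % len(APPLICATIONS))


if __name__ == '__main__':
    main()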