Comment 10 for bug 1855922

BN (zatoichy) wrote:

Okay, the issue was not related to user permissions; I tried changing root to nova, but that did not solve my issue. Then I checked the external Ceph guide again and noticed that I had been following some other guide, so I was using rbd_secret_uuid instead of cinder_rbd_secret_uuid in the cinder-volume.conf file. Once I changed that to cinder_rbd_secret_uuid, the issue was solved: instances can now be started with their volumes created, and volumes can be attached to instances as well.
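For reference, the backend stanza in my cinder-volume.conf now looks roughly like the example in the external Ceph guide (the backend name, pool, and user below are just how my setup has them; the important line is the last one, which pulls the UUID from the cinder_rbd_secret_uuid variable in passwords.yml instead of a plain rbd_secret_uuid variable):

    [rbd-1]
    volume_driver = cinder.volume.drivers.rbd.RBDDriver
    volume_backend_name = rbd-1
    rbd_pool = volumes
    rbd_ceph_conf = /etc/ceph/ceph.conf
    rbd_user = cinder
    # templated from cinder_rbd_secret_uuid in passwords.yml
    rbd_secret_uuid = {{ cinder_rbd_secret_uuid }}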

Thank you

P.S. Just a quick question: if you run rbd pool init vms [from the external Ceph setup] while you already have some instances running in OpenStack, is there a possibility that they could be automatically deleted?
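For context, the step I mean is the pool initialization command from that setup, along the lines of:

    # run against the pool that, in my case, already holds running instances' disks
    rbd pool init vms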