Comment 13 for bug 1928690

Albert Braden (ozzzo) wrote (last edit):

Yes, this is the problem. I rebuilt my Heat stack and, before even starting the Train->Ussuri upgrade, the keys are already different:

(nova-compute)[root@chrnc-void-testupgrade-compute-1-replace ceph]# cat ceph.client.cinder.keyring
[client.cinder]
        key = AQCnarZgAAAAABAA/fprRD3z8dRzTgi7jtDeYA==
        caps mon = "profile rbd"
        caps osd = "profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images"
(nova-compute)[root@chrnc-void-testupgrade-compute-1-replace ceph]# cat ceph.client.nova.keyring
[client.nova]
        key = AQCqarZgAAAAABAAt0lhY7TXttXIk2Y6HYQxEw==
        caps mon = "profile rbd"
        caps osd = "profile rbd pool=vms"
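
To compare the containers mechanically rather than by eyeball, something like this helper works (key_of is just a name I made up here):

```shell
# Sketch: pull just the base64 key value out of a keyring file, so the copies
# in the nova-compute and cinder-volume containers can be diffed directly.
key_of() { awk '/^[[:space:]]*key[[:space:]]*=/ {print $3}' "$1"; }
# e.g. inside each container:
#   key_of ceph.client.cinder.keyring
#   key_of ceph.client.nova.keyring
```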

I don't see anything key-related in nova.conf:

(openstack) [root@chrnc-void-testupgrade-build ~]# cat /etc/kolla/config/nova.conf
[libvirt]
cpu_mode = host-model
disk_cachemodes = network=writeback
hw_disk_discard = unmap

[DEFAULT]
instance_name_template = instance-%(uuid)s
dhcp_domain = dmz.chtrse.com

cpu_allocation_ratio = 1
initial_cpu_allocation_ratio = 1
ram_allocation_ratio = 1
initial_ram_allocation_ratio = 1

reserved_host_cpus = 2
reserved_host_memory_mb = 1024
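
If I understand correctly, the Ceph credentials nova actually uses would show up in the rendered config on the compute node (/etc/kolla/nova-compute/nova.conf) rather than in this override file, under [libvirt], something like this (values illustrative):

```ini
[libvirt]
images_type = rbd
images_rbd_pool = vms
rbd_user = nova
rbd_secret_uuid = <libvirt secret uuid>
```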

Here are the uncommented "ceph" settings from /etc/kolla/globals.yml:

(openstack) [root@chrnc-void-testupgrade-build ~]# grep ceph /etc/kolla/globals.yml|grep -v ^#
enable_ceph: "no"
external_ceph_cephx_enabled: "yes"
glance_backend_ceph: "yes"
cinder_backend_ceph: "yes"
cinder_backup_driver: "ceph"
nova_backend_ceph: "yes"
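
Since nothing sets the keyring names explicitly, I assume the defaults apply. My understanding is that globals.yml can pin them with something like the following (variable names from memory, so verify against the kolla-ansible external-Ceph docs for the release):

```yaml
# Assumption: these are the variables the earlier comments refer to; in Ussuri
# the nova default reportedly points at the cinder keyring, so overriding it
# would restore the Train behavior.
ceph_cinder_keyring: "ceph.client.cinder.keyring"
ceph_nova_keyring: "ceph.client.nova.keyring"
```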

I searched for ceph_cinder and ceph_nova in my config but didn't find anything:
(openstack) [root@chrnc-void-testupgrade-build kolla]# grep -r ceph_cinder /etc/kolla/
(openstack) [root@chrnc-void-testupgrade-build kolla]# grep -r ceph_nova /etc/kolla/
(openstack) [root@chrnc-void-testupgrade-build kolla]#

I'll experiment with changing the permissions on the cinder keyring. Is there a document on changing keyring permissions?
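
Absent a document, my plan is roughly this (the real path inside the container is assumed to be /etc/ceph/ceph.client.cinder.keyring; a scratch file stands in here so the commands are safe to try first):

```shell
# Sketch: tighten the keyring to owner-only read/write, then confirm the mode.
keyring=/tmp/ceph.client.cinder.keyring
touch "$keyring"
chmod 600 "$keyring"
stat -c '%a' "$keyring"    # prints 600
```

The owner/group would also need to match whatever user the service runs as inside the container, which chown would handle.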