nova-compute-ceph-auth-c91ce26f leads to [errno 13] RADOS permission denied (error connecting to the cluster) on existing setup using ceph-proxy

Bug #2054744 reported by Dan Munteanu
Affects: OpenStack Nova Compute Charm
Status: New
Importance: Undecided
Assigned to: Unassigned

Bug Description

Hi there,

I have a setup with 2 different nova-compute charm apps accessing a ceph cluster via ceph-proxy charm.

Although both nova-compute apps use the same Ceph pool for VMs, the nova-compute_{name}:ceph <-> ceph-proxy:client integration generated 2 different keyrings with different keys.

Now, after updating the charms to 2023.1/stable at rev 711 (which seems to contain https://opendev.org/openstack/charm-nova-compute/commit/650f3a5d511690ec27648b30f3b24532378a33a1), all nova-compute nodes have started to fail with:

```
[errno 13] RADOS permission denied (error connecting to the cluster)
```

Doing some debugging, I've discovered that:

- each nova-compute node now contains a new keyring, ```/etc/ceph/ceph.client.nova-compute-ceph-auth-c91ce26f.keyring```, holding the same key as the old ```/etc/ceph/ceph.client.nova-compute-{app_name}.keyring```, and /etc/nova/nova.conf now uses ```rbd_user = nova-compute-ceph-auth-c91ce26f```

- the new keyring has no corresponding auth entry on my external Ceph cluster accessed via ceph-proxy

Now, the BIG problem is that although I can manually create ```nova-compute-ceph-auth-c91ce26f``` in the Ceph cluster, it can only be created with a single key, while my setup uses 2 keys automatically generated by the nova-compute/ceph-proxy charms.
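For reference, this is roughly how I'd expect the manual workaround to look on the external cluster. This is only a sketch: the source keyring name, the pool name `vms`, and the rbd-profile caps are assumptions based on a typical charm deployment, and as noted above it can only carry one of the two keys.

```shell
# On the external Ceph cluster (not the proxy), check whether the
# new entity the charm now expects exists at all:
ceph auth get client.nova-compute-ceph-auth-c91ce26f

# Re-import the key one nova-compute app already has, under the new
# entity name. The section header inside the keyring must match the
# new name, so rewrite it first. (The source keyring name below is
# hypothetical.)
sed 's/^\[client\..*\]/[client.nova-compute-ceph-auth-c91ce26f]/' \
    /etc/ceph/ceph.client.nova-compute-app1.keyring \
    > /tmp/nova-compute-ceph-auth.keyring
ceph auth import -i /tmp/nova-compute-ceph-auth.keyring

# Grant the usual rbd caps (pool name "vms" is an assumption):
ceph auth caps client.nova-compute-ceph-auth-c91ce26f \
    mon 'profile rbd' \
    osd 'profile rbd pool=vms'
```

This still leaves the second nova-compute app without a matching key, which is exactly the problem described above.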

Is there any way to work around this?

Thank you
Dan
