Comment 6 for bug 2049770

Rafał Krzewski (rafal-krzewski) wrote:

After redeploying the cluster with the kubernetes-control-plane units in LXD containers, I've run into another problem:

  Warning FailedMount 10m kubelet MountVolume.MountDevice failed for volume "pvc-235b7bf7-b4b7-443d-a810-b1acc12eed45" : rpc error: code = Internal desc = rbd: map failed with error an error (exit status 22) occurred while running rbd args: [--id ceph-csi -m 192.168.3.45,192.168.3.46,192.168.3.60 --keyfile=***stripped*** map xfs-pool/csi-vol-79150a03-d4df-45f6-a339-d919a0184236 --device-type krbd --options noudev], rbd error output: rbd: mapping succeeded but /dev/rbd0 is not accessible, is host /dev mounted?
rbd: map failed: (22) Invalid argument

I can see the /dev/rbd0 device on machine 2, but not in the 2/lxd/5 container where kubelet is running.
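
For reference, the device visibility on the host machine versus the container can be checked roughly like this (the machine and container IDs are the ones from my deployment, adjust as needed):

  juju ssh 2 'ls -l /dev/rbd*'        # rbd0 is listed on the host machine
  juju ssh 2/lxd/5 'ls -l /dev/rbd*'  # nothing is listed inside the container running kubelet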

I've tried setting cephfs-mounter=ceph-fuse on ceph-csi, and now I see the following message:

Warning FailedMount 64s (x5 over 9m13s) kubelet MountVolume.MountDevice failed for volume "pvc-235b7bf7-b4b7-443d-a810-b1acc12eed45" : rpc error: code = Internal desc = exit status 1
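
The option was changed on the running model roughly like this (assuming the application is deployed under the name ceph-csi):

  juju config ceph-csi cephfs-mounter=ceph-fuse
  juju config ceph-csi cephfs-mounter    # prints the current value, to confirm it was applied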

I don't know whether changing this setting on an already-deployed cluster simply has no effect, or whether I have run into yet another limitation. I've looked at the systemd journal of snap.kubelet.daemon.service on 2/lxd/5, but it does not show any more details about the error.
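
The kubelet logs were pulled from the container roughly like this (unit/machine IDs from my deployment):

  juju ssh 2/lxd/5 journalctl -u snap.kubelet.daemon.service -n 200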

The next thing I'm going to try is destroying the model and redeploying with cephfs-mounter=ceph-fuse from the get-go. Perhaps you have other suggestions?
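
If it helps, the redeploy I have in mind looks roughly like this (model and application names are placeholders for my setup):

  juju destroy-model kubernetes
  juju add-model kubernetes
  # redeploy the bundle as before, but with the option set from the start, e.g.:
  juju deploy ceph-csi --config cephfs-mounter=ceph-fuse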