ceph-csi charm does not handle ceph-fs correctly: InvalidArgument desc = volume not found
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
Ceph CSI Charm | Fix Released | High | Kevin W Monroe | 1.29+ck1
Bug Description
When cephfs is enabled:
https:/
the charm will create the StorageClass cephfs.
However, when trying to spawn a PVC from it, we encounter the following error:
14s (x6 over 29s) Warning ProvisioningFailed PersistentVolum
$ kubectl get sc cephfs -o yaml
allowVolumeExpa
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  annotations:
    storageclass.
  creationTimes
  labels:
    juju.io/
    juju.io/manifest: cephfs
    juju.io/
  name: cephfs
  resourceVersion: "2587452"
  uid: 90368aef-
parameters:
  clusterID: 9857d9aa-
  csi.storage.
  csi.storage.
  csi.storage.
  csi.storage.
  csi.storage.
  csi.storage.
  fsName: default
  pool: ceph-fs_data
provisioner: cephfs.csi.ceph.com
reclaimPolicy: Delete
volumeBindingMode: Immediate
The class contains two parameters, the filesystem name and the pool name,
and it will always use these values:
fsName: default
pool: ceph-fs_data
Surprisingly, the second one is mostly correct if you use ceph-fs as the application name,
as described here: https:/
Otherwise it will also be a problem.
$ sudo ceph fs ls
name: ceph-fs, metadata pool: ceph-fs_metadata, data pools: [ceph-fs_data ]
After doing an in-place replacement of the cephfs StorageClass with the correct fsName, I was able to create a PVC:
$ kubectl get pvc -n whatever-test
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
cephfs-pvc Bound pvc-d2e0bdd4-
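For reference, a corrected StorageClass might look like the sketch below. This is an illustrative fragment, not the charm's actual output: the fsName is taken from the `ceph fs ls` output above, and `<cluster-id>` is a placeholder for the real Ceph FSID. Note that StorageClass parameters are immutable in Kubernetes, so "in-place replacement" in practice means deleting and re-creating the object rather than patching it.

```yaml
# Illustrative sketch only: fsName comes from `ceph fs ls` output,
# <cluster-id> is a placeholder for the real Ceph cluster FSID.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: cephfs
provisioner: cephfs.csi.ceph.com
parameters:
  clusterID: <cluster-id>
  fsName: ceph-fs        # the charm hardcodes "default" here
  pool: ceph-fs_data
reclaimPolicy: Delete
volumeBindingMode: Immediate
```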
Both values seem to be mostly hardcoded:
fsName:
https:/
pool:
https:/
Since ceph-csi has no direct relation to ceph-fs, only to ceph-mon or ceph-proxy,
we need some other method for passing that information.
Maybe it would be better to define these properties directly in the ceph-csi charm as config options?
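If these properties were exposed as charm config, the ceph-csi charm's config.yaml could declare them roughly as follows. The option names (`cephfs-fsname`, `cephfs-pool`) are hypothetical; they do not exist in the released charm, which discovers the filesystem name instead:

```yaml
# Hypothetical charm config options -- not present in the released ceph-csi charm.
options:
  cephfs-fsname:
    type: string
    default: ""
    description: |
      Name of the CephFS filesystem to use in the cephfs StorageClass.
      If empty, fall back to discovering it from the cluster.
  cephfs-pool:
    type: string
    default: ""
    description: |
      Data pool backing the CephFS filesystem.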
summary: | ceph-csi charm does not handle ceph-fs correctly → ceph-csi charm does not handle ceph-fs correctly: InvalidArgument desc = volume not found |
Changed in charm-ceph-csi: | |
status: | New → In Progress |
status: | In Progress → Triaged |
importance: | Undecided → Medium |
assignee: | nobody → Kevin W Monroe (kwmonroe) |
milestone: | none → 1.29+ck1 |
Changed in charm-ceph-csi: | |
status: | Triaged → In Progress |
Changed in charm-ceph-csi: | |
status: | In Progress → Fix Committed |
Changed in charm-ceph-csi: | |
status: | Fix Committed → Fix Released |
Thanks for the report! We purposely don't expose the fsname as config, but rather discover it in the charm:
https://github.com/charmed-kubernetes/ceph-csi-operator/blob/release_1.29/src/charm.py#L235-L244
The problem is that we also cache that value:
https://github.com/charmed-kubernetes/ceph-csi-operator/blob/release_1.29/src/charm.py#L283
So if ceph-csi comes in before ceph-fs is deployed/related, that value will be None (because ceph-fs hasn't created the fs yet). And then we're stuck with it as the fsname in the ceph-fs storage class.
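The cache-before-ready pitfall described above can be sketched in a few lines of Python. This is a simplified illustration, not the charm's actual code: the class and property names are made up, and `list_filesystems` stands in for the charm's Ceph query (roughly `ceph fs ls` over the ceph-mon relation).

```python
class CephCsiCharmSketch:
    """Illustrative sketch of the fsname caching bug; names are hypothetical."""

    def __init__(self, list_filesystems):
        # list_filesystems stands in for the charm's Ceph query.
        self._list_filesystems = list_filesystems
        self._fsname = None
        self._fsname_cached = False

    @property
    def fsname_buggy(self):
        """Cache on first read: if ceph-fs isn't up yet, None sticks forever."""
        if not self._fsname_cached:
            fs = self._list_filesystems()
            self._fsname = fs[0] if fs else None
            self._fsname_cached = True
        return self._fsname

    @property
    def fsname_fixed(self):
        """Only keep the cached value once a real name has been discovered."""
        if self._fsname is None:
            fs = self._list_filesystems()
            self._fsname = fs[0] if fs else None
        return self._fsname


filesystems = []  # ceph-fs not deployed/related yet
charm = CephCsiCharmSketch(lambda: filesystems)
print(charm.fsname_buggy)   # None: queried before the fs existed, now cached
filesystems.append("ceph-fs")   # ceph-fs comes up later
print(charm.fsname_buggy)   # still None: the stale value was cached
print(charm.fsname_fixed)   # ceph-fs: retries because nothing real was cached
```

The fix direction is the same as in the sketch: treat an empty discovery result as "not yet known" and retry, rather than caching it permanently.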