My original thought here was to be able to specify the pools that we want.
Currently the charm creates two pools, ceph-ext and ceph-xfs, presumably so that different filesystems can be specified in the volumeMount spec?
However, a customer will likely want to customize these pools: for example, one pool dedicated to RBD block storage for containers and another for specific filesystems, or perhaps just a single pool overall.
There could be a charm config option, say ceph-pools, which, when ceph is related, takes a whitespace-delimited list of pool names (potentially with pool options separated by colons); each pool defined there would be created and set up as a StorageClass.
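As a rough illustration of the proposed format, here is a hypothetical sketch of how the charm might parse such a ceph-pools value. The option name, the colon-separated key=value syntax, and the parse_ceph_pools helper are all assumptions for this feature request, not an existing charm API.

```python
# Hypothetical parser for a proposed "ceph-pools" charm config value:
# a whitespace-delimited list of pool names, each optionally followed
# by colon-separated key=value pool options. Not an existing API.

def parse_ceph_pools(raw: str) -> dict[str, dict[str, str]]:
    """Parse e.g. "rbd-pool fs-pool:replicas=3" into
    {"rbd-pool": {}, "fs-pool": {"replicas": "3"}}."""
    pools = {}
    for entry in raw.split():
        name, *opts = entry.split(":")
        # Each option is expected as key=value; split on the first "=".
        pools[name] = dict(opt.split("=", 1) for opt in opts)
    return pools

print(parse_ceph_pools("rbd-pool fs-pool:replicas=3:pg_num=128"))
```

The charm would then iterate over the resulting mapping, create any missing pools, and render a StorageClass per pool.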
Alternatively, in the case of ceph-proxy, the option could specify pools the Ceph admin pre-created, so that they are automatically set up as StorageClasses and the ceph-csi bits are auto-installed.
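For a pre-created pool, the StorageClass the charm renders might look roughly like the sketch below. This assumes the ceph-csi RBD provisioner (rbd.csi.ceph.com); the pool name, clusterID placeholder, and filesystem choice are illustrative only.

```yaml
# Sketch of a StorageClass for a hypothetical pre-created pool
# named "customer-pool"; clusterID is a placeholder for the FSID.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: customer-pool
provisioner: rbd.csi.ceph.com
parameters:
  clusterID: <ceph-cluster-fsid>
  pool: customer-pool
  csi.storage.k8s.io/fstype: ext4
reclaimPolicy: Delete
```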
I realize the latter should probably be a second feature request.