[feature] add ability to specify desired storage class (ceph)

Bug #1835080 reported by Jeff Hillman
Affects: Kubernetes Control Plane Charm
Status: Triaged
Importance: Wishlist
Assigned to: Unassigned
Milestone: (none)

Bug Description

Currently, k8s-master creates 2 storage classes when related to ceph.

ceph-ext4 and ceph-xfs

This appears to assume a particular way ceph was deployed. And, if the classes are removed via kubectl, they get recreated.

In our particular environment, we're using full Ceph disks (OSDs) that are not pre-formatted or mounted, so there is no xfs or ext4 in this environment. This can cause a confusing scenario for the customer.

I'm proposing that we instead have the ability to specify, via a config option, whether we're using a filesystem or full disks (SSD|HDD).

Then, either via another config option or by deriving it from the type of storage specified, determine what the default storage class should be. I.e., if I say I have SSD storage, then configure the pool/class as SSD and create a ceph-ssd class, roughly as sketched below.
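
As a rough illustration only (not what the charm does today), a ceph-ssd class could look something like the sketch below, using the Kubernetes Python client and the in-tree kubernetes.io/rbd provisioner. The pool name, monitor address, and secret names are placeholders, not values the charm actually uses.

# Sketch: a hypothetical "ceph-ssd" StorageClass backed by an SSD-only RBD pool.
# Monitor address, secret names, and pool name are placeholders, not charm values.
from kubernetes import client, config

config.load_kube_config()

storage_class = client.V1StorageClass(
    metadata=client.V1ObjectMeta(name="ceph-ssd"),
    provisioner="kubernetes.io/rbd",  # in-tree RBD provisioner
    parameters={
        "monitors": "10.0.0.1:6789",             # placeholder Ceph monitor
        "pool": "ceph-ssd",                      # hypothetical SSD-backed pool
        "adminId": "admin",
        "adminSecretName": "ceph-admin-secret",  # placeholder secret
        "userId": "kube",
        "userSecretName": "ceph-user-secret",    # placeholder secret
    },
    reclaim_policy="Delete",
)

client.StorageV1Api().create_storage_class(storage_class)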

Tags: cpe-onsite
Revision history for this message
Mike Wilson (knobby) wrote :

From IRC:

<jhillman> knobby: well the ability to have a little more control of what classes get made, like again, in this scenario we won't have xfs or ext4, but instead block ssd. so perhaps options like "ceph-storage-type={block,fs,whatever}" then "ceph-storage-pools={whatever,list,possibly}"
<jhillman> then classes made to match that
<knobby> jhillman: so ceph-storage-type would say "create pools backed with these types" and then ceph-storage-pools would be the names of the storage classes in k8s tied to each of the pools?
<jhillman> knobby: that's what i'm thinking

Revision history for this message
Jeff Hillman (jhillman) wrote :

To further clarify...

Yes, we'd like the pools created. In this current deploy, ceph is deployed alongside K8s in the same bundle and is used exclusively for this.

In the event that there are multiple classes of storage (HDD, SSD, and possibly even mounted XFS), I can see the need to specify a default storage class. The current config option of "default-storage" is still relevant here. We'd just have to either make an assumption or be able to pre-define the names of the classes ahead of time.

I think the assumption of ceph-<specified class> is appropriate. So, we could then say, default-storage=ceph-ssd for example, assuming we specified that we needed an SSD class.
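
For reference, a minimal sketch (assuming the class names above, not actual charm code) of how a configured default such as ceph-ssd could be applied: Kubernetes picks the default class via the storageclass.kubernetes.io/is-default-class annotation, so whatever default-storage names could simply be annotated accordingly.

# Sketch: mark the configured class (e.g. "ceph-ssd") as the cluster default.
# DEFAULT_CLASS stands in for the charm's default-storage config value.
from kubernetes import client, config

DEFAULT_CLASS = "ceph-ssd"  # hypothetical default-storage value

config.load_kube_config()
api = client.StorageV1Api()

for sc in api.list_storage_class().items:
    is_default = "true" if sc.metadata.name == DEFAULT_CLASS else "false"
    api.patch_storage_class(
        sc.metadata.name,
        {"metadata": {"annotations": {
            "storageclass.kubernetes.io/is-default-class": is_default}}},
    )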

I will admit, at this time I don't know a great way (if possible) for the master charm to determine if there are different types of ceph storage available or if it would just lump all disks together into one big pool.

Revision history for this message
Tim Van Steenburgh (tvansteenburgh) wrote :

Jeff, I'm not sure exactly what you'd like to see here, or whether you're wanting charm config options or action params. Can you give an example? That would help a lot.

Revision history for this message
Jeff Hillman (jhillman) wrote :

My original thought here was to be able to specify the pools that we want.

Currently the charm creates 2 pools, ceph-ext4 and ceph-xfs, presumably so that different filesystems can be specified in the volumeMount spec?

However, it is likely that a customer may want to customize these pools.

That is, having a pool just for RBD block storage for containers and one for specific filesystems.

Or, perhaps, one pool in general.

There could be a charm config option such as ceph-pools where, when ceph is related, a whitespace-delimited list of pool names could be given (potentially with pool options separated via ':'). The pools defined there would be created and set up as StorageClasses, roughly as sketched below.
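
A small sketch of how such a hypothetical ceph-pools value could be parsed, assuming the whitespace-delimited, colon-separated format described above; both the option name and the format are proposals, not existing charm config.

# Sketch: parse a hypothetical "ceph-pools" config value such as
# "ceph-ssd:replicas=3 ceph-hdd ceph-xfs:fsType=xfs" into pool definitions
# that could then be created and exposed as StorageClasses.
def parse_ceph_pools(raw):
    pools = {}
    for entry in raw.split():
        name, *opts = entry.split(":")
        pools[name] = dict(opt.split("=", 1) for opt in opts if "=" in opt)
    return pools

print(parse_ceph_pools("ceph-ssd:replicas=3 ceph-hdd ceph-xfs:fsType=xfs"))
# -> {'ceph-ssd': {'replicas': '3'}, 'ceph-hdd': {}, 'ceph-xfs': {'fsType': 'xfs'}}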

Or, in the case of ceph-proxy, specify the pools the ceph admin pre-created so that they would automatically be set up as StorageClasses, and the ceph-csi bits would be auto-installed.

I realize the lattermost should possibly be a second feature request.

George Kraft (cynerva)
Changed in charm-kubernetes-master:
importance: Undecided → Wishlist
status: New → Triaged