baremetal k8s stays blocked when using ceph with full disk encryption
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
Kubernetes Control Plane Charm | Invalid | Undecided | Unassigned |
Bug Description
SQA recently added the relations between Ceph and k8s-master. The Ceph cluster is set to use full disk encryption. The k8s-master units are all staying blocked with "Failed to configure encryption; secrets are unencrypted or inaccessible", and the logs show:
2021-10-31 12:16:51 WARNING unit.kubernetes
2021-10-31 12:16:51 ERROR unit.kubernetes
Traceback (most recent call last):
File "/var/lib/
check_
File "/usr/lib/
raise CalledProcessEr
subprocess.
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/var/lib/
vaultlocker
File "/var/lib/
raise VaultLockerErro
charms.
I saw there is a note about setting permissions for containers when using Ceph [1]; is this how that issue presents itself?
A testrun with this issue can be found at:
https:/
with crashdump at:
https:/
and bundle at:
https:/
All testruns with this issue can be found at:
https:/
description: updated
Changed in charm-kubernetes-master:
status: New → Incomplete
The VaultLocker error is unrelated to Ceph, and Ceph using full-disk encryption should be completely transparent to Kubernetes. This error is because you're using LXD placement for the K8s master, which is called out in the docs as being unsupported [1] due to this exact issue: specifically, that the containerized charm cannot manage the loopback device needed to store the encrypted data. The suggested work-arounds are to either have the LXD storage pool encrypted, or use full-disk encryption on the host machine. It might be possible to add support for using Juju storage with encryption [2] but there was some reason we didn't go that route originally, though I can't recall what it was, so it might not be feasible.
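The container limitation can be illustrated with a quick diagnostic. This is a hypothetical sketch, not the charm's actual code: vaultlocker-style setup builds a loopback-backed LUKS volume to hold the encrypted secrets, which needs loop and device-mapper device nodes, and an unprivileged LXD container does not expose either.

```shell
# Hypothetical diagnostic (not part of the charm): check for the device
# nodes that a loopback-backed encrypted store depends on.  On a host
# (or a privileged container with the devices passed through) both exist;
# in a stock unprivileged LXD container they do not, so the setup fails.
if [ -e /dev/loop-control ] && [ -e /dev/mapper/control ]; then
    result="host-like environment: loopback-backed encryption is possible"
else
    result="container-like environment: cannot create the encrypted loopback store"
fi
echo "$result"
```

Either branch printing is expected; the point is that inside the LXD-placed unit the second branch fires, which is why the charm cannot configure encryption there.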
[1]: https://ubuntu.com/kubernetes/docs/encryption-at-rest#known-issues
[2]: https://github.com/juju-solutions/layer-vaultlocker#using-juju-storage-annotations
The permissions issue you mentioned with Ceph only applies in a very specific case: old versions of Ceph (Train and before), combined with CephFS and pods which run as a non-root user but require RWX volumes. It evidently came up at least once, but it's generally a pretty rare combination of circumstances, especially now that OpenStack is several releases beyond that. Either way, it also has nothing to do with full-disk encryption for the Ceph storage.
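For concreteness, the rare combination described above looks something like the following manifest. This is a hypothetical example with invented names, not taken from the test run: a pod running as a non-root user that mounts an RWX volume, which on CephFS-backed storage with old Ceph releases could hit the permissions issue.

```yaml
# Hypothetical example of the combination described above (all names invented):
# a non-root pod with an RWX volume, assuming a CephFS-backed storage class.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-data              # invented name
spec:
  accessModes:
    - ReadWriteMany              # RWX -- needs CephFS rather than RBD
  resources:
    requests:
      storage: 1Gi
  storageClassName: cephfs       # assumes a CephFS-backed storage class exists
---
apiVersion: v1
kind: Pod
metadata:
  name: nonroot-rwx-example      # invented name
spec:
  securityContext:
    runAsUser: 1000              # non-root user -- the problematic case on old Ceph
  containers:
    - name: app
      image: ubuntu:20.04
      command: ["sleep", "infinity"]
      volumeMounts:
        - mountPath: /data
          name: shared
  volumes:
    - name: shared
      persistentVolumeClaim:
        claimName: shared-data
```

On newer Ceph releases, or with root pods, RBD-backed RWO volumes, or CephFS without the non-root constraint, the issue does not arise.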