Multi-Master deployments for k8s driver use different service account keys

Bug #1766546 reported by SFilatov
This bug affects 2 people
Affects: Magnum
Status: New
Importance: Undecided
Assigned to: SFilatov

Bug Description

Multi-master deployments for the k8s driver use a different service account key on each api server / controller manager, which leads to 401 errors for service accounts.

We should set artifacts for service-account-private-key-file and service-account-key-file so that each apiserver can authenticate service account tokens generated on any server.
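
For illustration, a minimal sketch of the intended end state, assuming a single shared keypair (file paths are assumptions, not Magnum's actual layout): every apiserver verifies tokens with the same public key that every controller-manager signs them with.

  # On every master node (sketch; paths are illustrative):
  kube-apiserver --service-account-key-file=/etc/kubernetes/certs/service_account.pub              # plus the usual apiserver flags
  kube-controller-manager --service-account-private-key-file=/etc/kubernetes/certs/service_account.key   # plus the usual flags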

Revision history for this message
SFilatov (sergeyfilatov) wrote :

To make things clear about how certificates are used in Kubernetes:

K8s api-server:
1. The apiserver TLS certificate is generated on the master nodes via the make-cert.sh script and signed by the CA (it will be different on each api server).
options:
--tls-cert-file
--tls-private-key-file

2. ca.pem stored in Magnum is deployed to the master nodes via an API call in the make-cert.sh script. It is used for user certificate authentication.
option:
--client-ca-file

3. service-account-key-file is used to verify service account secrets generated by the controller-manager. It should be the same on each master node. It generally has nothing to do with the CA; we basically only need a public key for this.
option:
--service-account-key-file
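
Putting (1)-(3) together, a hedged sketch of one master's api-server flags (paths are illustrative assumptions, not necessarily what the driver writes out):

  # (1) per-master TLS cert/key generated locally by make-cert.sh
  # (2) cluster CA (ca.pem) deployed from magnum, for user certificate authentication
  # (3) shared public key for verifying service account tokens -- must be identical on every master
  kube-apiserver \
      --tls-cert-file=/etc/kubernetes/certs/server.crt \
      --tls-private-key-file=/etc/kubernetes/certs/server.key \
      --client-ca-file=/etc/kubernetes/certs/ca.pem \
      --service-account-key-file=/etc/kubernetes/certs/service_account.pub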

K8s controller-manager:
4. root-ca-file is the ca.pem to be included in each service account's secret.
option:
--root-ca-file

5. service-account-private-key-file is used to sign the secrets and should be the same on each master node. It is the private key for the public one used in (3).
option:
--service-account-private-key-file

6. The cluster signing pair is used in k8s to support the signer API (https://kubernetes.io/docs/tasks/tls/managing-tls-in-a-cluster/). We do need to specify both ca.pem and ca.key for that. Currently Magnum exposes ca.key via user-data if the cert_manager_api label is specified.
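
The matching controller-manager side for (4)-(6), again only a sketch with assumed file paths:

  # (4) ca.pem embedded into each service account's secret
  # (5) shared private key for signing service account tokens -- pairs with the public key in (3)
  # (6) cluster signing pair for the signer API -- this is what currently requires ca.key on the node
  kube-controller-manager \
      --root-ca-file=/etc/kubernetes/certs/ca.pem \
      --service-account-private-key-file=/etc/kubernetes/certs/service_account.key \
      --cluster-signing-cert-file=/etc/kubernetes/certs/ca.pem \
      --cluster-signing-key-file=/etc/kubernetes/certs/ca.key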

Here I assume that we are using a single CA for everything in the cluster.

Currently Magnum uses the apiserver key file for both service-account-private-key-file and service-account-key-file, which is wrong for multi-master deployments.
I suggest we generate the keypair on the Magnum side and deploy it to the master nodes on boot.
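
For illustration, a hedged sketch of what that could look like with openssl (the exact mechanism and file names on the Magnum side are open questions):

  # Generate the service account keypair once, centrally, then deploy
  # both files to every master node at boot.
  openssl genrsa -out service_account.key 4096
  openssl rsa -in service_account.key -pubout -out service_account.pub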

I'm interested in community members' opinions about this, so let's have an open discussion.

Revision history for this message
Spyros Trigazis (strigazi) wrote :

Yes, we use a single CA for the cluster. The ca.key is passed with the heat-agent, not via user_data.

Having the ca.key in the master node is a moderate security concern.

We need to check:
1. If we generate a second CA for certificate signing and also use it for the service account keys, will it work? I mean, will having different sets of CAs cause any incompatibility issues?
2. What will we achieve with this? The problem we are trying to solve is securing the ca.key so that someone who obtains it is not granted access to the cluster. But to access the ca.key you must access a master node. Even with different CAs, if someone has access to a master node, they have access to the kubernetes api as admin and access to the etcd data. The question is, what do we gain?

Revision history for this message
SFilatov (sergeyfilatov) wrote :

I think we are trying to fix too many problems here.
This bug refers to the service account keys being different on each node, which causes errors in multi-master deployments.
ca.key being exposed to master nodes is a security concern but has nothing to do with this bug.
My point here is that the service account key pair does not have to be a CA at all; it does not need to be signed by our cluster CA. Generally, k8s deployments use a different keypair for it.
I suppose we might not want to use the existing ca.key for it (and it's not always present on the master nodes).

SFilatov (sergeyfilatov)
Changed in magnum:
assignee: nobody → SFilatov (sergeyfilatov)