Upgrade to ck-1.19 does not update credentials across workers

Bug #1905058 reported by Chris Johnston
This bug affects 1 person
Affects: Kubernetes Control Plane Charm
Status: Fix Released
Importance: High
Assigned to: George Kraft
Milestone: 1.19+ck2

Bug Description

Upgrading the Kubernetes charms from 1.18 to 1.19, which includes the switch from basic auth to Kubernetes secrets, does not properly update the credentials across the Kubernetes workers.

Repro:

1) Deploy CK 1.18+ck1 with HA kubernetes-master
2) juju status kubernetes-master [1]
3) Check the kube-control relation data on all masters [2] (one way to do this is sketched below, after the paste links)
4) Shut down the kubernetes-master/leader
5) wait for a new leader to be elected
6) After things have had a chance to settle, power on the kubernetes-master that was shut down
7) Deploy a new worker (unclear whether this step is required)
8) Note that two kubernetes-master units now show the credentials [3]
9) juju upgrade-charm kubernetes-master --revision 891 and let it settle
10) Notice that only one of the masters was updated with the new credentials [4]
11) juju upgrade-charm kubernetes-worker --revision 704 and allow it to settle
12) Log in to a kubernetes-worker and look at /home/ubuntu/.kube/config and /root/cdk/kubeconfig. Experience here varies, but the credentials are not updated to the new tokens.

[1] https://paste.ubuntu.com/p/NpsbWHG8Z2/
[2] https://paste.ubuntu.com/p/MZfFHf2WRS/
[3] https://paste.ubuntu.com/p/qfp99Ngrh8/
[4] https://paste.ubuntu.com/p/mrC7pBZbyS/
[5] https://paste.ubuntu.com/p/dh9gfDk49C/
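
For reference, one way to check the kube-control relation data mentioned in steps 3 and 8 is to read each master's published creds from a worker unit. This is only a sketch: the unit names are examples for a three-master deployment with at least one worker.

juju run -u kubernetes-worker/0 'for u in kubernetes-master/0 kubernetes-master/1 kubernetes-master/2; do
  echo "== $u =="
  # "creds" is the key the masters publish on the kube-control relation
  relation-get -r $(relation-ids kube-control) creds $u
done'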

Tags: sts
Felipe Reyes (freyes) wrote (#1):

This might be related to https://bugs.launchpad.net/charm-kubernetes-master/+bug/1839704, where comment #7 said this wasn't going to need changes on the k8s-master side; I believe it should.

k8s-master should keep the relation data in sync across units, clear the data from the non-leader units, or take advantage of "relation-set --app", although this last option would still require a safe fallback for older versions of Juju.
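
For illustration only, the "relation-set --app" option might look roughly like this. This is a sketch that assumes Juju 2.7+ (where application-level relation data is available) and reuses the "creds" key that the workaround later in this bug clears; the value shown is a placeholder for whatever JSON blob the leader already publishes per-unit.

# Must run on the current leader; follower units cannot write application relation data.
juju run -u kubernetes-master/leader \
  'relation-set --app -r $(relation-ids kube-control) creds="<leader-published creds JSON>"'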

tags: added: field-critical
Changed in charm-kubernetes-master:
assignee: nobody → Kevin W Monroe (kwmonroe)
importance: Undecided → Critical
status: New → Triaged
Kevin W Monroe (kwmonroe) wrote (#2):

First attempt at a repro didn't get very far (in a good way). I:

- verified leadership changed (step 6)
- verified different creds on the relation (step 10)

However, at step 12, everything seems fine:

https://paste.ubuntu.com/p/NQHqh5gw6X/

Do you happen to know if this only happens with hacluster in the mix? I'll try that next, but it'd be nice to know if you've seen this same behavior without hacluster.

Changed in charm-kubernetes-master:
status: Triaged → In Progress
Kevin W Monroe (kwmonroe) wrote (#3):

From a chat with Chris, he's seeing workers that do not get updated kubeconfigs after the switch to k8s secrets. One thing that's different in this bug vs my repro attempts is that the original lead unit was k8s-master/2. For me, it's always k8s-master/0.

The reason this may be significant is that the relation data view [1] that we parse may overwrite data with key/values from the lowest unit name. Perhaps k8s-master/1 became the new leader in step 5 and got new credentials in step 10, but those were being clobbered by old creds from k8s-master/0 (because k-m/0 is a lower unit name than k-m/1).

Then it's possible that the k8s-worker doesn't see a creds change [2], so it never sets the restart flag and therefore never generates the new kubeconfig.

The next step to test this theory is to deploy with a bunch of masters to increase the chance of the original leader not being /0, and the subsequent leader not being /0 either. If that pans out, we'll need to adjust how k8s-worker detects changing creds.
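
One quick way to see whether a worker is still holding old creds on disk (a sketch; the unit name is an example, and the two file paths are the ones from step 12 of the repro):

# Prints the token lines from both kubeconfig files on the worker; compare them
# against the tokens the lead master publishes on the kube-control relation.
juju run -u kubernetes-worker/0 'grep "token:" /root/cdk/kubeconfig /home/ubuntu/.kube/config'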

Until then, a workaround that worked for Chris is to manually change the token in /root/cdk/kubeconfig (admin token) and /home/ubuntu/.kube/config (kubelet token) on the broken workers. You can see a list of tokens that Charmed K8s expects with the following:

juju run -u kubernetes-master/leader '
  for i in $(kubectl --kubeconfig /root/.kube/config get -n kube-system secrets --field-selector type=juju.is/token-auth | grep -o ".*-token-auth"); do
    echo "user: $i"
    # dG9rZW46IA== is base64 for "token: ", so the decoded output reads "token: <value>"
    kubectl --kubeconfig /root/.kube/config get -n kube-system secrets/$i --template=dG9rZW46IA=={{.data.password}} | base64 -d
    echo; echo
  done'

1: https://github.com/juju-solutions/charms.reactive/blob/master/charms/reactive/endpoints.py#L710-L713
2: https://github.com/charmed-kubernetes/charm-kubernetes-worker/blob/master/reactive/kubernetes_worker.py#L1230

Kevin W Monroe (kwmonroe) wrote (#4):

Gah! I got the tokens backwards in #3. That should read:

...manually change the token in /root/cdk/kubeconfig (kubelet token) and /home/ubuntu/.kube/config (admin token) on the broken workers...

Nivedita Singhvi (niveditasinghvi) wrote (#5):

Is there any reason, once you have corrected the tokens on the workers, for it to fail again?

If the tokens are not corrected, and the worker goes to a master with the newer creds, how would restarting kubelet help?

IOW, could this issue be responsible for a scenario in which node renewals fail after about 6 hrs and are restored by restarting kubelet?

Kevin W Monroe (kwmonroe) wrote (#6):

> Is there any reason, once you have corrected the tokens on the workers, for it to fail again?

If a worker received a different, bad set of credentials from a master that had not properly migrated to k8s secrets, the worker would recreate /root/cdk/kubeconfig and restart kubelet with those bad creds and fail. This seems unlikely since you corrected the tokens on the worker in the first place because it had bad creds. Getting those same bad creds shouldn't trigger a restart since the data hasn't changed, and getting new good creds should be fine since these should match what you corrected them to.

> node renewals fail after about 6 hrs and are restored by restarting kubelet

I cannot think of a scenario where bad creds work their way into a worker after 6 hours, nor how restarting kubelet would resolve that since the bad creds would still be on disk.

Kevin W Monroe (kwmonroe) wrote (#7):

I was able to get the env like I wanted in comment #3, but no matter what combination of "old" vs "new" creds was on the kube-control relation of "leader" vs "follower" master units, all my workers had the "new" creds in their kubelet kubeconfig files.

I'm moving this to Incomplete until we can get more details on a failed env. If possible, please attach a juju crashdump; hopefully we'll find more clues in the auth-webhook and unit debug-hook logs as to why some workers are insistent on holding onto old creds.
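
For anyone hitting this, the juju-crashdump tool can collect those logs into a tarball that can be attached here. A rough sketch, assuming the tool is installed (it is available as a snap or from PyPI) and run against the affected model:

# Switch to the affected model first; <affected-model> is a placeholder.
juju switch <affected-model>
juju-crashdump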

Changed in charm-kubernetes-master:
assignee: Kevin W Monroe (kwmonroe) → nobody
status: In Progress → Incomplete
Kevin W Monroe (kwmonroe) wrote (#8):

The lead master always seems to have the right creds on the k-c relation. Given that, a better workaround than the one in comment #3 would be:

juju run -u kubernetes-master/FOLLOWER_UNIT_NUMS 'relation-set -r $(relation-ids kube-control) creds='

This would ensure that *only* the master leader is providing creds to workers, and it's easier than manually changing kubeconfig files on each worker.
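
For example, if kubernetes-master/0 is the current leader (juju status marks the leader unit with a *), the follower units could be cleared like this; the unit numbers are examples and will differ per deployment:

# Clear the stale creds published by each follower so workers only see the leader's.
juju run -u kubernetes-master/1,kubernetes-master/2 'relation-set -r $(relation-ids kube-control) creds='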

Changed in charm-kubernetes-master:
assignee: nobody → George Kraft (cynerva)
Chris Johnston (cjohnston) wrote (#9):

Regarding comment #8, I think that would have to be done prior to the upgrade-charm. We cleared the creds on our non-leader units after the upgrade-charm had run and nothing changed; we still had to manually clean things up.

Chris Johnston (cjohnston) wrote (#10):

The workaround in comment #3 has worked for me a couple of times at this point.

Chris Sanders (chris.sanders) wrote (#11):

With confirmation that the workaround in comment #3 does work, I've adjusted this bug to field-high status. We'll continue to work on a fix for the charms; anyone who hits this issue in the meantime, please see the above workaround for an immediate fix.

George Kraft (cynerva) wrote (#12):

I just reproduced this. It looks like it only occurs if there is a kubernetes-master follower with stale creds, and the follower's unit ID is *higher* than that of the current leader. e.g. kubernetes-master/0 is current leader, kubernetes-master/1 is former leader with stale creds. The higher unit ID takes precedence because of the merge logic in interface-kube-control[1].

The workaround from #8 worked for me, although it did take a couple minutes for the workers to update their tokens.

I think Felipe is on the right track in comment #1. Clearing the stale creds from non-leaders is probably the easiest and quickest solution.

[1]: https://github.com/charmed-kubernetes/interface-kube-control/blob/ea90ca566da63750ab90bc1443c77d912abf6d58/requires.py#L57-L58
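
After applying the workaround from #8, it may be worth confirming that only the leader still publishes a creds value on the relation. A sketch, run from any worker unit (which can read each master's published data); relation-list enumerates the master units on the relation:

juju run -u kubernetes-worker/0 'rid=$(relation-ids kube-control)
for u in $(relation-list -r $rid); do
  echo "== $u =="
  # Follower units should print nothing here once their creds key has been cleared.
  relation-get -r $rid creds $u
done'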

Changed in charm-kubernetes-master:
importance: Critical → High
status: Incomplete → In Progress
George Kraft (cynerva) wrote (#13):
tags: added: review-needed
removed: field-critical
George Kraft (cynerva) wrote (#14):
Changed in charm-kubernetes-master:
status: In Progress → Fix Committed
tags: removed: review-needed
George Kraft (cynerva)
Changed in charm-kubernetes-master:
milestone: none → 1.19+ck2
George Kraft (cynerva)
Changed in charm-kubernetes-master:
status: Fix Committed → Fix Released