support k8s-worker being deployed as different names
| Affects | Status | Importance | Assigned to | Milestone |
|---|---|---|---|---|
| Kubernetes Control Plane Charm | Fix Released | Undecided | Kevin W Monroe | 1.20+ck1 |
Bug Description
With the latest charms I was able to get an all-green juju status, but only 3 of 9 kubernetes-worker nodes registered with the cluster when I ran kubectl get nodes.
Pastebin of failure scenario with newest kubernetes-master charm:
https:/
This seems to be an issue with kubernetes-master charm revisions newer than 850, because I was able to successfully deploy and get all workers registered with kubernetes-master revision 850.
Pastebin of success with kubernetes-master 850:
https:/
The bundle I used to reproduce this can be found here:
https:/
As can be seen from the bundle, I also deploy several kubernetes-worker types (each configured with different labels and taints), which may be a cause of the problem on the newer charm revisions.
The k8s version is 1.17, but I recall facing the same issue even with 1.19.
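The reported layout, with the same kubernetes-worker charm deployed under several application names, can be sketched in a Juju bundle like the following. This is an illustrative fragment, not the reporter's actual bundle: the application names, unit counts, and label values are hypothetical, and the `labels` option is assumed to be the kubernetes-worker charm's node-label config key.

```yaml
# Hypothetical bundle excerpt: one kubernetes-worker charm deployed
# under two different application names, each with its own node labels.
# Names, unit counts, and label values are illustrative only.
applications:
  kubernetes-worker-cpu:
    charm: cs:kubernetes-worker
    num_units: 6
    options:
      labels: "workload=cpu"
  kubernetes-worker-gpu:
    charm: cs:kubernetes-worker
    num_units: 3
    options:
      labels: "workload=gpu"
```

With a layout like this, all nine units should appear in `kubectl get nodes` once registration works, which is what the fix in this bug is about.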
description: updated
Changed in charm-kubernetes-master:
  assignee: nobody → Kevin W Monroe (kwmonroe)
  status: New → In Progress
Changed in charm-kubernetes-master:
  status: Incomplete → In Progress
summary:
  - kubernetes-master charm versions newer than 850 not registering all worker nodes
  + support k8s-worker being deployed as different names
Changed in charm-kubernetes-master:
  milestone: none → 1.20+ck1
  tags: added: review-needed
Changed in charm-kubernetes-master:
  status: In Progress → Fix Committed
  tags: removed: review-needed
Changed in charm-kubernetes-master:
  status: Fix Committed → Fix Released
Link to juju-crashdump: https://drive.google.com/file/d/1_9ZWClSt09wpF9BvYqkeYCB0kUerOI9R/view?usp=sharing