kubernetes-master can't talk to kubeapi-load-balancer on dual stack deployment
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
Kubernetes API Load Balancer | Fix Released | Medium | Cory Johns | 1.19
Bug Description
Background:
* Deploying charmed-kubernetes via juju 2.7.0 on Ubuntu 18.04.
* Using the stock model with no alterations or overlays ("juju deploy charmed-kubernetes").
* The cluster gets to the state where everything is active except the kubernetes-master nodes.
* The juju cloud is a vSphere cloud. All machines are in a dedicated subnet with DHCP services, and both IPv4 and IPv6 are enabled on the vSphere/networking side of things.
* I see no way to disable IPv6 from juju or via the charmed-kubernetes charms or overlays, so I'm not able to tell it not to try to use IPv6.
* Debug indicates the failure is from a `kubectl get po -n kube-system` not working.
* Doing a juju ssh to the kubernetes-master node and running `kubectl get po -n kube-system` as root gives the following error: "The connection to the server 2620:72:
* Doing a juju ssh to the kubeapi-
* The worker nodes in the cluster have the IPv4 address of the API server specified in their kubeconfig file, so they work fine when talking to the API server.
* Updating the kubeconfig files on the kubernetes-master nodes, however, does not fix the issue or let the checks pass. It does fix kubectl from root's command line on those nodes. So perhaps the address used by the kubectl commands is being passed in from somewhere else, or the kubeconfig is cached somehow?
* Easy remediations, in my mind, could be any of the following: enable the load balancer to listen on both IPv4 and IPv6, add an option to explicitly disable IPv6 for the entire deployment in the charm bundle, or make the kubernetes-master checks use only the IPv4 address as the kubernetes-worker checks do.
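To see which address family kubectl is being pointed at, one can inspect the `server:` endpoint in the kubeconfig on the affected node. The path and address below are stand-ins for illustration; on a real master node you would look at the actual kubeconfig (for example with `kubectl config view --minify`) rather than this demo file.

```shell
# Create a minimal stand-in kubeconfig; a real master node would have its
# own kubeconfig written by the charm (the IPv6 address here is made up).
cat > /tmp/demo-kubeconfig <<'EOF'
apiVersion: v1
kind: Config
clusters:
- cluster:
    server: https://[2620:72::1]:443
  name: juju-cluster
EOF

# Print the API server endpoint; a bracketed host in the URL means
# kubectl has been handed an IPv6 address for the load balancer.
grep -E '^[[:space:]]*server:' /tmp/demo-kubeconfig
```

A bracketed IPv6 literal here would match the truncated error above, where kubectl reports it cannot reach a `2620:72:` address.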
Here's my juju status, in case it helps.
```
Model Controller Cloud/Region Version SLA Timestamp
tabby tst-vcenter-dc1 tst-vcenter/dc1 2.7.0 unsupported 14:24:25-06:00
App Version Status Scale Charm Store Rev OS Notes
containerd active 5 containerd jujucharms 53 ubuntu
easyrsa 3.0.1 active 1 easyrsa jujucharms 295 ubuntu
etcd 3.3.15 active 3 etcd jujucharms 485 ubuntu
flannel 0.11.0 active 5 flannel jujucharms 466 ubuntu
kubeapi-
kubernetes-master 1.17.0 waiting 2 kubernetes-master jujucharms 788 ubuntu
kubernetes-worker 1.17.0 active 3 kubernetes-worker jujucharms 623 ubuntu exposed
Unit Workload Agent Machine Public address Ports Message
easyrsa/0* active idle 0 2620:72:
etcd/0* active idle 1 2620:72:
etcd/1 active idle 2 2620:72:
etcd/2 active idle 3 2620:72:
kubeapi-
kubernetes-master/0 waiting idle 5 2620:72:
containerd/4 active idle 2620:72:
flannel/4 active idle 2620:72:
kubernetes-
containerd/3 active idle 2620:72:
flannel/3 active idle 2620:72:
kubernetes-
containerd/1 active idle 2620:72:
flannel/1 active idle 2620:72:
kubernetes-worker/1 active idle 8 2620:72:
containerd/2 active idle 2620:72:
flannel/2 active idle 2620:72:
kubernetes-worker/2 active idle 9 2620:72:
containerd/0* active idle 2620:72:
flannel/0* active idle 2620:72:
```
Changed in charm-kubeapi-load-balancer:
  importance: Undecided → Medium
  status: New → Triaged
no longer affects: charm-kubernetes-master
Changed in charm-kubeapi-load-balancer:
  assignee: nobody → Cory Johns (johnsca)
  milestone: none → 1.19
  status: Triaged → In Progress
tags: removed: review-needed
Changed in charm-kubeapi-load-balancer:
  status: In Progress → Fix Committed
Changed in charm-kubeapi-load-balancer:
  status: Fix Committed → Fix Released
One follow-up: the reason updating the kubeconfig file manually doesn't work is that it's being automatically reset by juju (or something). So even if I put the IPv4 address in there, it gets reset to the IPv6 address the next time the checks run.
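The third remediation suggested above (have the master checks prefer the IPv4 address, as the worker kubeconfigs already do) amounts to a simple selection rule when a node exposes addresses from both families. This is a hypothetical sketch of that rule, not the charm's actual implementation; the function name and address lists are made up for illustration.

```python
import ipaddress

def prefer_ipv4(addresses):
    """Pick an IPv4 address when one is available, else fall back to IPv6.

    Hypothetical address-selection rule for a dual-stack node; not taken
    from the kubeapi-load-balancer or kubernetes-master charm code.
    """
    parsed = [ipaddress.ip_address(a) for a in addresses]
    v4 = [a for a in parsed if a.version == 4]
    return str(v4[0]) if v4 else str(parsed[0])

# A dual-stack node resolves to its IPv4 address; a v6-only node keeps v6.
print(prefer_ipv4(["2620:72::10", "10.0.0.5"]))  # -> 10.0.0.5
print(prefer_ipv4(["2620:72::10"]))              # -> 2620:72::10
```

Under a rule like this, the kubeconfig the charm regenerates would carry the IPv4 endpoint and the master's `kubectl get po -n kube-system` check would behave like the workers' checks do.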