Hi George, thank you for looking into this.
I reproduced the issue by adding a relation between the kubernetes-worker charm and the openstack-integrator charm.
$ kubectl get nodes -o wide
NAME       STATUS   ROLES    AGE   VERSION   INTERNAL-IP      EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION      CONTAINER-RUNTIME
master-1   Ready    <none>   27h   v1.24.1   192.168.150.50   <none>        Ubuntu 20.04.4 LTS   5.4.0-109-generic   containerd://1.5.9
worker-1   Ready    <none>   27h   v1.24.1   192.168.150.51   <none>        Ubuntu 20.04.4 LTS   5.4.0-109-generic   containerd://1.5.9
worker-2   Ready    <none>   27h   v1.24.1   192.168.150.52   <none>        Ubuntu 20.04.4 LTS   5.4.0-109-generic   containerd://1.5.9
worker-3   Ready    <none>   27h   v1.24.1   192.168.141.41   <none>        Ubuntu 20.04.4 LTS   5.4.0-109-generic   containerd://1.5.9
Here is the requested output:
$ juju run -m k8s --unit kubernetes-worker/2 -- network-get kube-control
bind-addresses:
- mac-address: fa:16:3e:e2:d7:70
  interface-name: ens3
  addresses:
  - hostname: ""
    address: 192.168.150.53
    cidr: 192.168.150.0/24
  macaddress: fa:16:3e:e2:d7:70
  interfacename: ens3
- mac-address: 7e:54:d9:66:e6:20
  interface-name: fan-252
  addresses:
  - hostname: ""
    address: 252.53.0.1
    cidr: 252.0.0.0/8
  macaddress: 7e:54:d9:66:e6:20
  interfacename: fan-252
egress-subnets:
- 192.168.150.53/32
ingress-addresses:
- 192.168.150.53
- 252.53.0.1
So it seems that the charm doesn't see the other OpenStack networks. By the way, the --node-ip value in the kubelet's command-line arguments is also correct:
$ juju ssh -m k8s kubernetes-worker/2 'ps aux | grep kubelet | grep node-ip'
root      554863  1.3  0.3 3038536 102496 ?   Ssl  01:28   0:06 /snap/kubelet/2423/kubelet --kubeconfig=/root/cdk/kubeconfig --v=0 --logtostderr=true --node-ip=192.168.150.53 --container-runtime=remote --container-runtime-endpoint=unix:///var/run/containerd/containerd.sock --cloud-provider=external --config=/root/cdk/kubelet/config.yaml --pod-infra-container-image=rocks.canonical.com:443/cdk/pause:3.6
ubuntu    560091  0.0  0.0    8616   3176 pts/0  Ss+  01:36   0:00 bash -c ps aux | grep kubelet | grep node-ip
Connection to 192.168.150.53 closed