change CIDR network

Bug #1932551 reported by Robert Gildein
This bug affects 1 person
Affects: Flannel Charm
Status: New
Importance: Undecided
Assigned to: Unassigned
Milestone: (none)

Bug Description

I tried to change the CIDR network, and at first it looked like it was working fine. However, all existing Kubernetes pods kept their old IPs, and all new pods got stuck in ContainerCreating.

I deployed the bundle [1] on top of an LXD cloud.
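
(In case the pastebin link rots: below is a minimal sketch of a bundle along these lines. The application list and channels come from the juju status output further down; the machine placements and relations are the standard Charmed Kubernetes wiring and are assumptions here. The actual bundle at [1] may differ in constraints and charm options.)
```bash
$ cat > bundle.yaml <<'EOF'
series: focal
machines:
  "0": {}
  "1": {}
applications:
  containerd:
    charm: cs:~containers/containerd
    channel: edge
  easyrsa:
    charm: cs:~containers/easyrsa
    channel: edge
    num_units: 1
    to: ["1"]
  etcd:
    charm: cs:~containers/etcd
    channel: edge
    num_units: 1
    to: ["0"]
  flannel:
    charm: cs:~containers/flannel
    channel: edge
  kubernetes-master:
    charm: cs:~containers/kubernetes-master
    channel: edge
    num_units: 1
    to: ["0"]
  kubernetes-worker:
    charm: cs:~containers/kubernetes-worker
    channel: edge
    num_units: 1
    to: ["1"]
relations:
  - [kubernetes-master:kube-api-endpoint, kubernetes-worker:kube-api-endpoint]
  - [kubernetes-master:kube-control, kubernetes-worker:kube-control]
  - [kubernetes-master:certificates, easyrsa:client]
  - [kubernetes-worker:certificates, easyrsa:client]
  - [etcd:certificates, easyrsa:client]
  - [kubernetes-master:etcd, etcd:db]
  - [flannel:etcd, etcd:db]
  - [flannel:cni, kubernetes-master:cni]
  - [flannel:cni, kubernetes-worker:cni]
  - [containerd:containerd, kubernetes-master:container-runtime]
  - [containerd:containerd, kubernetes-worker:container-runtime]
EOF
```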

Environment:
```bash
$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 20.04.2 LTS
Release: 20.04
Codename: focal

$ sudo snap list juju lxd kubectl
Name Version Rev Tracking Publisher Notes
juju 2.9.4 16423 latest/stable canonical✓ classic
kubectl 1.21.1 1976 latest/stable canonical✓ classic
lxd 4.0.6 20326 4.0/stable/… canonical✓ -
```
How I deployed it:
```bash
$ juju deploy ./bundle.yaml

$ juju scp kubernetes-master/0:config ~/.kube/config

$ kubectl get pods -A -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
ingress-nginx-kubernetes-worker default-http-backend-kubernetes-worker-cd9b77777-q6g2p 1/1 Running 0 2m42s 10.1.94.7 juju-3f1edd-1 <none> <none>
ingress-nginx-kubernetes-worker nginx-ingress-controller-kubernetes-worker-gv422 1/1 Running 0 2m34s 10.45.36.91 juju-3f1edd-1 <none> <none>
kube-system coredns-6f867cd986-5lrz9 1/1 Running 0 3m26s 10.1.94.5 juju-3f1edd-1 <none> <none>
kube-system kube-state-metrics-7799879d89-74sp4 1/1 Running 0 3m26s 10.1.94.4 juju-3f1edd-1 <none> <none>
kube-system metrics-server-v0.3.6-f6cf867b4-9rzr2 2/2 Running 0 48s 10.1.94.9 juju-3f1edd-1 <none> <none>
kubernetes-dashboard dashboard-metrics-scraper-8458d7fdf6-xf67k 1/1 Running 0 3m26s 10.1.94.3 juju-3f1edd-1 <none> <none>
kubernetes-dashboard kubernetes-dashboard-5784589f96-9m6tj 1/1 Running 0 3m26s 10.1.94.2 juju-3f1edd-1 <none> <none>

$ juju status
Model Controller Cloud/Region Version SLA Timestamp
test lxd localhost/localhost 2.9.4 unsupported 15:43:07+02:00

App Version Status Scale Charm Store Channel Rev OS Message
containerd go1.13.8 active 2 containerd charmstore edge 134 ubuntu Container runtime available
easyrsa 3.0.1 active 1 easyrsa charmstore edge 387 ubuntu Certificate Authority connected.
etcd 3.4.5 active 1 etcd charmstore edge 597 ubuntu Healthy with 1 known peer
flannel 0.11.0 active 2 flannel charmstore edge 561 ubuntu Flannel subnet 10.1.94.1/24
kubernetes-master 1.21.1 active 1 kubernetes-master charmstore edge 1015 ubuntu Kubernetes master running.
kubernetes-worker 1.21.1 active 1 kubernetes-worker charmstore edge 774 ubuntu Kubernetes worker running.

Unit Workload Agent Machine Public address Ports Message
easyrsa/0* active idle 1 10.45.36.91 Certificate Authority connected.
etcd/0* active idle 0 10.45.36.214 2379/tcp Healthy with 1 known peer
kubernetes-master/0* active idle 0 10.45.36.214 6443/tcp Kubernetes master running.
  containerd/1 active idle 10.45.36.214 Container runtime available
  flannel/1 active idle 10.45.36.214 Flannel subnet 10.1.95.1/24
kubernetes-worker/0* active idle 1 10.45.36.91 80/tcp,443/tcp Kubernetes worker running.
  containerd/0* active idle 10.45.36.91 Container runtime available
  flannel/0* active idle 10.45.36.91 Flannel subnet 10.1.94.1/24

Machine State DNS Inst id Series AZ Message
0 started 10.45.36.214 juju-3f1edd-0 focal Running
1 started 10.45.36.91 juju-3f1edd-1 focal Running

$ juju config flannel cidr=10.2.0.0/16
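
# (Not in the original report.) At this point one could verify the renewed
# subnet lease on a unit; /run/flannel/subnet.env is upstream flannel's
# default environment file, and that path is an assumption here:
$ juju ssh kubernetes-worker/0 -- cat /run/flannel/subnet.env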

$ juju status
Model Controller Cloud/Region Version SLA Timestamp
test lxd localhost/localhost 2.9.4 unsupported 16:01:15+02:00

App Version Status Scale Charm Store Channel Rev OS Message
containerd go1.13.8 active 2 containerd charmstore edge 134 ubuntu Container runtime available
easyrsa 3.0.1 active 1 easyrsa charmstore edge 387 ubuntu Certificate Authority connected.
etcd 3.4.5 active 1 etcd charmstore edge 597 ubuntu Healthy with 1 known peer
flannel 0.11.0 active 2 flannel charmstore edge 561 ubuntu Flannel subnet 10.2.25.1/24
kubernetes-master 1.21.1 active 1 kubernetes-master charmstore edge 1015 ubuntu Kubernetes master running.
kubernetes-worker 1.21.1 active 1 kubernetes-worker charmstore edge 774 ubuntu Kubernetes worker running.

Unit Workload Agent Machine Public address Ports Message
easyrsa/0* active idle 1 10.45.36.91 Certificate Authority connected.
etcd/0* active idle 0 10.45.36.214 2379/tcp Healthy with 1 known peer
kubernetes-master/0* active idle 0 10.45.36.214 6443/tcp Kubernetes master running.
  containerd/1 active idle 10.45.36.214 Container runtime available
  flannel/1 active idle 10.45.36.214 Flannel subnet 10.2.1.1/24
kubernetes-worker/0* active idle 1 10.45.36.91 80/tcp,443/tcp Kubernetes worker running.
  containerd/0* active idle 10.45.36.91 Container runtime available
  flannel/0* active idle 10.45.36.91 Flannel subnet 10.2.25.1/24

Machine State DNS Inst id Series AZ Message
0 started 10.45.36.214 juju-3f1edd-0 focal Running
1 started 10.45.36.91 juju-3f1edd-1 focal Running

$ kubectl get pods -A -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
ingress-nginx-kubernetes-worker default-http-backend-kubernetes-worker-cd9b77777-q6g2p 1/1 Running 0 21m 10.1.94.7 juju-3f1edd-1 <none> <none>
ingress-nginx-kubernetes-worker nginx-ingress-controller-kubernetes-worker-gv422 1/1 Running 0 21m 10.45.36.91 juju-3f1edd-1 <none> <none>
kube-system coredns-6f867cd986-5lrz9 1/1 Running 0 22m 10.1.94.5 juju-3f1edd-1 <none> <none>
kube-system kube-state-metrics-7799879d89-74sp4 1/1 Running 0 22m 10.1.94.4 juju-3f1edd-1 <none> <none>
kube-system metrics-server-v0.3.6-f6cf867b4-9rzr2 2/2 Running 0 19m 10.1.94.9 juju-3f1edd-1 <none> <none>
kubernetes-dashboard dashboard-metrics-scraper-8458d7fdf6-xf67k 1/1 Running 0 22m 10.1.94.3 juju-3f1edd-1 <none> <none>
kubernetes-dashboard kubernetes-dashboard-5784589f96-9m6tj 1/1 Running 0 22m 10.1.94.2 juju-3f1edd-1 <none> <none>

$ kubectl create job hello --image=busybox -- echo "Hello World"

$ kubectl get pods -A -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
default hello-kdrtm 0/1 ContainerCreating 0 16m <none> juju-3f1edd-1 <none> <none>
ingress-nginx-kubernetes-worker default-http-backend-kubernetes-worker-cd9b77777-q6g2p 1/1 Running 0 39m 10.1.94.7 juju-3f1edd-1 <none> <none>
ingress-nginx-kubernetes-worker nginx-ingress-controller-kubernetes-worker-gv422 1/1 Running 0 39m 10.45.36.91 juju-3f1edd-1 <none> <none>
kube-system coredns-6f867cd986-5lrz9 1/1 Running 0 40m 10.1.94.5 juju-3f1edd-1 <none> <none>
kube-system kube-state-metrics-7799879d89-74sp4 1/1 Running 0 40m 10.1.94.4 juju-3f1edd-1 <none> <none>
kube-system metrics-server-v0.3.6-f6cf867b4-9rzr2 2/2 Running 0 37m 10.1.94.9 juju-3f1edd-1 <none> <none>
kubernetes-dashboard dashboard-metrics-scraper-8458d7fdf6-xf67k 1/1 Running 0 40m 10.1.94.3 juju-3f1edd-1 <none> <none>
kubernetes-dashboard kubernetes-dashboard-5784589f96-9m6tj 1/1 Running 0 40m 10.1.94.2 juju-3f1edd-1 <none> <none>
```
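
The symptom pattern is consistent with how CNI address assignment works: a pod gets its pod-network IP only when its sandbox is created, so the already-running pods keep their 10.1.94.x addresses, while new sandboxes fail against stale node state. A common culprit after a flannel CIDR change is the cni0 bridge retaining its old address, which the bridge CNI plugin then refuses to reuse. The stuck pod's events should show the concrete error; a quick way to check (not part of the original report; pod name taken from the output above, and /etc/cni/net.d is the conventional CNI config directory, assumed here):
```bash
# Inspect the stuck pod's events for the sandbox/CNI error:
$ kubectl -n default describe pod hello-kdrtm | tail -n 20

# See what CNI config and bridge address the worker actually has:
$ juju ssh kubernetes-worker/0 -- sudo sh -c 'cat /etc/cni/net.d/*'
$ juju ssh kubernetes-worker/0 -- ip addr show cni0
```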

---
[1]: https://pastebin.ubuntu.com/p/f2QyFw6H8n/

Robert Gildein (rgildein) wrote:

I resolved it by rebooting the kubernetes-worker machine.

```bash
$ kubectl get pods -A -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
default hello-kdrtm 0/1 ContainerCreating 0 25m <none> juju-3f1edd-1 <none> <none>
ingress-nginx-kubernetes-worker default-http-backend-kubernetes-worker-cd9b77777-q6g2p 1/1 Running 0 47m 10.1.94.7 juju-3f1edd-1 <none> <none>
ingress-nginx-kubernetes-worker nginx-ingress-controller-kubernetes-worker-gv422 1/1 Running 0 47m 10.45.36.91 juju-3f1edd-1 <none> <none>
kube-system coredns-6f867cd986-5lrz9 1/1 Running 0 48m 10.1.94.5 juju-3f1edd-1 <none> <none>
kube-system kube-state-metrics-7799879d89-74sp4 1/1 Running 0 48m 10.1.94.4 juju-3f1edd-1 <none> <none>
kube-system metrics-server-v0.3.6-f6cf867b4-9rzr2 2/2 Running 0 45m 10.1.94.9 juju-3f1edd-1 <none> <none>
kubernetes-dashboard dashboard-metrics-scraper-8458d7fdf6-xf67k 1/1 Running 0 48m 10.1.94.3 juju-3f1edd-1 <none> <none>
kubernetes-dashboard kubernetes-dashboard-5784589f96-9m6tj 1/1 Running 0 48m 10.1.94.2 juju-3f1edd-1 <none> <none>

$ juju ssh kubernetes-worker/0 -- sudo reboot now
Connection to 10.45.36.91 closed by remote host.
Connection to 10.45.36.91 closed.
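
# (Not in the original report.) After the machine comes back up, wait for
# the node to report Ready before re-checking the pods:
$ kubectl get nodes -w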

$ kubectl get pods -A -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
default hello-kdrtm 0/1 Completed 0 26m 10.2.25.121 juju-3f1edd-1 <none> <none>
ingress-nginx-kubernetes-worker default-http-backend-kubernetes-worker-cd9b77777-q6g2p 1/1 Running 1 49m 10.2.25.127 juju-3f1edd-1 <none> <none>
ingress-nginx-kubernetes-worker nginx-ingress-controller-kubernetes-worker-gv422 1/1 Running 1 48m 10.45.36.91 juju-3f1edd-1 <none> <none>
kube-system coredns-6f867cd986-5lrz9 1/1 Running 1 49m 10.2.25.124 juju-3f1edd-1 <none> <none>
kube-system kube-state-metrics-7799879d89-74sp4 1/1 Running 1 49m 10.2.25.122 juju-3f1edd-1 <none> <none>
kube-system metrics-server-v0.3.6-f6cf867b4-9rzr2 2/2 Running 2 47m 10....

```
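
A full reboot works here, but it may be more than strictly necessary. A gentler sequence that might suffice is restarting the container runtime and kubelet and then recreating the pods, since pods only pick up the new CIDR once their sandboxes are rebuilt. A sketch; the service names are assumptions for a containerd-based Charmed Kubernetes worker and were not verified on this deployment:
```bash
# Restart the runtime and kubelet so new sandboxes use the updated
# flannel/CNI state (service names assumed):
$ juju ssh kubernetes-worker/0 -- sudo systemctl restart containerd snap.kubelet.daemon

# Recreate the pods so their controllers reschedule them onto the new
# CIDR; note this churns every pod in the cluster:
$ kubectl delete pods --all -A
```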
