Completed pods in kube-system namespace cause "Waiting for X kube-system pods to start" status
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
Kubernetes Control Plane Charm | Fix Released | Medium | Mateo Florido | 1.27
Bug Description
```
kubernetes-master/0  waiting  idle  0  <MASKED>  6443/tcp  Waiting for 7 kube-system pods to start
kubernetes-master/1  waiting  idle  1  <MASKED>  6443/tcp  Waiting for 7 kube-system pods to start
kubernetes-
```
```
NAME             READY  STATUS  RESTARTS  AGE
pod/calico-
pod/coredns-
pod/kube-
pod/metrics-
pod/node-
pod/node-
pod/node-
pod/node-
pod/node-
pod/node-
pod/node-

NAME              TYPE       CLUSTER-IP  EXTERNAL-IP  PORT(S)  AGE
service/kube-dns  ClusterIP  <MASKED>    <none>       53/UDP,
service/
service/

NAME         READY  UP-TO-DATE  AVAILABLE  AGE
deployment.
deployment.
deployment.
deployment.

NAME        DESIRED  CURRENT  READY  AGE
replicaset.
replicaset.
replicaset.
replicaset.
```
The node-shell pods are in 'Completed' state and, as far as I can see, should not be counted. Running `kubectl -n kube-system delete pod/node-shell-*` fixed the status.
description: updated

Changed in charm-kubernetes-master:
assignee: nobody → Mateo Florido (mateoflorido)

Changed in charm-kubernetes-master:
milestone: none → 1.27

Changed in charm-kubernetes-master:
status: Fix Committed → Fix Released
Looks like this could be easily reproduced by creating a Job in the kube-system namespace.
Seems like "Completed" should be added as an acceptable state here: https://github.com/charmed-kubernetes/charm-kubernetes-master/blob/93883d785a5e6394e2de133bc52164aa74695fd5/reactive/kubernetes_master.py#L2473-L2477
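To illustrate the proposed fix, here is a minimal sketch of what an adjusted readiness check could look like. This is not the charm's actual code (the real logic lives in `reactive/kubernetes_master.py`); the function name, input shape, and phase set are illustrative. The key point is that the `Succeeded` phase (displayed by kubectl as `Completed`) is treated as acceptable alongside `Running`:

```python
# Hypothetical sketch of a kube-system pod readiness check.
# kubectl shows pods in phase "Succeeded" as "Completed", so a finished
# Job or node-shell pod should not keep the charm in "waiting" status.

ACCEPTABLE_PHASES = {"Running", "Succeeded"}

def pending_pods(pods):
    """Return the names of pods still waiting to start.

    `pods` is a list of dicts shaped like the `items` of
    `kubectl get pods -o json`:
    {"metadata": {"name": ...}, "status": {"phase": ...}}.
    """
    return [
        pod["metadata"]["name"]
        for pod in pods
        if pod["status"].get("phase") not in ACCEPTABLE_PHASES
    ]

# Example: a Completed (Succeeded) node-shell pod is no longer counted;
# only the Pending pod keeps the charm waiting.
pods = [
    {"metadata": {"name": "coredns-abc"}, "status": {"phase": "Running"}},
    {"metadata": {"name": "node-shell-1"}, "status": {"phase": "Succeeded"}},
    {"metadata": {"name": "metrics-server-x"}, "status": {"phase": "Pending"}},
]
print(pending_pods(pods))
```

With the original logic (only `Running` acceptable), the `Succeeded` node-shell pod would be counted as pending forever, which matches the "Waiting for 7 kube-system pods to start" symptom above.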