Comment 3 for bug 1847607

Yang Liu (yliu12) wrote : Re: IPv6 regular, 5 mins after swact calico-kube-controllers pod not getting ready

This issue was only uncovered recently because we added an extra check in automation to wait for all Running pods to become Ready after each test. So the last pass indicated in the description was not accurate.
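
For reference, the extra readiness check is conceptually along the lines of the following (a minimal sketch, not the actual automation code; the label selector and timeout value are assumptions):

# Wait for the calico-kube-controllers pod to report the Ready condition
kubectl wait -n kube-system -l k8s-app=calico-kube-controllers \
    --for=condition=Ready pod --timeout=600s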

Looking at previous automation logs, this issue was already seen with the 20190828T013000Z load.

I tried it manually on an IPv4 system; the pod eventually became Ready about 15 minutes after swacting to the new active controller.

Another odd thing is that the pod sometimes moves to a worker node...
controller-1:~$
[2019-09-17 13:40:11,184] 311 DEBUG MainThread ssh.send :: Send 'kubectl get pod -o=wide --all-namespaces'
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE     IP                               NODE        NOMINATED NODE   READINESS GATES
default       resource-consumer-755c664f77-5hmlr         1/1     Running   0          9m22s   dead:beef::bc1b:6533:4fd4:e141   compute-2   <none>           <none>
kube-system   calico-kube-controllers-767467f9cf-8bbrf   1/1     Running   0          7m37s   dead:beef::a2bf:c94c:345d:bc40   compute-0   <none>           <none>