adding vrouter object later does not update VM object created by kube-manager
| Affects | Status | Importance | Assigned to | Milestone |
|---|---|---|---|---|
| Juniper Openstack | Status tracked in Trunk | | | |
| R4.0 | Fix Committed | High | Vedamurthy Joshi | |
| Trunk | Fix Committed | High | Vedamurthy Joshi | |
Bug Description
R4.0.1.0 continuous build 22, Ubuntu 16.04.2, container setup
It was noticed that kube-dns pods were not getting launched.
The compute node where this pod was scheduled had not fully gone through the internal ansible provisioning, so the virtual-router object was not yet present in the Contrail controller. The kube-dns virtual-machine (VM) object was created nonetheless.
I later brought up the agent container successfully, and the virtual-router object was then added fine. But the existing kube-dns VM object did not get updated to include a reference to that virtual-router.
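The missing step can be illustrated with a small, hypothetical sketch (this is not the actual kube-manager code; object names and fields are stand-ins): when a virtual-router object appears later, a reconciliation pass would revisit existing VM objects and add the missing reference.

```python
# Hypothetical reconciliation sketch -- NOT the actual kube-manager code.
# Contrail objects are modeled as plain dicts: a VM object carries a list
# of references to the virtual-router of the node it is scheduled on.

def reconcile_vrouter(vrouter, vms):
    """When a vrouter (re)appears, link any VM scheduled on its node
    that does not already carry a reference to it. Idempotent."""
    for vm in vms:
        if vm["node"] == vrouter["node"] and vrouter["name"] not in vm["vrouter_refs"]:
            vm["vrouter_refs"].append(vrouter["name"])

# The kube-dns VM was created while the vrouter was still absent:
vms = [{"name": "kube-dns", "node": "vm3", "vrouter_refs": []}]

# The agent container comes up later and the vrouter object is added;
# without a reconciliation pass like this, the VM stays unlinked forever.
reconcile_vrouter({"name": "vrouter-vm3", "node": "vm3"}, vms)
print(vms[0]["vrouter_refs"])  # -> ['vrouter-vm3']
```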
---
The current workaround is to restart the kube-dns pod that is stuck in the ContainerCreating state.
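A minimal sketch of that workaround, assuming the standard `k8s-app=kube-dns` label in the `kube-system` namespace (the label is not confirmed in this report): deleting the pod lets its controller recreate it once the virtual-router object exists.

```shell
# Hedged workaround sketch: delete the stuck kube-dns pod so it is
# recreated after the virtual-router object is present.
# Assumes the conventional k8s-app=kube-dns label (not shown in this report).
SELECTOR="k8s-app=kube-dns"
if command -v kubectl >/dev/null 2>&1; then
    kubectl -n kube-system delete pod -l "$SELECTOR"
    kubectl -n kube-system get pods -l "$SELECTOR" -o wide
fi
```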
```
root@testbed-
NAME READY STATUS RESTARTS AGE IP NODE
etcd-testbed-1-vm1 1/1 Running 1 9h 10.204.217.194 testbed-1-vm1
kube-apiserver-
kube-controller
kube-dns-
kube-dns-
kube-proxy-1hl4n 1/1 Running 1 9h 10.204.217.198 testbed-1-vm3
kube-proxy-9w1jm 1/1 Running 1 9h 10.204.217.197 testbed-1-vm2
kube-proxy-q4h87 1/1 Running 1 9h 10.204.217.194 testbed-1-vm1
kube-scheduler-
root@testbed-
```
summary changed:
- from: sometimes, vm object for pod is not having link to virtual-router
- to: adding vrouter object later does not update VM object created by kube-manager
I applied the workaround for https://bugs.launchpad.net/juniperopenstack/+bug/1711274, and now I am seeing that a pod that got created during that failover window has a VM object with no link to the vrouter:
```
root@nodec1:~# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE
busybox 0/1 ContainerCreating 0 2m <none> nodek2
nginx1 1/1 Running 0 2m 10.47.255.252 nodek3
root@nodec1:~#
```
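To spot VM objects that are still missing the link, one could list VM objects from the Contrail config API and filter those with no virtual-router back-reference. The filtering step is sketched below with stand-in data; the field name `virtual_router_back_refs` follows the usual Contrail schema conventions but is an assumption here, as is the data shape.

```python
# Hypothetical check: report VM objects lacking a virtual-router back-ref.
# The "virtual_router_back_refs" field name and the object shapes are
# assumptions modeled on Contrail config-API conventions.

def vms_missing_vrouter(vm_objects):
    """Return names of VM objects that carry no virtual-router back-reference."""
    return [vm["name"] for vm in vm_objects
            if not vm.get("virtual_router_back_refs")]

# Stand-in data shaped like config-API responses:
vm_objects = [
    {"name": "busybox", "virtual_router_back_refs": []},  # stuck pod
    {"name": "nginx1",
     "virtual_router_back_refs": [{"to": ["default-global-system-config",
                                          "nodek3"]}]},   # healthy pod
]
print(vms_missing_vrouter(vm_objects))  # -> ['busybox']
```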