calico-kube-controllers pod sometimes moves to worker node
Affects: StarlingX
Status: Fix Released
Importance: Medium
Assigned to: Joseph Richard
Bug Description
Brief Description
-----------------
The calico-kube-controllers pod sometimes moves to a worker node after a swact followed by a lock/unlock of the previously active controller, instead of staying on a controller node.
Severity
--------
Minor
Steps to Reproduce
------------------
- check that the calico-kube-controllers pod is running on controller-0
- system host-swact controller-0 (pod is still on controller-0)
- lock/unlock controller-0 (now active controller is controller-1)
- check which node the calico-kube-controllers pod is running on
Expected Behavior
------------------
- calico-kube-controllers pod remains on a controller node
Actual Behavior
----------------
- calico-kube-controllers pod moved to a worker node
Reproducibility
---------------
Intermittent
System Configuration
--------------------
Multi-node system
Branch/Pull Time/Commit
-----------------------
2019-10-09_20-00-00
Last Pass
---------
Not sure
Timestamp/Logs
--------------
# Lock controller-0
[2019-10-10 19:00:34,544] 311 DEBUG MainThread ssh.send :: Send 'system --os-username 'admin' --os-password 'Li69nux*' --os-project-name admin --os-auth-url http://[face::2]:5000/v3 --os-user-
# Unlock controller-0
[2019-10-10 19:02:11,611] 311 DEBUG MainThread ssh.send :: Send 'system --os-username 'admin' --os-password 'Li69nux*' --os-project-name admin --os-auth-url http://[face::2]:5000/v3 --os-user-
# calico controller pod moved to worker:
[2019-10-10 19:08:48,602] 311 DEBUG MainThread ssh.send :: Send 'kubectl get pod -o=wide --all-namespaces'
kube-system calico-
Test Activity
-------------
Sanity
tags: added stx.retestneeded
Changed in starlingx: status: Triaged → In Progress
Minor issue, as there is no system impact; architecturally, though, this pod should be tied to a label so it runs on controller nodes only. It would be nice to fix this for stx.3.0.
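The kind of constraint suggested above could look like the following sketch. This is not the actual fix; it assumes the standard `node-role.kubernetes.io/master` label and taint that controller nodes carried in Kubernetes deployments of this era, and the image tag is purely illustrative.

```yaml
# Sketch only: pin the calico-kube-controllers Deployment to controller
# (master) nodes via a nodeSelector, and tolerate the master taint so the
# scheduler is allowed to place it there. Label name and image tag are
# assumptions, not taken from the StarlingX manifests.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: calico-kube-controllers
  namespace: kube-system
spec:
  selector:
    matchLabels:
      k8s-app: calico-kube-controllers
  template:
    metadata:
      labels:
        k8s-app: calico-kube-controllers
    spec:
      # Only schedule onto nodes carrying the master/controller label.
      nodeSelector:
        node-role.kubernetes.io/master: ""
      # Controllers are typically tainted NoSchedule; tolerate that taint.
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
      containers:
        - name: calico-kube-controllers
          image: calico/kube-controllers:v3.6.2  # illustrative tag
```

With a nodeSelector like this in place, a swact or lock/unlock cannot cause the pod to be rescheduled onto a worker, since workers do not carry the controller label.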