The VIM fails to lock controller-0 because it can't apply the NoExecute taint:

2019-04-16T07:49:58.727 controller-1 VIM_Thread[110384] ERROR Caught exception while trying to enable controller-0 kubernetes host services, error=MaxRetryError: HTTPSConnectionPool(host='192.168.206.2', port=6443): Max retries exceeded with url: /api/v1/nodes/controller-0 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused',)).
Traceback (most recent call last):
  File "/usr/lib64/python2.7/site-packages/nfv_plugins/nfvi_plugins/nfvi_infrastructure_api.py", line 907, in enable_host_services
    future.result = (yield)
Exception: MaxRetryError: HTTPSConnectionPool(host='192.168.206.2', port=6443): Max retries exceeded with url: /api/v1/nodes/controller-0 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused',))

This appears to be caused by the kube-apiserver pod not being reachable.

Looking at the controller-1 networking.info file in the collect, I can see that although controller-1 is active, the floating cluster IP (192.168.206.2) is not on the management interface:

9: ens801f0.139@ens801f0: mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 90:e2:ba:b0:dd:50 brd ff:ff:ff:ff:ff:ff
    inet 192.168.204.4/24 brd 192.168.204.255 scope global ens801f0.139:1
       valid_lft forever preferred_lft forever
    inet 192.168.206.4/24 brd 192.168.206.255 scope global ens801f0.139:5
       valid_lft forever preferred_lft forever
    inet 192.168.204.2/24 brd 192.168.204.255 scope global secondary ens801f0.139
       valid_lft forever preferred_lft forever
    inet 192.168.204.6/24 brd 192.168.204.255 scope global secondary ens801f0.139
       valid_lft forever preferred_lft forever
    inet 192.168.204.5/24 brd 192.168.204.255 scope global secondary ens801f0.139
       valid_lft forever preferred_lft forever
    inet6 fe80::92e2:baff:feb0:dd50/64 scope link
       valid_lft forever preferred_lft forever

Looking at the controller-0 networking.info file, I can see that the floating cluster IP (192.168.206.2) is on the management interface:

9: ens801f0.139@ens801f0: mtu 9216 qdisc noqueue state UP group default qlen 1000
    link/ether 90:e2:ba:b0:dc:2c brd ff:ff:ff:ff:ff:ff
    inet 192.168.204.3/24 brd 192.168.204.255 scope global ens801f0.139:1
       valid_lft forever preferred_lft forever
    inet 192.168.206.3/24 brd 192.168.206.255 scope global ens801f0.139:5
       valid_lft forever preferred_lft forever
    inet 192.168.206.2/24 scope global secondary ens801f0.139
       valid_lft forever preferred_lft forever
    inet6 fe80::92e2:baff:feb0:dc2c/64 scope link
       valid_lft forever preferred_lft forever

So in summary, when the swact was done, the floating cluster IP was not moved to controller-1.

I can also see that the kubelet on controller-1 is unable to reach the kube-apiserver. So in addition to the cluster IP being on the wrong controller, it doesn't seem to be reachable:

2019-04-16T07:48:31.021 controller-1 kubelet[94159]: info E0416 07:48:31.021588 94159 kubelet_node_status.go:381] Error updating node status, will retry: error getting node "controller-1": Get https://192.168.206.2:6443/api/v1/nodes/controller-1?timeout=4s: context deadline exceeded
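
For context, the taint operation the VIM is attempting is a PATCH against /api/v1/nodes/controller-0 on the floating cluster IP, which is exactly the URL that returns "Connection refused" above. The sketch below is only illustrative: the real VIM code path (nfvi_infrastructure_api.py) uses its own async REST client, and the taint key/value shown here are assumptions, not the values the VIM actually sets.

# Minimal sketch, assuming the standard kubernetes Python client; not the
# VIM's actual implementation. Shows the node-taint PATCH that fails in the
# log above because the apiserver at 192.168.206.2:6443 is unreachable.
from kubernetes import client, config

def taint_node_no_execute(node_name, key="services", value="disabled"):
    # key/value are illustrative assumptions
    config.load_kube_config()
    core_v1 = client.CoreV1Api()
    body = {
        "spec": {
            "taints": [
                {"key": key, "value": value, "effect": "NoExecute"}
            ]
        }
    }
    # PATCH https://<apiserver>:6443/api/v1/nodes/<node_name>
    core_v1.patch_node(node_name, body)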
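
The two symptoms above (floating cluster IP left on controller-0, kubelet on controller-1 timing out against it) can be checked directly on the active controller. The following is a hedged diagnostic sketch, not part of StarlingX; the IP, port, and 4s timeout are taken from the logs above.

# Diagnostic sketch only: is the floating cluster IP configured locally, and
# does anything answer on the kube-apiserver port?
import socket
import subprocess

FLOATING_CLUSTER_IP = "192.168.206.2"
APISERVER_PORT = 6443

def floating_ip_is_local(ip=FLOATING_CLUSTER_IP):
    # Equivalent to scanning the networking.info / "ip addr" output above.
    out = subprocess.check_output(["ip", "-o", "-4", "addr", "show"]).decode()
    return any(" {}/".format(ip) in line for line in out.splitlines())

def apiserver_reachable(ip=FLOATING_CLUSTER_IP, port=APISERVER_PORT, timeout=4):
    # Same endpoint and timeout budget the kubelet uses
    # (https://192.168.206.2:6443, ?timeout=4s -> "context deadline exceeded").
    try:
        sock = socket.create_connection((ip, port), timeout=timeout)
        sock.close()
        return True
    except (socket.error, socket.timeout):
        return False

if __name__ == "__main__":
    print("floating cluster IP configured locally:", floating_ip_is_local())
    print("kube-apiserver reachable:", apiserver_reachable())

On controller-1 in this collect, both checks would come back False, which matches the swact failing to move the floating cluster IP and the kubelet's node status updates timing out.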