Master needs iptables target to reach cluster services

Bug #1831845 reported by Wouter van Bommel
This bug affects 3 people
Affects: Kubernetes Control Plane Charm
Status: Fix Released
Importance: Undecided
Assigned to: Mike Wilson
Milestone: 1.15+ck1

Bug Description

There is a problem with iptables if no kubelet is running on the same host as the master.

We have the following setup:

api-loadbalancer <-> (2x) kubernetes-master <-> a number of workers

Keystone is used for authentication.

The problem we experienced was that token authentication against the k8s cluster did not work, even though all components appeared to be installed and configured correctly, and syslog on the k8s masters was full of lines like:
Jun 5 06:25:10 juju-1b688f-okcupid-k8s-6 kube-proxy.daemon[2345]: E0605 06:25:10.844830 2345 proxier.go:1344] Failed to execute iptables-restore: exit status 2 (iptables-restore v1.6.1: Couldn't load target `KUBE-MARK-DROP':No such file or directory
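
A quick way to confirm this is the cause (our own check, assuming the stock iptables CLI) is to try listing the chain kube-proxy is complaining about; on an affected master it does not exist:

sudo iptables -t nat -L KUBE-MARK-DROP
iptables: No chain/target/match by that name.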

We finally got this setup running by manually creating a KUBE-MARK-DROP rule with the commands:
sudo iptables -t nat -N KUBE-MARK-DROP
sudo iptables -t nat -A KUBE-MARK-DROP -j MARK --set-mark 0x8000/0x8000
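
The result can then be verified with iptables -S (iptables normalizes the MARK target to its --set-xmark form when printing, regardless of how the rule was added):

sudo iptables -t nat -S KUBE-MARK-DROP
-N KUBE-MARK-DROP
-A KUBE-MARK-DROP -j MARK --set-xmark 0x8000/0x8000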

When checking the code, we found that this rule should be created by the function in pkg/kubelet/kubelet_network_linux.go at line 34, which is called from pkg/kubelet/kubelet.go at line 1427.

As we are not running any kubelet on the kubernetes-master, this code is never called, so the origin of the problem is clear. But how can we prevent this from happening in the future, or after a host restart?
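
Until that is answered, one stopgap for surviving reboots (our own workaround sketch, not part of the charm) is to persist the manually created rule with the iptables-persistent package, which restores /etc/iptables/rules.v4 at boot:

sudo apt-get install iptables-persistent
sudo sh -c 'iptables-save > /etc/iptables/rules.v4'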

Kubernetes is running from the stable/1.13 snap.
On the master:
# snap list
Name                     Version  Rev   Tracking   Publisher   Notes
canonical-livepatch      9.3.0    77    stable     canonical✓  -
cdk-addons               1.13.5   875   1.13/edge  canonical✓  -
core                     16-2.39  6964  stable     canonical✓  core
kube-apiserver           1.13.6   984   1.13/edge  canonical✓  -
kube-controller-manager  1.13.6   980   1.13/edge  canonical✓  -
kube-proxy               1.13.6   992   1.13/edge  canonical✓  classic
kube-scheduler           1.13.6   985   1.13/edge  canonical✓  -
kubectl                  1.13.6   991   1.13/edge  canonical✓  classic

On the workers:
sudo snap list
Name                 Version  Rev   Tracking  Publisher   Notes
canonical-livepatch  9.3.0    77    stable    canonical✓  -
core                 16-2.39  6964  stable    canonical✓  core
kube-proxy           1.13.6   992   1.13      canonical✓  classic
kubectl              1.13.6   991   1.13      canonical✓  classic
kubelet              1.13.6   995   1.13      canonical✓  classic

All managed by charms:
App                    Version        Status  Scale  Charm                  Store       Rev  OS      Notes
canal                  0.10.0/2.6.12  active  8      canal                  jujucharms  610  ubuntu
kubeapi-load-balancer  1.14.0         active  1      kubeapi-load-balancer  jujucharms  628  ubuntu  exposed
kubernetes-master      1.13.6         active  2      kubernetes-master      jujucharms  646  ubuntu
kubernetes-worker      1.13.6         active  6      kubernetes-worker      jujucharms  519  ubuntu  exposed
openstack-integrator                  active  1      openstack-integrator   jujucharms  22   ubuntu

(edit by afreiberger to update the --set-mark option in the KUBE-MARK-DROP commands per comments #1 and #3)

tags: added: canonical-bootstack
Revision history for this message
Mike Wilson (knobby) wrote :

Not trying to nitpick, but for clarity, the commands should be:

sudo iptables -t nat -N KUBE-MARK-DROP
sudo iptables -t nat -A KUBE-MARK-DROP -j MARK --set-mark 0x8000

Revision history for this message
Mike Wilson (knobby) wrote :

I should also say that this fixed my cluster as well. Since kubelet is what creates this table, I think it further encourages us to push kubelet to the master nodes.

Revision history for this message
Drew Freiberger (afreiberger) wrote :

Slight update, the iptables rule for marking is:

iptables -t nat -A KUBE-MARK-DROP -j MARK --set-xmark 0x8000/0x8000
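
For reference (assuming standard iptables MARK semantics): with an explicit /0x8000 mask, --set-mark and --set-xmark behave identically here, since both first zero the masked bits and then OR (set-mark) or XOR (set-xmark) the value in, which is the same operation when the value lies entirely within the mask. Without a mask, --set-mark 0x8000 overwrites the whole packet mark. The masked --set-xmark form is what kubelet itself writes:

iptables -t nat -A KUBE-MARK-DROP -j MARK --set-xmark 0x8000/0x8000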

Revision history for this message
Tim Van Steenburgh (tvansteenburgh) wrote :

Running kubelet on the masters may be where we end up, but that's not a change we'll be able to make quickly. In the meantime, why don't we make the master charm apply these iptables rules? Any reason not to do that?
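
A sketch of what that could look like (hypothetical, not the change that was eventually committed): the charm would apply the chain and rule idempotently, so repeated hook runs and host restarts are safe:

# Create the chain if it does not exist yet (-N fails harmlessly if it does),
# then add the marking rule only if it is not already present (-C checks).
iptables -w -t nat -N KUBE-MARK-DROP 2>/dev/null || true
iptables -w -t nat -C KUBE-MARK-DROP -j MARK --set-xmark 0x8000/0x8000 2>/dev/null || \
    iptables -w -t nat -A KUBE-MARK-DROP -j MARK --set-xmark 0x8000/0x8000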

Changed in charm-kubernetes-master:
status: New → Triaged
description: updated
Revision history for this message
Kevin W Monroe (kwmonroe) wrote :
Changed in charm-kubernetes-master:
status: Triaged → Fix Committed
Changed in charm-kubernetes-master:
milestone: none → 1.15+ck1
summary: - Master will not function without kubelet being active
+ Master needs iptables target to reach cluster services
Revision history for this message
George Kraft (cynerva) wrote :
Changed in charm-kubernetes-master:
assignee: nobody → Mike Wilson (knobby)
Changed in charm-kubernetes-master:
status: Fix Committed → Fix Released