Kubernetes iptables interfere with lxd container on the same node

Bug #1932292 reported by Bartosz Woronicz
Affects                         Status  Importance  Assigned to  Milestone
Kubernetes Control Plane Charm  New     Undecided   Unassigned
Kubernetes Worker Charm         New     Undecided   Unassigned
kubernetes                      New     Undecided   Unassigned
lxd                             New     Undecided   Unassigned

Bug Description

Take, for instance, a keystone service running in LXD on a VM that also hosts the K8s master.

There, K8s installs its own firewall rules.

Packets make it the whole way from an LXD container on the first machine to an LXD container on the second, but on the way back from the second VM they hit the firewall that K8s installed on the first VM.
All bare-metal hosts, VMs, and containers share the same network, 10.198.0.0/16.

Here is the exact path of the ICMP packets (or any other packets):

lxd1 -> vm1 > baremetal1 -> network layer2 -> baremetal2 -> vm2 -> lxd2 (echo request reached dst)
then travel back:
lxd2 -> vm2 ---!!!---> baremetal1

No communication; the packets hit the following rule:


Chain KUBE-FORWARD (1 references)
 pkts bytes target prot opt in out source destination
 1353 84652 DROP all -- * * 0.0.0.0/0 0.0.0.0/0 ctstate INVALID
    0 0 ACCEPT all -- * * 0.0.0.0/0 0.0.0.0/0 /* kubernetes forwarding rules */ mark match 0x4000/0x4000
  248 43868 ACCEPT all -- * * 0.0.0.0/0 0.0.0.0/0 /* kubernetes forwarding conntrack pod source rule */ ctstate RELATED,ESTABLISHED
    0 0 ACCEPT all -- * * 0.0.0.0/0 0.0.0.0/0 /* kubernetes forwarding conntrack pod destination rule */ ctstate RELATED,ESTABLISHED
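The first rule drops any forwarded packet whose conntrack state is INVALID; because the return path is asymmetric, this host's conntrack table has no matching entry for the reply, so it is classified INVALID. As a hedged diagnostic sketch (the sysctl name is real, but logging every invalid packet is noisy and needs root), the classification can be confirmed in the kernel log:

```shell
# Log packets that conntrack classifies as INVALID so the drops
# can be confirmed in the kernel log (255 = log all protocols;
# revert to 0 afterwards).
sudo sysctl -w net.netfilter.nf_conntrack_log_invalid=255

# Reproduce the failing ping, then inspect the kernel log:
sudo dmesg | grep -i invalid

# Turn verbose logging back off.
sudo sysctl -w net.netfilter.nf_conntrack_log_invalid=0
```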

Removing the DROP rule restores connectivity:

ubuntu@k8smaster-1:~$ sudo iptables -D KUBE-FORWARD -m conntrack --ctstate INVALID -j DROP

root@juju-587661-1-lxd-0:~# ping 10.198.0.249
PING 10.198.0.249 (10.198.0.249) 56(84) bytes of data.
64 bytes from 10.198.0.249: icmp_seq=1 ttl=64 time=0.720 ms
64 bytes from 10.198.0.249: icmp_seq=2 ttl=64 time=0.511 ms
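Note that deleting the rule by hand is only a temporary workaround: kube-proxy periodically resyncs the iptables chains it owns and will restore the DROP. A somewhat more durable sketch, assuming the containers hang off a bridge named br0 (a placeholder; substitute the actual bridge interface), is to accept bridge traffic in FORWARD before the Kubernetes chains see it:

```shell
# Accept traffic entering or leaving the container bridge at the top
# of FORWARD, ahead of the KUBE-FORWARD jump installed by kube-proxy.
# 'br0' is illustrative -- use the bridge the LXD containers attach to.
sudo iptables -I FORWARD 1 -i br0 -j ACCEPT
sudo iptables -I FORWARD 1 -o br0 -j ACCEPT
```

Even this is not bulletproof: kube-proxy may reinsert its own jump rules at the top of FORWARD on resync, so rule ordering is not guaranteed to survive indefinitely.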

Revision history for this message
Bartosz Woronicz (mastier1) wrote :

wrong project, fixed

affects: murano-applications → kubernetes
Revision history for this message
Pedro Guimarães (pguimaraes) wrote :

Also marking the LXD project to get their opinion, given this is a conflict between kube-proxy and LXD in the way they manage iptables.

Revision history for this message
Stéphane Graber (stgraber) wrote :

Can you show the entire firewall please?
Also the output of 'lxc config show --expanded NAME' for an affected instance would be useful.

Revision history for this message
Bartosz Woronicz (mastier1) wrote :

Here you go

Revision history for this message
Stéphane Graber (stgraber) wrote :

Thanks. This shows that you're using LXD with an externally defined bridge, so aren't using any of LXD's own integration with nft or xtables for firewalling.

So this isn't something that LXD would really be able to help with. When LXD itself creates a bridge, it tries to prepend some allow rules to mitigate such issues, though not always very successfully.

In general, we've been trying to push conflicting projects (k8s, docker, ...) to work on their firewalling rules so that they only apply to their own interfaces and don't go messing with other bridges or interfaces on the system.
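As an illustration of that interface-scoped approach, a rule restricted to a project's own interface might look like the following sketch (docker0 is a hypothetical example interface, not taken from this report):

```shell
# Scope the INVALID-state drop to one known interface instead of
# matching all forwarded traffic ('docker0' is illustrative only),
# so unrelated bridges on the same host are left alone.
sudo iptables -A FORWARD -i docker0 -m conntrack --ctstate INVALID -j DROP
```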
