k8s: Explicit firewall rules need to be added on the nodes for NodePort service to allow traffic on the NodePort

Bug #1781319 reported by Venkatesh Velpula
This bug affects 1 person
Affects: Juniper Openstack (status tracked in Trunk)
  R5.0:  Fix Released, Importance: High, Assigned to: Ramprakash R
  Trunk: Fix Released, Importance: High, Assigned to: Ramprakash R

Bug Description

Explicit firewall rules need to be created on the node to allow traffic on the NodePort, so that the service can be reached from outside the cluster via a NodePort service.

This needs to be taken care of either as part of provisioning or when the NodePort-type service is created.

Currently the FORWARD chain policy is set to DROP.
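
As a stopgap, the rules that were added manually on nodei26 (visible in the iptables output further down) amount to something like the following sketch, assuming NodePort 30099; this is a workaround, not the eventual fix:

```
# Allow inbound traffic to the NodePort and the corresponding return traffic
iptables -I INPUT  -p tcp --dport 30099 -j ACCEPT
iptables -I OUTPUT -p tcp --sport 30099 -j ACCEPT

# Change the FORWARD policy from its default DROP to ACCEPT
iptables -P FORWARD ACCEPT
```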

==============================
Build: 5.1.0-184
Deployment: Ansible_deployer
Host OS: CentOS 7.5
=============================
Topology
=========
[root@nodei25 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
nodei25 NotReady master 19h v1.9.2
nodei26 Ready <none> 19h v1.9.2

[root@nodei25 ~]#
[root@nodei25 ~]#
[root@nodei25 ~]#
[root@nodei25 ~]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 19h
np-svc-test NodePort 10.105.223.229 <none> 80:30099/TCP 14h

[root@nodei25 ~]# kubectl describe svc np-svc-test
Name: np-svc-test
Namespace: default
Labels: run=load-balancer-test
Annotations: <none>
Selector: run=load-balancer-test
Type: NodePort
IP: 10.105.223.229
Port: <unset> 80/TCP
TargetPort: 80/TCP
NodePort: <unset> 30099/TCP
Endpoints: 10.47.255.250:80,10.47.255.251:80
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>

[root@nodei25 ~]# kubectl get ep
NAME ENDPOINTS AGE
kubernetes 10.204.217.137:6443 19h
np-svc-test 10.47.255.250:80,10.47.255.251:80 14h
[root@nodei25 ~]#
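
To check whether the NodePort is reachable, one can hit the NodePort on the node from outside the cluster, along these lines (a sketch; <node-ip> stands for the node's address, which is not shown here):

```
# From a host outside the cluster; this fails until the firewall rules above are in place
curl http://<node-ip>:30099/
```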

On the node (nodei26)
=================

[root@nodei26 ~]#
[root@nodei26 ~]# sysctl net.ipv4.ip_forward
net.ipv4.ip_forward = 1
[root@nodei26 ~]#
[root@nodei26 ~]# iptables -L
Chain INPUT (policy ACCEPT)
target prot opt source destination
ACCEPT tcp -- anywhere anywhere tcp dpt:30099 <<<<<<<<<

KUBE-FIREWALL all -- anywhere anywhere
KUBE-SERVICES all -- anywhere anywhere

Chain FORWARD (policy ACCEPT) <<<<<<<<< by default it was DROP
target prot opt source destination
KUBE-FORWARD all -- anywhere anywhere

Chain OUTPUT (policy ACCEPT)
target prot opt source destination
ACCEPT tcp -- anywhere anywhere tcp spt:30099<<<<<<<<<<<
ACCEPT tcp -- anywhere anywhere
KUBE-FIREWALL all -- anywhere anywhere
KUBE-SERVICES all -- anywhere anywhere
Chain DOCKER (0 references)
target prot opt source destination

Chain DOCKER-ISOLATION (0 references)
target prot opt source destination

Chain KUBE-FIREWALL (2 references)
target prot opt source destination
DROP all -- anywhere anywhere

Chain KUBE-FORWARD (1 references)
target prot opt source destination
ACCEPT all -- anywhere anywhere

Chain KUBE-SERVICES (2 references)
target prot opt source destination
[root@nodei26 ~]#

[root@nodei26 ~]# sysctl net.ipv4.ip_forward
net.ipv4.ip_forward = 1

[root@nodei26 ~]#
[root@nodei26 ~]# contrail-status
Pod Service Original Name State Status
vrouter agent contrail-vrouter-agent running Up 19 hours
vrouter nodemgr contrail-nodemgr running Up 19 hours

vrouter kernel module is PRESENT
== Contrail vrouter ==
nodemgr: active
agent: active

description: updated
Changed in juniperopenstack:
assignee: Sachchidanand Vaidya (vaidyasd) → Dinesh Bakiaraj (dineshb)
tags: added: sanityblocker
removed: blocker
Revision history for this message
Prasanna Mucharikar (mprasanna) wrote :

This problem was fixed in KubeProxy https://github.com/kubernetes/kubernetes/pull/52569

For NodePort to work, KubeProxy must be passed "clusterCIDR", which corresponds to the Pod subnet, when it starts. Since kubeadm is used to start KubeProxy, clusterCIDR needs to be passed via a config file parameter to "kubeadm init".

Config file:
apiVersion: kubeadm.k8s.io/v1alpha2
kind: MasterConfiguration
kubernetesVersion: v1.11.1
api:
  bindPort: 6443
kubeProxy:
  config:
    clusterCIDR: "10.32.0.0/12"

KubeAdm command:
kubeadm init --config config.yaml

This will make sure clusterCIDR is configured on every minion in the cluster. The node's iptables rules will then show that rules are added for the clusterCIDR. The FORWARD policy is still "DROP".

[root@b4s404 ~]# iptables -nL
Chain INPUT (policy ACCEPT)
target prot opt source destination
KUBE-FIREWALL all -- 0.0.0.0/0 0.0.0.0/0
KUBE-EXTERNAL-SERVICES all -- 0.0.0.0/0 0.0.0.0/0 ctstate NEW /* kubernetes externally-visible service portals */

Chain FORWARD (policy DROP)
target prot opt source destination
KUBE-FORWARD all -- 0.0.0.0/0 0.0.0.0/0 /* kubernetes forwarding rules */

Chain OUTPUT (policy ACCEPT)
target prot opt source destination
KUBE-FIREWALL all -- 0.0.0.0/0 0.0.0.0/0
KUBE-SERVICES all -- 0.0.0.0/0 0.0.0.0/0 ctstate NEW /* kubernetes service portals */

Chain KUBE-EXTERNAL-SERVICES (1 references)
target prot opt source destination

Chain KUBE-FIREWALL (2 references)
target prot opt source destination
DROP all -- 0.0.0.0/0 0.0.0.0/0 /* kubernetes firewall for dropping marked packets */ mark match 0x8000/0x8000

Chain KUBE-FORWARD (1 references)
target prot opt source destination
ACCEPT all -- 0.0.0.0/0 0.0.0.0/0 /* kubernetes forwarding rules */ mark match 0x4000/0x4000
ACCEPT all -- 10.32.0.0/12 0.0.0.0/0 /* kubernetes forwarding conntrack pod source rule */ ctstate RELATED,ESTABLISHED
ACCEPT all -- 0.0.0.0/0 10.32.0.0/12 /* kubernetes forwarding conntrack pod destination rule */ ctstate RELATED,ESTABLISHED

Chain KUBE-SERVICES (1 references)
target prot opt source destination
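
One way to confirm that kube-proxy actually picked up the clusterCIDR is to inspect the kubeadm-managed kube-proxy ConfigMap (a sketch; the ConfigMap is not shown in this report):

```
kubectl -n kube-system get configmap kube-proxy -o yaml | grep clusterCIDR
# expected: clusterCIDR: 10.32.0.0/12
```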

Revision history for this message
Dinesh Bakiaraj (dineshb) wrote :

Hi Ram, this needs to be handled in ansible-deployer.
Assigning this per note from Jeba.
Basically, "kubeadm init" should be changed as follows.
If you need more info, please let me know. Thanks.

### Step 1. Create a yaml file with following config.

***NOTE: "10.32.0.0/12" here is the Pod network that the Contrail cluster was started with.***

Filename: kube-proxy-config.yaml <-- Any name of your choosing.

```
apiVersion: kubeadm.k8s.io/v1alpha2
kind: MasterConfiguration
kubernetesVersion: v1.11.1
api:
  bindPort: 6443
kubeProxy:
  config:
    clusterCIDR: "10.32.0.0/12"
```

### Step 2. Instantiate the kubernetes cluster.

```
kubeadm init --config kube-proxy-config.yaml
```
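
The fix that eventually merged (see below) should achieve the same result with a single flag instead of a config file; the CIDR still has to match the Pod network the Contrail cluster was started with:

```
kubeadm init --pod-network-cidr=10.32.0.0/12
```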

Revision history for this message
OpenContrail Admin (ci-admin-f) wrote : [Review update] master

Review in progress for https://review.opencontrail.org/45216
Submitter: Ramprakash R (<email address hidden>)

Revision history for this message
OpenContrail Admin (ci-admin-f) wrote : [Review update] R5.0

Review in progress for https://review.opencontrail.org/45218
Submitter: Ramprakash R (<email address hidden>)

Revision history for this message
OpenContrail Admin (ci-admin-f) wrote : A change has been merged

Reviewed: https://review.opencontrail.org/45216
Committed: http://github.com/Juniper/contrail-ansible-deployer/commit/b913328624cdeec06f09f06d43308a00b1396fbd
Submitter: Zuul v3 CI (<email address hidden>)
Branch: master

commit b913328624cdeec06f09f06d43308a00b1396fbd
Author: Ramprakash <email address hidden>
Date: Thu Aug 2 11:36:47 2018 -0700

Run kubeadm init with the --pod-network-cidr option

Use --pod-network-cidr to install appropriate iptable rules

Change-Id: I9e72e11b0e0c8941d89b84120f7c32d0d07f5a9d
Closes-Bug: #1781319

Revision history for this message
OpenContrail Admin (ci-admin-f) wrote :

Reviewed: https://review.opencontrail.org/45218
Committed: http://github.com/Juniper/contrail-ansible-deployer/commit/1a8ed2f2fb58f0c9cddbf42c18459f0765e88a07
Submitter: Zuul v3 CI (<email address hidden>)
Branch: R5.0

commit 1a8ed2f2fb58f0c9cddbf42c18459f0765e88a07
Author: Ramprakash <email address hidden>
Date: Thu Aug 2 11:36:47 2018 -0700

Run kubeadm init with the --pod-network-cidr option

Use --pod-network-cidr to install appropriate iptable rules

Change-Id: I9e72e11b0e0c8941d89b84120f7c32d0d07f5a9d
Closes-Bug: #1781319
