k8s: Explicit firewall rules need to be added on the nodes for NodePort service to allow traffic on the NodePort
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
Juniper Openstack | Status tracked in Trunk | | |
R5.0 | Fix Released | High | Ramprakash R |
Trunk | Fix Released | High | Ramprakash R |
Bug Description
Explicit firewall rules need to be created on the node to allow traffic to the NodePort, so that a service can be reached from outside through the NodePort service.
This needs to be taken care of either as part of provisioning or when the NodePort-type service is created.
Currently the FORWARD policy is set to DROP.
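Until provisioning handles this, the missing rules can be sketched as node-local commands. The port (30099) and pod CIDR (10.32.0.0/12) are the values from this report, and the `add_rule` wrapper is only for illustration: it prints each rule instead of applying it, so swap `echo` for the real iptables binary (as root) to install them.

```shell
# Sketch only: rules a provisioner would add on each node so NodePort
# traffic is accepted and forwarded. Substitute the cluster's own
# NodePort and pod CIDR for the example values below.
NODEPORT=30099
POD_CIDR="10.32.0.0/12"

add_rule() {
    # Prints the rule for inspection; replace `echo` with the real
    # iptables binary (run as root) to actually install it.
    echo iptables "$@"
}

add_rule -A INPUT -p tcp --dport "$NODEPORT" -j ACCEPT
add_rule -A FORWARD -s "$POD_CIDR" -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
add_rule -A FORWARD -d "$POD_CIDR" -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
```

The conntrack-based FORWARD rules mirror the shape of the rules the eventual fix installs (shown at the end of this report).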
=======
Build: 5.1.0-184
Deployment: Ansible_deployer
Host OS: CentOS 7.5
=======
Topology
=========
[root@nodei25 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
nodei25 NotReady master 19h v1.9.2
nodei26 Ready <none> 19h v1.9.2
[root@nodei25 ~]#
[root@nodei25 ~]#
[root@nodei25 ~]#
[root@nodei25 ~]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 19h
np-svc-test NodePort 10.105.223.229 <none> 80:30099/TCP 14h
[root@nodei25 ~]# kubectl describe svc np-svc-test
Name: np-svc-test
Namespace: default
Labels: run=load-
Annotations: <none>
Selector: run=load-
Type: NodePort
IP: 10.105.223.229
Port: <unset> 80/TCP
TargetPort: 80/TCP
NodePort: <unset> 30099/TCP
Endpoints: 10.47.255.
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
[root@nodei25 ~]# kubectl get ep
NAME ENDPOINTS AGE
kubernetes 10.204.217.137:6443 19h
np-svc-test 10.47.255.
[root@nodei25 ~]#
on the node
=================
[root@nodei26 ~]#
[root@nodei26 ~]# sysctl net.ipv4.ip_forward
net.ipv4.ip_forward = 1
[root@nodei26 ~]#
[root@nodei26 ~]# iptables -L
Chain INPUT (policy ACCEPT)
target prot opt source destination
ACCEPT tcp -- anywhere anywhere tcp dpt:30099 <<<<<<<<<
KUBE-FIREWALL all -- anywhere anywhere
KUBE-SERVICES all -- anywhere anywhere
Chain FORWARD (policy ACCEPT)
target prot opt source destination
KUBE-FORWARD all -- anywhere anywhere
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
ACCEPT tcp -- anywhere anywhere tcp spt:30099 <<<<<<<<<
ACCEPT tcp -- anywhere anywhere
KUBE-FIREWALL all -- anywhere anywhere
KUBE-SERVICES all -- anywhere anywhere
Chain DOCKER (0 references)
target prot opt source destination
Chain DOCKER-ISOLATION (0 references)
target prot opt source destination
Chain KUBE-FIREWALL (2 references)
target prot opt source destination
DROP all -- anywhere anywhere
Chain KUBE-FORWARD (1 references)
target prot opt source destination
ACCEPT all -- anywhere anywhere
Chain KUBE-SERVICES (2 references)
target prot opt source destination
[root@nodei26 ~]#
[root@nodei26 ~]# sysctl net.ipv4.ip_forward
net.ipv4.ip_forward = 1
[root@nodei26 ~]#
[root@nodei26 ~]# contrail-status
Pod Service Original Name State Status
vrouter agent contrail-
vrouter nodemgr contrail-nodemgr running Up 19 hours
vrouter kernel module is PRESENT
== Contrail vrouter ==
nodemgr: active
agent: active
description: updated
Changed in juniperopenstack:
  assignee: Sachchidanand Vaidya (vaidyasd) → Dinesh Bakiaraj (dineshb)
tags: added: sanityblocker; removed: blocker
This problem was fixed in KubeProxy: https://github.com/kubernetes/kubernetes/pull/52569
For NodePort to work, KubeProxy must be started with "clusterCIDR" set to the pod subnet. Since kubeadm is used to start KubeProxy, clusterCIDR needs to be passed via a config file parameter to "kubeadm init".
Config file:
apiVersion: kubeadm.k8s.io/v1alpha2
kind: MasterConfiguration
kubernetesVersion: v1.11.1
api:
  bindPort: 6443
kubeProxy:
  config:
    clusterCIDR: "10.32.0.0/12"
KubeAdm command:
kubeadm init --config config.yaml
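After init, one can confirm the value propagated by reading kube-proxy's ConfigMap (named kube-proxy in kube-system under kubeadm defaults). A small sketch, run here against a literal snippet so it is self-contained; on a live cluster, pipe `kubectl -n kube-system get configmap kube-proxy -o yaml` into the function instead.

```shell
# Pull the clusterCIDR line out of a kube-proxy ConfigMap dump.
# The here-doc stands in for the output of:
#   kubectl -n kube-system get configmap kube-proxy -o yaml
get_cluster_cidr() {
    grep -o 'clusterCIDR: *"[^"]*"' | head -n1
}

cat <<'EOF' | get_cluster_cidr
data:
  config.conf: |-
    clusterCIDR: "10.32.0.0/12"
EOF
# → clusterCIDR: "10.32.0.0/12"
```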
This makes sure clusterCIDR is configured on every minion in the cluster. The cluster's iptables rules will then show entries added for the clusterCIDR. The FORWARD policy is still "DROP".
[root@b4s404 ~]# iptables -nL
Chain INPUT (policy ACCEPT)
target prot opt source destination
KUBE-FIREWALL all -- 0.0.0.0/0 0.0.0.0/0
KUBE-EXTERNAL-SERVICES all -- 0.0.0.0/0 0.0.0.0/0 ctstate NEW /* kubernetes externally-visible service portals */
Chain FORWARD (policy DROP)
target prot opt source destination
KUBE-FORWARD all -- 0.0.0.0/0 0.0.0.0/0 /* kubernetes forwarding rules */
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
KUBE-FIREWALL all -- 0.0.0.0/0 0.0.0.0/0
KUBE-SERVICES all -- 0.0.0.0/0 0.0.0.0/0 ctstate NEW /* kubernetes service portals */
Chain KUBE-EXTERNAL-SERVICES (1 references)
target prot opt source destination
Chain KUBE-FIREWALL (2 references)
target prot opt source destination
DROP all -- 0.0.0.0/0 0.0.0.0/0 /* kubernetes firewall for dropping marked packets */ mark match 0x8000/0x8000
Chain KUBE-FORWARD (1 references)
target prot opt source destination
ACCEPT all -- 0.0.0.0/0 0.0.0.0/0 /* kubernetes forwarding rules */ mark match 0x4000/0x4000
ACCEPT all -- 10.32.0.0/12 0.0.0.0/0 /* kubernetes forwarding conntrack pod source rule */ ctstate RELATED,ESTABLISHED
ACCEPT all -- 0.0.0.0/0 10.32.0.0/12 /* kubernetes forwarding conntrack pod destination rule */ ctstate RELATED,ESTABLISHED
Chain KUBE-SERVICES (1 references)
target prot opt source destination
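With those conntrack rules in place, a quick end-to-end check is to hit the NodePort on any node from outside the cluster. A minimal sketch; nodei26 and 30099 are the node and port from the transcript above, so adjust both for other setups.

```shell
# From outside the cluster, the NodePort should now answer on any
# node's address, not only where a backend pod runs.
check_nodeport() {
    curl -sS --max-time 5 "http://${1}:${2}/"
}

# Example (needs network reach to the node):
#   check_nodeport nodei26 30099
```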