[K8s-R5.0]: Traffic is not load balanced to a service when a Network Policy Egress rule is created with the CIDR of the Service IP.

Bug #1753257 reported by Pulkit Tandon
This bug affects 1 person

Affects                     Status     Importance  Assigned to
Juniper Openstack           Won't Fix  High        Pulkit Tandon
Juniper Openstack (Trunk)   Won't Fix  High        Pulkit Tandon

Bug Description

Configuration:
K8s 1.9.2
master-centos7-ocata-bld-8

Setup:
3-node setup:
1 Kube master, 1 Controller.
2 Agent + K8s slave nodes.

Description:
Created 4 pods in a namespace.
Created a service in the same namespace which binds to 2 of the 4 pods.
Created a network policy on a namespace as follows:
spec:
  egress:
  - to:
    - ipBlock:
        cidr: 10.104.95.53/32
  podSelector: {}
  policyTypes:
  - Egress

Tried to access the service from a 3rd pod.
wget was run with a count value of 10.
A response was received only from a single pod; no response was returned by the 2nd pod.

Attempt 1:
2018-03-04 19:07:27,984 - WARNING - No http hit seen for one or more pods.Pls check. Hits: {'ctest-pod-45866649': 0, 'ctest-pod-64258198': 3}
2018-03-04 19:07:54,842 - WARNING - No http hit seen for one or more pods.Pls check. Hits: {'ctest-pod-45866649': 0, 'ctest-pod-64258198': 5}
2018-03-04 19:08:26,678 - WARNING - No http hit seen for one or more pods.Pls check. Hits: {'ctest-pod-45866649': 0, 'ctest-pod-64258198': 4}
2018-03-04 19:08:58,538 - WARNING - No http hit seen for one or more pods.Pls check. Hits: {'ctest-pod-45866649': 0, 'ctest-pod-64258198': 4}
2018-03-04 19:09:35,677 - WARNING - No http hit seen for one or more pods.Pls check. Hits: {'ctest-pod-45866649': 0, 'ctest-pod-64258198': 3}
2018-03-04 19:10:07,579 - WARNING - No http hit seen for one or more pods.Pls check. Hits: {'ctest-pod-45866649': 0, 'ctest-pod-64258198': 4}

Attempt 2:
2018-03-04 19:58:09,210 - WARNING - No http hit seen for one or more pods.Pls check. Hits: {'ctest-pod-45866649': 6, 'ctest-pod-64258198': 0}
2018-03-04 19:58:40,847 - WARNING - No http hit seen for one or more pods.Pls check. Hits: {'ctest-pod-45866649': 4, 'ctest-pod-64258198': 0}
2018-03-04 19:59:07,554 - WARNING - No http hit seen for one or more pods.Pls check. Hits: {'ctest-pod-45866649': 5, 'ctest-pod-64258198': 0}
2018-03-04 19:59:29,209 - WARNING - No http hit seen for one or more pods.Pls check. Hits: {'ctest-pod-45866649': 6, 'ctest-pod-64258198': 0}
2018-03-04 20:00:00,921 - WARNING - No http hit seen for one or more pods.Pls check. Hits: {'ctest-pod-45866649': 4, 'ctest-pod-64258198': 0}
2018-03-04 20:00:22,570 - WARNING - No http hit seen for one or more pods.Pls check. Hits: {'ctest-pod-45866649': 6, 'ctest-pod-64258198': 0}
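The WARNING lines above come from a test that tallies which backend pod served each wget hit. A minimal sketch of that check (the function name and the simulated response list are assumptions, not the actual test code):

```python
from collections import Counter

def find_unbalanced_pods(expected_pods, served_by):
    """Return the expected pods that received zero hits.

    served_by is the list of backend pod names that answered each
    request; a pod with zero hits indicates traffic was not load
    balanced to it.
    """
    hits = Counter(served_by)
    return {pod: 0 for pod in expected_pods if hits.get(pod, 0) == 0}

# Simulated data matching Attempt 1: only one pod ever responds.
served = ["ctest-pod-64258198"] * 3
missing = find_unbalanced_pods(
    ["ctest-pod-45866649", "ctest-pod-64258198"], served)
print(missing)  # {'ctest-pod-45866649': 0}
```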

When the service was accessed from a pod in a different namespace, responses were correctly returned by both pods.

Pulkit Tandon (pulkitt)
information type: Proprietary → Public
Revision history for this message
Dinesh Bakiaraj (dineshb) wrote :

Pulkit,

There are two aspects here:

1. With K8s network policy, the user is responsible for setting up policies so that end-to-end flows succeed. When a VIP (which is essentially a service IP in k8s) is used in the CIDR, that policy only covers the flow from the source up to the point of load balancing. There has to be another rule/policy that allows the traffic from the source to the actual endpoints. Here is my policy with one service IP ("10.97.51.192/32") and three endpoints implementing the service.

    "spec": {
        "egress": [
            {
                "to": [
                    {
                        "ipBlock": {
                            "cidr": "10.97.51.192/32"
                        }
                    },
                    {
                        "ipBlock": {
                            "cidr": "10.47.255.242/32"
                        }
                    },
                    {
                        "ipBlock": {
                            "cidr": "10.47.255.240/32"
                        }
                    },
                    {
                        "ipBlock": {
                            "cidr": "10.47.255.241/32"
                        }
                    }
                ]
            }
        ]
    }
2. As for the load balancing of service-bound traffic itself: with the above policy, I see that load balancing happens as expected. If you still see the issue, please get back to me, possibly with the setup, and I can revisit this.
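For readers following the YAML form used elsewhere in this report, the same policy can be sketched as follows (the metadata name is an assumption; the CIDRs are the VIP and endpoints from the JSON above):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-service-and-endpoints   # assumed name
spec:
  podSelector: {}
  policyTypes:
  - Egress
  egress:
  - to:
    - ipBlock:
        cidr: 10.97.51.192/32    # service VIP
    - ipBlock:
        cidr: 10.47.255.242/32   # endpoint
    - ipBlock:
        cidr: 10.47.255.240/32   # endpoint
    - ipBlock:
        cidr: 10.47.255.241/32   # endpoint
```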

Thanks.

Revision history for this message
Pulkit Tandon (pulkitt) wrote :

Hi Dinesh!

Thanks for the explanation.
I tried the same scenario again on build: contrail-5.0.0-25

The observations are as follows:

1. With policy as follows:
spec:
  egress:
  - to:
    - ipBlock:
        cidr: 10.104.95.53/32
  podSelector: {}
  policyTypes:
  - Egress

This time, there is no response from any of the pods. Possibly some fix has changed this behavior.

2. With Policy as follows:
spec:
  egress:
  - to:
    - ipBlock:
        cidr: 10.102.253.73/32
    - ipBlock:
        cidr: 10.47.255.248/32
    - ipBlock:
        cidr: 10.47.255.247/32
  podSelector: {}
  policyTypes:
  - Egress

This policy includes 1 Service IP and 2 endpoint IPs. The request to service IP "10.102.253.73" is load balanced correctly and responses are received from both endpoints.

3. With policy as follows:
spec:
  egress:
  - to:
    - ipBlock:
        cidr: 10.47.255.248/32
    - ipBlock:
        cidr: 10.47.255.247/32
  podSelector: {}
  policyTypes:
  - Egress
This policy includes only the 2 endpoint IPs. The request to service IP "10.102.253.73" fails and no response is received.
Can you please confirm and explain this scenario?

Revision history for this message
Dinesh Bakiaraj (dineshb) wrote : Re: [Bug 1753257] Re: [K8s-R5.0]: Traffic is not load balanced to a service when Network Policy Egress rule is created with cidr of Service IP .

Hi Pulkit,

I have initiated a discussion with Naveen on another thread to understand the expectation from the forwarding plane.

Let's close on this once we get some clarity.

Thanks.


Revision history for this message
Dinesh Bakiaraj (dineshb) wrote :

Clarification from Naveen:
------------------------

Case 1: Assume the source VM and the loadbalancer backend are on different compute nodes. The source VM originating the packet doesn't know that it is talking to a VIP, so there is no NAT at the source compute node; as a result, the ACL needs to have a rule matching the VIP.

Case 2: At the destination compute node, DNAT happens, so ACLs are applied after DNAT.

Regards
Naveen N

Revision history for this message
Dinesh Bakiaraj (dineshb) wrote :

Hi Pulkit,

Per Naveen's reply, the behavior we support is as follows:
If an egress CIDR-based rule is applied on a service VIP, we expect the endpoints to be listed as well.
Granted, the point of a VIP is to not have to list endpoint IPs explicitly.
But the need to apply an egress rule for a service VIP is not a typical use case. If such behavior is required, then you would apply an egress rule based on service tags.
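A sketch of the tag-based alternative suggested here, selecting the service's backend pods by label instead of enumerating VIP and endpoint CIDRs (the policy name and the label key/value are assumptions for illustration):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-egress-to-service-pods   # assumed name
spec:
  podSelector: {}
  policyTypes:
  - Egress
  egress:
  - to:
    - podSelector:
        matchLabels:
          app: my-service-backend      # assumed backend-pod label
```

Because the selector tracks the pods themselves, it continues to match as endpoints are added or removed, which a static CIDR list cannot do.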

So I am closing this as Won't Fix for now. We can revisit this if we come across this specific requirement.

Thanks,
Dinesh

Changed in juniperopenstack:
status: Incomplete → Won't Fix