[K8s-R5.0]: Traffic is not load balanced to a service when a Network Policy egress rule is created with the CIDR of the Service IP.
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
Juniper Openstack | Won't Fix | High | Pulkit Tandon |
Trunk | Won't Fix | High | Pulkit Tandon |
Bug Description
Configuration:
K8s 1.9.2
master-
Setup:
3-node setup: 1 Kube master, 1 Controller, 2 Agent + K8s slaves
Description:
Created 4 pods in a namespace.
Created a service in the same namespace which binds to 2 of the 4 pods created.
Created a network policy on the namespace as follows:
spec:
  egress:
  - to:
    - ipBlock:
        cidr: 10.104.95.53/32
  podSelector: {}
  policyTypes:
  - Egress
Tried to access the service from the 3rd pod.
wget was done with a count value of 10.
Responses were received only from a single pod; no response was ever returned by the 2nd pod.
Attempt 1:
2018-03-04 19:07:27,984 - WARNING - No http hit seen for one or more pods.Pls check. Hits: {'ctest-
2018-03-04 19:07:54,842 - WARNING - No http hit seen for one or more pods.Pls check. Hits: {'ctest-
2018-03-04 19:08:26,678 - WARNING - No http hit seen for one or more pods.Pls check. Hits: {'ctest-
2018-03-04 19:08:58,538 - WARNING - No http hit seen for one or more pods.Pls check. Hits: {'ctest-
2018-03-04 19:09:35,677 - WARNING - No http hit seen for one or more pods.Pls check. Hits: {'ctest-
2018-03-04 19:10:07,579 - WARNING - No http hit seen for one or more pods.Pls check. Hits: {'ctest-
Attempt 2:
2018-03-04 19:58:09,210 - WARNING - No http hit seen for one or more pods.Pls check. Hits: {'ctest-
2018-03-04 19:58:40,847 - WARNING - No http hit seen for one or more pods.Pls check. Hits: {'ctest-
2018-03-04 19:59:07,554 - WARNING - No http hit seen for one or more pods.Pls check. Hits: {'ctest-
2018-03-04 19:59:29,209 - WARNING - No http hit seen for one or more pods.Pls check. Hits: {'ctest-
2018-03-04 20:00:00,921 - WARNING - No http hit seen for one or more pods.Pls check. Hits: {'ctest-
2018-03-04 20:00:22,570 - WARNING - No http hit seen for one or more pods.Pls check. Hits: {'ctest-
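The "No http hit seen" warning above fires when one or more backend pods never answers any of the repeated requests. A minimal sketch of that check (pod names and counts here are hypothetical, not taken from the truncated logs):

```python
from collections import Counter

def check_hits(responders, expected_pods):
    """Count how many responses each pod served and list pods that
    were never hit -- the condition the test harness warns about."""
    hits = Counter(responders)
    missing = [p for p in expected_pods if hits[p] == 0]
    return dict(hits), missing

# Simulated result matching the bug: 10 wget responses, all from one pod.
hits, missing = check_hits(["pod-a"] * 10, ["pod-a", "pod-b"])
print(hits)     # {'pod-a': 10}
print(missing)  # ['pod-b'] -> harness would log the warning
```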
When the service was accessed from a pod in a different namespace, responses were correctly returned by both pods.
information type: Proprietary → Public
Pulkit,
There are two aspects here:
1. With K8s network policy, the user is responsible for setting up policies so that end-to-end flows succeed. When a VIP (which is essentially a service IP in k8s) is used in the CIDR, that policy only covers the flow from the source up to the point where traffic is load balanced. There has to be another rule/policy that allows traffic from the source to the actual endpoints. Here is my policy with one service IP ("10.97.51.192/32") and three endpoints implementing the service.
"spec": {
"to": [
{
"ipBlock" : {
"cidr": "10.97.51.192/32"
}
} ,
{
"ipBlock" : {
"cidr": "10.47.255.242/32"
}
} ,
{
"ipBlock" : {
"cidr": "10.47.255.240/32"
}
} ,
{
"ipBlock" : {
"cidr": "10.47.255.241/32"
}
}
"egress": [
{
]
}
2. The load balancing of the service-bound traffic itself. With the above policy, I see that load balancing happens as expected. If you still see the issue, please get back to me, ideally with the setup, and I can revisit this.
Thanks.