Removing a network policy from a namespace makes pods inaccessible through the load balancer.

Bug #1899148 reported by Roman Dobosz on 2020-10-09
This bug affects 1 person

Affects: kuryr-kubernetes
Importance: High
Assigned to: Roman Dobosz

Bug Description

This issue only applies to Octavia with the Amphora driver.

Creating a NetworkPolicy with an empty pod selector (which denies all ingress traffic in the specified namespace) and then removing it leaves the load balancer listener in an offline state.

Steps to reproduce:

1. kubectl create namespace foo
2. kubectl run --image kuryr/demo -n foo server
3. kubectl expose pod/server -n foo --port 80 --target-port 8080
4. kubectl run --image kuryr/demo -n foo client
5. kubectl exec -ti -n foo client -- curl <server-pod-ip>
(should display: server: HELLO! I AM ALIVE!!!)
6. cat > policy_foo_deny_all.yaml << NIL
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all
  namespace: foo
spec:
  podSelector: {}
  policyTypes:
    - Ingress
NIL
kubectl apply -f policy_foo_deny_all.yaml
7. kubectl exec -ti -n foo client -- curl <server-pod-ip>
(should display: curl: (7) Failed to connect to <server-pod-ip> port 80: Connection refused)
8. kubectl delete -n foo networkpolicies deny-all
9. kubectl exec -ti -n foo client -- curl <server-pod-ip>
(should display: server: HELLO! I AM ALIVE!!!, but it does not)

Examining the Octavia listener for this load balancer reveals that it is in the OFFLINE state and that admin_state_up is False:

$ openstack loadbalancer listener show 6ce5cdb5-abbf-49bc-bcdc-81b5bf8d9276
+-----------------------------+--------------------------------------+
| Field | Value |
+-----------------------------+--------------------------------------+
| admin_state_up | False |
| connection_limit | -1 |
| created_at | 2020-10-08T12:41:30 |
| default_pool_id | 737f2de9-1265-472e-8a5c-4d2684ae2362 |
| default_tls_container_ref | None |
| description | |
| id | 6ce5cdb5-abbf-49bc-bcdc-81b5bf8d9276 |
| insert_headers | None |
| l7policies | |
| loadbalancers | ca14d544-78e5-4999-a29b-c3940ad0bc03 |
| name | foo/foosrvr:TCP:80 |
| operating_status | OFFLINE |
| project_id | 510c39a72a1d420e892324fa9c1dbea8 |
| protocol | TCP |
| protocol_port | 80 |
| provisioning_status | ACTIVE |
| sni_container_refs | [] |
| timeout_client_data | 50000 |
| timeout_member_connect | 5000 |
| timeout_member_data | 50000 |
| timeout_tcp_inspect | 0 |
| updated_at | 2020-10-09T10:02:59 |
| client_ca_tls_container_ref | None |
| client_authentication | NONE |
| client_crl_container_ref | None |
| allowed_cidrs | None |
| tls_ciphers | None |
| tls_versions | None |
| alpn_protocols | None |
+-----------------------------+--------------------------------------+

while it should be up and online.
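Until the fix lands, a possible manual workaround (my assumption, not verified in this report) is to flip the listener's admin state back up with the Octavia CLI, using the listener ID from the output above:

```shell
# Hypothetical manual workaround: re-enable the listener that the
# deleted NetworkPolicy left in admin-down state.
openstack loadbalancer listener set --enable 6ce5cdb5-abbf-49bc-bcdc-81b5bf8d9276

# Confirm admin_state_up is back to True.
openstack loadbalancer listener show 6ce5cdb5-abbf-49bc-bcdc-81b5bf8d9276 \
    -f value -c admin_state_up
```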

Changed in kuryr-kubernetes:
assignee: nobody → Roman Dobosz (roman-dobosz)

Fix proposed to branch: master
Review: https://review.opendev.org/757077

Changed in kuryr-kubernetes:
status: New → In Progress
Changed in kuryr-kubernetes:
importance: Undecided → High

Reviewed: https://review.opendev.org/757077
Committed: https://git.openstack.org/cgit/openstack/kuryr-kubernetes/commit/?id=d26133a02d9306e1561bca5f29dcb7203fe2b9c7
Submitter: Zuul
Branch: master

commit d26133a02d9306e1561bca5f29dcb7203fe2b9c7
Author: Roman Dobosz <email address hidden>
Date: Fri Oct 9 12:27:00 2020 +0200

    Fix restoring listener in case of removing NP.

    When using Amphora with Octavia, a network policy that blocks
    traffic within the namespace sets the LB listener to the offline
    state. After removal of the NP, the listener state was still
    offline. This patch fixes that case.

    Change-Id: I406cdc7d368122c6f828e9fa481d267e56b22ca6
    Closes-Bug: 1899148
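The general idea of the fix can be sketched as follows. This is a hypothetical illustration, not the actual kuryr-kubernetes patch: the `Listener`, `FakeOctaviaClient`, and `on_network_policy_deleted` names are invented, and the real code goes through the Octavia API rather than an in-memory store. The point is that the handler for a NetworkPolicy deletion must re-enable any listeners the policy had forced into admin-down state:

```python
# Hypothetical sketch of the fix's idea; all names are invented for
# illustration and do not mirror the real kuryr-kubernetes code.

class Listener:
    """Minimal stand-in for an Octavia listener record."""

    def __init__(self, listener_id, admin_state_up):
        self.id = listener_id
        self.admin_state_up = admin_state_up


class FakeOctaviaClient:
    """Stand-in for the Octavia client used by the controller."""

    def __init__(self, listeners):
        self.listeners = {l.id: l for l in listeners}

    def listener_set(self, listener_id, admin_state_up):
        # In the real client this would issue a PUT to the Octavia API.
        self.listeners[listener_id].admin_state_up = admin_state_up


def on_network_policy_deleted(client, affected_listener_ids):
    """When a deny-all NetworkPolicy is removed, restore every listener
    that the policy had disabled (admin_state_up=False)."""
    for lid in affected_listener_ids:
        listener = client.listeners[lid]
        if not listener.admin_state_up:
            client.listener_set(lid, admin_state_up=True)
```

Before the fix, the deletion handler updated the security-group side of things but never touched the listener's admin state, which is why the listener stayed OFFLINE even after the policy was gone.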

Changed in kuryr-kubernetes:
status: In Progress → Fix Released