impossible to delete k8s services when octavia is detected

Bug #1990494 reported by Junien Fridrick
Affects                         Status        Importance  Assigned to   Milestone
Kubernetes Control Plane Charm  Fix Released  High        George Kraft  1.26+ck1
Openstack Integrator Charm      Fix Released  High        George Kraft  1.26+ck1

Bug Description

Hi,

I investigated a k8s service that wouldn't delete today; the `kubectl describe` output looked like this:
https://pastebin.ubuntu.com/p/wWjVSH3MZ6/

As it turns out, by default, OpenStack users don't have access to Octavia (at least in Ussuri, which is the OpenStack version we're using here): see the first line of https://docs.openstack.org/octavia/ussuri/configuration/policy.html.

I had to add the "load-balancer_observer" role to the OpenStack user to allow the k8s service to be deleted.

If a k8s service can be created without octavia access, I think one should be able to delete it as well without octavia access.

Thanks

PS: the version of openstack-integrator I have is:
App                   Version  Status  Scale  Charm                 Channel  Rev  Exposed  Message
openstack-integrator  wallaby  active  1      openstack-integrator  stable   140  no       Ready

$ kubectl version
WARNING: This version information is deprecated and will be replaced with the output from kubectl version --short. Use --output=yaml|json to get the full version.
Client Version: version.Info{Major:"1", Minor:"25", GitVersion:"v1.25.1", GitCommit:"e4d4e1ab7cf1bf15273ef97303551b279f0920a9", GitTreeState:"clean", BuildDate:"2022-09-16T02:32:15Z", GoVersion:"go1.19", Compiler:"gc", Platform:"linux/amd64"}
Kustomize Version: v4.5.7
Server Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.14", GitCommit:"0f77da5bd4809927e15d1658fb4aa8f13ad890a5", GitTreeState:"clean", BuildDate:"2022-06-17T21:57:40Z", GoVersion:"go1.16.15", Compiler:"gc", Platform:"linux/amd64"}
WARNING: version difference between client (1.25) and server (1.21) exceeds the supported minor version skew of +/-1

Revision history for this message
George Kraft (cynerva) wrote:

Interesting. The code that openstack-integrator uses to decide whether Octavia should be used is in detect_octavia[1], which checks for the presence of octavia in the `openstack catalog list` output. I'm guessing this does not take the user's roles into account.
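The catalog-based check can be sketched roughly like this (a minimal illustration, not the charm's actual code; the service name/type values and the shape of `catalog` are assumptions):

```python
# Hypothetical sketch of an Octavia-detection check driven by the service
# catalog, similar in spirit to detect_octavia. The service name/type
# values are illustrative assumptions, not the charm's actual logic.

def detect_octavia(catalog):
    """Return True if an Octavia-style load-balancer service is listed.

    `catalog` is a list of dicts as might be parsed from
    `openstack catalog list` output, e.g.
    [{"Name": "octavia", "Type": "load-balancer"}, ...].
    """
    for service in catalog:
        name = service.get("Name", "").lower()
        type_ = service.get("Type", "").lower()
        if name == "octavia" or type_ == "load-balancer":
            return True
    return False
```

Note that the catalog lists the service for everyone, regardless of whether the current user holds any load-balancer role, which is exactly the gap this bug describes.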

That said, we require Octavia[2], and we require it because the upstream openstack-cloud-controller-manager project that we use to manage k8s LoadBalancer services also requires Octavia[3].

I recommend two fixes:
1. Update the openstack-integration documentation to list required user roles
2. Make the openstack-integrator charm check and verify user roles, and enter Blocked status if required roles are missing

[1]: https://github.com/juju-solutions/charm-openstack-integrator/blob/91984db6176c005340429061c2aef02654b543ad/lib/charms/layer/openstack.py#L131
[2]: https://ubuntu.com/kubernetes/docs/openstack-integration
[3]: https://github.com/kubernetes/cloud-provider-openstack/blob/ec0e52924d107a039524b29e19cb11937b37961e/docs/openstack-cloud-controller-manager/using-openstack-cloud-controller-manager.md
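A role check along the lines of fix (2) could look like the following (a hypothetical sketch; the required-role set and the status tuple are assumptions, not the charm's real implementation):

```python
# Hypothetical sketch of the role check proposed in fix (2): compare the
# roles the charm's OpenStack user actually has against the roles it
# needs, and report a Blocked-style status when any are missing. The
# required-role set here is an illustrative assumption.

REQUIRED_ROLES = {"load-balancer_observer"}

def missing_roles(user_roles, required=REQUIRED_ROLES):
    """Return the set of required roles the user does not have."""
    return required - set(user_roles)

def charm_status(user_roles):
    """Return a (status, message) pair in the spirit of Juju status."""
    missing = missing_roles(user_roles)
    if missing:
        return ("blocked", "missing required roles: " + ", ".join(sorted(missing)))
    return ("active", "Ready")
```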

Changed in charm-openstack-integrator:
importance: Undecided → Medium
status: New → Triaged
George Kraft (cynerva)
Changed in charm-openstack-integrator:
status: Triaged → Confirmed
Revision history for this message
George Kraft (cynerva) wrote:

After discussing this with Junien, I understand that LoadBalancer support is not needed or desired in this cluster, so Octavia roles should not be required. The relation to openstack-integrator is still needed to provide PersistentVolume support via Cinder CSI.

We need to expose a way to disable LoadBalancer support. We can do this by passing the [LoadBalancer] enabled=false config[1] to openstack-cloud-controller-manager. We may need to do this in two places: one in layer-kubernetes-common's generate_openstack_cloud_config[2] and one in openstack-cloud-controller-operator's build_cloud_config[3].
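The config change itself is just an extra stanza in the cloud config handed to openstack-cloud-controller-manager; a minimal sketch of generating it with configparser follows (the [Global] values are placeholders, not real credentials or the charm's actual template):

```python
# Hypothetical sketch of emitting a cloud.conf with LoadBalancer support
# disabled, per the [LoadBalancer] enabled=false option in [1]. The
# [Global] values are placeholders.
import configparser
import io

def generate_cloud_config(lb_enabled):
    config = configparser.ConfigParser()
    config["Global"] = {
        "auth-url": "https://keystone.example.com:5000/v3",  # placeholder
        "region": "RegionOne",                               # placeholder
    }
    # openstack-cloud-controller-manager skips its Octavia integration
    # when the LoadBalancer section is disabled.
    config["LoadBalancer"] = {"enabled": "true" if lb_enabled else "false"}
    buf = io.StringIO()
    config.write(buf)
    return buf.getvalue()
```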

New recommended fix:
1. Add a config option to openstack-integrator to disable LoadBalancer integration
2. Make the openstack-integrator charm check and verify user roles, and enter Blocked status if required roles are missing, BUT don't require Octavia roles if LoadBalancer support is disabled
3. Update the openstack-integration documentation to clarify what roles are required and when

[1]: https://github.com/kubernetes/cloud-provider-openstack/blob/ec0e52924d107a039524b29e19cb11937b37961e/docs/openstack-cloud-controller-manager/using-openstack-cloud-controller-manager.md#load-balancer
[2]: https://github.com/charmed-kubernetes/layer-kubernetes-common/blob/2d388da1bbdc78a82bb7fc649fe9a0cb09af0803/lib/charms/layer/kubernetes_common.py#L550
[3]: https://github.com/canonical/openstack-cloud-controller-operator/blob/5eafef3506344a741d81d26bbb4c6f4fb7caeff8/src/backend.py#L79

George Kraft (cynerva)
Changed in charm-openstack-integrator:
importance: Medium → High
status: Confirmed → Triaged
Revision history for this message
George Kraft (cynerva) wrote:
Changed in charm-openstack-integrator:
assignee: nobody → George Kraft (cynerva)
Changed in charm-kubernetes-master:
assignee: nobody → George Kraft (cynerva)
Changed in charm-openstack-integrator:
milestone: none → 1.26+ck1
Changed in charm-kubernetes-master:
milestone: none → 1.26+ck1
importance: Undecided → High
Changed in charm-openstack-integrator:
status: Triaged → In Progress
Changed in charm-kubernetes-master:
status: New → In Progress
George Kraft (cynerva)
Changed in charm-kubernetes-master:
status: In Progress → Fix Committed
Changed in charm-openstack-integrator:
status: In Progress → Fix Committed
Adam Dyess (addyess)
tags: added: backport-needed
Adam Dyess (addyess)
tags: removed: backport-needed
Changed in charm-kubernetes-master:
status: Fix Committed → Fix Released
Changed in charm-openstack-integrator:
status: Fix Committed → Fix Released