openstack-cloud-controller-manager pods can't connect to k8s api after upgrade

Bug #2063446 reported by Nishant Dash
Affects: Openstack Integrator Charm · Status: New · Importance: Undecided · Assigned to: Unassigned

Bug Description

After upgrading the openstack-integrator charm to channel
1.28/stable (revision 69) on a cloud running k8s 1.24 (upgraded from 1.23),

we see errors like the following from most of the openstack-cloud-controller-manager-* pods:
```
E0424 00:09:16.841262 1 leaderelection.go:330] error retrieving resource lock kube-system/cloud-controller-manager: Get "https://<IP>:443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/cloud-controller-manager?timeout=5s": dial tcp <IP>:443: connect: connection refused
```
and
```
E0424 13:45:42.209919 1 leaderelection.go:330] error retrieving resource lock kube-system/cloud-controller-manager: Get "https://<IP>:443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/cloud-controller-manager?timeout=5s": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
```
and
```
E0424 13:53:02.277675 1 controller.go:310] error processing service kong/kong-deploy-kong-proxy-api (will retry): failed to ensure load balancer: failed to patch service object kong/kong-deploy-kong-proxy-api: services "kong-deploy-kong-proxy-api" is forbidden: User "system:serviceaccount:kube-system:cloud-provider-openstack" cannot patch resource "services" in API group "" in the namespace "kong"
```

These pods required a manual delete before they came back healthy and resumed talking to the k8s API.
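For reference, the manual recovery amounted to deleting the affected pods so their controller recreates them and they re-establish their API connection. A sketch of the command we used; the `k8s-app` label selector is an assumption based on the upstream openstack-cloud-controller-manager manifests and may differ in the charm-managed deployment:

```shell
# Delete the stuck pods in kube-system; the owning controller
# (DaemonSet/Deployment) recreates them with fresh connections.
# NOTE: the label selector below is assumed, not confirmed for
# the charm-deployed manifests -- verify with:
#   kubectl -n kube-system get pods --show-labels
kubectl -n kube-system delete pods \
  -l k8s-app=openstack-cloud-controller-manager
```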
