2024-04-25 12:13:10 |
Nishant Dash |
description |
After an upgrade of the openstack integrator charm from
-
to
- 1.28/stable 69
we see errors from most of the openstack-cloud-controller-manager-* pods like so:
```
E0424 00:09:16.841262 1 leaderelection.go:330] error retrieving resource lock kube-system/cloud-controller-manager: Get "https://172.16.20.1:443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/cloud-controller-manager?timeout=5s": dial tcp 172.16.20.1:443: connect: connection refused
```
and
```
E0424 13:45:42.209919 1 leaderelection.go:330] error retrieving resource lock kube-system/cloud-controller-manager: Get "https://172.16.20.1:443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/cloud-controller-manager?timeout=5s": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
```
and
```
E0424 13:53:02.277675 1 controller.go:310] error processing service kong/kong-deploy-kong-proxy-api (will retry): failed to ensure load balancer: failed to patch service object kong/kong-deploy-kong-proxy-api: services "kong-deploy-kong-proxy-api" is forbidden: User "system:serviceaccount:kube-system:cloud-provider-openstack" cannot patch resource "services" in API group "" in the namespace "kong"
```
These pods required a manual delete before they resumed healthy communication with the k8s API. |
After an upgrade of the openstack integrator charm from
-
to
- 1.28/stable 69
we see errors from most of the openstack-cloud-controller-manager-* pods like so:
```
E0424 00:09:16.841262 1 leaderelection.go:330] error retrieving resource lock kube-system/cloud-controller-manager: Get "https://<IP>:443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/cloud-controller-manager?timeout=5s": dial tcp <IP>:443: connect: connection refused
```
and
```
E0424 13:45:42.209919 1 leaderelection.go:330] error retrieving resource lock kube-system/cloud-controller-manager: Get "https://<IP>:443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/cloud-controller-manager?timeout=5s": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
```
and
```
E0424 13:53:02.277675 1 controller.go:310] error processing service kong/kong-deploy-kong-proxy-api (will retry): failed to ensure load balancer: failed to patch service object kong/kong-deploy-kong-proxy-api: services "kong-deploy-kong-proxy-api" is forbidden: User "system:serviceaccount:kube-system:cloud-provider-openstack" cannot patch resource "services" in API group "" in the namespace "kong"
```
These pods required a manual delete before they resumed healthy communication with the k8s API. |
|
2024-04-25 12:57:35 |
Nishant Dash |
description |
After an upgrade of the openstack integrator charm from
-
to
- 1.28/stable 69
we see errors from most of the openstack-cloud-controller-manager-* pods like so:
```
E0424 00:09:16.841262 1 leaderelection.go:330] error retrieving resource lock kube-system/cloud-controller-manager: Get "https://<IP>:443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/cloud-controller-manager?timeout=5s": dial tcp <IP>:443: connect: connection refused
```
and
```
E0424 13:45:42.209919 1 leaderelection.go:330] error retrieving resource lock kube-system/cloud-controller-manager: Get "https://<IP>:443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/cloud-controller-manager?timeout=5s": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
```
and
```
E0424 13:53:02.277675 1 controller.go:310] error processing service kong/kong-deploy-kong-proxy-api (will retry): failed to ensure load balancer: failed to patch service object kong/kong-deploy-kong-proxy-api: services "kong-deploy-kong-proxy-api" is forbidden: User "system:serviceaccount:kube-system:cloud-provider-openstack" cannot patch resource "services" in API group "" in the namespace "kong"
```
These pods required a manual delete before they resumed healthy communication with the k8s API. |
After an upgrade of the openstack integrator charm to
1.28/stable 69 on a cloud using k8s 1.24 (upgraded from 1.23)
we see errors from most of the openstack-cloud-controller-manager-* pods like so:
```
E0424 00:09:16.841262 1 leaderelection.go:330] error retrieving resource lock kube-system/cloud-controller-manager: Get "https://<IP>:443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/cloud-controller-manager?timeout=5s": dial tcp <IP>:443: connect: connection refused
```
and
```
E0424 13:45:42.209919 1 leaderelection.go:330] error retrieving resource lock kube-system/cloud-controller-manager: Get "https://<IP>:443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/cloud-controller-manager?timeout=5s": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
```
and
```
E0424 13:53:02.277675 1 controller.go:310] error processing service kong/kong-deploy-kong-proxy-api (will retry): failed to ensure load balancer: failed to patch service object kong/kong-deploy-kong-proxy-api: services "kong-deploy-kong-proxy-api" is forbidden: User "system:serviceaccount:kube-system:cloud-provider-openstack" cannot patch resource "services" in API group "" in the namespace "kong"
```
These pods required a manual delete before they resumed healthy communication with the k8s API. |
|
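The manual remediation described above (deleting the stuck pods so their controller recreates them) can be sketched as a short shell sequence. This is a hypothetical sketch, not the reporter's exact commands: the label selector `k8s-app=openstack-cloud-controller-manager` is an assumption based on typical cloud-provider-openstack deployments and should be verified against the actual pod labels; the `kubectl auth can-i` check probes the permission that the third error message reports as missing.

```shell
# Hypothetical remediation sketch. The label selector below is an assumption;
# confirm it first with:
#   kubectl -n kube-system get pods --show-labels

# List the affected cloud-controller-manager pods.
kubectl -n kube-system get pods -l k8s-app=openstack-cloud-controller-manager

# Delete them; the owning DaemonSet/Deployment recreates fresh replicas,
# which re-establish their connection to the kube-apiserver.
kubectl -n kube-system delete pod -l k8s-app=openstack-cloud-controller-manager

# Confirm the replacement pods reach Ready and stop logging
# leader-election errors.
kubectl -n kube-system get pods -l k8s-app=openstack-cloud-controller-manager

# Separately, probe the RBAC permission that the "forbidden" error says
# the service account lacks (patching services in the kong namespace).
kubectl auth can-i patch services -n kong \
  --as=system:serviceaccount:kube-system:cloud-provider-openstack
```

These commands require access to the affected cluster, so outputs depend on the environment; if the `auth can-i` probe answers "no", the RBAC bindings shipped with the upgraded charm would be the next thing to inspect.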