csi-cinder-controllerplugin CrashLoopBackOff
| Affects | Status | Importance | Assigned to | Milestone |
|---|---|---|---|---|
| Openstack Integrator Charm | Incomplete | Medium | Unassigned | |
Bug Description
$ juju version
3.3.1-genericli
charmed k8s v1.28.5
The exported bundle will be attached; it is based on
https:/
After the deployment completes, the csi-cinder-controllerplugin pod goes into CrashLoopBackOff:
$ kubectl get pod -A
NAMESPACE NAME READY STATUS RESTARTS AGE
ingress-
ingress-
kube-system calico-
kube-system calico-node-9wp8g 1/1 Running 0 78m
kube-system calico-node-b9gn4 1/1 Running 0 77m
kube-system coredns-
kube-system csi-cinder-
kube-system csi-cinder-
kube-system csi-cinder-
kube-system kube-state-
kube-system metrics-
kube-system openstack-
kubernetes-
kubernetes-
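To pick the unhealthy pods out of output like the above, the `kubectl get pod -A` listing can be filtered with `awk`. This is a minimal sketch using a trimmed sample of the output; the full pod names were truncated in this report, so the `-0` suffix and the `5/6` ready count below are illustrative assumptions:

```shell
# Sketch: filter `kubectl get pod -A` output for pods that are not Running.
# The heredoc stands in for a live cluster; on a real deployment you would
# pipe `kubectl get pod -A --no-headers` instead.
# Columns: NAMESPACE NAME READY STATUS RESTARTS AGE
cat <<'EOF' > /tmp/pods.txt
kube-system  csi-cinder-controllerplugin-0  5/6  CrashLoopBackOff  265  71m
kube-system  calico-node-9wp8g              1/1  Running           0    78m
kube-system  calico-node-b9gn4              1/1  Running           0    77m
EOF
# Print namespace, name, status, and restart count for unhealthy pods
awk '$4 != "Running" { print $1, $2, $4, $5 }' /tmp/pods.txt
# -> kube-system csi-cinder-controllerplugin-0 CrashLoopBackOff 265
```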
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning Unhealthy 45m (x3 over 70m) kubelet Liveness probe failed: Get "http://
Warning BackOff 5m52s (x265 over 70m) kubelet Back-off restarting failed container cinder-csi-plugin in pod csi-cinder-
Normal Pulled 52s (x18 over 71m) kubelet Container image "rocks.
Changed in charm-openstack-integrator:
milestone: 1.29+ck1 → 1.30
Changed in charm-openstack-integrator:
milestone: 1.30 → 1.31
Thanks for the report. A few things may be going on here:
First, from your bundle it looks like the machine constraints are quite low (2G memory, 16G disk). I know that's the default for k8s-core; we're addressing those in lp:2053058. I'm concerned that disk or OOM pressure may be manifesting as failing pods.
Second, there has been quite a bit of o7k integration refactoring in ck8s 1.29 (the charms are released; the docs are pending publication):
https://github.com/charmed-kubernetes/kubernetes-docs/blob/main/pages/k8s/openstack-integration.md
Third, I see you have the o7k-integrator co-located with kubernetes-control-plane. This isn't a typical supported environment, and I fear network stack conflicts.
Please let us know whether any of the following resolves it:
1) larger machines
2) upgraded openstack integrator charms
3) modified topology to put the integrator on a separate machine
I'm going to set this bug to Incomplete and target it at 1.29+ck1 for now. It would be great if you could attach a juju crashdump if you're still able to repro:
https://ubuntu.com/kubernetes/docs/troubleshooting#collecting-debug-information
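For reference, a hedged sketch of how that debug information is usually collected. The pod name with the `-0` suffix is an assumption (the full name was truncated in the report); the container name `cinder-csi-plugin` comes from the BackOff event above, and `juju-crashdump` is the standalone tool the linked guide describes:

```shell
# Sketch (pod name is illustrative): collect logs and a crashdump for this bug.
collect_debug() {
  # Describe the crashing pod and fetch logs from its previous restart.
  kubectl -n kube-system describe pod csi-cinder-controllerplugin-0
  kubectl -n kube-system logs csi-cinder-controllerplugin-0 \
      -c cinder-csi-plugin --previous
  # juju-crashdump gathers unit logs and status from every machine.
  juju-crashdump
}
# Run only when the tooling is actually present on this host.
if command -v kubectl >/dev/null && command -v juju-crashdump >/dev/null; then
  collect_debug
else
  echo "kubectl and/or juju-crashdump not installed; skipping"
fi
```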
Thanks!