Podspec charms stuck in waiting for container when deployed with juju 3.4
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
Canonical Juju | Fix Committed | High | Ian Booth (wallyworld) |
3.3 | Fix Committed | High | Ian Booth (wallyworld) |
Bug Description
When deployed in an environment with juju 3.4.1, podspec charms get stuck in the waiting state with the message "waiting for container".
Steps to reproduce:
1. Deploy a podspec charm from Charmhub (kubeflow-volumes 1.8/stable, minio ckf-1.8/stable, mlmd 1.14/stable)
2. Run `juju status` and observe the units go into the described state
Environment:
* microk8s 1.25-strict/stable
* juju 3.4.1 (from 3.4/stable channel)
* microk8s addons: dns, hostpath-storage, rbac, metallb:
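When reproducing this, the stuck units can be spotted programmatically from `juju status --format=json` output rather than by eye. A minimal Python sketch, assuming the standard juju status JSON layout (`applications` → `units` → `workload-status`); the sample data below is hypothetical, not taken from an affected deployment:

```python
import json

# Hypothetical excerpt of `juju status --format=json` output.
SAMPLE_STATUS = """
{
  "applications": {
    "kubeflow-volumes": {
      "units": {
        "kubeflow-volumes/0": {
          "workload-status": {
            "current": "waiting",
            "message": "waiting for container"
          }
        }
      }
    }
  }
}
"""

def stuck_units(status_json: str) -> list[str]:
    """Return names of units stuck in waiting with 'waiting for container'."""
    status = json.loads(status_json)
    stuck = []
    for app in status.get("applications", {}).values():
        for name, unit in app.get("units", {}).items():
            ws = unit.get("workload-status", {})
            if ws.get("current") == "waiting" and "waiting for container" in ws.get("message", ""):
                stuck.append(name)
    return stuck

print(stuck_units(SAMPLE_STATUS))  # ['kubeflow-volumes/0']
```

In a live environment this would be fed the real output, e.g. `stuck_units(subprocess.check_output(["juju", "status", "--format=json"]))`.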
Observations:
* The issue is not present when the mentioned charms are built and deployed locally in the same environment. For instance, building kubeflow-volumes from branch gh:track/1.8 and then deploying it with juju deploy ./path-to.charm --resource oci-image=
* The issue is intermittent, but I have been able to reproduce it by deploying the aforementioned charms; some of our CI runs are also reporting this behaviour:
1. https:/
2. https:/
* This behaviour only applies to podspec charms; of all our charms (30+), only the podspec ones seem to be failing.
Changed in juju:
assignee: nobody → Ian Booth (wallyworld)
Thanks for the bug report.
It is weird that there's no failure when deploying locally built charms. One possible thought is that the locally built one pulls from docker.io/kubeflownotebookswg/volumes-web-app:v1.8.0, whereas I suspect the Charmhub one would be pulling the OCI image from the Charmhub registry. Maybe that is unreliable and occasionally fails.
Have you downloaded the charmhub charm (eg using juju download) and compared it with the locally built one?
We'll look at the logs when we can, but if you had some info on the above questions that would help.
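For the comparison question above: a .charm file is an ordinary zip archive, so the downloaded and locally built charms can be diffed at the file level first before inspecting individual files. A minimal sketch (the charm paths in the usage comment are hypothetical):

```python
import zipfile

def charm_file_diff(charm_a: str, charm_b: str) -> tuple[set, set]:
    """Return (files only in charm_a, files only in charm_b).

    Both arguments are paths to .charm files, which are zip archives.
    """
    with zipfile.ZipFile(charm_a) as za, zipfile.ZipFile(charm_b) as zb:
        names_a = set(za.namelist())
        names_b = set(zb.namelist())
    return names_a - names_b, names_b - names_a

# Usage (hypothetical paths, e.g. after `juju download kubeflow-volumes`):
# only_hub, only_local = charm_file_diff("kubeflow-volumes-hub.charm",
#                                        "kubeflow-volumes-local.charm")
```

Files that exist in both archives but differ in content (e.g. metadata.yaml) would still need a per-file diff after this first pass.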