While deploying Kubeflow atop MicroK8s, I hit an issue where the controller didn't provide enough logging to explain what was preventing the pods from starting.
controller-0: 18:34:02 ERROR juju.worker.dependency "caas-operator-provisioner" manifold worker returned unexpected error: failed to generate operator config for "kfp-api": updating agent config: no existing agent conf found and no new password generated for "kfp-api" operator
The same message above was presented for most of the operators starting in the model.
Studying the controller code around this [error](https://github.com/juju/juju/blob/41051adf37af635239a42f0a3842e62f01d7ca77/worker/caasoperatorprovisioner/worker.go#L244), I discovered that it is linked to the Kubernetes implementation of "Operator" [here](https://github.com/juju/juju/blob/41051adf37af635239a42f0a3842e62f01d7ca77/caas/kubernetes/provider/operator.go#L635).
I would ask that the controller bubble up errors from the Operator classes. In my case, the StatefulSet the k8s provider operator uses to create the pods was in error and carried a clear reason for the failure, but nothing in the debug-logs of either the controller or application models pointed to the StatefulSet being stuck in a retry error loop.
Both the handling of this error and the reporting of it should be improved.
If you see this again, could we get the `kubectl` YAML output of the operator StatefulSet and the ConfigMap (if there is one)?
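For collecting that, something like the following should work against the cluster. The resource names are assumptions based on juju's usual naming (an application `kfp-api` in a model/namespace `kubeflow` typically gets an operator StatefulSet named `kfp-api-operator`); adjust to whatever `kubectl -n kubeflow get statefulsets,configmaps` actually shows:

```shell
# Assumed names: namespace "kubeflow", operator StatefulSet "kfp-api-operator".
kubectl -n kubeflow get statefulset kfp-api-operator -o yaml

# describe includes the Events section, which usually shows the retry/back-off reason.
kubectl -n kubeflow describe statefulset kfp-api-operator

# Operator ConfigMap, if one exists (name is an assumption).
kubectl -n kubeflow get configmap kfp-api-operator-config -o yaml
```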