YESSS, I run into this all the time. It's not easy to reproduce but I notice it often when deploying COS Lite to MicroK8s. @nvinuesa maybe try deploying COS Lite or its components (traefik, grafana, loki, etc) to see if you get this.
@nvinuesa I'm also not sure how long you waited between
```
$ juju remove-application postgresql-k8s
$ juju show-status-log postgresql-k8s
```
Often with k8s applications, the agent shuts down quickly but k8s takes a long time to remove the pod. That intermediate period is where you'll encounter this issue.
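One way to observe that intermediate window (a sketch, assuming a MicroK8s model/namespace named `testing` — substitute your own model name) is to watch the pod directly while Juju reports the unit as lost:

```shell
# Remove the application; the Juju agent usually stops almost immediately.
juju remove-application postgresql-k8s

# Watch Kubernetes tear the pod down; it can sit in Terminating for a while.
# The namespace "testing" here is an assumption, not from the original report.
microk8s kubectl -n testing get pods -w
```

While the pod is still `Terminating`, `juju status` can show the unit as `agent lost` even though nothing has actually failed.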
I guess since we've intentionally called `remove-application` here, the error message

```
agent lost: check `juju show-status-log ...`
```

is not really useful. Maybe it would be better for the controller to record that the application was intentionally removed (rather than treating it as agent failure) and give a more useful status message like `pod shutting down`. Or, stop showing the unit in `juju status` altogether.