I've been working on a proof of concept sidecar charm, and I've noticed that it's configured with a readiness probe as follows:
$ microk8s.kubectl describe pod gunicorn-0 -n gunicorn | grep Readiness
Readiness: http-get http://:3856/readiness delay=30s timeout=1s period=10s #success=1 #failure=2
I'm not sure how to change this to use the correct port for the workload it's running (in this case it should be port 80 rather than port 3856).
I've confirmed that if I have four instances of this charm running and connected to my ingress charm (which creates a Kubernetes ingress and service), then switching to a broken image (using `juju attach gunicorn gunicorn-image='gunicorncharmers/gunicorn-base:edge'`) causes all four pods to be restarted in turn with the broken image. If I could configure the readiness probe, I'd have a way of stopping this process from breaking every unit in my application.
Pebble now includes service auto-restart and custom health checks, with level="alive" and level="ready" checks being used for the Kubernetes liveness and readiness probes when running under Juju. These features were released in Juju 2.9.26 and are documented here: https://juju.is/docs/sdk/pebble#heading--service-auto-restart
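As a rough sketch of what that looks like in charm code: you can add a `checks` section to the Pebble layer your charm pushes, with a level="ready" HTTP check against the workload's real port (port 80 in your case). The check name, summary, and command below are hypothetical placeholders, not taken from your charm:

```python
# Hedged sketch of a Pebble layer with a level="ready" health check.
# Juju (2.9.26+) uses level="ready" checks to drive the K8s readiness
# probe, so the probe follows the workload's actual port instead of a
# default. The service command and check name here are illustrative.
layer = {
    "summary": "gunicorn layer",
    "services": {
        "gunicorn": {
            "override": "replace",
            "summary": "gunicorn service",
            "command": "gunicorn app:app --bind 0.0.0.0:80",  # assumed command
            "startup": "enabled",
        },
    },
    "checks": {
        "gunicorn-ready": {
            "override": "replace",
            "level": "ready",  # wired to the K8s readiness probe
            "period": "10s",
            "http": {"url": "http://localhost:80/"},
        },
    },
}

# In a charm you'd pass this to the workload container, e.g.:
#   container.add_layer("gunicorn", layer, combine=True)
#   container.replan()
```

With a check like this, a broken image whose service never becomes ready should fail its readiness probe, which should stop a rolling update from marching through every unit.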
Hopefully this gives you the level (so to speak!) of control you need. Let me know if you find anything amiss or if the documentation is lacking.