I've tried to reproduce this issue locally by modifying the charm to do a "sleep 10" as part of an update-status hook. However, this doesn't seem to cause container restarts:
https://paste.ubuntu.com/p/p3M47RdQTc/
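The reproduction attempt amounts to something like the following (a framework-free sketch; in the real charm this would be registered as the update-status handler via the ops framework, and the class and method names here are illustrative):

```python
import time


class SleepyCharmSketch:
    """Sketch of the reproduction: a charm whose update-status handler
    blocks, to test whether a busy charm causes health-check failures."""

    def __init__(self, delay=10.0):
        # Block for longer than pebble's per-check timeout (4s in the
        # logs below); delay is a parameter only so the sketch is testable.
        self.delay = delay

    def on_update_status(self, event=None):
        # Simulate a slow hook by sleeping for the configured delay.
        time.sleep(self.delay)
```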
In this case, pebble appears to be retrying the check until it fails. However, looking in the logs for the environment where this problem is exhibited, I see something similar, e.g.:
2022-09-07T08:13:22.934Z [pebble] GET /v1/changes/3/wait?timeout=4.000s 4.000201612s 504
2022-09-07T08:13:25.319Z [pebble] GET /v1/changes/3/wait?timeout=4.000s 2.383641707s 200
2022-09-07T08:13:30.000Z [pebble] GET /v1/changes/4/wait?timeout=4.000s 4.001198157s 504
2022-09-07T08:13:30.801Z [pebble] GET /v1/changes/4/wait?timeout=4.000s 800.091334ms 200
So it doesn't appear to be as straightforward as "if pebble is busy executing something, health checks will fail".
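For context on the 504/200 pairs above: the 504 responses coincide with the request running for the full `timeout=4.000s`, suggesting the wait endpoint returns 504 when the change hasn't finished within the requested window and the client simply retries. A rough sketch of that client-side loop (the `get` callable, function name, and timeouts here are illustrative, not pebble's actual client code):

```python
import time


def wait_for_change(get, change_id, per_request_timeout=4.0, overall_timeout=30.0):
    """Poll a /v1/changes/<id>/wait-style endpoint until the change finishes.

    `get` stands in for an HTTP GET returning a status code; a 504 is
    treated as "not done yet, ask again", mirroring the 504-then-200
    pairs seen in the logs above.
    """
    deadline = time.monotonic() + overall_timeout
    while time.monotonic() < deadline:
        status = get(f"/v1/changes/{change_id}/wait?timeout={per_request_timeout:.3f}s")
        if status == 200:
            return True  # change completed
        if status != 504:
            raise RuntimeError(f"unexpected status {status}")
        # 504: the change didn't finish within per_request_timeout; retry.
    return False
```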