As commented with Ian, I also found that the termination of the pods takes longer with Juju 2.9.32 than in previous versions. In principle I thought it was related to Pebble, but I tried with a podspec charm and a reactive one and found the same issue.
To test the issue:
```
juju deploy osm-keystone --channel latest/candidate --resource keystone-image=opensourcemano/keystone:testing-daily -n 3
juju deploy charmed-osm-mariadb-k8s -n 3
juju relate osm-keystone mariadb-k8s
```
Then I scale the application in to 1 unit:
```
juju scale-application osm-keystone 1
```
With Juju 2.9.32 the scale-in operation took 5 minutes, while with Juju 2.9.29 it took 32 seconds.
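For reference, the durations above can be measured roughly as sketched below. `wait_for_count` is a hypothetical helper (not a Juju command) that polls a shell command until its output reaches a target value and then prints the elapsed time:

```
#!/usr/bin/env bash
# Sketch only: wait_for_count is a hypothetical helper, not part of Juju.
# It re-runs a command once per second until the output equals the target,
# then prints how long that took.
wait_for_count() {
  local cmd="$1" target="$2"
  local start=$SECONDS
  while [ "$(eval "$cmd")" != "$target" ]; do
    sleep 1
  done
  echo "elapsed: $((SECONDS - start))s"
}

# Intended usage against a real controller (unit-count query is an assumption):
#   juju scale-application osm-keystone 1
#   wait_for_count 'juju status osm-keystone --format=json |
#     jq ".applications[\"osm-keystone\"].units | length"' 1
```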
I talked to Ian about including this comment here, but maybe I need to open a new bug. If so, just tell me and I will create it.