Scaleback leaves 'lost' pod in juju status
| Affects | Status | Importance | Assigned to | Milestone |
|---|---|---|---|---|
| Canonical Juju | Invalid | Undecided | Unassigned | |
Bug Description
When scaling back an application in a microk8s model, the removed unit is left in `juju status` despite the pod having been removed.
```
$ juju status keystone
Model Controller Cloud/Region Version SLA Timestamp
zaza-4f8c2f3d092b microk8s-localhost microk8s/localhost 3.1.0 unsupported 14:39:13Z
App Version Status Scale Charm Channel Rev Address Exposed Message
keystone active 1 keystone-k8s 0 10.152.183.96 no
Unit Workload Agent Address Ports Message
keystone/0* active idle 10.1.188.229
keystone/1 unknown lost 10.1.188.244 agent lost, see 'juju show-status-log keystone/1'
$ microk8s.kubectl get pods --all-namespaces | grep keystone
zaza-4f8c2f3d092b keystone-0 2/2 Running 0 19m
```
After 15+ minutes, keystone/1 is still present.
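For context, a rough sketch of the scale-back that triggers this, assuming a deployed keystone-k8s application on the microk8s model (the unit counts below are assumptions, not taken from the report):

```
# scale up so a second unit (keystone/1) exists
juju scale-application keystone 2
# scale back down; the keystone-1 pod is deleted...
juju scale-application keystone 1
# ...but keystone/1 lingers in 'juju status' as unknown/lost
juju status keystone
```

On Kubernetes models, `juju scale-application` is the supported way to change the unit count, so the lingering keystone/1 entry above is stale state rather than a real pod.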
juju: 3.1.0-genericli
microk8s: v1.25.6 (1.25-strict/stable snap channel)
This bug is a duplicate of https://bugs.launchpad.net/juju/+bug/1977582
Will mark this bug as invalid for now so we can track it in one place - thanks for reporting!