To reproduce this:
1. `juju deploy nginx-ingress-integrator --channel=edge nginx-not-upgraded`
2. Confirm revision 31 is deployed, and check with kubectl that the pod has 1/1 containers.
3. `juju deploy nginx-ingress-integrator --channel=edge --revision=29 --resource placeholder-image='google/pause'`
4. Confirm revision 29 is deployed, and check with kubectl that the pod has 2/2 containers.
5. `juju refresh nginx-ingress-integrator`
6. Confirm the charm is now running revision 31, but still has 2/2 containers.
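To make the container-count checks in steps 2, 4 and 6 less manual, something like the sketch below could be used. The `fetch_pod` and `count_containers` helpers are hypothetical (not part of Juju or the charm's tooling), and the pod/namespace names are taken from the example output further down:

```python
import json
import subprocess


def count_containers(pod_json: dict) -> int:
    """Number of containers declared in the pod's spec."""
    return len(pod_json["spec"]["containers"])


def fetch_pod(pod: str, namespace: str) -> dict:
    """Fetch a pod definition via kubectl (requires cluster access)."""
    out = subprocess.check_output(
        ["microk8s", "kubectl", "get", "pod", pod, "-n", namespace, "-o", "json"]
    )
    return json.loads(out)


# Against a live model (names as in the example below):
#   count_containers(fetch_pod("nginx-ingress-integrator-0", "i-test"))
# should report 2 after step 4, but 1 for a fresh revision-31 deploy.
```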
Here's some output showing the problem. In this case `ingress-edge` was deployed fresh, while `nginx-ingress-integrator` was deployed with revision 29 and then upgraded.
```
mthaddon@finistere:~$ juju status
Model   Controller          Cloud/Region        Version  SLA          Timestamp
i-test  microk8s-localhost  microk8s/localhost  2.9.34   unsupported  16:45:53+02:00

App                       Version  Status  Scale  Charm                     Channel  Rev  Address         Exposed  Message
ingress-edge                       active      1  nginx-ingress-integrator  edge      31  10.152.183.93   no
nginx-ingress-integrator           active      1  nginx-ingress-integrator  edge      31  10.152.183.112  no       Ingress with service IP(s): 10.152.183.185

Unit                         Workload  Agent  Address       Ports  Message
ingress-edge/0*              active    idle   10.1.129.147
nginx-ingress-integrator/0*  active    idle   10.1.129.146         Ingress with service IP(s): 10.152.183.185

mthaddon@finistere:~$ microk8s kubectl get pods -n i-test
NAME                           READY   STATUS    RESTARTS   AGE
modeloperator-85bb89747-mvwfm  1/1     Running   0          22m
nginx-ingress-integrator-0     2/2     Running   0          10m
ingress-edge-0                 1/1     Running   0          3m47s
```
This should get fixed in the 2.9 series. We did implement support for upgrading from a pod-spec charm to a sidecar charm; it seems we also need to look at how our sidecar charms themselves upgrade when the container topology changes, and ensure that new revisions of the charm get the new topology.