After migration, a CaaS model's namespace is left behind in the cluster when the model is destroyed.
The root cause is that we have no strategy for updating the `controller.juju.is/id` annotation to the new controller's UUID during migration.
For the same reason, any global (cluster-scoped) resources are left behind as well.
This issue affects every CaaS model.

Steps to reproduce:
$ juju bootstrap microk8s k1
$ juju bootstrap microk8s k2
$ juju add-model t1 -c k1 && juju deploy ch:ambassador -m k1:t1
$ juju migrate k1:t1 k2
$ yml2json ~/.local/share/juju/controllers.yaml | jq '.controllers|to_entries|map("\(.key) => \(.value.uuid)")'
[
"k2 => 48636994-6b13-46fa-8022-f3167ef7b619",
"k1 => 0d5e3e4e-06fe-4ee2-83b2-7b55025b1fa9"
]
$ mkubectl get ns t1 -o json | jq .metadata.annotations
{
"controller.juju.is/id": "0d5e3e4e-06fe-4ee2-83b2-7b55025b1fa9",
"model.juju.is/id": "cb627a31-84ce-4d02-8248-3b73a18ebb96"
}
$ juju destroy-model k2:t1 -y --debug --force --destroy-storage
$ mkubectl get ns t1 -o json | jq .metadata.annotations
{
"controller.juju.is/id": "0d5e3e4e-06fe-4ee2-83b2-7b55025b1fa9",
"model.juju.is/id": "cb627a31-84ce-4d02-8248-3b73a18ebb96"
}
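A minimal sketch of why the cleanup is skipped (this is assumed behavior, not Juju's actual code): the k8s broker only deletes resources whose `controller.juju.is/id` annotation matches the destroying controller's UUID, and migration never rewrites that annotation, so k2 never considers the namespace its own. The UUIDs are taken from the output above; `owned_by` is a hypothetical helper.

```python
# UUIDs from the controllers.yaml / kubectl output above.
K1_UUID = "0d5e3e4e-06fe-4ee2-83b2-7b55025b1fa9"  # source controller
K2_UUID = "48636994-6b13-46fa-8022-f3167ef7b619"  # destination controller

# Annotations still present on namespace t1 after migrating to k2.
ns_annotations = {
    "controller.juju.is/id": K1_UUID,  # stale: never updated on migration
    "model.juju.is/id": "cb627a31-84ce-4d02-8248-3b73a18ebb96",
}

def owned_by(annotations: dict, controller_uuid: str) -> bool:
    """Hypothetical ownership check: only resources annotated with our
    controller UUID are considered safe to delete."""
    return annotations.get("controller.juju.is/id") == controller_uuid

# k2 runs destroy-model but does not "own" the namespace, so it skips it.
print(owned_by(ns_annotations, K2_UUID))  # False -> namespace left behind

# A fix would rewrite the annotation during migration import:
ns_annotations["controller.juju.is/id"] = K2_UUID
print(owned_by(ns_annotations, K2_UUID))  # True -> cleanup would proceed
```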
The namespace survives the destroy because its controller annotation still points at k1. I don't think this is critical, as it isn't a regression, but it is definitely important.