I can see from the mongo db dump that in the nagios model, the relation for (at least) one of the units from the removed k8s model still exists. The relation is #165 and, as well as the core relation entity existing, the nagios/0 unit state still references a number of the removed consuming units: remote-a57284f04d11431b8ad94b77de6ece98/1 ... remote-a57284f04d11431b8ad94b77de6ece98/14
The relation is marked as dying, so what appears to have happened is that the destroyed model on the consuming side was removed before the offering side was done removing its artefacts. One way this can happen is if --force is used, but the root cause isn't clear.
The other artefacts for the removed consuming unit include the tokens to map the entities between the models.
So we have these mongo collections with "orphaned" data:
- relations
  id: "82dcf2b0-8352-4695-873c-5f791b279bb7:nagios:monitors remote-a57284f04d11431b8ad94b77de6ece98:monitors"
- applicationOfferConnections
  id: "82dcf2b0-8352-4695-873c-5f791b279bb7:nagios:monitors remote-a57284f04d11431b8ad94b77de6ece98:monitors"
- relationscopes
  id: "82dcf2b0-8352-4695-873c-5f791b279bb7:r#165#provider#remote-a57284f04d11431b8ad94b77de6ece98/19"
- remoteApplications
  id: "82dcf2b0-8352-4695-873c-5f791b279bb7:remote-a57284f04d11431b8ad94b77de6ece98"
- remoteEntities
  id: "82dcf2b0-8352-4695-873c-5f791b279bb7:application-remote-a57284f04d11431b8ad94b77de6ece98"
  id: "82dcf2b0-8352-4695-873c-5f791b279bb7:relation-nagios.monitors#remote-a57284f04d11431b8ad94b77de6ec… (truncated in the dump)
- settings
  id: "82dcf2b0-8352-4695-873c-5f791b279bb7:r#165#remote-a57284f04d11431b8ad94b77de6ece98"
  id: "82dcf2b0-8352-4695-873c-5f791b279bb7:r#165#provider#remote-a57284f04d11431b8ad94b77de6ece98/16"
  id: "82dcf2b0-8352-4695-873c-5f791b279bb7:r#165#provider#remote-a57284f04d11431b8ad94b77de6ece98/17"
  id: "82dcf2b0-8352-4695-873c-5f791b279bb7:r#165#provider#remote-a57284f04d11431b8ad94b77de6ece98/18"
  id: "82dcf2b0-8352-4695-873c-5f791b279bb7:r#165#provider#remote-a57284f04d11431b8ad94b77de6ece98/19"
  id: "82dcf2b0-8352-4695-873c-5f791b279bb7:r#165#provider#remote-a57284f04d11431b8ad94b77de6ece98/20"
  id: "82dcf2b0-8352-4695-873c-5f791b279bb7:r#165#provider#remote-a57284f04d11431b8ad94b77de6ece98/21"
- unitstates
  The record with id "82dcf2b0-8352-4695-873c-5f791b279bb7:u#nagios/0#charm" has a "relation-state" map with key "108"
To start with, you could try to remove the dying relation:
juju remove-relation 165 --force
Depending on what gets cleaned up, you may then need to do some mongo surgery to manually remove the above records. The unitstates record itself is not removed, just the affected entry in its "relation-state" map. Any manual db changes would need the 3 controller agents stopped first and restarted afterwards.
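If surgery does end up being required, it amounts to deleting the leftover documents and unsetting the stale key in the nagios/0 unitstates doc. Below is a rough mongo shell sketch, not a verified script: it bypasses Juju's own transaction layer, so take a db backup first, stop the 3 controller agents, and check every id against the dump before running anything (the regexes are just shorthand for the exact ids listed above):

// Delete the orphaned documents.
db.relations.remove({"_id": "82dcf2b0-8352-4695-873c-5f791b279bb7:nagios:monitors remote-a57284f04d11431b8ad94b77de6ece98:monitors"})
db.applicationOfferConnections.remove({"_id": /a57284f04d11431b8ad94b77de6ece98/})
db.remoteApplications.remove({"_id": "82dcf2b0-8352-4695-873c-5f791b279bb7:remote-a57284f04d11431b8ad94b77de6ece98"})
db.remoteEntities.remove({"_id": /a57284f04d11431b8ad94b77de6ece98/})
db.relationscopes.remove({"_id": /r#165#/})
db.settings.remove({"_id": /r#165#/})

// Keep the unitstates doc itself; just drop the stale entry from its "relation-state" map.
db.unitstates.update(
  {"_id": "82dcf2b0-8352-4695-873c-5f791b279bb7:u#nagios/0#charm"},
  {"$unset": {"relation-state.108": ""}}
)

Once the 3 controller agents are restarted, the stale relation and remote application should no longer appear in the model.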