Hmm, tried the following:
1) bootstrap a clean 2.5.4 controller;
2) deploy an openstack model;
3) upgrade to 2.6-rc1;
4) scale the model;
5) destroy the model (successful).
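For reference, the steps above roughly correspond to the following command sequence (cloud, model, and charm names here are placeholders, and the upgrade step assumes `juju upgrade-juju` as in the 2.x CLI; this needs a live cloud, so it is a sketch rather than a runnable script):

```shell
# Hypothetical reproduction sequence; cloud/model/charm names are illustrative.
juju bootstrap openstack test-ctrl --agent-version 2.5.4  # 1) clean 2.5.4 controller
juju add-model test-model                                 # 2) openstack model...
juju deploy ubuntu                                        #    ...with a simple workload
juju upgrade-juju --agent-version 2.6-rc1                 # 3) upgrade to 2.6-rc1
juju add-unit ubuntu -n 2                                 # 4) scale the model
juju destroy-model test-model -y                          # 5) destroy the model
```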
I still see a lot of similar errors in the controller log after destroy-model which eventually stop:
2019-05-03 10:43:02 TRACE juju.state.txn database.go:359 ran transaction in 0.022s (retries: 0) []txn.Op{
# ...
{ Name: "model-uuid", Value: "2108fac7-ea12-402c-84e3-cd7e1e15a762", }, { Name: "status", Value: "destroying", },
2019-05-03 10:44:59 ERROR juju.apiserver.common resource.go:118 error stopping *apiserver.pingTimeout resource: ping timeout
2019-05-03 10:44:59 TRACE juju.state.watcher hubwatcher.go:457 0xc000e1a0f0 got request: watcher.reqUnwatch{key:watcher.watchKey{c:"migrations.status", id:interface {}(nil)}, ch:(chan<- watcher.Change)(0xc00ddccd20)}
2019-05-03 10:44:59 TRACE juju.state.watcher hubwatcher.go:373 loop finished
2019-05-03 10:44:59 DEBUG juju.state open.go:219 closed state without error