Phantom controller after a failed bootstrap

Bug #2069659 reported by Leon
Affects: Canonical Juju
Status: New
Importance: Undecided
Assigned to: Unassigned
Milestone: (none)

Bug Description

Juju version: 3.4.3-genericlinux-amd64.
I have the rook-ceph microk8s addon enabled and hostpath-storage disabled.

A bootstrap fails:

```
k8s substrate "microk8s/localhost" added as cloud "microk8s-cluster" with storage provisioned
by the existing "ceph-rbd" storage class.
You can now bootstrap to this cloud by running 'juju bootstrap microk8s-cluster'.
Creating Juju controller "k8s" on microk8s-cluster/localhost
Bootstrap to Kubernetes cluster identified as microk8s/localhost
Creating k8s resources for controller "controller-k8s"
ERROR failed to bootstrap model: creating controller stack: creating statefulset for controller: timed out waiting for controller pod: pending: -
WARNING destroy k8s model timeout
ERROR error cleaning up: context deadline exceeded
ERROR No controllers registered.

Please either create a new controller using "juju bootstrap" or connect to
another controller that you have been given access to using "juju register".
```

And now the controller list is empty:

```
$ juju controllers
ERROR No controllers registered.

Please either create a new controller using "juju bootstrap" or connect to
another controller that you have been given access to using "juju register".

$ cat /home/ubuntu/.local/share/juju/controllers.yaml
controllers: {}
```

But the failed bootstrap left a phantom controller behind on the cluster:

```
$ juju bootstrap microk8s-cluster k8s
ERROR a controller called "k8s" already exists on this k8s cluster.
Please bootstrap again and choose a different controller name.
```
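A possible manual cleanup, as a sketch: this assumes Juju's convention of creating a namespace named `controller-<name>` on the cluster (so `controller-k8s` here); the first command verifies that assumption before anything is deleted.

```shell
# List namespaces to confirm the phantom controller's namespace exists
# (assumption: Juju names it "controller-<controller-name>").
microk8s kubectl get namespaces | grep controller

# If "controller-k8s" is listed, deleting it should clear the leftover state:
microk8s kubectl delete namespace controller-k8s
```

Once the namespace is gone, bootstrapping with the same controller name should no longer report that the name is taken.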

The cloud-init script to reproduce this:
https://github.com/canonical/cos-lite-bundle/pull/115
