Is there any chance that the VM's disk was not clean? We saw similar
failures in MAAS when it would re-use a machine that had previously been a
controller and whose disk was not wiped.
From the detailed log, it does look like this was a clean install (the "apt
install juju-mongodb3.2" step took 1.5 min to complete), so that doesn't
seem to explain it.
If it is a fresh install, then it would be very surprising for the hosted
model (e.g. 'default') to already exist.
I don't see anything in the above that would cause us to change the name of
either "controller" or "default", unless there is config somewhere in
~/.local/share/juju that defines this cloud with something custom.
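As a quick check, something like the following should surface any custom
client-side config (this is only a sketch; it assumes the standard Juju 2.x
client file locations, and respects JUJU_DATA if it is set):

```shell
# Inspect the local Juju client data directory for stale or custom
# cloud/controller definitions that could affect bootstrap.
JUJU_DIR="${JUJU_DATA:-$HOME/.local/share/juju}"
for f in clouds.yaml credentials.yaml controllers.yaml bootstrap-config.yaml; do
  if [ -f "$JUJU_DIR/$f" ]; then
    echo "== $JUJU_DIR/$f =="
    cat "$JUJU_DIR/$f"
  fi
done
```

If clouds.yaml defines vsphere/esx.power.lab with anything unusual, that
would be worth pasting into the bug.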
On Tue, Dec 19, 2017 at 2:01 PM, Kevin Wennemuth <<email address hidden>
> wrote:
> Public bug reported:
>
> juju bootstrap vsphere/esx.power.lab jujucontroller --to
> zone=esx.power.lab --config primary-network="VM Network" --config
> external-network="VM Network" --config datastore=ds00 --debug
>
> ...
>
> 10:03:52 INFO juju.cmd supercommand.go:56 running juju [2.3.1 gc go1.9.2]
> 10:03:52 DEBUG juju.cmd supercommand.go:57 args: []string{"juju",
> "bootstrap", "vsphere/esx.power.lab", "jujucontroller", "--to",
> "zone=esx.power.lab", "--config", "primary-network=VM Network", "--config",
> "external-network=VM Network", "--config", "datastore=ds00", "--debug"}
>
> 2017-12-19 09:42:59 DEBUG juju.agent.agentbootstrap bootstrap.go:302
> create new random password for machine 0
> 2017-12-19 09:42:59 DEBUG juju.state open.go:306 closed state without error
> 2017-12-19 09:42:59 DEBUG juju.worker runner.go:223 killing runner
> 0xc420373e10
> 2017-12-19 09:42:59 INFO juju.worker runner.go:313 runner is dying
> 2017-12-19 09:42:59 DEBUG juju.worker runner.go:456 killing "pingbatcher"
> 2017-12-19 09:42:59 DEBUG juju.worker runner.go:456 killing "leadership"
> 2017-12-19 09:42:59 DEBUG juju.worker runner.go:456 killing "singular"
> 2017-12-19 09:42:59 DEBUG juju.worker runner.go:456 killing "txnlog"
> 2017-12-19 09:42:59 DEBUG juju.worker runner.go:456 killing "presence"
> 2017-12-19 09:42:59 INFO juju.worker runner.go:483 stopped "pingbatcher",
> err: <nil>
> 2017-12-19 09:42:59 INFO juju.worker runner.go:483 stopped "presence",
> err: <nil>
> 2017-12-19 09:42:59 INFO juju.worker runner.go:483 stopped "singular",
> err: <nil>
> 2017-12-19 09:42:59 DEBUG juju.worker runner.go:332 "pingbatcher" done:
> <nil>
> 2017-12-19 09:42:59 DEBUG juju.worker runner.go:395 no restart, removing
> "pingbatcher" from known workers
> 2017-12-19 09:42:59 DEBUG juju.worker runner.go:332 "presence" done: <nil>
> 2017-12-19 09:42:59 DEBUG juju.worker runner.go:395 no restart, removing
> "presence" from known workers
> 2017-12-19 09:42:59 DEBUG juju.worker runner.go:332 "singular" done: <nil>
> 2017-12-19 09:42:59 DEBUG juju.worker runner.go:395 no restart, removing
> "singular" from known workers
> 2017-12-19 09:42:59 INFO juju.worker runner.go:483 stopped "txnlog", err:
> <nil>
> 2017-12-19 09:42:59 DEBUG juju.worker runner.go:332 "txnlog" done: <nil>
> 2017-12-19 09:42:59 DEBUG juju.worker runner.go:395 no restart, removing
> "txnlog" from known workers
> 2017-12-19 09:42:59 INFO juju.worker runner.go:483 stopped "leadership",
> err: <nil>
> 2017-12-19 09:42:59 DEBUG juju.worker runner.go:332 "leadership" done:
> <nil>
> 2017-12-19 09:42:59 DEBUG juju.worker runner.go:395 no restart, removing
> "leadership" from known workers
> 2017-12-19 09:42:59 DEBUG juju.state open.go:306 closed state without error
> ERROR creating hosted model: model already exists
> 2017-12-19 09:42:59 DEBUG cmd supercommand.go:459 error stack:
> github.com/juju/juju/state/model.go:428: model already exists
> github.com/juju/juju/state/model.go:432:
> github.com/juju/juju/agent/agentbootstrap/bootstrap.go:216: creating
> hosted model
> github.com/juju/juju/cmd/jujud/agent/agent.go:104:
> 2017-12-19 09:42:59 DEBUG juju.cmd.jujud main.go:187 jujud complete, code
> 0, err <nil>
> 10:42:59 ERROR juju.cmd.juju.commands bootstrap.go:519 failed to bootstrap
> model: subprocess encountered error code 1
> 10:42:59 DEBUG juju.cmd.juju.commands bootstrap.go:520 (error details: [{
> github.com/juju/juju/cmd/juju/commands/bootstrap.go:611: failed to
> bootstrap model} {subprocess encountered error code 1}])
> 10:42:59 DEBUG juju.cmd.juju.commands bootstrap.go:1117 cleaning up after
> failed bootstrap
> 10:43:00 INFO juju.provider.common destroy.go:20 destroying model
> "controller"
> 10:43:00 INFO juju.provider.common destroy.go:31 destroying instances
> 10:43:01 DEBUG juju.provider.vmware client.go:105 powering off
> "juju-e08976-0"
> 10:43:01 DEBUG juju.provider.vmware client.go:114 destroying
> "juju-e08976-0"
> 10:43:02 INFO juju.provider.common destroy.go:55 destroying storage
> 10:43:04 DEBUG juju.provider.vmware client.go:84 no VMs matching path
> "Juju Controller (2f0c0337-429d-4523-8327-32a0103fcae2)/Model \"*\" (*)/*"
> 10:43:05 DEBUG juju.provider.vmware environ.go:233 deleting: [ds03]
> juju-vmdks/2f0c0337-429d-4523-8327-32a0103fcae2
> 10:43:05 DEBUG juju.provider.vmware environ.go:233 deleting: [ds02]
> juju-vmdks/2f0c0337-429d-4523-8327-32a0103fcae2
> 10:43:05 DEBUG juju.provider.vmware environ.go:233 deleting: [ds00]
> juju-vmdks/2f0c0337-429d-4523-8327-32a0103fcae2
> 10:43:06 INFO cmd supercommand.go:465 command finished
>
> The full log is attached
>
> ** Affects: juju
> Importance: Undecided
> Status: New
>
> ** Attachment added: "detailed debug log"
> https://bugs.launchpad.net/bugs/1738993/+attachment/5024550/+files/juju_model_already_exists.txt
>
> --
> You received this bug notification because you are subscribed to juju.
> Matching subscriptions: juju bugs
> https://bugs.launchpad.net/bugs/1738993
>
> Title:
> ERROR creating hosted model: model already exists
>
> To manage notifications about this bug go to:
> https://bugs.launchpad.net/juju/+bug/1738993/+subscriptions
>