Activity log for bug #1706196

Date Who What changed Old value New value Message
2017-07-24 20:43:25 Jason Hobbs bug added bug
2017-07-24 20:47:40 Jason Hobbs description updated; new value (the original report with the log excerpts appended):

I am using pods, and have VMs created from those pods in three zones that I've created (zone1, zone2, zone3). There is also a 'default' zone in MAAS, which I'm not allowed to edit or delete, and which has no nodes in it. When I juju bootstrap without a 'zone' placement directive, juju first tries to allocate a VM from the 'default' zone in MAAS. MAAS sees that there is no matching node, then that I'm using pods, and creates a new VM to respond to juju's request for a node.

My expected behavior here is for juju to end up bootstrapping onto one of the existing VMs in one of the zones that has nodes in it. I can get that by using a '--to zone=zone1' placement directive, but I shouldn't have to - it should just work and use one of the existing VMs.

Here's part of the maas.log where the node was created:

Jul 24 20:09:01 infra1 maas.api: [info] Request from user root to acquire a machine with constraints: [('agent_name', ['4fc971aa-41fc-4bae-8ac8-724fd407c338']), ('zone', ['default']), ('mem', ['3584']), ('tags', ['vm'])]
Jul 24 20:09:03 infra1 maas.drivers.pod.virsh: [info] living-deer: Successfully set network boot order
Jul 24 20:09:04 infra1 maas.node: [info] living-deer: Storage layout was set to flat.
Jul 24 20:09:04 infra1 maas.node: [info] living-deer: Status transition from READY to ALLOCATED
Jul 24 20:09:04 infra1 maas.node: [info] living-deer: allocated to user root
Jul 24 20:09:05 infra1 maas.interface: [info] Allocated automatic IP address 10.245.222.20 for eth0 (physical) on living-deer.
Jul 24 20:09:05 infra1 maas.node: [info] living-deer: Status transition from ALLOCATED to DEPLOYING
Jul 24 20:09:06 infra1 maas.power: [info] Changing power state (on) of node: living-deer (pmmn6n)

Here's the juju debug log showing it trying to acquire a node:

20:09:00 INFO cmd bootstrap.go:357 Starting new instance for initial controller
Launching controller instance(s) on foundations-maas...
20:09:04 DEBUG juju.cloudconfig.instancecfg instancecfg.go:832 Setting numa ctl preference to false
20:09:05 DEBUG juju.service discovery.go:63 discovered init system "systemd" from series "xenial"
20:09:05 DEBUG juju.provider.maas environ.go:1018 maas user data; 3836 bytes
20:09:06 DEBUG juju.provider.maas environ.go:1050 started instance "pmmn6n" - pmmn6n (arch=amd64 mem=3.5G cores=1)
20:09:06 INFO juju.environs.bootstrap bootstrap.go:606 newest version: 2.2.2
20:09:06 INFO juju.environs.bootstrap bootstrap.go:621 picked bootstrap agent binary version: 2.2.2
20:09:06 INFO juju.environs.bootstrap bootstrap.go:393 Installing Juju agent on bootstrap instance
20:09:08 INFO cmd bootstrap.go:485 Fetching Juju GUI 2.7.5

This is with juju 2.2.2 and maas 2.2.2 (6094-g78d97d0-0ubuntu1~16.04.1).
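For reference, the constraint list in the maas.api log line above corresponds to a machine-allocation call against MAAS's 2.0 REST API. The sketch below replays that request by hand; the region URL is hypothetical, and OAuth1 signing is omitted (a real client would have to authenticate), so this only illustrates the shape of the request juju sent:

    # Sketch: replay the allocation request juju issued, using the
    # constraint values from the maas.log excerpt above. OAuth1 signing
    # is omitted, so a real MAAS would reject this as-is.
    import requests

    MAAS_URL = "http://infra1:5240/MAAS/api/2.0"  # hypothetical region URL

    constraints = {
        "zone": "default",  # juju asks for 'default' when no placement is given
        "mem": "3584",      # minimum RAM in MiB
        "tags": "vm",       # the tag constraint from the log
        "agent_name": "4fc971aa-41fc-4bae-8ac8-724fd407c338",
    }

    # 'op=allocate' selects the allocate operation on the machines endpoint.
    resp = requests.post(f"{MAAS_URL}/machines/",
                         params={"op": "allocate"}, data=constraints)
    resp.raise_for_status()
    print(resp.json()["system_id"])  # the log shows MAAS answering with 'pmmn6n'

Note that in the log above MAAS answered this request not with an existing VM from zone1/zone2/zone3 but by composing a brand-new one, 'living-deer' (pmmn6n), in the 'default' zone.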
2017-07-24 20:50:26 Jason Hobbs tags cdo-qa
2017-07-25 18:41:38 Jason Hobbs description updated; added the following paragraph after the expected-behavior paragraph (the rest of the report, including both log excerpts and the version line, is unchanged):

This also affects juju's "enable-ha" command, where there is no ability to specify a zone. In that case there is no workaround - it always tries to use the default zone and fails. The only solution here is to use the default zone, even though I don't want a zone named 'default'.
2017-07-25 18:59:09 Jason Hobbs description updated; reworded the "enable-ha" paragraph as follows (the rest of the report is unchanged):

This also affects juju's "enable-ha" command, where there is no ability to specify a zone. In that case there is no workaround - it always tries to use the default zone rather than the zones I'm using that already have machines available. The only solution here is to use the default zone, even though I don't want a zone named 'default'.
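The summary this bug eventually settled on, "Maas creates a pod machine even when a machine tag is set as a constraint" (see the 2017-07-25 22:59:38 entry below), locates the problem in MAAS's allocation fallback. The sketch below illustrates that decision order and the guard the fix implies; every name in it (Constraints, Machine, allocate, can_compose, compose) is hypothetical, not MAAS's actual code:

    # Sketch of the allocation fallback this bug describes; none of these
    # names are MAAS internals, they only illustrate the decision order.
    from dataclasses import dataclass, field

    @dataclass
    class Constraints:
        zone: str | None = None                        # e.g. 'default'
        mem: int | None = None                         # MiB, e.g. 3584
        tags: list[str] = field(default_factory=list)  # e.g. ['vm']

    @dataclass
    class Machine:
        status: str
        zone: str
        mem: int
        tags: list[str]

        def matches(self, c: Constraints) -> bool:
            # A machine matches when it meets every requested constraint.
            return ((c.zone is None or self.zone == c.zone)
                    and (c.mem is None or self.mem >= c.mem)
                    and set(c.tags) <= set(self.tags))

    def allocate(c: Constraints, machines: list[Machine], pods: list):
        # 1. Prefer an existing READY machine that satisfies every constraint.
        for m in machines:
            if m.status == "READY" and m.matches(c):
                return m

        # 2. Only fall back to composing a fresh VM from a pod when no
        #    machine tag was requested: a just-composed machine cannot
        #    already carry a user-applied tag, so composing can never
        #    honestly satisfy such a request. (The final bug summary
        #    suggests the fix adds a guard of this kind.)
        if not c.tags:
            for pod in pods:
                if pod.can_compose(c):
                    return pod.compose(c)

        raise LookupError("no machine satisfies the requested constraints")

With a guard like this, the request in the log above (zone='default', tags=['vm']) would presumably come back empty, letting juju fall through to the zones that actually contain tagged VMs instead of silently composing 'living-deer' in 'default'.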
2017-07-25 19:00:03 Jason Hobbs bug task added juju
2017-07-25 19:26:02 Andres Rodriguez maas: status New Won't Fix
2017-07-25 19:29:59 Andres Rodriguez maas: status Won't Fix Invalid
2017-07-25 22:00:54 Jason Hobbs maas: status Invalid New
2017-07-25 22:33:31 Jason Hobbs summary changed from "when using pods, during juju bootstrap maas creates vm in zone default when a vm already exists in another zone" to "when using pods, maas creates vm's in response to allocation requests that include tag and zone constraints"
2017-07-25 22:33:42 Jason Hobbs summary changed from "when using pods, maas creates vm's in response to allocation requests that include tag and zone constraints" to "when using pods, maas creates VMs in response to allocation requests that include tag and zone constraints"
2017-07-25 22:59:38 Jason Hobbs summary changed from "when using pods, maas creates VMs in response to allocation requests that include tag and zone constraints" to "Maas creates a pod machine even when a machine tag is set as a constraint"
2017-07-25 23:08:16 Jason Hobbs bug task deleted juju
2017-07-26 20:01:58 Andres Rodriguez nominated for series maas/2.2
2017-07-26 20:01:58 Andres Rodriguez bug task added maas/2.2
2017-07-26 20:02:38 Andres Rodriguez maas: milestone 2.3.0
2017-07-26 20:02:40 Andres Rodriguez maas/2.2: milestone 2.2.3
2017-07-26 20:02:43 Andres Rodriguez maas: importance Undecided High
2017-07-26 20:02:45 Andres Rodriguez maas/2.2: importance Undecided High
2017-07-26 20:02:47 Andres Rodriguez maas: status New Triaged
2017-07-26 20:02:50 Andres Rodriguez maas/2.2: status New Triaged
2017-07-26 20:02:57 Andres Rodriguez maas/2.2: assignee Newell Jensen (newell-jensen)
2017-07-26 20:03:04 Andres Rodriguez maas: assignee Newell Jensen (newell-jensen)
2017-07-26 20:03:07 Andres Rodriguez tags changed from "cdo-qa" to "cdo-qa pod"
2017-07-26 21:48:51 Newell Jensen maas: status Triaged In Progress
2017-07-27 03:06:54 Launchpad Janitor merge proposal linked https://code.launchpad.net/~newell-jensen/maas/+git/maas/+merge/328146
2017-07-27 19:45:30 MAAS Lander maas: status In Progress Fix Committed
2017-07-27 20:04:09 Launchpad Janitor merge proposal linked https://code.launchpad.net/~newell-jensen/maas/+git/maas/+merge/328188
2017-07-27 20:10:29 Newell Jensen maas/2.2: status Triaged In Progress
2017-07-27 20:34:56 MAAS Lander maas/2.2: status In Progress Fix Committed
2017-07-28 15:24:31 Andres Rodriguez maas: milestone changed from 2.3.0 to 2.3.0alpha1
2017-08-02 12:09:40 Andres Rodriguez maas: status Fix Committed Fix Released
2017-08-06 15:53:29 Nobuto Murata bug added subscriber Nobuto Murata
2017-08-08 19:08:03 Jason Hobbs tags changed from "cdo-qa pod" to "cdo-qa foundation-engine pod"
2017-08-08 19:14:55 Jason Hobbs tags changed from "cdo-qa foundation-engine pod" to "cdo-qa foundations-engine pod"
2018-03-27 18:48:05 Andres Rodriguez maas/2.2: status Fix Committed Fix Released