MAAS creates a pod machine even when a machine tag is set as a constraint
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
MAAS | Fix Released | High | Newell Jensen | |
2.2 | Fix Released | High | Newell Jensen | |
Bug Description
I am using pods, and have VMs created from those pods in three zones that I've created (zone1, zone2, zone3). There is also a 'default' zone in MAAS, which I'm not allowed to edit or delete, and has no nodes in it.
When I juju bootstrap without a 'zone' placement directive, juju first tries to allocate a VM from the 'default' zone in MAAS. MAAS sees that there is no matching node, notices that I'm using pods, and composes a new VM to satisfy juju's request for a node.
My expected behavior is for juju to bootstrap onto one of the existing VMs in one of the zones that have nodes. I can get that with a '--to zone=zone1' placement directive, but I shouldn't have to; it should just work and use one of the existing VMs.
This also affects juju's "enable-ha" command, which provides no way to specify a zone, so there is no workaround: it always tries the default zone rather than the zones I'm using that already have machines available. The only solution is to put my machines in the default zone, even though I don't want a zone named 'default'.
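The allocation flow described above can be sketched as a toy simulation. The function, zone names, and data structures here are illustrative assumptions, not MAAS internals:

```python
# Hypothetical sketch of the allocation behavior described in this report.
# Names and data structures are illustrative, not actual MAAS code.

def allocate(requested_zone, ready_machines, pod_zones):
    """Prefer a READY machine in the requested zone; otherwise, if any
    pod can serve that zone, compose a brand-new VM there (the bug)."""
    for name, zone in ready_machines:
        if zone == requested_zone:
            return ("existing", name)
    if requested_zone in pod_zones:
        return ("composed", requested_zone)
    return None  # zone exhausted; the client would try the next zone

ready = [("vm-a", "zone1"), ("vm-b", "zone2"), ("vm-c", "zone3")]
pods = {"default", "zone1", "zone2", "zone3"}  # pods answer for every zone

# juju tries 'default' first: no READY machine there, so a new VM is
# composed instead of bootstrapping onto vm-a, vm-b, or vm-c.
print(allocate("default", ready, pods))  # ('composed', 'default')
print(allocate("zone1", ready, pods))    # ('existing', 'vm-a')
```

With the '--to zone=zone1' workaround, the first branch matches and an existing VM is used, which is what the report describes.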
Here's part of the maas.log where the node was created:
Jul 24 20:09:01 infra1 maas.api: [info] Request from user root to acquire a machine with constraints: [('agent_name', ['4fc971aa-
Jul 24 20:09:03 infra1 maas.drivers.
Jul 24 20:09:04 infra1 maas.node: [info] living-deer: Storage layout was set to flat.
Jul 24 20:09:04 infra1 maas.node: [info] living-deer: Status transition from READY to ALLOCATED
Jul 24 20:09:04 infra1 maas.node: [info] living-deer: allocated to user root
Jul 24 20:09:05 infra1 maas.interface: [info] Allocated automatic IP address 10.245.222.20 for eth0 (physical) on living-deer.
Jul 24 20:09:05 infra1 maas.node: [info] living-deer: Status transition from ALLOCATED to DEPLOYING
Jul 24 20:09:06 infra1 maas.power: [info] Changing power state (on) of node: living-deer (pmmn6n)
Here's the juju debug log showing it trying to acquire a node:
20:09:00 INFO cmd bootstrap.go:357 Starting new instance for initial controller
Launching controller instance(s) on foundations-maas...
20:09:04 DEBUG juju.cloudconfi
20:09:05 DEBUG juju.service discovery.go:63 discovered init system "systemd" from series "xenial"
20:09:05 DEBUG juju.provider.maas environ.go:1018 maas user data; 3836 bytes
20:09:06 DEBUG juju.provider.maas environ.go:1050 started instance "pmmn6n"
- pmmn6n (arch=amd64 mem=3.5G cores=1)
20:09:06 INFO juju.environs.
20:09:06 INFO juju.environs.
20:09:06 INFO juju.environs.
20:09:08 INFO cmd bootstrap.go:485 Fetching Juju GUI 2.7.5
This is with juju 2.2.2 and maas 2.2.2 (6094-g78d97d0-
Related branches
- Newell Jensen (community): Approve
  Diff: 85 lines (+31/-5), 3 files modified:
    docs/changelog.rst (+2/-0)
    src/maasserver/api/machines.py (+9/-5)
    src/maasserver/api/tests/test_machines.py (+20/-0)
- Blake Rouse (community): Approve
  Diff: 72 lines (+29/-5), 2 files modified:
    src/maasserver/api/machines.py (+9/-5)
    src/maasserver/api/tests/test_machines.py (+20/-0)
tags: added: cdo-qa
description: updated
Changed in maas:
  milestone: none → 2.3.0
  importance: Undecided → High
  status: New → Triaged
  assignee: nobody → Newell Jensen (newell-jensen)
tags: added: pod
Changed in maas:
  status: Triaged → In Progress
Changed in maas:
  status: In Progress → Fix Committed
Changed in maas:
  milestone: 2.3.0 → 2.3.0alpha1
Changed in maas:
  status: Fix Committed → Fix Released
tags: added: foundation-engine
tags: added: foundations-engine; removed: foundation-engine
IMO pods should have zones associated with them; at least for virsh pods it makes sense, since the machine hosting a virsh pod sits in a specific zone. If I could mark all of my pods with zones, then in this case MAAS would see there are no pods in the default zone and would not try to create a node there, and juju would move on to the zones with nodes, just as it normally would without pods.
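That suggestion can be illustrated with a small sketch (hypothetical names and logic, not the code MAAS ships): if each pod carries a zone and composition is attempted only in zones that actually host a pod, an empty 'default' zone returns nothing and juju falls through to the populated zones.

```python
# Hypothetical sketch of zone-aware pods, per the comment above.
# Names are illustrative; this is not the actual MAAS fix.

def allocate(requested_zone, ready_machines, pod_zones):
    for name, zone in ready_machines:
        if zone == requested_zone:
            return ("existing", name)
    if requested_zone in pod_zones:  # compose only where a pod lives
        return ("composed", requested_zone)
    return None  # no machine and no pod here: let juju try the next zone

ready = [("vm-a", "zone1")]
pods = {"zone1"}  # no pod is associated with 'default' any more

# Allocation in 'default' now fails cleanly, so juju moves on to zone1
# and bootstraps onto the existing VM.
print(allocate("default", ready, pods))  # None
print(allocate("zone1", ready, pods))    # ('existing', 'vm-a')
```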