juju should be able to use nodes acquired by the same user in MAAS

Bug #1450729 reported by Andreas Hasenack
Affects         Status     Importance  Assigned to  Milestone
Canonical Juju  Triaged    Wishlist    Unassigned
MAAS            Opinion    High        Unassigned
juju-core       Won't Fix  Medium      Unassigned

Bug Description

MAAS has the concept of allocating, or reserving, nodes in advance. It doesn't install anything on the node when that happens, nor does it power the node on.

This would be useful in the OpenStack Autopilot, where the user first picks which machines to use for the deployment and then lets Autopilot deploy to them. Autopilot bootstraps onto one of those machines, but nothing prevents another MAAS user from picking one of the other nodes before bootstrap finishes and Autopilot can issue add-machine calls to reserve the rest.

The problem is that juju thinks that an allocated node is not available, even when the same user that allocated it tells juju to use that node specifically.

Here is an example where I first acquire a node in MAAS, then tell juju to bootstrap on it:
$ maas andreas-atlas nodes acquire name=elkhart.scapestack
{
    "status": 6,
    "macaddress_set": [
(...)
    "hostname": "elkhart.scapestack",
    "owner": "andreas",
    "ip_addresses": [],
(...)
}

$ juju bootstrap --debug --to elkhart.scapestack
2015-05-01 07:36:12 INFO juju.cmd supercommand.go:37 running juju [1.23.2-trusty-amd64 gc]
(...)
2015-05-01 07:36:15 INFO juju.cmd cmd.go:113 Starting new instance for initial state server
2015-05-01 07:36:15 INFO juju.provider.maas environ.go:127 address allocation feature disabled; using "juju-br0" bridge for all containers
2015-05-01 07:36:15 DEBUG juju.provider.common bootstrap.go:87 using "juju-br0" as network bridge for all container types
Launching instance
2015-05-01 07:36:21 DEBUG juju.cmd.juju common.go:90 Destroying environment.
2015-05-01 07:36:21 INFO juju.cmd cmd.go:113 Bootstrap failed, destroying environment
2015-05-01 07:36:21 INFO juju.provider.common destroy.go:15 destroying environment "scapestack-precise"
2015-05-01 07:36:22 ERROR juju.cmd supercommand.go:430 failed to bootstrap environment: cannot start bootstrap instance: cannot run instances: cannot run instances: gomaasapi: got error back from server: 409 CONFLICT (No available node matches constraints: name=elkhart.scapestack)

Unless there is something I'm not foreseeing, I think that if a node is merely allocated in MAAS (not deployed), it should be seen as available by juju when juju has the same API credentials as the user who allocated the node. Perhaps we could restrict this a bit and only allow juju to take the node when specifically told to, as with --to in the bootstrap case, or in the add-machine case (after bootstrap).
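
For illustration, the only way to make the bootstrap above succeed today seems to be releasing the node first, which reopens exactly the race window described above (a sketch using the MAAS 1.x CLI; <system_id> is a placeholder for the node's system ID):

$ maas andreas-atlas node release <system_id>   # node goes back to Ready; any user can now acquire it
$ juju bootstrap --debug --to elkhart.scapestack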

Curtis Hovey (sinzui)
tags: added: manual-provider
Changed in juju-core:
status: New → Triaged
importance: Undecided → Medium
tags: added: deploy
tags: added: maas-provider
removed: manual-provider
Changed in maas:
milestone: none → 1.8.0
Raphaël Badin (rvb) wrote:

> Unless there is something I'm not foreseeing, I think that if a node is just allocated in MAAS, it should be seen as available by juju
> if juju has the same API credentials as the user who allocated the node.

Technically there is nothing preventing Juju from using a previously allocated node. Now, it gets a bit complicated if you consider that the constraints are used during the acquisition phase: once a node is acquired, there is no record of the set of constraints that got it acquired in the first place.
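
(For illustration, a sketch assuming the MAAS 1.x CLI, with <system_id> as a placeholder: reading an acquired node back returns its current state, but nothing about the constraints that were used to acquire it.)

$ maas andreas-atlas node read <system_id>
{
    "status": 6,
    "owner": "andreas",
    "hostname": "elkhart.scapestack",
(...)
}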

Raphaël Badin (rvb) wrote:

Re-using a pre-allocated node goes a bit against the general design that if a client acquires a node, it's to use it later. This only makes sense, I think, if we also check that the agent_name is the same.

Adam Collard (adam-collard) wrote:

If it were to be qualified by agent_name then it wouldn't satisfy our needs (but I understand why you would want to do that). We query MAAS over the API, let the user pick a subset of the nodes and then bootstrap, add-machine and deploy to those specific nodes (targeted by name).

Another way of interpreting the request is that we be able to delegate an acquired node to a different agent (e.g. Landscape acquires but gives Juju permission to use the node). As far as I know it's not possible to know in advance what Juju will use as the agent_name, so even this ability might be tricky without some way of either instructing Juju to use a given name or generating it ahead of time. Until JES is ready, we are bootstrapping onto one of the selected nodes, so we won't even have an environment to query for the agent_name.

Note that the only constraint we need for this use case is the name; I understand that general support for this in Juju would hit issues with other constraints.

Changed in maas:
status: New → Opinion
importance: Undecided → High
tags: added: cloud-installer
John A Meinel (jameinel) wrote:

I believe in the discussion today we came to the idea that
a) Autopilot can create a UUID for the agent name when it does the reservation
b) Autopilot can then bootstrap Juju with the same UUID to allow Juju to use those reserved instances
c) We can ask MAAS to make it a no-op to acquire a node by name as long as the agent_name matches.

Ideally you would actually use a token system to avoid race conditions, but node-name + agent-name is probably sufficient.

I don't think we want it for any case where the agent_name matches and the constraints happen to match, as that could easily lead to Juju racing with itself to acquire the same node. (You can probably still construct a race with two clients requesting the same named node, but the window should be small.)
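
As a sketch of the flow proposed in a)-c), assuming the agent_name parameter that the MAAS 1.x acquire operation accepts; the last step is hypothetical, since Juju currently has no documented way to be handed a pre-generated agent name:

$ UUID=$(uuidgen)
$ maas andreas-atlas nodes acquire name=elkhart.scapestack agent_name=$UUID
$ # hypothetical: bootstrap Juju so that it uses $UUID as its MAAS agent_name,
$ # making the acquire above a no-op when Juju asks for the same node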

Changed in juju-core:
status: Triaged → Won't Fix
Pen Gale (pengale)
Changed in juju:
importance: Undecided → Wishlist
status: New → Triaged
tags: added: community-feedback field-feedback