The metadata address mentioned in the preseed is wrong.

Bug #1081701 reported by Raphaël Badin on 2012-11-21
This bug affects 2 people
Affects: maas; maas (Ubuntu)
Assigned to: Raphaël Badin, Julian Edwards

Bug Description

The metadata address mentioned in the preseeds is built using MAAS_DEFAULT_URL. The host part of that address is wrong if the cluster controller is on a different machine from the region controller and is configured to reach the region controller through a different interface.

Same problem for the proxy address mentioned in the 'generic' preseed file.
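
To illustrate the problem, here is a rough sketch (not the real MAAS code; the URL value and function name are hypothetical) of how a URL built from a single region-wide setting ends up with the wrong host for nodes served by a remote cluster:

```python
from urllib.parse import urljoin, urlparse

# MAAS_DEFAULT_URL is a single region-wide setting; illustrative value only.
MAAS_DEFAULT_URL = "http://region.example.com/MAAS/"

def absolute_reverse_sketch(path):
    """Stand-in for maasserver's absolute_reverse(): it always prefixes
    the reversed path with MAAS_DEFAULT_URL, so every preseed gets the
    region's address regardless of which cluster serves the node."""
    return urljoin(MAAS_DEFAULT_URL, path)

metadata_url = absolute_reverse_sketch("metadata/")
# The host is always the region's, even when the node can only reach the
# region through a different interface via its cluster controller.
print(urlparse(metadata_url).hostname)  # region.example.com
```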


Changed in maas:
milestone: none → 12.10-stabilization
Julian Edwards (julian-edwards) wrote :

I'm not sure how to approach this at the moment. The preseed is rendered on the region controller which has no knowledge of how each cluster accesses it. The only way I can see around this is for the cluster to send the region the API URL that it is using, which the region can then store on the nodegroup for later use.

Thoughts? Is there an easier way?
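
A minimal sketch of that suggestion, using a plain dict as a stand-in for the nodegroup table (all names here are illustrative, not the real MAAS API):

```python
# Each cluster reports, at start-up, the MAAS URL it actually uses to
# reach the region; the region records it on the matching nodegroup.

nodegroups = {}  # stand-in for the NodeGroup table, keyed by cluster UUID

def report_maas_url(cluster_uuid, maas_url):
    """Region-side handler: remember the URL this cluster used."""
    nodegroups.setdefault(cluster_uuid, {})["maas_url"] = maas_url

def preseed_base_url(cluster_uuid, default="http://region.example.com/MAAS/"):
    """When rendering a preseed, prefer the cluster-reported URL."""
    return nodegroups.get(cluster_uuid, {}).get("maas_url", default)

report_maas_url("cluster-1", "http://10.0.0.1/MAAS/")
print(preseed_base_url("cluster-1"))  # http://10.0.0.1/MAAS/
print(preseed_base_url("cluster-2"))  # falls back to the region default
```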

Julian Edwards (julian-edwards) wrote :

Or perhaps this is an argument to start rendering preseeds on the clusters.

Gavin Panella (allenap) wrote :

Raphaël and I discussed two options:

1. Render the preseed in the cluster.

2. Send the cluster address to the region during start-up so that it
   can render the preseed with the correct address.

We decided that #1 better divides responsibilities, but that we would
go for option #2 for two reasons: it's a lot less work, and it doesn't
preclude doing #1 at a later date and may be useful to that effort.

I made some notes about what it would take to do #1, included below,
which may be worth tracking as a blueprint (Julian, what is the best
way to propose work like this?).

.. -*- mode: rst -*-

Things needed from the region to render a preseed:

- From ``maasserver.compose_preseed``:

  - Metadata URL: ``absolute_reverse('metadata')``
  - Token: ``NodeKey.objects.get_token_for_node(node)``
  - Node status: ``node.status`` (only if commissioning or not)

- From ``maasserver.preseed``:

  - Node status: ``node.status`` (only if commissioning or not)

  - Enum ``PRESEED_TYPE``.

    ⇒ Move to ``provisioningserver``.

  - Config:

     - ``commissioning_distro_series``
     - ``main_archive``
     - ``ports_archive``
     - ``http_proxy``

    ⇒ Can be obtained via API.

  - Server host: ``get_maas_facing_server_host`` (used for?)

    ⇒ Already known on cluster?

  - Server URL: ``absolute_reverse('nodes_handler')`` (used for?)

    ⇒ Already known on cluster?

  - Enlistment URL: ``absolute_reverse('enlist')``

  - Disable netboot URL::

          'metadata-node-by-id', args=['latest', node.system_id])

  - Enlistment preseed URL::

          'metadata-enlist-preseed', args=[version],
          query={'op': 'get_enlist_preseed'})

    ⇒ Becomes responsibility of ``provisioningserver``.

  - Regular preseed URL::

          'metadata-node-by-id', args=[version, node.system_id],
          query={'op': 'get_preseed'})

    ⇒ Becomes responsibility of ``provisioningserver``.

- From ``contrib/preseeds_v2/generic``:

  - Node architecture: ``node.architecture``

Which boils down to:

Code moves (see below_) and a view that returns:

- Metadata URL: ``absolute_reverse('metadata')``

- Token: ``NodeKey.objects.get_token_for_node(node)``

- Node status: ``node.status`` (only if commissioning or not)

- Node architecture: ``node.architecture``

- Enlistment URL: ``absolute_reverse('enlist')``

- Disable netboot URL::

        'metadata-node-by-id', args=['latest', node.system_id])

- *Optional:* return config items (e.g. ``ports_archive``)
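
As a rough illustration of that view (entirely hypothetical field names and shape; not the real MAAS API), the region could expose the per-node data above as a single JSON response for a cluster-side renderer to consume:

```python
import json

def preseed_context_view(node):
    """Hypothetical region-side view: return, as JSON, the per-node data
    a cluster-side preseed renderer would need (see the list above)."""
    return json.dumps({
        "metadata_url": node["metadata_url"],            # absolute_reverse('metadata')
        "token": node["token"],                          # NodeKey token for the node
        "status": node["status"],                        # commissioning or not
        "architecture": node["architecture"],
        "enlist_url": node["enlist_url"],                # absolute_reverse('enlist')
        "disable_netboot_url": node["disable_netboot_url"],
    })

# Illustrative sample payload.
sample = {
    "metadata_url": "http://10.0.0.1/MAAS/metadata/",
    "token": "consumer:key:secret",
    "status": "commissioning",
    "architecture": "amd64/generic",
    "enlist_url": "http://10.0.0.1/MAAS/metadata/enlist",
    "disable_netboot_url": "http://10.0.0.1/MAAS/metadata/latest/node-1/",
}
print(json.loads(preseed_context_view(sample))["architecture"])  # amd64/generic
```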

.. _below: `Adding rendering server to the provisioning server:`_

Adding rendering server to the provisioning server:

- Resurrect HTTP server in ``provisioningserver.plugin``.

- Move code, as detailed earlier_, to ``provisioningserver``, and
  perform some moderate refactoring.

- Add handler to the ``provisioningserver`` HTTP server to query the
  view_ detailed above, perhaps query the API for config, ...


Julian Edwards (julian-edwards) wrote :

see also bug 1081701

Julian Edwards (julian-edwards) wrote :

grar, I meant see also bug 1081696

Raphaël Badin (rvb) wrote :

Right: once this is fixed, fixing bug 1081696 will simply be a matter of using that value (stored on the nodegroup object).

no longer affects: maas/12.04-nocobbler
Raphaël Badin (rvb) wrote :

Here is a description of what it will take to implement solution 2, broken down into individual tasks:

Once a) is done, all the other tasks can be done independently.

a) add a new column on nodegroup to store the maas_url as seen by the cluster

b) have the cluster send its maas_url when it starts up, and populate the new column on nodegroup

c) change the preseeds:
c1) update absolute_reverse to make it accept another base that will be used in lieu of MAAS_DEFAULT_URL
c2) use the value stored in nodegroup.maas_url in the preseed (use the method 'absolute_reverse' as modified above to generate the proper URLs)

d) update the UI: the enlistment preseed is now node-specific: put the link to it on the node page (it currently resides on the node listing page). (Note that changes to the preseeds themselves will be tackled in c))

e) use the value stored in nodegroup.maas_url when generating the dhcp config (bug 1081696)
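
Tasks a) and c) can be sketched roughly as follows (illustrative names and URLs; the real signatures in maasserver may differ):

```python
from urllib.parse import urljoin

MAAS_DEFAULT_URL = "http://region.example.com/MAAS/"  # illustrative value

def absolute_reverse(path, base_url=None):
    """Task c1: accept an alternative base that is used in lieu of
    MAAS_DEFAULT_URL when building absolute URLs."""
    return urljoin(base_url or MAAS_DEFAULT_URL, path)

class NodeGroup:
    """Minimal stand-in for the nodegroup model with the new
    maas_url column (task a)."""
    def __init__(self, maas_url=None):
        self.maas_url = maas_url

# Task c2: preseed rendering passes the cluster-reported URL as the base.
cluster = NodeGroup(maas_url="http://10.0.0.1/MAAS/")
print(absolute_reverse("metadata/", base_url=cluster.maas_url))
# http://10.0.0.1/MAAS/metadata/
print(absolute_reverse("metadata/"))
# falls back to MAAS_DEFAULT_URL: http://region.example.com/MAAS/metadata/
```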

Raphaël Badin (rvb) wrote :

Status update:

- a) is landed

- c) has been reviewed (but not yet landed)

- Gavin is working on b)

- d) and e) are still up for grabs :)

Raphaël Badin (rvb) wrote :

This is not entirely fixed yet: the enlistment user data still needs fixing.

Changed in maas (Ubuntu):
status: New → Confirmed
importance: Undecided → Critical
Changed in maas (Ubuntu):
status: Confirmed → Fix Released
Changed in maas (Ubuntu Quantal):
status: New → Fix Released
Changed in maas (Ubuntu Precise):
status: New → Fix Released
sba (stephane-baziak) wrote :

I have resolved the enlistment issues by manually editing the generator variable in pserv.yaml for a standalone cluster controller, and now the standalone cluster controller seems to be used for enlistment. However, the install fails, clearly in the last installation steps: when cloud-init runs modules:final, the root-tgz image is downloaded from the region controller and not the cluster controller as one would expect. At some point the installation fails as described in
