Support for nested placements

Bug #1733392 reported by Alvaro Uria
This bug affects 2 people
Affects: Canonical Juju
Status: Triaged
Importance: Low
Assigned to: Unassigned

Bug Description

Bug #1567169 will address application placement by index instead of by real unit number (i.e. dbserver=0, dbserver=1 vs dbserver/1, dbserver/7).

A customer is running into a special case where nested placements would be needed. The use case is:

juju deploy cs:ubuntu -n 3 compute-baremetal
juju deploy specialconfig-charm -n 3 --to \
  lxd:compute-baremetal=0,lxd:compute-baremetal=1,lxd:compute-baremetal=3

juju deploy myapp-charm -n 3 --to \
  specialconfig-charm=0,specialconfig-charm=1,specialconfig-charm=2

Would this be possible? "myapp-charm" could be the various OpenStack charms, while "specialconfig-charm" would be a basenode configuration that enables special options (i.e. install certain common packages, download certain CA files, etc.).

Revision history for this message
Tim Penhey (thumper) wrote :

The recent work with bundle placements may well give you what you need, but it isn't clear to me exactly what you are after.

How about you explain the starting place, along with what you want to end up with, and I'll see if I can give you a bundle definition that would work.

Changed in juju:
status: New → Incomplete
Revision history for this message
Tim Penhey (thumper) wrote :

applications:
  compute-baremetal:
    charm: cs:ubuntu
    num_units: 3
  special-config-charm:
    charm: special-config-charm
    num_units: 3
    to: ["compute-baremetal/0", "compute-baremetal/1", "compute-baremetal/2"]
    # to: [compute-baremetal] would probably work too
  myapp-charm:
    charm: myapp-charm
    num_units: 3
    to: ["special-config-charm/0", "special-config-charm/1", "special-config-charm/2"]
    # to: [special-config-charm] would probably work too

Revision history for this message
Tim Penhey (thumper) wrote :

Damn, missed the nestedness.

applications:
  compute-baremetal:
    charm: cs:ubuntu
    num_units: 3
  special-config-charm:
    charm: special-config-charm
    num_units: 3
    to: ["lxd:compute-baremetal/0", "lxd:compute-baremetal/1", "lxd:compute-baremetal/2"]
    # to: [lxd:compute-baremetal] would probably work too
  myapp-charm:
    charm: myapp-charm
    num_units: 3
    # This placement will create a sibling lxd next to the lxd holding the
    # special-config-charm unit, on the same compute-baremetal machine.
    to: ["lxd:special-config-charm/0", "lxd:special-config-charm/1", "lxd:special-config-charm/2"]
    # to: [lxd:special-config-charm] would probably work too
    # To co-locate the special-config-charm and myapp-charm, lose the lxd.
    # to: ["special-config-charm/0", "special-config-charm/1", "special-config-charm/2"]

Revision history for this message
Xavier Esteve (xesteve) wrote :

This is how our nested config looks:

- stage0.yaml: hardware is deployed with a charm that configures routes
  services:
    node-mgm-openstack-a-fc1:
      charm: static-routes
      num_units: 2
      constraints: "tags=management,FC1"

- stage1.yaml: the KVM is created with the same charm
config: &LXD_NODE_MGM_OPENSTACK_A_0 'lxd:node-mgm-openstack-a-fc1=0'
config: &LXD_NODE_MGM_OPENSTACK_A_1 'lxd:node-mgm-openstack-a-fc1=1'
config: &LXD_NODE_MGM_OPENSTACK_A_2 'lxd:node-mgm-openstack-a-fc2=0'
config: &LXD_NODE_MGM_OPENSTACK_A_TO [ *LXD_NODE_MGM_OPENSTACK_A_0, *LXD_NODE_MGM_OPENSTACK_A_1, *LXD_NODE_MGM_OPENSTACK_A_2 ]
..
..
  services:
    pod01-kvm-rabbitmq-openstack:
      charm: static-routes
      bindings:
        "": *oam-space
      num_units: 3
      constraints: "cpu-cores=6 mem=32G root-disk=25G spaces=oam,clo"
      to: *KVM_NODE_MGM_OPENSTACK_A_TO

- stage2.yaml: rabbitmq-server is installed in the KVM server
  services:
    rabbitmq-server:
      charm: rabbitmq-server
      source: *os-origin
      num_units: 3
      binding:
        "": *oam-space
        amqp: *internal-space
        cluster: *internal-space
      options:
        min-cluster-size: 2
        access-network: XXXX/20
        cluster-network: XXXX/20
      to:
        - 'pod01-kvm-rabbitmq-openstack=0'
        - 'pod01-kvm-rabbitmq-openstack=1'
        - 'pod01-kvm-rabbitmq-openstack=2'

And here is an example of the error:
2017-11-20 10:55:22 [ERROR] deployer.deploy: Nested placement not supported rabbitmq-server -> pod01-kvm-rabbitmq-openstack -> ['kvm:node-mgm-openstack-a-fc1=0', 'kvm:node-mgm-openstack-a-fc1=1', 'kvm:node-mgm-openstack-a-fc2=0']

Revision history for this message
John A Meinel (jameinel) wrote : Re: [Bug 1733392] Re: Support for nested placements

That actually looks like you're trying to place rabbitmq-server inside a container inside a KVM container, which is the "nested placement not supported" case.
If you were trying to colocate it inside the KVM, or if you were just putting rabbitmq-openstack on the host machine, it might work.
I'm not positive about this, but nested *should* mean containers-inside-containers.

Revision history for this message
Alvaro Uria (aluria) wrote :

Hi Tim, John,

This is a bug in juju-deployer (nested placements), but we'd like to stop using juju-deployer in favor of plain Juju. Hence the question about whether this will be supported once the bug #1567169 fix lands in Juju 2.3.

Initially, we used the following excerpt:
"""
node-mgm-openstack-cmp-fc1:
  charm: /path/to/static-routes
  num_units: 2
  constraints: "tags=management,cmp,FC1"
node-mgm-openstack-cmp-fc2:
  charm: /path/to/static-routes
  num_units: 2
  constraints: "tags=management,cmp,FC2"

lxd-memcached:
  charm: /path/to/static-routes
  num_units: 3
  to:
    - lxd:node-mgm-openstack-cmp-fc1/1
    - lxd:node-mgm-openstack-cmp-fc2/1
    - lxd:node-mgm-openstack-cmp-fc1/0

memcached:
  charm: cs:memcached
  num_units: 3
  to:
    - 'lxd-memcached/0'
    - 'lxd-memcached/1'
    - 'lxd-memcached/2'
"""

However, we ran into the issue of indexed placements not being supported: when a unit failed, re-deploys of the Juju bundle failed.

To solve this, we started using mojo+juju-deployer, which supports indexed placements (it doesn't matter if the deployed units are /7, /11, /23).

Furthermore, since juju-deployer doesn't support nested placements, we prepared a couple of deployer stages, and a fake initial stage:
- stage0: deploys node-mgm-openstack-cmp-fc1, node-mgm-openstack-cmp-fc2, and lxd-memcached
- fakestage0: uses lxd-memcached with the same number of units that have already been deployed (so juju-deployer sees the step as already complete and skips it)
- stage1: uses fakestage0 definition, to place "memcached" application

This workaround works because "fakestage0" doesn't point to the metal definitions in "stage0".
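
A minimal sketch of the fakestage0/stage1 arrangement described above (the file names, charm paths and unit counts are illustrative, not the customer's actual configuration):

- fakestage0.yaml: redeclares lxd-memcached with the unit count already reached in stage0,
  so juju-deployer treats the step as complete and skips it
  services:
    lxd-memcached:
      charm: /path/to/static-routes
      num_units: 3

- stage1.yaml: places memcached on those units by index, without referring back to the
  metal definitions in stage0
  services:
    memcached:
      charm: cs:memcached
      num_units: 3
      to:
        - 'lxd-memcached=0'
        - 'lxd-memcached=1'
        - 'lxd-memcached=2'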

My initial question is whether such a nested definition would work in Juju 2.3 bundles (replacing /N with =N to use indexed placements).
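
For illustration, this is the excerpt above with /N swapped for the =N indexed form being asked about (assuming the indexed syntax from bug #1567169 lands as described; the =N spelling is hypothetical until then):
"""
lxd-memcached:
  charm: /path/to/static-routes
  num_units: 3
  to:
    - lxd:node-mgm-openstack-cmp-fc1=1
    - lxd:node-mgm-openstack-cmp-fc2=1
    - lxd:node-mgm-openstack-cmp-fc1=0

memcached:
  charm: cs:memcached
  num_units: 3
  to:
    - 'lxd-memcached=0'
    - 'lxd-memcached=1'
    - 'lxd-memcached=2'
"""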

Please let me know if you'd need further details.

Changed in juju:
status: Incomplete → New
Revision history for this message
Xavier Esteve (xesteve) wrote :

John, I copied the wrong part of stage1; the real config points to KVM, not LXD. Alvaro explained it better than I did.

- stage1.yaml
config: &KVM_NODE_MGM_OPENSTACK_A_0 'kvm:node-mgm-openstack-a-fc1=0'
config: &KVM_NODE_MGM_OPENSTACK_A_1 'kvm:node-mgm-openstack-a-fc1=1'
config: &KVM_NODE_MGM_OPENSTACK_A_2 'kvm:node-mgm-openstack-a-fc2=0'
config: &KVM_NODE_MGM_OPENSTACK_A_TO [ *KVM_NODE_MGM_OPENSTACK_A_0, *KVM_NODE_MGM_OPENSTACK_A_1, *KVM_NODE_MGM_OPENSTACK_A_2 ]
..
..
  services:
    pod01-kvm-rabbitmq-openstack:
      charm: static-routes
      bindings:
        "": *oam-space
      num_units: 3
      constraints: "cpu-cores=6 mem=32G root-disk=25G spaces=oam,clo"
      to: *KVM_NODE_MGM_OPENSTACK_A_TO

Revision history for this message
Tim Penhey (thumper) wrote :

Instead of showing the current bundle YAML, how about you explain what you are deploying, and how you'd like to see it end up, and we can then show the placement directives to do that, or say it can't be done.

Revision history for this message
Alvaro Uria (aluria) wrote :

Hey Tim,

I'm sorry if I didn't explain myself clearly. We're currently running a bundle as shown in comment #6. However, such a bundle only works when the units used for a later placement are exactly /0, /1 and /2.

If, for some reason, a node fails to PXE boot or we have to remove one of the units used for the final placements (i.e. of lxd-memcached), we end up with units such as /0, /3, /7, which we can't use for placements (since the bundle specifies /0, /1, /2).

How could we work around this issue and make a Juju bundle reusable no matter the unit numbers?

The outcome of such a bundle is:
1) Deploy multiple metals with specific configurations defined in the static-routes charm
2) Deploy containers or KVMs on top of those metals, with specific configurations defined in the same static-routes charm
3) Place OpenStack Charms on those containers or KVMs, once steps #1 and #2 have settled

Please let me know if this clarifies the use case. Thank you.

Revision history for this message
John A Meinel (jameinel) wrote :

It feels like the real request is for "indexed" placements, rather than "nested" placements.
However, I'd also like to understand why you need to concretely address the "2nd" unit of the application, rather than "all" units of the application.
Is there something you want to be on the first instance, but not on the 3rd?
I also wonder if this is *really* what you want. As an example:

Say I deployed 'blah' and had to play around with it a bit, so ultimately I have blah/2 and blah/4.
If I then deploy a bundle that says: add "bar" to "blah=0" and "foo" to "blah=1", then you would end up with "bar" on blah/2 and "foo" on blah/4.
So far, that's fine.
But suppose you end up having to kill "blah/2" and create another unit, "blah/6", and then deploy the bundle again. It would now want to put "bar" on "blah=0", which is now blah/4, which already has "foo" on it.

*Indexed* placement sounds like a misfeature, though it may be reasonable to support in lieu of being able to specify what you *actually* want in a way that we can treat as an invariant.
A better statement would be something like "deploy one unit of bar somewhere that blah exists, and one unit of foo somewhere that blah exists [but not where bar is]".
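
One way to spell most of that in a bundle today, per Tim's note above that to: [<application>] "would probably work too", is to place by application name instead of by unit number. A sketch, not a tested recipe (the charm names are placeholders, and the "[but not where bar is]" part has no direct placement spelling):

applications:
  blah:
    charm: blah
    num_units: 3
  bar:
    charm: bar
    num_units: 1
    # Placed by application name: Juju picks a machine hosting a blah unit,
    # with no dependence on particular unit numbers.
    to: [blah]
  foo:
    charm: foo
    num_units: 1
    to: [blah]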

Changed in juju:
importance: Undecided → Wishlist
status: New → Triaged
Revision history for this message
Canonical Juju QA Bot (juju-qa-bot) wrote :

This bug has not been updated in 2 years, so we're marking it Low importance. If you believe this is incorrect, please update the importance.

Changed in juju:
importance: Wishlist → Low
tags: added: expirebugs-bot