juju deploy --dry-run takes different decisions on multiple executions

Bug #1883645 reported by Felipe Reyes
Affects: Canonical Juju · Status: Fix Released · Importance: High · Assigned to: Simon Richardson

Bug Description

I have a juju model with an OpenStack deployment that was deployed with a
bundle[0]. The bundle was modified to go from 3 nova-compute-kvm units to 6
units[1]. When running the new bundle with --dry-run to verify that the changes
would do what the operator expected, it was noticed that the command produces
different outputs on subsequent executions:

$ juju deploy -m foundations-maas:openstack ~/fce-lab/cpe-deployments/config/bundle.yaml --overlay ~/fce-lab/cpe-deployments/config/overlays/ovs.yaml --overlay ~/fce-lab/cpe-deployments/config/overlays/openstack_versioned_overlay.yaml --dry-run --debug 2>&1 | pastebinit
http://paste.ubuntu.com/p/rYnSzbrc2d/
$ juju deploy -m foundations-maas:openstack ~/fce-lab/cpe-deployments/config/bundle.yaml --overlay ~/fce-lab/cpe-deployments/config/overlays/ovs.yaml --overlay ~/fce-lab/cpe-deployments/config/overlays/openstack_versioned_overlay.yaml --dry-run --debug 2>&1 | pastebinit
http://paste.ubuntu.com/p/4jQf2KkWzP/
$ juju deploy -m foundations-maas:openstack ~/fce-lab/cpe-deployments/config/bundle.yaml --overlay ~/fce-lab/cpe-deployments/config/overlays/ovs.yaml --overlay ~/fce-lab/cpe-deployments/config/overlays/openstack_versioned_overlay.yaml --dry-run --debug 2>&1 | pastebinit
http://paste.ubuntu.com/p/qfGcJTrTRB/
$ juju deploy -m foundations-maas:openstack ~/fce-lab/cpe-deployments/config/bundle.yaml --overlay ~/fce-lab/cpe-deployments/config/overlays/ovs.yaml --overlay ~/fce-lab/cpe-deployments/config/overlays/openstack_versioned_overlay.yaml --dry-run --debug 2>&1 | pastebinit
http://paste.ubuntu.com/p/sD86CJncp3/

Some executions will decide this:

- add unit nova-compute-kvm/3 to existing machine 5
- add unit nova-compute-kvm/4 to new machine 12
- add unit nova-compute-kvm/5 to new machine 13

While others will decide this instead:

- add unit nova-compute-kvm/3 to existing machine 4
- add unit nova-compute-kvm/4 to existing machine 5
- add unit nova-compute-kvm/5 to new machine 12

The expected behaviour is that juju adds nova-compute-kvm/{3,4,5} to 3 new machines.
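For reference, the change in [1] amounts to bumping the unit count of the nova-compute-kvm application in the bundle. The pastes aren't reproduced here, but in shape the edit looks like this (the application name comes from the report; the charm URL and other fields are illustrative only):

```yaml
# Illustrative fragment only -- the real bundle is in pastes [0] and [1].
applications:
  nova-compute-kvm:
    charm: cs:nova-compute   # assumed charm URL, not taken from the pastes
    num_units: 6             # was 3; the only intended change
```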

The logs can be found at
https://private-fileshare.canonical.com/~freyes/dry-run-machine-0.log.gz , the
output of "juju dump-model" is at
https://private-fileshare.canonical.com/~freyes/dry-run-dump-model.txt.gz and
finally a "juju status --format yaml" is at http://paste.ubuntu.com/p/DyHGj5tw8c/

[0] https://paste.ubuntu.com/p/4nrsRqw9zs/
[1] https://paste.ubuntu.com/p/wp3K4ysJSY/

Tags: sts
Revision history for this message
Felipe Reyes (freyes) wrote :

Using a simpler bundle that @dnegreira wrote, it's possible to see a similar issue:

$ juju deploy --dry-run ./reproducer-bundle-2.yaml
Resolving charm: cs:memcached
Resolving charm: ubuntu
Changes to deploy bundle:
- add new machine 6 (bundle machine 21)
- add new machine 7 (bundle machine 22)
- add unit memcached/3 to new machine 6
- add unit memcached/4 to new machine 7
- add unit memcached/5 to new machine 8

Versus:

$ juju deploy --dry-run ./reproducer-bundle-2.yaml
Resolving charm: cs:memcached
Resolving charm: ubuntu
Changes to deploy bundle:
- add unit memcached/3 to new machine 6
- add unit memcached/4 to new machine 7
- add unit memcached/5 to new machine 8

This bundle doesn't produce the message "add unit XXX to existing machine N", so it could be a different issue, but it is certainly in the same category.

The full output of my terminal is at https://pastebin.ubuntu.com/p/H42YGdRGRJ/
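The reproducer bundle itself isn't attached to the report, but based on the output above (memcached and ubuntu applications, bundle machines 21 and 22) a minimal bundle of roughly this shape should exercise the same code path. Every count, charm URL, and placement below is a guess, not the contents of the actual reproducer-bundle-2.yaml:

```yaml
# Hypothetical sketch of a reproducer -- not the actual reproducer-bundle-2.yaml.
machines:
  "21": {}
  "22": {}
applications:
  memcached:
    charm: cs:memcached
    num_units: 6          # assumed: scaled up against an existing deployment of 3
    to: ["21", "22"]      # assumed placement onto the declared bundle machines
  ubuntu:
    charm: cs:ubuntu
    num_units: 1
```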

Ian Booth (wallyworld)
Changed in juju:
milestone: none → 2.8.1
importance: Undecided → High
status: New → Triaged
Revision history for this message
David Negreira (dnegreira) wrote :

I would like to ask for this fix to be backported to 2.6 and 2.7, if at all possible.

Revision history for this message
Ian Booth (wallyworld) wrote :

juju 2.6 is EOL and won't get any more updates except for critical security fixes.

We have 2.7.7 going through final testing at the time of writing, and it is expected, though not yet 100% confirmed, that it will be the final 2.7 series release. If we need to land any release-blocking fixes to 2.7 before 2.7.7 goes out, we can look at landing a fix for this issue in 2.7. Ideally, with 2.8.0 out the door and 2.8.1 soon to follow, we'd look to start upgrading to 2.8 to get some of these fixes. An upgrade to 2.7.7 would be required to get this fix anyway, so it may be prudent to make the jump to 2.8.1 instead.

Changed in juju:
assignee: nobody → Simon Richardson (simonrichardson)
status: Triaged → In Progress
milestone: 2.8.1 → 2.7.7
Revision history for this message
Simon Richardson (simonrichardson) wrote :
Changed in juju:
status: In Progress → Fix Committed
Revision history for this message
David Negreira (dnegreira) wrote :

I did a test running juju from the 2.7/edge snap channel and can confirm that this fixes the issue.

Tim Penhey (thumper)
Changed in juju:
milestone: 2.7.7 → 2.7.8
Changed in juju:
status: Fix Committed → Fix Released
Ian Booth (wallyworld)
Changed in juju:
status: Fix Released → Fix Committed
Changed in juju:
status: Fix Committed → Fix Released