juju 2.3 incorrect unit placement
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
Canonical Juju | Fix Released | High | Tim Penhey | 2.3.8
Bug Description
After deploying the OpenStack bundle with "juju deploy bundle.yaml", if we remove a unit with "juju remove-unit ceilometer/0" and rerun "juju deploy bundle.yaml", the new unit is deployed to a machine that already hosts a unit instead of to the original machine.
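For reference, the full reproduction sequence is just the three commands quoted above:

  juju deploy bundle.yaml
  juju remove-unit ceilometer/0
  juju deploy bundle.yaml    # new unit lands on an already-used machine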
Output from the first deployment; ceilometer was deployed to machines 0, 1 and 12:
ceilometer/0* waiting idle 0/lxd/0 100.84.4.14 8777/tcp
ceilometer/1 waiting idle 1/lxd/0 100.84.5.22 8777/tcp
ceilometer/2 waiting idle 12/lxd/0 100.84.6.25 8777/tcp
After removing the unit and rerunning deploy, the placement is 1, 12 and 1:
ceilometer/1* waiting idle 1/lxd/0 100.84.5.22 8777/tcp
ceilometer/2 waiting idle 12/lxd/0 100.84.6.25 8777/tcp
ceilometer/3 maintenance executing 1/lxd/14 100.84.5.23
We tried to remove unit ceilometer/3 and deploy again; this time the unit was deployed to machine 12. Here are the logs from --debug:
10:38:19 DEBUG juju.cmd.
10:38:19 DEBUG juju.cmd.
10:38:19 DEBUG juju.cmd.
10:38:19 DEBUG juju.cmd.
10:38:19 DEBUG juju.cmd.
10:38:19 DEBUG juju.cmd.
10:38:19 INFO cmd bundle.go:382 Deploy of bundle completed.
10:38:19 DEBUG juju.api monitor.go:35 RPC connection died
10:38:19 INFO cmd supercommand.go:465 command finished
We've also tried to run the deployment with: "juju deploy bundle.yaml --map-machines=
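The exact --map-machines value is cut off above. For reference (a sketch of the usual invocation, not necessarily what was run here), in juju 2.x the flag takes "existing" plus optional bundle-to-model machine mappings:

  juju deploy bundle.yaml --map-machines=existing          # reuse existing machines
  juju deploy bundle.yaml --map-machines=existing,0=0,1=1  # pin bundle machine ids to model machine ids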
Example of the ceilometer part of the bundle:
machines:
  "0":
    constraints: tags=4-management
    series: xenial
  "1":
    constraints: tags=5-management
    series: xenial
  "2":
    constraints: tags=6-management
    series: xenial
  ...
...
ceilometer:
  charm: ../charms/
  num_units: 3
  bindings:
    "": *oam-space
    public: *public-space
    admin: *admin-space
    internal: *internal-space
  options:
    openstack
    ...
  to:
  - lxd:0
  - lxd:1
  - lxd:2
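A possible workaround (our assumption; not something verified in this report) would be to add the missing unit back with explicit placement instead of re-running the whole bundle, since add-unit accepts the same placement directives:

  juju add-unit ceilometer --to lxd:0    # new lxd container on machine 0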
Juju version is 2.3.6.
tags: added: cpe-onsite
Changed in juju:
  status: New → Triaged
  importance: Undecided → Low
tags: added: bundles
Changed in juju:
  importance: Low → High
  assignee: nobody → Tim Penhey (thumper)
  status: Triaged → In Progress
Changed in juju:
  milestone: none → 2.3.8
  status: In Progress → Fix Committed
Changed in juju:
  status: Fix Committed → Fix Released
I would guess that we count how many containers exist and then likely take the last one from the list if the count didn't match.
I do wonder if we are just treating IDs as logical, but aren't trying to match up the machines after the fact.
We certainly could notice that the count doesn't match and then try to see which one is missing. We've never needed to do that for machines, because if one is missing that means we have to remap the ID to a newly created machine.
John
=:->
On Fri, Apr 20, 2018, 18:05 Gábor Mészáros <email address hidden> wrote:
> ** Tags added: cpe-onsite