It is not possible to actively use two offers of the same application at the same time
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
Canonical Juju | Fix Released | High | Ian Booth (wallyworld) | 2.5.1
Bug Description
I am in the process of implementing a patch for the OpenStack Swift charms, and I am facing an issue with setting up Juju cross-controller relations. The patched charms I am using can be found at:
https:/
https:/
When deploying the environment within a single Juju model, everything works fine. The bundle I am using for the single-model scenario looks as follows:
series: bionic
machines:
  "0":
    constraints: tags=swift
    series: bionic
  "1":
    constraints: tags=swift
    series: bionic
services:
  keystone:
    charm: cs:keystone
    num_units: 1
    options:
      admin-
      token-
      worker-
    to:
      - lxd:0
  mysql:
    charm: cs:percona-cluster
    num_units: 1
    options:
      innodb-
      max-
    to:
      - lxd:0
  swift-
    charm: /home/guardian/
    num_units: 1
    options:
      enable-
      read-
      region: "RegionOne"
      replicas: 2
      write-
      write-
      zone-
    to:
      - lxd:0
  swift-
    charm: /home/guardian/
    num_units: 1
    options:
      enable-
      read-
      region: "RegionTwo"
      replicas: 2
      write-
      write-
      zone-
    to:
      - lxd:1
  swift-
    charm: /home/guardian/
    num_units: 1
    options:
      block-device: sdb sdc sdd
      region: 1
      zone: 1
    to:
      - 0
  swift-
    charm: /home/guardian/
    num_units: 1
    options:
      block-device: sdb sdc sdd
      region: 2
      zone: 1
    to:
      - 1
relations:
  - [ "keystone:
  - [ "keystone:
  - [ "keystone:
  - [ "swift-
  - [ "swift-
  - [ "swift-
  - [ "swift-
  - [ "swift-
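For reference, the general shape of a bundle like the one above can be sketched as follows. The application, charm, and option names here are placeholders, not the actual charms or options from my deployment:

```shell
# Write a minimal bundle skeleton; all names below are placeholders.
cat <<'EOF' > /tmp/minimal-bundle.yaml
series: bionic
machines:
  "0":
    constraints: tags=swift
    series: bionic
services:
  app-a:
    charm: cs:some-charm
    num_units: 1
    options:
      some-option: value
    to:
      - lxd:0
relations:
  - [ "app-a:endpoint-x", "app-b:endpoint-y" ]
EOF
# Count how many charm entries the skeleton declares.
grep -c "charm:" /tmp/minimal-bundle.yaml
```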
The problem starts when I try to segregate the applications so that:
- keystone, mysql, swift-proxy-region1 and swift-storage-
- swift-proxy-region2 and swift-storage-
The controllers use different MAAS clouds, and there is full network connectivity between them and the hosted machines. Everything runs on my laptop on two separate bridges.
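For context, the cross-controller setup follows the standard offer/consume/relate pattern. A minimal sketch of that pattern (the controller, model, application, and endpoint names here are placeholders, and I assume the default admin user):

```shell
#!/bin/sh
# Sketch of the offer/consume/relate pattern across two controllers.
# All names below are placeholders, not my actual setup.

CTRL1="ctrl-a"    # controller hosting the offering model
MODEL1="model-a"  # model containing the offered application
OFFER="some-app"  # offer name (defaults to the application name)

# The URL a consuming model uses to reference an offer has the form
#   <controller>:<user>/<model>.<offer-name>
OFFER_URL="${CTRL1}:admin/${MODEL1}.${OFFER}"
echo "$OFFER_URL"

# On the offering side:
#   juju switch "$CTRL1"
#   juju offer "$OFFER:some-endpoint"
# On the consuming side:
#   juju switch ctrl-b
#   juju consume "$OFFER_URL"
#   juju relate local-app some-app
```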
The following part works:
cat <<EOF > /tmp/swift-
series: bionic
machines:
  "0":
    constraints: tags=swift
    series: bionic
services:
  keystone:
    charm: cs:keystone
    num_units: 1
    options:
      admin-
      token-
      worker-
    to:
      - lxd:0
  mysql:
    charm: cs:percona-cluster
    num_units: 1
    options:
      innodb-
      max-
    to:
      - lxd:0
  swift-
    charm: /home/guardian/
    num_units: 1
    options:
      enable-
      read-
      region: "RegionOne"
      replicas: 1
      write-
      write-
      zone-
    to:
      - lxd:0
  swift-
    charm: /home/guardian/
    num_units: 1
    options:
      block-device: sdb sdc sdd
      region: 1
      zone: 1
    to:
      - 0
relations:
  - [ "keystone:
  - [ "keystone:
  - [ "swift-
EOF
cat <<EOF > /tmp/swift-
series: bionic
machines:
  "0":
    constraints: tags=swift
    series: bionic
services:
  swift-
    charm: /home/guardian/
    num_units: 1
    options:
      enable-
      read-
      region: "RegionTwo"
      replicas: 1
      write-
      write-
      zone-
    to:
      - lxd:0
  swift-
    charm: /home/guardian/
    num_units: 1
    options:
      block-device: sdb sdc sdd
      region: 2
      zone: 1
    to:
      - 0
relations:
  - [ "swift-
EOF
juju switch maas-region1
juju add-model swift-region1
juju deploy /tmp/swift-
juju switch maas-region2
juju add-model swift-region2
juju deploy /tmp/swift-
juju switch maas-region1
juju offer keystone:
juju switch maas-region2
juju consume maas-region1:
juju relate swift-proxy-region2 keystone
juju switch maas-region1
juju offer swift-proxy-
juju switch maas-region2
juju consume maas-region1:
juju relate swift-proxy-
juju switch maas-region1
juju offer swift-proxy-
juju switch maas-region2
juju consume maas-region1:
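Since the failure appears tied to having two offers of the same application, it may be worth noting that `juju offer` accepts an optional offer name as a final argument, which lets two offers of one application be distinguished on the consuming side. A sketch under that assumption (the endpoint and alias names here are hypothetical):

```shell
#!/bin/sh
# Two offers of the same application can be given distinct names:
#   juju offer <application>:<endpoint> [offer-name]
# e.g. (hypothetical endpoint and alias names):
#   juju offer swift-proxy-region1:endpoint-a offer-a
#   juju offer swift-proxy-region1:endpoint-b offer-b

# The resulting offer URLs then differ, so each can be consumed separately:
URL_A="maas-region1:admin/swift-region1.offer-a"
URL_B="maas-region1:admin/swift-region1.offer-b"
echo "$URL_A"
echo "$URL_B"
```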
Now when I try to run the following command:
juju relate swift-storage-
the following things happen:
- "swift-
- "swift-
- the following error messages are displayed in maas-region2 controller logs:
bf1c6107-
bf1c6107-
bf1c6107-
The same thing happens when I change the order of deployment:
juju switch maas-region1
juju offer swift-proxy-
juju switch maas-region2
juju consume maas-region1:
juju relate swift-storage-
juju switch maas-region1
juju offer swift-proxy-
juju switch maas-region2
juju consume maas-region1:
juju relate swift-proxy-
Only the error messages are slightly different.
It looks like it is not possible to actively use two offers of the same application at the same time. I am attaching full logs from the controllers and hosted units.
Changed in juju:
  milestone: none → 2.5.2
  assignee: nobody → Ian Booth (wallyworld)
  importance: Undecided → High
  status: New → Triaged
Changed in juju:
  milestone: 2.5.2 → 2.5.1
  status: Triaged → Fix Committed
Changed in juju:
  status: Fix Committed → Fix Released
To help debug the issue, we need the log level set to DEBUG for the following packages before the bundles are deployed and the offers created:
juju.worker.remoterelations
juju.apiserver.common.crossmodel
juju.apiserver.crossmodelrelations
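A sketch of how that logging configuration could be applied, assuming the standard `logging-config` model setting (the model name in the example is taken from the reproduction steps above):

```shell
#!/bin/sh
# Build the logging-config value enabling DEBUG for the loggers listed above.
LOGGING_CONFIG="juju.worker.remoterelations=DEBUG;juju.apiserver.common.crossmodel=DEBUG;juju.apiserver.crossmodelrelations=DEBUG"
echo "$LOGGING_CONFIG"

# This would be applied per model before deploying, e.g.:
#   juju model-config -m swift-region1 logging-config="<root>=INFO;$LOGGING_CONFIG"
```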