DNS HA broken with percona-cluster
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
percona-cluster (Juju Charms Collection) | Fix Released | High | David Ames |
Bug Description
Proxying via email:
Using the maas provider today, juju rc1, and calls to charms with the following syntax, cs:xenial/<charm name>, it appears as though lxd networking behaves ok from juju now as described this morning (which is great), however dns-ha is an issue still.
Charm config:

    percona-cluster2:
      root-password: **********
      sst-password: **********
      os-access-hostname: xxx.box.net
      dns-ha: true
    hacluster:
      cluster_count: 2
Deploying with the following command(s), from both lxd and bare metal, with appropriate differences to --to format:

    juju deploy --config percona-cluster cs:xenial/
    juju add-unit percona-cluster2 --to lxd:2
    juju deploy --config percona-cluster cs:xenial/hacluster hacluster
    juju status
    juju add-relation hacluster percona-cluster2
When I configured the percona-cluster charm to set os-access-hostname, the ha-relation-joined hook failed with the following traceback:
2016-09-22 03:59:11 INFO worker.uniter.jujuc server.go:174 running hook tool "juju-log" ["-l" "DEBUG" "DNS HA: At least one hostname is set os-access-hostname xxx.box.net"]
2016-09-22 03:59:11 DEBUG juju-log ha:9: DNS HA: At least one hostname is set os-access-hostname: xxx.box.net
2016-09-22 03:59:11 INFO worker.uniter.jujuc server.go:174 running hook tool "config-get" ["sst-password" "--format=json"]
2016-09-22 03:59:11 INFO worker.uniter.jujuc server.go:174 running hook tool "juju-log" ["-l" "DEBUG" "DNS HA: Hostname setting os-admin-hostname is None. Ignoring."]
2016-09-22 03:59:11 DEBUG juju-log ha:9: DNS HA: Hostname setting os-admin-hostname is None. Ignoring.
2016-09-22 03:59:11 INFO worker.uniter.jujuc server.go:174 running hook tool "juju-log" ["-l" "DEBUG" "DNS HA: Hostname setting os-internal-hostname is None. Ignoring."]
2016-09-22 03:59:11 DEBUG juju-log ha:9: DNS HA: Hostname setting os-internal-hostname is None. Ignoring.
2016-09-22 03:59:11 INFO worker.uniter.jujuc server.go:174 running hook tool "juju-log" ["-l" "DEBUG" "DNS HA: Hostname setting os-public-hostname is None. Ignoring."]
2016-09-22 03:59:11 DEBUG juju-log ha:9: DNS HA: Hostname setting os-public-hostname is None. Ignoring.
2016-09-22 03:59:11 INFO ha-relation-joined Traceback (most recent call last):
2016-09-22 03:59:11 INFO ha-relation-joined File "/var/lib/
2016-09-22 03:59:11 INFO ha-relation-joined main()
2016-09-22 03:59:11 INFO ha-relation-joined File "/var/lib/
2016-09-22 03:59:11 INFO ha-relation-joined hooks.execute(
2016-09-22 03:59:11 INFO ha-relation-joined File "/var/lib/
2016-09-22 03:59:11 INFO ha-relation-joined self._hooks[
2016-09-22 03:59:11 INFO ha-relation-joined File "/var/lib/
2016-09-22 03:59:11 INFO ha-relation-joined resource_
2016-09-22 03:59:11 INFO ha-relation-joined File "/var/lib/
2016-09-22 03:59:11 INFO ha-relation-joined override=False)))
2016-09-22 03:59:11 INFO ha-relation-joined File "/var/lib/
2016-09-22 03:59:11 INFO ha-relation-joined net_type = ADDRESS_
2016-09-22 03:59:11 INFO ha-relation-joined KeyError: 'access'
2016-09-22 03:59:11 ERROR juju.worker.
2016-09-22 03:59:11 INFO juju.worker.uniter resolver.go:100 awaiting error resolution for "relation-joined" hook
Looking in the source file, ip.py, I see that ACCESS is not defined within ADDRESS_MAP at the start of the file. I added it and updated some settings for os-access-network, but it then failed again at a network-get call; I assume the remaining properties, which I copied from PUBLIC, are incorrect.
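For illustration, here is a minimal sketch of the kind of change described above: adding an ACCESS entry to the ADDRESS_MAP dict in ip.py, modeled by analogy on the existing PUBLIC entry. The field names and values below are assumptions based on the charmhelpers pattern, not the actual fix that landed.

```python
# Sketch of the ADDRESS_MAP structure in charmhelpers'
# contrib/openstack/ip.py, with a hypothetical ACCESS entry added so
# that looking up the 'access' network type no longer raises
# KeyError: 'access'. The ACCESS values are copied by analogy from
# PUBLIC and may not match what the real charm expects.

PUBLIC = 'public'
INTERNAL = 'int'
ADMIN = 'admin'
ACCESS = 'access'  # new network type for percona's access network

ADDRESS_MAP = {
    PUBLIC: {
        'binding': 'public',
        'config': 'os-public-network',
        'fallback': 'public-address',
        'override': 'os-public-hostname',
    },
    INTERNAL: {
        'binding': 'internal',
        'config': 'os-internal-network',
        'fallback': 'private-address',
        'override': 'os-internal-hostname',
    },
    ADMIN: {
        'binding': 'admin',
        'config': 'os-admin-network',
        'fallback': 'private-address',
        'override': 'os-admin-hostname',
    },
    # Hypothetical entry: without it, ADDRESS_MAP['access'] raises
    # the KeyError seen in the traceback above.
    ACCESS: {
        'binding': 'access',
        'config': 'access-network',
        'fallback': 'private-address',
        'override': 'os-access-hostname',
    },
}


def net_type_for(endpoint_type):
    """Look up the address-map entry for a network endpoint type;
    this lookup is where the bug's KeyError: 'access' was raised."""
    return ADDRESS_MAP[endpoint_type]
```

With the extra entry present, `net_type_for('access')` returns a dict whose `override` key is `os-access-hostname`, which is what the DNS HA code needs to resolve.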
Changed in percona-cluster (Juju Charms Collection):
importance: Undecided → High
milestone: none → 16.10
status: New → Triaged

Changed in percona-cluster (Juju Charms Collection):
status: Triaged → In Progress
assignee: nobody → James Page (james-page)

Changed in percona-cluster (Juju Charms Collection):
status: Fix Committed → Fix Released
OK - so percona is a bit of an oddity in that it does not have an explicit extra-binding for the access networking; it uses the OS helper that assumes this exists; we'll have to refactor a little to fix this.
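For context, declaring an explicit extra-binding would mean a metadata.yaml fragment along these lines (a sketch of what such an entry could look like, not the actual change):

```yaml
# Hypothetical metadata.yaml fragment for the percona-cluster charm:
# declares an explicit 'access' binding so the OpenStack networking
# helpers can resolve a network space for it instead of assuming
# one exists.
extra-bindings:
  access:
```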