Setting RabbitMQ NODENAME to non-FQDN breaks on MAAS 2.0

Bug #1584902 reported by Bert JW Regeer
This bug affects 16 people
Affects                                   | Status       | Importance | Assigned to | Milestone
OpenStack RabbitMQ Server Charm           | Fix Released | High       | David Ames  |
rabbitmq-server (Juju Charms Collection)  | Invalid      | High       | David Ames  |

Bug Description

In `rabbit_utils.py`, `get_node_hostname` requests the hostname without the FQDN:

```
nodename = get_hostname(ip_addr, fqdn=False)
```

However, this causes issues with MAAS 2.0, because it publishes two PTR records for a single reverse DNS name.

This is an example entry:

```
53.69.189.10.in-addr.arpa. 30 IN CNAME 53.0-25.69.189.10.in-addr.arpa.
53.0-25.69.189.10.in-addr.arpa. 30 IN PTR eth0.unnarrated-edgardo.maas.
53.0-25.69.189.10.in-addr.arpa. 30 IN PTR unnarrated-edgardo.maas.
```

In our deployment it picks up `eth0.unnarrated-edgardo.maas`, which `get_hostname` in `ip.py` strips down to the first label, returning `eth0`.

This leads to a `RABBITMQ_NODENAME` of `rabbit@eth0` which fails to resolve when it starts up.

Instead, the nodename should be set to `rabbit@unnarrated-edgardo`.

I tried using `RABBITMQ_USE_LONGNAME`, thereby allowing the FQDN to be used, by changing the flag in `get_node_hostname` from `False` to `True`. However, because RabbitMQ is running in a container named `juju-machine-17-lxc-0`, which is not an FQDN, it fails to set the long name and won't start.
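
As an illustration of the failure mode (a minimal sketch in Python, not the charm's actual code; `get_hostname_sketch` is a hypothetical stand-in for the charmhelpers helper): with two PTR records behind the CNAME, the resolver may return the interface-qualified name as primary, and stripping the domain then leaves only `eth0`.

```
import socket

def get_hostname_sketch(ip_addr, fqdn=False):
    # Reverse-resolve the IP; gethostbyaddr() returns
    # (primary_name, aliases, addresses). With two PTR records, the
    # resolver may hand back either name as the primary one.
    primary, _aliases, _addrs = socket.gethostbyaddr(ip_addr)
    if fqdn:
        return primary
    # If the resolver picked 'eth0.unnarrated-edgardo.maas', the first
    # label is 'eth0' -- exactly the broken nodename described above.
    return primary.split('.')[0]
```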

Matt Rae (mattrae)
tags: added: cpec
Revision history for this message
Bert JW Regeer (bertjwregeer) wrote :
tags: added: canonical-bootstack
Revision history for this message
James Page (james-page) wrote :

linked to bug 1584569

Changed in rabbitmq-server (Juju Charms Collection):
status: New → Triaged
importance: Undecided → High
assignee: nobody → James Page (james-page)
milestone: none → 16.07
Revision history for this message
James Page (james-page) wrote :

Question: what does the LXD container think its hostname is? I'm assuming that the DNS records should reflect the actual hostname of the container, but from your last remark, it would appear that might not be the case.

Revision history for this message
James Page (james-page) wrote :

OK - so I've done some refactoring and come up with:

  cs:~james-page/xenial/rabbitmq-server-bug1584902

This changes quite a few of the assumptions the charm makes when resolving the hostname of the local unit; specifically, it now skips any direct DNS queries entirely and just uses socket.gethostname() to set the local nodename. The leader also stores this value in leader storage for clustering of followers.

This makes the huge assumption that socket.gethostname() is always resolvable across the peers within a service, but I don't think that is unreasonable.
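
A rough sketch of what that looks like (illustrative only; it assumes the charmhelpers `is_leader`/`leader_set` helpers, and the function names here are mine, not the charm's):

```
import socket
from charmhelpers.core.hookenv import is_leader, leader_set

def local_nodename():
    # socket.gethostname() returns the machine's own view of its
    # hostname (e.g. 'juju-machine-17-lxc-0'); no DNS query involved.
    return 'rabbit@{}'.format(socket.gethostname())

def publish_leader_nodename():
    # The leader records its nodename in leader storage so followers
    # know which node to cluster with, whatever DNS says about the IPs.
    if is_leader():
        leader_set({'leader_nodename': socket.gethostname()})
```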

Revision history for this message
James Page (james-page) wrote :

I've successfully tested:

Juju 2.0:

- Local LXD provider (beta 7)
- MAAS 1.9 (beta 7): Juju 2.0 writes extra entries into /etc/hosts for peers in the same service (at least), so even though the machine hostname does not match the MAAS DNS hostname for the same IP, it appears to just work.

Juju 1.25.5:

- OpenStack provider with full forward/reverse DNS

Revision history for this message
James Page (james-page) wrote :

Tested OK with Juju 1.25.5 and MAAS 1.9 as well; Juju is not managing the hosts file there, the charm does it actively when peers join the cluster relation (see the update_hosts_file function).

A side effect of this is that hostname resolution between peers is fully functional, even if the DNS in the environment is not set up the way the charm would like.
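
In outline, the idea is something like the following (an illustrative sketch, not the charm's actual update_hosts_file): every peer's IP/hostname pair learned over the cluster relation is written into /etc/hosts, so peer hostnames resolve regardless of environment DNS.

```
def update_hosts_file_sketch(peer_map, path='/etc/hosts'):
    # peer_map: {ip: hostname} gathered from the cluster relation.
    with open(path) as f:
        # Drop any stale entry for an IP we are about to rewrite.
        lines = [line for line in f.read().splitlines()
                 if not (line.split() and line.split()[0] in peer_map)]
    for ip, hostname in sorted(peer_map.items()):
        lines.append('{} {}'.format(ip, hostname))
    with open(path, 'w') as f:
        f.write('\n'.join(lines) + '\n')
```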

Revision history for this message
James Page (james-page) wrote :

I'd like to see a Juju 2.0/MAAS 2.0 test as well to confirm this is all OK, but I'll start the review process for the change now.

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix proposed to charm-rabbitmq-server (master)

Fix proposed to branch: master
Review: https://review.openstack.org/320450

Changed in rabbitmq-server (Juju Charms Collection):
status: Triaged → In Progress
Felipe Reyes (freyes)
tags: added: sts
Revision history for this message
James Page (james-page) wrote :

Quoting Felipe Reyes's review (Patch Set 1: Code-Review+1):

I tested the patch using Xenial, Juju 2.0 and MAAS 2.0, and it fixes the issue.

Evidence:

Without the patch I could reproduce the bug:
unit-rabbitmq-server-0: 2016-05-24 14:29:16 DEBUG unit.rabbitmq-server/0.juju-log server.go:269 Running ['/usr/sbin/rabbitmqctl', 'wait', '/<email address hidden>']
unit-rabbitmq-server-0: 2016-05-24 14:29:16 INFO unit.rabbitmq-server/0.config-changed logger.go:40 Waiting for 'rabbit@xenia-node00' ...
Applying the fix via 'juju upgrade-charm':
2016-05-24 14:34:38 INFO worker.uniter.jujuc server.go:173 running hook tool "leader-set" ["leader_node_ip=" "leader_nodename=xenia-node00" "cookie=XPPZQTZGTDHOVYKNQNXZ"]
2016-05-24 14:34:27 DEBUG juju-log Running ['/usr/sbin/rabbitmqctl', 'wait', '/<email address hidden>']
2016-05-24 14:34:27 INFO upgrade-charm Waiting for 'rabbit@xenia-node00' ...
2016-05-24 14:34:27 INFO upgrade-charm pid is 24977 ...
2016-05-24 14:34:28 INFO worker.uniter.jujuc server.go:173 running hook tool "juju-log" ["Confirmed rabbitmq app is running"]
$ grep RABBITMQ_NODENAME /etc/rabbitmq/rabbitmq-env.conf
# per machine - RABBITMQ_NODENAME should be unique per erlang-node-and-machine
RABBITMQ_NODENAME=rabbit@xenia-node00
[Units]
ID WORKLOAD-STATUS JUJU-STATUS VERSION MACHINE PORTS PUBLIC-ADDRESS MESSAGE
rabbitmq-server/0 active idle 2.0-beta6 0 5672/tcp 192.168.123.3 Unit is ready and clustered
rabbitmq-server/1 active idle 2.0-beta6 1 5672/tcp 192.168.123.4 Unit is ready and clustered
rabbitmq-server/2 active idle 2.0-beta6 2 5672/tcp 192.168.123.5 Unit is ready and clustered
$ juju ssh rabbitmq-server/0 sudo rabbitmqctl cluster_status
Warning: Permanently added '192.168.123.3' (ECDSA) to the list of known hosts.
Cluster status of node 'rabbit@xenia-node00' ...
[{nodes,[{disc,['rabbit@xenia-node00','rabbit@xenia-node01',
                'rabbit@xenia-node02']}]},
 {running_nodes,['rabbit@xenia-node02','rabbit@xenia-node01',
                 'rabbit@xenia-node00']},
 {cluster_name,<<"<email address hidden>">>},
 {partitions,[]}]

Revision history for this message
Matt Rae (mattrae) wrote :

Thanks James. In reply to comment #3: the hostname of the container is juju-machine-17-lxc-0, but MAAS DNS has a generated name for that container, like aware-jackrabbit.

Maybe container hostnames are not correctly being set to the same name as in DNS?

When I was testing to see if I could start rabbitmq, I needed to set RABBITMQ_NODENAME=rabbit@juju-machine-17-lxc-0 before rabbitmq would start.

Revision history for this message
Bert JW Regeer (bertjwregeer) wrote :

I grabbed the patch and tried to deploy just a single instance. That single instance came up without issues.

Then, using juju add-unit, I started another two nodes. One joined the cluster successfully; the other seems to be hanging on:

Clustering with remote rabbit host (rabbit@juju-machine-17-lxc-0).

Tail end of the JuJu logs:

Reading package lists... Done
Building dependency tree
Reading state information... Done
2016-05-24 14:40:37 INFO worker.uniter.jujuc server.go:173 running hook tool "status-set" ["maintenance" "Clustering with remote rabbit host (rabbit@juju-machine-17-lxc-0)."]
2016-05-24 14:40:37 DEBUG worker.uniter.jujuc server.go:174 hook context id "rabbitmq/3-cluster-relation-changed-383633683969241899"; dir "/var/lib/juju/agents/unit-rabbitmq-3/charm"
2016-05-24 14:40:37 INFO worker.uniter.jujuc server.go:173 running hook tool "juju-log" ["-l" "DEBUG" "Running ['/usr/sbin/rabbitmqctl', 'stop_app']"]
2016-05-24 14:40:37 DEBUG worker.uniter.jujuc server.go:174 hook context id "rabbitmq/3-cluster-relation-changed-383633683969241899"; dir "/var/lib/juju/agents/unit-rabbitmq-3/charm"
2016-05-24 14:40:37 DEBUG juju-log cluster:65: Running ['/usr/sbin/rabbitmqctl', 'stop_app']
2016-05-24 14:40:37 INFO cluster-relation-changed Stopping node 'rabbit@juju-machine-13-lxc-3' ...

It never seems to stop.

Full process list:

root@juju-machine-13-lxc-3:/var/log/juju# ps auxwww
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 1 0.0 0.0 37540 5676 ? Ss 14:36 0:01 /sbin/init
root 48 0.0 0.0 35276 8056 ? Ss 14:36 0:00 /lib/systemd/systemd-journald
root 86 0.0 0.0 274488 6208 ? Ssl 14:36 0:00 /usr/lib/accountsservice/accounts-daemon
syslog 87 0.0 0.0 260632 3244 ? Ssl 14:36 0:00 /usr/sbin/rsyslogd -n
root 88 0.0 0.0 27728 2972 ? Ss 14:36 0:00 /usr/sbin/cron -f
root 92 0.0 0.0 28548 3024 ? Ss 14:36 0:00 /lib/systemd/systemd-logind
daemon 97 0.0 0.0 26044 2284 ? Ss 14:36 0:00 /usr/sbin/atd -f
message+ 98 0.0 0.0 42948 4016 ? Ss 14:36 0:00 /usr/bin/dbus-daemon --system --address=systemd: --nofork --nopidfile --systemd-activation
root 103 0.0 0.0 277180 6120 ? Ssl 14:36 0:00 /usr/lib/policykit-1/polkitd --no-debug
root 274 0.0 0.0 65612 6244 ? Ss 14:36 0:00 /usr/sbin/sshd -D
root 278 0.0 0.0 5224 160 ? Ss 14:36 0:00 /sbin/iscsid
root 279 0.0 0.0 5724 3536 ? S<Ls 14:36 0:00 /sbin/iscsid
root 303 0.0 0.0 14476 2180 pts/3 Ss+ 14:36 0:00 /sbin/agetty --noclear --keep-baud pts/3 115200 38400 9600 vt220
root 305 0.0 0.0 14476 2284 lxc/console Ss+ 14:36 0:00 /sbin/agetty --noclear --keep-baud console 115200 38400 9600 vt220
root 308 0.0 0.0 14476 2140 pts/0 Ss+ 14:36 0:00 /sbin/agetty --noclear --keep-baud pts/0 115200 38400 9600 vt220
root 310 0.0 0.0 14476 2180 pts/2 Ss+ 14:36 0:00 /sbin/agetty --noclear --keep-baud pts/2 115200 38400 9600 vt220
root 31...


Revision history for this message
Bert JW Regeer (bertjwregeer) wrote :

Since Launchpad completely destroyed any and all formatting, here it is as a gist on GitHub:

https://gist.github.com/bertjwregeer/716d0c664142c18f09b0f4dab1bdad06#file-patchtest-md

Revision history for this message
Bert JW Regeer (bertjwregeer) wrote :

ubuntu@maas-region-controller:~/charms$ juju status rabbitmq
[Services]
NAME STATUS EXPOSED CHARM
rabbitmq maintenance false local:xenial/rabbitmq-server-152

[Relations]
SERVICE1 SERVICE2 RELATION TYPE
ceilometer rabbitmq amqp regular
cinder rabbitmq amqp regular
neutron-api rabbitmq amqp regular
neutron-calico rabbitmq amqp regular
nova-cloud-controller rabbitmq amqp regular
nova-compute rabbitmq amqp regular
rabbitmq rabbitmq cluster peer

[Units]
ID WORKLOAD-STATUS JUJU-STATUS VERSION MACHINE PORTS PUBLIC-ADDRESS MESSAGE
rabbitmq/1 active idle 2.0-beta7 17/lxc/0 5672/tcp 10.189.69.55 Unit is ready and clustered
rabbitmq/2 active idle 2.0-beta7 12/lxc/3 5672/tcp 10.189.69.56 Unit is ready and clustered
rabbitmq/3 maintenance executing 2.0-beta7 13/lxc/3 5672/tcp 10.189.69.57 Clustering with remote rabbit host (rabbit@juju-machine-17-lxc-0).

[Machines]
ID STATE DNS INS-ID SERIES AZ
12 started 10.189.69.17 4y3hdc xenial default
13 started 10.189.69.26 4y3hdq xenial default
17 started 10.189.69.18 4y3hdd xenial default

Revision history for this message
James Page (james-page) wrote :

@Bert

I think the problem you saw was due to a race which I believe I've now fixed, where the hostname for the lead unit was not present in /etc/hosts at the point where the third unit tried to cluster.

Pushed updates to:

  cs:~james-page/xenial/rabbitmq-server-bug1584902

Please re-test!

Revision history for this message
James Page (james-page) wrote :

Re-tested using the Juju 2.0 LXD local provider - worked OK in the following tests:

- single unit, joined by two others - clustered OK
- 3 unit deployment - clustered OK
- removal of lead unit, addition of new unit - clustered OK

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix merged to charm-rabbitmq-server (master)

Reviewed: https://review.openstack.org/320450
Committed: https://git.openstack.org/cgit/openstack/charm-rabbitmq-server/commit/?id=7ffdcd204e6550cf5dcad57ed07ea302c1eedcce
Submitter: Jenkins
Branch: master

commit 7ffdcd204e6550cf5dcad57ed07ea302c1eedcce
Author: James Page <email address hidden>
Date: Tue May 24 14:18:37 2016 +0100

    Refactor hostname resolution via DNS

    Due to oddities in the way that DNS used to work on older MAAS
    versions, specifically when using LXC containers, the charm
    would endeavour to discover a full forward/reverse resolvable
    hostname/ip address to use when clustering with peers, and to
    set the internal nodename for each RMQ instance.

    Changes in MAAS 2.0 mean that an IP address may resolve to more
    than 1 DNS record, making identification via DNS of the FQDN
    problematic.

    The charm actively manages /etc/hosts with IP address/hostname for
    all peer units in a RMQ cluster using the cluster relation; make
    use of this feature to allow the internal nodename of each unit
    to actually be the hostname of the server, as the charm is ensuring
    the resolvability of hostnames within the cluster.

    To avoid races where the lead unit has not had its hostname written
    to /etc/hosts, only cluster with the lead unit in the hook execution
    where this actually happens.

    Change-Id: Ia400c3b6e2cb1a5f2ee6f5fe98b5437033e02024
    Closes-Bug: 1584902
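
The race avoidance in the last paragraph of the commit message amounts to an ordering constraint. A hedged sketch (not the charm's code; the function and its behavior are illustrative) of what "only cluster in the hook execution that writes the entry" means:

```
import subprocess

def cluster_with_leader_sketch(leader_ip, leader_host,
                               hosts_path='/etc/hosts'):
    with open(hosts_path) as f:
        present = any(leader_ip in line.split() for line in f)
    if present:
        return  # a previous hook execution already handled this
    with open(hosts_path, 'a') as f:
        f.write('{} {}\n'.format(leader_ip, leader_host))
    # Cluster in the same hook execution that wrote the /etc/hosts
    # entry, so the leader's hostname is guaranteed resolvable here.
    subprocess.check_call(['rabbitmqctl', 'stop_app'])
    subprocess.check_call(['rabbitmqctl', 'join_cluster',
                           'rabbit@{}'.format(leader_host)])
    subprocess.check_call(['rabbitmqctl', 'start_app'])
```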

Changed in rabbitmq-server (Juju Charms Collection):
status: In Progress → Fix Committed
Revision history for this message
Dimiter Naydenov (dimitern) wrote :

I successfully tested deploying 3 units of rabbitmq-server with the fix on MAAS 2.0.0 (beta5+bzr5026) with Juju 2.0-beta8-xenial-amd64 using 3 NUCs, each with multiple VLANs and 2 physical NICs. Here's the status output after settling: http://paste.ubuntu.com/16727654/ No hook errors were observed.

Also tried with 8 units on the LXD provider, one of which happened to get an IPv6 address. This still caused a `hook failed: "cluster-relation-joined"` as described in bug 1574844, but the rest of the units are fine: http://paste.ubuntu.com/16727679/

Here are the logs from machine-7 in the paste above: http://paste.ubuntu.com/16727698/ (machine-7.log) http://paste.ubuntu.com/16727703/ (unit-rabbitmq-server-6.log).

Revision history for this message
Dimiter Naydenov (dimitern) wrote :

As requested, here is the output of: juju run --service rabbitmq-server -- 'cat /etc/hosts'
http://paste.ubuntu.com/16727753/

tags: added: conjure
Felipe Reyes (freyes)
tags: added: backport-potential
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix proposed to charm-rabbitmq-server (stable/16.04)

Fix proposed to branch: stable/16.04
Review: https://review.openstack.org/322340

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix merged to charm-rabbitmq-server (stable/16.04)

Reviewed: https://review.openstack.org/322340
Committed: https://git.openstack.org/cgit/openstack/charm-rabbitmq-server/commit/?id=284bd30809ee31b5ba6311c4054aa12f34dfb23f
Submitter: Jenkins
Branch: stable/16.04

commit 284bd30809ee31b5ba6311c4054aa12f34dfb23f
Author: James Page <email address hidden>
Date: Tue May 24 14:18:37 2016 +0100

    Refactor hostname resolution via DNS

    Due to oddities in the way that DNS used to work on older MAAS
    versions, specifically when using LXC containers, the charm
    would endeavour to discover a full forward/reverse resolvable
    hostname/ip address to use when clustering with peers, and to
    set the internal nodename for each RMQ instance.

    Changes in MAAS 2.0 mean that an IP address may resolve to more
    than 1 DNS record, making identification via DNS of the FQDN
    problematic.

    The charm actively manages /etc/hosts with IP address/hostname for
    all peer units in a RMQ cluster using the cluster relation; make
    use of this feature to allow the internal nodename of each unit
    to actually be the hostname of the server, as the charm is ensuring
    the resolvability of hostnames within the cluster.

    To avoid races where the lead unit has not had its hostname written
    to /etc/hosts, only cluster with the lead unit in the hook execution
    where this actually happens.

    Change-Id: Ia400c3b6e2cb1a5f2ee6f5fe98b5437033e02024
    Closes-Bug: 1584902
    (cherry picked from commit 7ffdcd204e6550cf5dcad57ed07ea302c1eedcce)

Liam Young (gnuoy)
Changed in rabbitmq-server (Juju Charms Collection):
status: Fix Committed → Fix Released
Revision history for this message
James Page (james-page) wrote :

This change was reverted as it made clustering of RMQ completely racy and broken on the 1.25.x series.

We need to find a better solution for this.

Changed in rabbitmq-server (Juju Charms Collection):
status: Fix Released → New
milestone: 16.07 → 16.10
Revision history for this message
Bert JW Regeer (bregeer-ctl) wrote :

James:

What can we do to make sure that this doesn't break on 2.0? If this change is reverted then deployments on 2.0 will fail.

This is tied to a support case that we opened, and it would be a massive regression, because it means rabbitmq does not come online or cluster.

Bert JW Regeer

James Page (james-page)
Changed in rabbitmq-server (Juju Charms Collection):
milestone: 16.10 → 17.01
Revision history for this message
James Page (james-page) wrote :

For now use:

  cs:xenial/rabbitmq-server-5

which contains the fix that works with Juju 2.0, but breaks 1.25.x

Revision history for this message
Tom-Erik Røberg (tom-erik-roberg) wrote :

This bug is back again in cs:xenial/rabbitmq-server-54.

2016-10-22 12:32:49 INFO juju-log Changing perms of path /var/lib/rabbitmq
2016-10-22 12:32:49 INFO juju-log getting local nodename for ip address: 10.42.129.252
2016-10-22 12:32:49 INFO juju-log local nodename: eth3
2016-10-22 12:32:49 INFO juju-log configuring nodename
2016-10-22 12:32:50 INFO juju-log forcing nodename=eth3
2016-10-22 12:32:50 INFO juju-log Stopping rabbitmq-server.
2016-10-22 12:32:50 INFO juju-log Updating /etc/rabbitmq/rabbitmq-env.conf, RABBITMQ_NODENAME=rabbit@eth3
2016-10-22 12:32:50 INFO juju-log Starting rabbitmq-server.
2016-10-22 12:32:50 INFO config-changed Job for rabbitmq-server.service failed because the control process exited with error code. See "systemctl status rabbitmq-server.service" and "journalctl -xe" for details.
2016-10-22 12:32:50 INFO juju-log getting local nodename for ip address: 10.42.129.252
2016-10-22 12:32:50 INFO juju-log local nodename: eth3
2016-10-22 12:32:50 DEBUG juju-log Waiting for rabbitmq app to start: /<email address hidden>
2016-10-22 12:32:50 DEBUG juju-log Running ['timeout', '180', '/usr/sbin/rabbitmqctl', 'wait', '/<email address hidden>']
2016-10-22 12:32:51 INFO config-changed Waiting for rabbit@eth3 ...
2016-10-22 12:32:51 INFO config-changed pid is 11926 ...
2016-10-22 12:32:51 INFO config-changed Error: process_not_running
2016-10-22 12:32:51 INFO config-changed Error: unable to connect to node rabbit@eth3: nodedown
2016-10-22 12:32:51 INFO config-changed
2016-10-22 12:32:51 INFO config-changed DIAGNOSTICS
2016-10-22 12:32:51 INFO config-changed ===========
2016-10-22 12:32:51 INFO config-changed
2016-10-22 12:32:51 INFO config-changed attempted to contact: [rabbit@eth3]
2016-10-22 12:32:51 INFO config-changed
2016-10-22 12:32:51 INFO config-changed rabbit@eth3:
2016-10-22 12:32:51 INFO config-changed * unable to connect to epmd (port 4369) on eth3: nxdomain (non-existing domain)
2016-10-22 12:32:51 INFO config-changed
2016-10-22 12:32:51 INFO config-changed
2016-10-22 12:32:51 INFO config-changed current node details:
2016-10-22 12:32:51 INFO config-changed - node name: 'rabbitmq-cli-12279@juju-ebcca1-0-lxd-2'
2016-10-22 12:32:51 INFO config-changed - home dir: /var/lib/rabbitmq
2016-10-22 12:32:51 INFO config-changed - cookie hash: kpajU/F+sdPkKp79KbzF2A==
2016-10-22 12:32:51 INFO config-changed
2016-10-22 12:32:51 INFO config-changed Traceback (most recent call last):
2016-10-22 12:32:51 INFO config-changed File "/var/lib/juju/agents/unit-rabbitmq-server-2/charm/hooks/config-changed", line 765, in <module>
2016-10-22 12:32:51 INFO config-changed hooks.execute(sys.argv)
2016-10-22 12:32:51 INFO config-changed File "/var/lib/juju/agents/unit-rabbitmq-server-2/charm/hooks/charmhelpers/core/hookenv.py", line 715, in execute
2016-10-22 12:32:51 INFO config-changed self._hooks[hook_name]()
2016-10-22 12:32:51 INFO config-changed File "/var/lib/juju/agents/unit-rabbitmq-server-2/charm/hooks/rabbit_utils.py", line 764, in wrapped_f
2016-10-22 12:32:51 INFO config-changed f(*args, **kwargs)
2016-10-22 12:32:51 INFO config-changed File "/var/lib/juju/agents/unit-rabb...


Revision history for this message
Bert JW Regeer (bertjwregeer) wrote :

Tom-Erik Røberg: see comment #21 on this bug. The change was reverted. See comment #23 for a workaround (i.e. pin the version of the charm).

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix merged to charm-rabbitmq-server (master)

Reviewed: https://review.openstack.org/389387
Committed: https://git.openstack.org/cgit/openstack/charm-rabbitmq-server/commit/?id=0c2f3e6b1b598df9a4dd5068b34b20bd38ef37fd
Submitter: Jenkins
Branch: master

commit 0c2f3e6b1b598df9a4dd5068b34b20bd38ef37fd
Author: David Ames <email address hidden>
Date: Thu Oct 20 15:59:38 2016 -0700

    Fix DNS resolution problems

    There are two distinct DNS resolution problems. One, the nodename
    reverse DNS resolution. And two, resolution of peers. This attempts
    to fix both problems.

    Nodename is only required when rabbitmq runs multiple instances on
    the same host. Having this set complicates the DNS requirements.
    Removing this setting may simplify the DNS problems this charm has
    faced.

    For peer resolution force the use of hostname rather than dynamically
    attempting to resolve one from an IP. Set /etc/hosts with the local
    hostname and with each peer.

    Standardize the way an IPv4 or IPv6 address is selected throughout
    the charm. Standardize the way a hostname is selected throughout the
    charm.

    Partial-Bug: 1584902
    Change-Id: I105eb2684e61a553a52c5a944e8c562945e2a6eb
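
As a sketch of the "standardize hostname selection" point above (illustrative only, not the charm's helpers): derive the hostname once from the local system, rather than reverse-resolving an IP at each call site, and rely on rabbitmq-server's default of rabbit@<short hostname> once RABBITMQ_NODENAME is no longer forced.

```
import socket

def assumed_local_hostname(fqdn=False):
    # One consistent answer to 'what is my hostname', used everywhere,
    # instead of per-call-site reverse DNS lookups on an IP address.
    return socket.getfqdn() if fqdn else socket.gethostname()
```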

David Ames (thedac)
Changed in rabbitmq-server (Juju Charms Collection):
assignee: James Page (james-page) → David Ames (thedac)
status: New → Fix Committed
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix proposed to charm-rabbitmq-server (stable/16.10)

Fix proposed to branch: stable/16.10
Review: https://review.openstack.org/410292

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix merged to charm-rabbitmq-server (stable/16.10)

Reviewed: https://review.openstack.org/410292
Committed: https://git.openstack.org/cgit/openstack/charm-rabbitmq-server/commit/?id=dbd667f44f65e2d9823696ec107b7b189fdf562f
Submitter: Jenkins
Branch: stable/16.10

commit dbd667f44f65e2d9823696ec107b7b189fdf562f
Author: David Ames <email address hidden>
Date: Thu Oct 20 15:59:38 2016 -0700

    Fix DNS resolution problems

    There are two distinct DNS resolution problems. One, the nodename
    reverse DNS resolution. And two, resolution of peers. This attempts
    to fix both problems.

    Nodename is only required when rabbitmq runs multiple instances on
    the same host. Having this set complicates the DNS requirements.
    Removing this setting may simplify the DNS problems this charm has
    faced.

    For peer resolution force the use of hostname rather than dynamically
    attempting to resolve one from an IP. Set /etc/hosts with the local
    hostname and with each peer.

    Standardize the way an IPv4 or IPv6 address is selected throughout
    the charm. Standardize the way a hostname is selected throughout the
    charm.

    Partial-Bug: 1584902
    Change-Id: I105eb2684e61a553a52c5a944e8c562945e2a6eb
    (cherry picked from commit 0c2f3e6b1b598df9a4dd5068b34b20bd38ef37fd)

James Page (james-page)
Changed in charm-rabbitmq-server:
assignee: nobody → David Ames (thedac)
importance: Undecided → High
status: New → Fix Committed
Changed in rabbitmq-server (Juju Charms Collection):
status: Fix Committed → Invalid
James Page (james-page)
Changed in charm-rabbitmq-server:
milestone: none → 17.02
James Page (james-page)
Changed in charm-rabbitmq-server:
status: Fix Committed → Fix Released