3-node native rabbitmq cluster race

Bug #1486177 reported by Ryan Beisner
Affects                                    Status        Importance  Assigned to       Milestone
Landscape Server                           Fix Released  High        Andreas Hasenack
  15.07 series                             Fix Released  High        Andreas Hasenack
Cisco-odl                                  Fix Released  High        Andreas Hasenack
rabbitmq-server (Juju Charms Collection)   Fix Released  Critical    David Ames

Bug Description

With a 3-node native cluster in Vivid-Kilo, Trusty-Juno, and Precise-Icehouse, in greater than 50% of all attempts, one of the rabbitmq-server units fails to cluster. When this happens, we end up with a 2-node cluster and a 1-node cluster, while juju status indicates happiness. In Trusty-Icehouse, the race is much less frequent.

The min-cluster-size and max-cluster-tries code does not appear to be hit. The above is observed with juju 1.24.5 with LE (leader election).

When I try with juju 1.22.1 (fallback cluster approach), I get no clustered units (i.e. 3 separate single-node clusters).

Test scenario: a basic 3-node rabbitmq-server native cluster, with nrpe as a subordinate to exercise nrpe-external-master functionality, and with cinder to exercise and inspect amqp relation data.

DNS does not appear to play a role here, as all machines can resolve all other machines, forward and reverse, when this cluster failure is observed.

FYI, when the cluster does succeed on V-K, a separate, seemingly-unrelated bug is consistently hit (bug 1485722).

# VK amulet results
2015-08-18 17:49:03,637 test_300_rmq_config INFO: OK
2015-08-18 17:49:03,637 test_400_rmq_cluster_running_nodes DEBUG: Checking that all units are in cluster_status running nodes...
2015-08-18 17:49:08,219 get_unit_hostnames DEBUG: Unit host names: {'rabbitmq-server/2': 'juju-beis0-machine-4', 'rabbitmq-server/0': 'juju-beis0-machine-2', 'rabbitmq-server/1': 'juju-beis0-machine-3'}
2015-08-18 17:49:09,932 run_cmd_unit DEBUG: rabbitmq-server/0 `rabbitmqctl cluster_status` command returned 0 (OK)

2015-08-18 17:49:09,932 get_rmq_cluster_status DEBUG: rabbitmq-server/0 cluster_status:
Cluster status of node 'rabbit@juju-beis0-machine-2' ...
[{nodes,[{disc,['rabbit@juju-beis0-machine-2']}]},
 {running_nodes,['rabbit@juju-beis0-machine-2']},
 {cluster_name,<<"<email address hidden>">>},
 {partitions,[]}]
2015-08-18 17:49:11,578 run_cmd_unit DEBUG: rabbitmq-server/1 `rabbitmqctl cluster_status` command returned 0 (OK)

2015-08-18 17:49:11,578 get_rmq_cluster_status DEBUG: rabbitmq-server/1 cluster_status:
Cluster status of node 'rabbit@juju-beis0-machine-3' ...
[{nodes,[{disc,['rabbit@juju-beis0-machine-3',
                'rabbit@juju-beis0-machine-4']}]},
 {running_nodes,['rabbit@juju-beis0-machine-4','rabbit@juju-beis0-machine-3']},
 {cluster_name,<<"<email address hidden>">>},
 {partitions,[]}]
2015-08-18 17:49:13,224 run_cmd_unit DEBUG: rabbitmq-server/2 `rabbitmqctl cluster_status` command returned 0 (OK)

2015-08-18 17:49:13,226 get_rmq_cluster_status DEBUG: rabbitmq-server/2 cluster_status:
Cluster status of node 'rabbit@juju-beis0-machine-4' ...
[{nodes,[{disc,['rabbit@juju-beis0-machine-3',
                'rabbit@juju-beis0-machine-4']}]},
 {running_nodes,['rabbit@juju-beis0-machine-3','rabbit@juju-beis0-machine-4']},
 {cluster_name,<<"<email address hidden>">>},
 {partitions,[]}]

Cluster member check failed on rabbitmq-server/0: rabbit@juju-beis0-machine-3 not in [u'rabbit@juju-beis0-machine-2']
Cluster member check failed on rabbitmq-server/0: rabbit@juju-beis0-machine-4 not in [u'rabbit@juju-beis0-machine-2']
Cluster member check failed on rabbitmq-server/1: rabbit@juju-beis0-machine-2 not in [u'rabbit@juju-beis0-machine-4', u'rabbit@juju-beis0-machine-3']
Cluster member check failed on rabbitmq-server/2: rabbit@juju-beis0-machine-2 not in [u'rabbit@juju-beis0-machine-3', u'rabbit@juju-beis0-machine-4']
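
For reference, the member check that produced the failures above effectively requires every expected 'rabbit@<hostname>' node to appear in every unit's running_nodes. Below is a minimal stand-alone sketch of that logic with hypothetical names, not the actual amulet helper; run against the data above it reproduces the same four failures.

def check_cluster_members(running_nodes_by_unit, hostnames_by_unit):
    # Every unit must report every expected node in its running_nodes.
    expected = ['rabbit@%s' % h for h in hostnames_by_unit.values()]
    errors = []
    for unit, running in running_nodes_by_unit.items():
        for node in expected:
            if node not in running:
                errors.append('Cluster member check failed on %s: %s not in %s'
                              % (unit, node, running))
    return errors

# Example using the cluster_status output above:
running = {
    'rabbitmq-server/0': ['rabbit@juju-beis0-machine-2'],
    'rabbitmq-server/1': ['rabbit@juju-beis0-machine-4', 'rabbit@juju-beis0-machine-3'],
    'rabbitmq-server/2': ['rabbit@juju-beis0-machine-3', 'rabbit@juju-beis0-machine-4'],
}
hostnames = {
    'rabbitmq-server/0': 'juju-beis0-machine-2',
    'rabbitmq-server/1': 'juju-beis0-machine-3',
    'rabbitmq-server/2': 'juju-beis0-machine-4',
}
for line in check_cluster_members(running, hostnames):
    print(line)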

# VK rabbitmq-server/2 unit failed to cluster:
2015-08-18 17:44:27 INFO juju-log cluster:1: Clustering with remote rabbit host (juju-beis0-machine-2).
2015-08-18 17:44:27 INFO cluster-relation-changed Stopping node 'rabbit@juju-beis0-machine-4' ...
2015-08-18 17:44:28 INFO cluster-relation-changed Clustering node 'rabbit@juju-beis0-machine-4' with 'rabbit@juju-beis0-machine-2' ...
2015-08-18 17:44:28 INFO cluster-relation-changed Error: unable to connect to nodes ['rabbit@juju-beis0-machine-2']: nodedown
2015-08-18 17:44:28 INFO cluster-relation-changed
2015-08-18 17:44:28 INFO cluster-relation-changed DIAGNOSTICS
2015-08-18 17:44:28 INFO cluster-relation-changed ===========
2015-08-18 17:44:28 INFO cluster-relation-changed
2015-08-18 17:44:28 INFO cluster-relation-changed attempted to contact: ['rabbit@juju-beis0-machine-2']
2015-08-18 17:44:28 INFO cluster-relation-changed
2015-08-18 17:44:28 INFO cluster-relation-changed rabbit@juju-beis0-machine-2:
2015-08-18 17:44:28 INFO cluster-relation-changed * connected to epmd (port 4369) on juju-beis0-machine-2
2015-08-18 17:44:28 INFO cluster-relation-changed * epmd reports node 'rabbit' running on port 25672
2015-08-18 17:44:28 INFO cluster-relation-changed * TCP connection succeeded but Erlang distribution failed
2015-08-18 17:44:28 INFO cluster-relation-changed * suggestion: hostname mismatch?
2015-08-18 17:44:28 INFO cluster-relation-changed * suggestion: is the cookie set correctly?
2015-08-18 17:44:28 INFO cluster-relation-changed
2015-08-18 17:44:28 INFO cluster-relation-changed current node details:
2015-08-18 17:44:28 INFO cluster-relation-changed - node name: 'rabbitmqctl-17379@juju-beis0-machine-4'
2015-08-18 17:44:28 INFO cluster-relation-changed - home dir: /var/lib/rabbitmq
2015-08-18 17:44:28 INFO cluster-relation-changed - cookie hash: j7UJuJx3ZktAni0tPfaRxw==
2015-08-18 17:44:28 INFO cluster-relation-changed
2015-08-18 17:44:28 INFO juju-log cluster:1: Failed to cluster with juju-beis0-machine-2.

# rabbitmq-server/2 (juju-beis0-machine-4)
Name resolution is fine. Attempted to cluster with rabbitmq-server/0 (juju-beis0-machine-2), failed. Clustered ok with rabbitmq-server/1 (juju-beis0-machine-3).
Full unit log: http://paste.ubuntu.com/12119674/

root@juju-beis0-machine-4:/var/log/juju# cat /etc/hostname
juju-beis0-machine-4
root@juju-beis0-machine-4:/var/log/juju# ip a | grep gl
    inet 172.18.99.100/24 brd 172.18.99.255 scope global eth0
root@juju-beis0-machine-4:/var/log/juju# host juju-beis0-machine-2
juju-beis0-machine-2.openstacklocal has address 172.18.99.98
root@juju-beis0-machine-4:/var/log/juju# host juju-beis0-machine-3
juju-beis0-machine-3.openstacklocal has address 172.18.99.99
root@juju-beis0-machine-4:/var/log/juju# host juju-beis0-machine-4
juju-beis0-machine-4.openstacklocal has address 172.18.99.100
root@juju-beis0-machine-4:/var/log/juju# host 172.18.99.98
98.99.18.172.in-addr.arpa domain name pointer juju-beis0-machine-2.openstacklocal.
root@juju-beis0-machine-4:/var/log/juju# host 172.18.99.99
99.99.18.172.in-addr.arpa domain name pointer juju-beis0-machine-3.openstacklocal.
root@juju-beis0-machine-4:/var/log/juju# host 172.18.99.100
100.99.18.172.in-addr.arpa domain name pointer juju-beis0-machine-4.openstacklocal.

# rabbitmq-server/0 (juju-beis0-machine-2)
Name resolution is fine. cluster-relation-* hooks never fired.
Full unit log: http://paste.ubuntu.com/12119672/

root@juju-beis0-machine-2:/var/log/juju# cat /etc/hostname
juju-beis0-machine-2
root@juju-beis0-machine-2:/var/log/juju# ip a | grep gl
    inet 172.18.99.98/24 brd 172.18.99.255 scope global eth0
root@juju-beis0-machine-2:/var/log/juju# host juju-beis0-machine-2
juju-beis0-machine-2.openstacklocal has address 172.18.99.98
root@juju-beis0-machine-2:/var/log/juju# host juju-beis0-machine-3
juju-beis0-machine-3.openstacklocal has address 172.18.99.99
root@juju-beis0-machine-2:/var/log/juju# host juju-beis0-machine-4
juju-beis0-machine-4.openstacklocal has address 172.18.99.100
root@juju-beis0-machine-2:/var/log/juju# host 172.18.99.98
98.99.18.172.in-addr.arpa domain name pointer juju-beis0-machine-2.openstacklocal.
root@juju-beis0-machine-2:/var/log/juju# host 172.18.99.99
99.99.18.172.in-addr.arpa domain name pointer juju-beis0-machine-3.openstacklocal.
root@juju-beis0-machine-2:/var/log/juju# host 172.18.99.100
100.99.18.172.in-addr.arpa domain name pointer juju-beis0-machine-4.openstacklocal.

# rabbitmq-server/1 (juju-beis0-machine-3)
Name resolution is fine. Clustered ok with rabbitmq-server/2 (juju-beis0-machine-4).
Full unit log: http://paste.ubuntu.com/12119695/

root@juju-beis0-machine-3:/var/log/juju# cat /etc/hostname
juju-beis0-machine-3
root@juju-beis0-machine-3:/var/log/juju# ip a | grep gl
    inet 172.18.99.99/24 brd 172.18.99.255 scope global eth0
root@juju-beis0-machine-3:/var/log/juju# host juju-beis0-machine-2
juju-beis0-machine-2.openstacklocal has address 172.18.99.98
root@juju-beis0-machine-3:/var/log/juju# host juju-beis0-machine-3
juju-beis0-machine-3.openstacklocal has address 172.18.99.99
root@juju-beis0-machine-3:/var/log/juju# host juju-beis0-machine-4
juju-beis0-machine-4.openstacklocal has address 172.18.99.100
root@juju-beis0-machine-3:/var/log/juju# host 172.18.99.98
98.99.18.172.in-addr.arpa domain name pointer juju-beis0-machine-2.openstacklocal.
root@juju-beis0-machine-3:/var/log/juju# host 172.18.99.99
99.99.18.172.in-addr.arpa domain name pointer juju-beis0-machine-3.openstacklocal.
root@juju-beis0-machine-3:/var/log/juju# host 172.18.99.100
100.99.18.172.in-addr.arpa domain name pointer juju-beis0-machine-4.openstacklocal.

# VK juju stat
http://paste.ubuntu.com/12119730/

# rmq versions
ubuntu@beisner-bastion:~/bzr/next/rabbitmq-server/tests$ juju run --service rabbitmq-server "apt-cache policy rabbitmq-server"
- MachineId: "2"
  Stdout: |
    rabbitmq-server:
      Installed: 3.4.3-2
      Candidate: 3.4.3-2
      Version table:
     *** 3.4.3-2 0
            500 http://nova.clouds.archive.ubuntu.com/ubuntu/ vivid/main amd64 Packages
            100 /var/lib/dpkg/status
  UnitId: rabbitmq-server/0
- MachineId: "3"
  Stdout: |
    rabbitmq-server:
      Installed: 3.4.3-2
      Candidate: 3.4.3-2
      Version table:
     *** 3.4.3-2 0
            500 http://nova.clouds.archive.ubuntu.com/ubuntu/ vivid/main amd64 Packages
            100 /var/lib/dpkg/status
  UnitId: rabbitmq-server/1
- MachineId: "4"
  Stdout: |
    rabbitmq-server:
      Installed: 3.4.3-2
      Candidate: 3.4.3-2
      Version table:
     *** 3.4.3-2 0
            500 http://nova.clouds.archive.ubuntu.com/ubuntu/ vivid/main amd64 Packages
            100 /var/lib/dpkg/status
  UnitId: rabbitmq-server/2

Ryan Beisner (1chb1n)
summary: - cluster-relation-changed Error: unable to connect to nodes
- ['rabbit@juju-beis0-machine-2']: nodedown
+ vivid-kilo 3-node native cluster race: cluster-relation-changed Error:
+ unable to connect to nodes ['rabbit@juju-X-machine-N']: nodedown
Ryan Beisner (1chb1n)
summary: - vivid-kilo 3-node native cluster race: cluster-relation-changed Error:
- unable to connect to nodes ['rabbit@juju-X-machine-N']: nodedown
+ 3-node native cluster doesn't always cluster race: cluster-relation-
+ changed Error: unable to connect to nodes ['rabbit@juju-X-machine-N']:
+ nodedown
Ryan Beisner (1chb1n)
description: updated
description: updated
description: updated
Nobuto Murata (nobuto) wrote :

Hello Ryan,

FWIW, I can reproduce the cluster setup failure with Juju 1.22 more reliably than with 1.24 (LP: #1483949).

Ryan Beisner (1chb1n) wrote :

FYI, this was happening on 1.24.4, and currently the test rig is using 1.24.5 (we use whatever is in ppa:juju/stable):

jenkins@juju-osci-machine-11:~$ apt-cache policy juju
juju:
  Installed: 1.24.5-0ubuntu1~14.04.1~juju1
  Candidate: 1.24.5-0ubuntu1~14.04.1~juju1
  Version table:
 *** 1.24.5-0ubuntu1~14.04.1~juju1 0
        500 http://ppa.launchpad.net/juju/stable/ubuntu/ trusty/main amd64 Packages
        100 /var/lib/dpkg/status
     1.22.6-0ubuntu1~14.04.1 0
        500 http://nova.clouds.archive.ubuntu.com/ubuntu/ trusty-updates/universe amd64 Packages
     1.18.1-0ubuntu1 0
        500 http://nova.clouds.archive.ubuntu.com/ubuntu/ trusty/universe amd64 Packages

Ryan Beisner (1chb1n) wrote :

For clarity, this is occurring even though min-cluster-size is 3.

From juju get...
  min-cluster-size:
    description: |
      Minimum number of units expected to exist before charm will attempt to
      form a rabbitmq cluster.
    type: int
    value: 3
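
For context, a gate like this normally just counts the units visible on the cluster peer relation and defers clustering until the configured minimum is reached; the "Insufficient number of peer units" message in the logs further down comes from exactly this kind of check. A rough sketch follows, using charmhelpers calls but a hypothetical structure rather than the charm's actual implementation.

from charmhelpers.core.hookenv import config, log, related_units, relation_ids

def sufficient_peers():
    # Hypothetical gate; the charm's real check lives in its hooks/utils code.
    expected = config('min-cluster-size')
    if not expected:
        return True  # no minimum configured; cluster opportunistically
    # Count this unit plus every peer currently visible on the cluster relation.
    seen = 1
    for rid in relation_ids('cluster'):
        seen += len(related_units(rid))
    if seen < expected:
        log('Insufficient number of peer units to form cluster '
            '(expected=%d, got=%d)' % (expected, seen))
        return False
    return True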

Liam Young (gnuoy)
Changed in rabbitmq-server (Juju Charms Collection):
status: New → Confirmed
importance: Undecided → High
Liam Young (gnuoy)
Changed in rabbitmq-server (Juju Charms Collection):
importance: High → Critical
Ryan Beisner (1chb1n)
description: updated
summary: - 3-node native cluster doesn't always cluster race: cluster-relation-
- changed Error: unable to connect to nodes ['rabbit@juju-X-machine-N']:
- nodedown
+ 3-node native rabbitmq cluster race
Ryan Beisner (1chb1n) wrote :

I've been able to reproduce the cluster race condition on multiple providers and in different environments:
 - OpenStack-on-OpenStack
 - On bare metal, with 1 physical machine per juju unit
 - In lxc with the local provider

Reproduce by iterating this bundle over precise-icehouse, trusty-icehouse, trusty-juno and vivid-kilo:
http://bazaar.launchpad.net/~ost-maintainers/openstack-charm-testing/trunk/view/head:/bundles/dev/rabbitmq-cluster.yaml

Iteration script may be helpful:
http://bazaar.launchpad.net/~1chb1n/+junk/yard/view/head:/charm-tests/rmq-cluster-cycle.sh

Or reproduce and exercise by enabling and running the new amulet tests in WIP merge proposal:
https://code.launchpad.net/~1chb1n/charms/trusty/rabbitmq-server/next.amulet-fix-20-delay

Ryan Beisner (1chb1n) wrote :

FYI, output from a few iterations:
    lxc: http://paste.ubuntu.com/12136350/
    metal: http://paste.ubuntu.com/12136367/

David Ames (thedac) wrote :

This may be stating the obvious, but the issue appears to be a race between leader election and the last run of cluster-relation-changed on a given node.

In this example the leader doesn't set the cookie until 20:51:35, but unit 0 has already finished all of its cluster-relation-* runs at 20:51:33.

Failed unit log (unit 0)
2015-08-20 20:51:31 INFO juju-log cluster:1: getting local nodename for ip address: 10.5.11.166
2015-08-20 20:51:31 INFO juju-log cluster:1: local nodename: 10-5-11-166
2015-08-20 20:51:31 INFO juju-log cluster:1: configuring nodename
2015-08-20 20:51:31 INFO juju-log cluster:1: Not the leader, deferring cookie propagation to leader
2015-08-20 20:51:32 INFO juju-log cluster:1: cluster_joined: cookie not yet set.
2015-08-20 20:51:32 INFO juju-log cluster:1: getting local nodename for ip address: 10.5.11.166
2015-08-20 20:51:32 INFO juju-log cluster:1: local nodename: 10-5-11-166
2015-08-20 20:51:32 INFO juju-log cluster:1: configuring nodename
2015-08-20 20:51:32 INFO juju-log cluster:1: Not the leader, deferring cookie propagation to leader
2015-08-20 20:51:33 INFO juju-log cluster:1: cluster_joined: cookie not yet set.

When the cookie is not set, the hook simply returns. If no further cluster-relation-changed hooks fire, the unit remains out of the cluster:
def cluster_changed():
    cookie = peer_retrieve('cookie')
    if not cookie:
        # Early exit: the leader has not published the erlang cookie yet.
        log('cluster_joined: cookie not yet set.', level=INFO)
        return

Poking another relation-set run from the leader brings everything into the cluster:
juju run --unit rabbitmq-server/1 -- "relation-set -r cluster:1 poke=1"
2015-08-20 21:39:47 INFO juju-log cluster:1: Synchronizing erlang cookie from peer.

Leader's log (unit 1)
2015-08-20 20:51:10 INFO juju-log cluster:1: getting local nodename for ip address: 10.5.11.167
2015-08-20 20:51:10 INFO juju-log cluster:1: local nodename: 10-5-11-167
2015-08-20 20:51:10 INFO juju-log cluster:1: configuring nodename
2015-08-20 20:51:10 INFO juju-log cluster:1: Insufficient number of peer units to form cluster (expected=3, got=2)
2015-08-20 20:51:11 INFO juju-log cluster:1: cluster_joined: cookie not yet set.
2015-08-20 20:51:35 INFO juju-log cluster:1: getting local nodename for ip address: 10.5.11.167
2015-08-20 20:51:35 INFO juju-log cluster:1: local nodename: 10-5-11-167
2015-08-20 20:51:35 INFO juju-log cluster:1: configuring nodename
2015-08-20 20:51:37 INFO juju-log cluster:1: Cookie already synchronized with peer.
2015-08-20 22:31:37 INFO juju-log cluster:1: getting local nodename for ip address: 10.5.11.167
2015-08-20 22:31:37 INFO juju-log cluster:1: local nodename: 10-5-11-167
2015-08-20 22:31:37 INFO juju-log cluster:1: configuring nodename
2015-08-20 22:31:38 INFO juju-log cluster:1: Cookie already synchronized with peer.
2015-08-20 22:31:41 INFO juju-log cluster:1: Setting HA policy to vhost 'openstack'
2015-08-20 22:31:41 DEBUG juju-log cluster:1: setting policy: ['/usr/sbin/rabbitmqctl', 'set_policy', '-p', u'openstack', 'HA', '^(?!amq\\.).*', '{"ha-mode": "all"}']

3rd unit's log (unit 2)
2015-08-20 20:51:09 INFO juju-log cluster:1: getting local nodename for ip address: 10.5.11.168
2015-08-20 20:51:09 INFO juju-log cluster:1: local nodename: 10-5...

David Ames (thedac) wrote :

For versions of juju with leader election, we may be able to have the leader_elected hook fire off another cluster-relation-* hook run. For juju without it, we will need another solution.
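
A rough sketch of what such a leader_elected-driven re-trigger could look like. This is hypothetical: the helper name and the relation key are assumptions, and the actual fix is in the branch referenced below.

from charmhelpers.core.hookenv import Hooks, log, relation_ids, relation_set

hooks = Hooks()

COOKIE_PATH = '/var/lib/rabbitmq/.erlang.cookie'  # standard RabbitMQ location

def read_erlang_cookie():
    # Assumed helper for this sketch.
    with open(COOKIE_PATH) as f:
        return f.read().strip()

@hooks.hook('leader-elected')
def leader_elected():
    cookie = read_erlang_cookie()
    for rid in relation_ids('cluster'):
        # Touching the peer relation data fires cluster-relation-changed on
        # every peer, giving units that bailed out on a missing cookie another
        # chance to join -- the same effect as the manual 'poke=1' run noted
        # in an earlier comment.
        relation_set(relation_id=rid, relation_settings={'cookie': cookie})
        log('leader-elected: republished erlang cookie on %s' % rid)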

David Ames (thedac)
Changed in rabbitmq-server (Juju Charms Collection):
assignee: nobody → David Ames (thedac)
David Ames (thedac) wrote :

Revno 106 fixed the race condition with juju >= 1.24
Revno 107 should fix the condition with juju < 1.24. Still testing.

lp:~thedac/charms/trusty/rabbitmq-server/native-cluster-race-fixes

Ryan Beisner (1chb1n) wrote :

FYI, juju 1.24.5 (from ppa:juju/stable) with revno 107 of lp:~thedac/charms/trusty/rabbitmq-server/native-cluster-race-fixes still tests OK: 0 cluster failures over 5 iterations of each of P-I, T-I, T-J, T-K, V-K.

Will report back with results of the same revno using juju 1.22.6 (from trusty/updates).

Ryan Beisner (1chb1n) wrote :

FYI, juju 1.22.6 (from trusty/updates) with revno 107 of lp:~thedac/charms/trusty/rabbitmq-server/native-cluster-race-fixes, i.e. without leadership-election capability:

2 of 41 iterations failed to cluster (both were trusty-kilo). On both of those, the cluster-relation-changed hook failed. With this iteration, I don't have unit logs, as this repro check is not in the usual CI.

2015-08-25 12:45:47 [DEBUG] deployer.env: Delta unit: rabbitmq-server/1 change:installing
2015-08-25 12:45:47 [DEBUG] deployer.env: Delta unit: rabbitmq-server/1 change:started
2015-08-25 12:46:27 [DEBUG] deployer.env: Delta unit: rabbitmq-server/2 change:error
2015-08-25 12:46:27 [ERROR] deployer.env: The following units had errors:
   unit: rabbitmq-server/2: machine: 4 agent-state: error details: hook failed: "cluster-relation-changed"
2015-08-25 12:46:27 [INFO] deployer.cli: Deployment stopped. run time: 747.65
Tue Aug 25 12:46:27 UTC 2015
- MachineId: "2"
  Stderr: "Warning: Permanently added '10.245.168.14' (ECDSA) to the list of known
    hosts.\r\n"
  Stdout: |
    Cluster status of node 'rabbit@cylindrical-base' ...
    [{nodes,[{disc,['rabbit@cylindrical-base','rabbit@grizzled-family',
                    'rabbit@imaginative-error']}]},
     {running_nodes,['rabbit@grizzled-family','rabbit@cylindrical-base']},
     {cluster_name,<<"<email address hidden>">>},
     {partitions,[]}]
  UnitId: rabbitmq-server/0
- MachineId: "3"
  Stderr: "Warning: Permanently added '10.245.168.15' (ECDSA) to the list of known
    hosts.\r\n"
  Stdout: |
    Cluster status of node 'rabbit@grizzled-family' ...
    [{nodes,[{disc,['rabbit@cylindrical-base','rabbit@grizzled-family',
                    'rabbit@imaginative-error']}]},
     {running_nodes,['rabbit@cylindrical-base','rabbit@grizzled-family']},
     {cluster_name,<<"<email address hidden>">>},
     {partitions,[]}]
  UnitId: rabbitmq-server/1
- MachineId: "4"
  Stderr: "Warning: Permanently added '10.245.168.16' (ECDSA) to the list of known
    hosts.\r\n"
  Stdout: |
    Cluster status of node 'rabbit@imaginative-error' ...
    [{nodes,[{disc,['rabbit@cylindrical-base','rabbit@grizzled-family',
                    'rabbit@imaginative-error']}]}]
  UnitId: rabbitmq-server/2

tags: added: backport-potential
Ryan Beisner (1chb1n) wrote :

@thedac: Can I suggest that you propose and land just the fix for leadership-election scenarios? That would unblock the current deployment story with next, and I can start working on a backport to the stable charm, which is also affected. That would also unblock my WIP test refactor branches, which are clearly needed in this charm ;-)

Meanwhile you can continue work on the fix for non-LE, and propose those fixes separately.

Make sense?

Ryan Beisner (1chb1n) wrote :

FWIW, the cluster issues in this bug still exist at rabbitmq-server/next r107. I merged the updated tests into a fresh checkout of r107 to confirm.

So, we still need thedac's fixes (after he rebases and resolves merge conflicts caused by r107).

And we still need my tests landed.

Changed in rabbitmq-server (Juju Charms Collection):
status: Confirmed → Fix Committed
milestone: none → 15.10
tags: added: sts
Ryan Beisner (1chb1n) wrote :

Setting back to NEW status, as the fix was prematurely landed, then reverted in rmq/next.

Changed in rabbitmq-server (Juju Charms Collection):
status: Fix Committed → Confirmed
tags: added: cisco landscape
tags: added: landscape-release-29
Liam Young (gnuoy)
Changed in rabbitmq-server (Juju Charms Collection):
status: Confirmed → Fix Committed
Changed in landscape:
importance: Undecided → High
Changed in landscape:
status: New → In Progress
assignee: nobody → Andreas Hasenack (ahasenack)
Changed in landscape:
status: In Progress → Fix Committed
milestone: none → 15.08
David Britton (dpb)
tags: removed: landscape-release-29
Changed in landscape:
status: Fix Committed → Fix Released
milestone: 15.08 → 15.07
James Page (james-page)
Changed in rabbitmq-server (Juju Charms Collection):
status: Fix Committed → Fix Released