There are 2 RabbitMQ clusters after failover of the controller that was previously the rabbit master
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
Fuel for OpenStack | Fix Released | High | MOS Oslo |
Bug Description
Steps to reproduce:
1. Deploy an HA environment with 3 controllers, 1 compute, and cinder nodes.
2. When the cluster is ready, run the health check; all tests should pass.
3. SSH to the first controller and find the controller where the rabbit master is running, using pcs status (see the command sketch after these steps).
4. Destroy that controller (virsh destroy <domain_name>).
5. Wait 10 minutes while the cluster recovers.
6. Run OSTF.
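A minimal sketch of steps 3-4, assuming the standard Fuel resource name master_p_rabbitmq-server and a hypothetical libvirt domain name:
root@node-1:~# pcs status | grep -A 1 'Master/Slave Set: master_p_rabbitmq-server'
# the "Masters: [ ... ]" line names the controller currently acting as rabbit master
# then, on the virtualization host:
virsh destroy <domain_name>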
Expected result:
A new rabbit master is elected; there is one rabbit cluster; pcs status shows the same state on both remaining controllers, and rabbitmqctl cluster_status reports the same cluster on each online controller.
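For comparison, a healthy two-node status would look roughly like this (illustrative output assuming the rabbitmqctl 3.x format and the node names from this environment; the cluster_name value is a placeholder):
root@node-1:~# rabbitmqctl cluster_status
Cluster status of node 'rabbit@node-1' ...
[{nodes,[{disc,['rabbit@node-1','rabbit@node-3']}]},
 {running_nodes,['rabbit@node-1','rabbit@node-3']},
 {cluster_name,<<"rabbit@node-1">>},
 {partitions,[]}]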
Actual:
No rabbit master was elected; on each online controller, rabbitmqctl reports a single-node cluster:
root@node-3:~# rabbitmqctl cluster_status
Cluster status of node 'rabbit@node-3' ...
[{nodes,
root@node-3:~# OCF_ROOT=
0
On the other online controller:
root@node-1:~# rabbitmqctl cluster_status
Cluster status of node 'rabbit@node-1' ...
[{nodes,
{running_
{cluster_
{partitions,[]}]
root@node-1:~# OCF_ROOT=
0
root@node-1:~#
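The two remaining nodes have split into separate single-node clusters. As a generic RabbitMQ rejoin sketch (plain rabbitmqctl commands, not necessarily how the Fuel OCF agent reassembles the cluster), recovery would be expected to amount to:
root@node-3:~# rabbitmqctl stop_app
root@node-3:~# rabbitmqctl join_cluster rabbit@node-1
root@node-3:~# rabbitmqctl start_app
root@node-3:~# rabbitmqctl cluster_status   # should now list both nodes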
pcs status
Cluster name:
Last updated: Fri Nov 20 10:55:27 2015
Last change: Fri Nov 20 05:42:14 2015
Stack: corosync
Current DC: node-1.
Version: 1.1.12-561c4cf
3 Nodes configured
46 Resources configured
Online: [ node-1.
OFFLINE: [ node-2.
Full list of resources:
Clone Set: clone_p_vrouter [p_vrouter]
Started: [ node-1.
vip__management (ocf::fuel:
vip__vrouter_pub (ocf::fuel:
vip__vrouter (ocf::fuel:
vip__public (ocf::fuel:
Clone Set: clone_p_haproxy [p_haproxy]
Started: [ node-1.
Clone Set: clone_p_mysql [p_mysql]
Started: [ node-1.
Clone Set: clone_p_dns [p_dns]
Started: [ node-1.
Master/Slave Set: master_
Slaves: [ node-1.
Clone Set: clone_p_
Started: [ node-1.
Clone Set: clone_p_
Started: [ node-1.
Clone Set: clone_p_
Started: [ node-1.
Clone Set: clone_p_
Started: [ node-1.
Clone Set: clone_p_heat-engine [p_heat-engine]
Started: [ node-1.
Master/Slave Set: master_p_conntrackd [p_conntrackd]
Masters: [ node-1.
Slaves: [ node-3.
sysinfo_
sysinfo_
sysinfo_
Clone Set: clone_ping_
Started: [ node-1.
Clone Set: clone_p_ntp [p_ntp]
Started: [ node-1.
Health check tests fail.
[root@nailgun ~]# cat /etc/fuel/
VERSION:
feature_groups:
- mirantis
production: "docker"
release: "8.0"
openstack_
api: "1.0"
build_number: "181"
build_id: "181"
fuel-nailgun_sha: "e5600e8a877453
python-
fuel-agent_sha: "6faa1e0ba836ef
fuel-
astute_sha: "c8400f51b0b922
fuel-library_sha: "f9281f9c3b08ed
fuel-ostf_sha: "7e24fc802a95d2
fuel-
fuelmenu_sha: "d12061b1aee82f
shotgun_sha: "c377d163519f6d
network-
fuel-upgrade_sha: "1e894e26d4e142
fuelmain_sha: "cd084cf5c4372a
[root@nailgun ~]#
tags: | added: swarm-blocker |
Changed in fuel: | status: New → Confirmed |
tags: | added: blocker-for-qa |
tags: | added: swarm-fail-driver; removed: swarm-blocker |
tags: | removed: swarm-fail-driver |
tags: | added: on-verification |
https://product-ci.infra.mirantis.net/job/8.0.system_test.ubuntu.ha_neutron_destructive/47/consoleFull