HA doesn't work: Galera cluster can't automatically recover when several controllers go down
| Affects | Status | Importance | Assigned to | Milestone |
|---|---|---|---|---|
| Fuel for OpenStack | Invalid | Critical | Fuel Library (Deprecated) | |
Bug Description
Note: the bug was found on MOS 6.1, but it looks like it can be reproduced on MOS 5.x and other 6.x releases as well.
It was reproduced in a VirtualBox environment, but it likely can be reproduced on KVM/bare-metal as well.
The diagnostic snapshot is available by the link: https:/
Steps To Reproduce:
1. Take the fresh MOS image (in my case it was the 202 image: http://
2. Deploy OpenStack cloud with the following configuration: Ubuntu, HA, 3 controllers, 1 compute, Neutron VLAN, Swift file storage backend. (using VirtualBox scripts)
3. Shut down 2 controllers (to make sure the issue is reproduced, shut down the primary and a non-primary controller). Example for VirtualBox:
VBoxManage controlvm "fuel-slave-1" poweroff
VBoxManage controlvm "fuel-slave-2" poweroff
4. Wait 10 minutes
5. Try to open the Horizon dashboard. It will not be available (HTTP 404).
6. Log in to the remaining controller node and try to run any OpenStack CLI command; it will fail:
source openrc ; keystone user-list
7. Check the status of the Galera cluster:
mysql -e "SHOW STATUS LIKE 'wsrep%';"
Observed Result:
The cluster doesn't work at all: the Horizon dashboard is inaccessible (HTTP 404), OpenStack CLI commands fail (HTTP 500), and the Galera cluster is down. The surviving Galera node reports cluster status 'non-Primary':
-----------
root@node-2:~# mysql -e "SHOW STATUS LIKE 'wsrep%';"
+---------------------+----------+
| Variable_name       | Value    |
+---------------------+----------+
| wsrep_replicated    | 7        |
| wsrep_repl_keys     | 7        |
| wsrep_received      | 245822   |
| wsrep_local_commits | 0        |
| wsrep_local_replays | 0        |
| wsrep_apply_oooe    | 0.000201 |
| wsrep_apply_oool    | 0.000074 |
| wsrep_apply_window  | 1.000209 |
| wsrep_commit_oooe   | 0.000000 |
| wsrep_commit_oool   | 0.000000 |
| wsrep_commit_window | 1.000057 |
| wsrep_local_state   | 0        |
| wsrep_causal_reads  | 0        |
| wsrep_cert_interval | 0.009545 |
| wsrep_cluster_size  | 1        |
| wsrep_connected     | ON       |
| wsrep_local_index   | 0        |
| wsrep_provider_name | Galera   |
| wsrep_ready         | OFF      |
+---------------------+----------+
(rows whose names or values were truncated in the original report are omitted)
-----------
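The unhealthy state in the output above can be detected programmatically. Below is a minimal, illustrative sketch (not Fuel code); the `galera_healthy` helper is hypothetical and assumes the tab-separated output of `mysql -NB -e "SHOW STATUS LIKE 'wsrep%';"`:

```python
# Hypothetical health check for a Galera node, based on the wsrep
# status variables shown above. Assumes tab-separated name/value
# pairs, one per line, as produced by: mysql -NB -e "SHOW STATUS ..."

def galera_healthy(status_output: str) -> bool:
    """Return True only if the node is part of a Primary component
    and ready to accept queries."""
    status = {}
    for line in status_output.strip().splitlines():
        name, _, value = line.partition("\t")
        status[name] = value
    return (
        status.get("wsrep_cluster_status") == "Primary"
        and status.get("wsrep_connected") == "ON"
        and status.get("wsrep_ready") == "ON"
    )

# The node in this report is connected but non-Primary and not ready:
sample = "wsrep_cluster_status\tnon-Primary\nwsrep_connected\tON\nwsrep_ready\tOFF"
print(galera_healthy(sample))  # False
```

A node in this state refuses queries even though `wsrep_connected` is ON, which matches the 500 errors seen from the OpenStack CLI.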
According to the HA reference architecture (http://docs.mirantis.com/fuel/fuel-6.0/reference-architecture.html#openstack-environment-architecture), this test case is invalid. When you deploy 3 controllers, you must maintain a quorum, which is 2 nodes, in order to keep your cluster operating. That means the failover procedure can succeed only in the 3-1 case, but will fail in the 3-2 case.
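The quorum arithmetic behind this verdict can be illustrated in a few lines (an illustrative sketch, not Fuel code):

```python
def quorum(cluster_size: int) -> int:
    # A majority of the configured nodes is required to keep a
    # Primary component: floor(n / 2) + 1.
    return cluster_size // 2 + 1

def has_quorum(cluster_size: int, nodes_up: int) -> bool:
    return nodes_up >= quorum(cluster_size)

print(has_quorum(3, 2))  # True:  3-1 case, failover can succeed
print(has_quorum(3, 1))  # False: 3-2 case, survivor goes non-Primary
```

With 3 controllers the quorum is 2, so shutting down 2 of them leaves the last node below quorum, exactly the non-Primary state observed in the report.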
Note: we have an unresolved documentation bug about the missing list of supported failover cases:
https://bugs.launchpad.net/bugs/1415398
https://bugs.launchpad.net/fuel/+bug/1326605