Two clusters when turn off mgmt network

Bug #1348548 reported by Michael Kraynov on 2014-07-25
This bug affects 3 people
Affects              Importance  Assigned to
Fuel for OpenStack   High        Fuel QA Team
4.1.x                Medium      Fuel Library (Deprecated)
5.0.x                Medium      Fuel Library (Deprecated)

Bug Description

VERSION:
  release: "4.0"
  fuellib_sha: "098f381ff8a528a39d3b6f17ea70955baeb159e8"
  nailgun_sha: "ac02e18990cd652db6577ce42bdea9838076c63c"
  astute_sha: "8b2059a37be9bd82df49f684822727b4df4c511b"
  ostf_sha: "83ada35fec2664089e07fdc0d34861ae2a4d948a"
  fuelmain_sha: "17eed776b30886851ae0042fa7a30184f5cd8eb6"

Steps to reproduce:
3 controllers in HA mode
2 compute nodes

crm status
Last updated: Fri Jul 25 07:46:11 2014
Last change: Thu Jul 24 23:42:46 2014 via cibadmin on node-3
Stack: openais
Current DC: node-1 - partition with quorum
Version: 1.1.8-f722cf1
3 Nodes configured, 3 expected votes
17 Resources configured.

Online: [ node-1 node-2 node-3 ]

 vip__management_old (ocf::heartbeat:IPaddr2): Started node-1
 vip__public_old (ocf::heartbeat:IPaddr2): Started node-2
 Clone Set: clone_p_haproxy [p_haproxy]
     Started: [ node-1 node-2 node-3 ]
 Clone Set: clone_p_mysql [p_mysql]
     Started: [ node-1 node-2 node-3 ]
 Clone Set: clone_p_neutron-plugin-openvswitch-agent [p_neutron-plugin-openvswitch-agent]
     Started: [ node-1 node-2 node-3 ]
 Clone Set: clone_p_neutron-metadata-agent [p_neutron-metadata-agent]
     Started: [ node-1 node-2 node-3 ]
 p_neutron-dhcp-agent (ocf::mirantis:neutron-agent-dhcp): Started node-1
 p_neutron-l3-agent (ocf::mirantis:neutron-agent-l3): Started node-3
 heat-engine (ocf::mirantis:heat-engine): Started node-3

Turn off the mgmt network on one controller.
After that we have two clusters with two VIPs, two DHCP agents and two L3 agents.
Then turn the mgmt network back on. All services except the L3 agent are restarted successfully, but crm kills both L3 agents and does not launch one on any node.
In the crm config we have:
no-quorum-policy="ignore"
It means that the cluster keeps running resources even without quorum.
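
A quick way to check the active value on any controller (assuming the standard Pacemaker CLI tools are installed):

# Query the cluster-wide quorum policy (read-only)
crm_attribute --type crm_config --name no-quorum-policy --query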

summary: - Two clsuters when turn off mgmt network
+ Two clusters when turn off mgmt network
Changed in fuel:
importance: Undecided → Medium
assignee: nobody → Fuel Library Team (fuel-library)
milestone: none → 4.1.2
status: New → Won't Fix
Mike Scherbakov (mihgen) on 2014-07-28
Changed in fuel:
status: Won't Fix → New
Bogdan Dobrelya (bogdando) wrote :

Perhaps we should never use no-quorum-policy="ignore" in production, since we want split-brain handling instead of "let it stay split".
The other concern is built-in fencing: in order to detect split-brain partitions and prevent them from operating, it must be enabled and configured as well.
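
As a rough sketch (crm shell syntax; the external/ipmi agent, address, and credentials below are placeholders, not Fuel defaults):

# Enable fencing cluster-wide and define one STONITH resource per node
crm configure property stonith-enabled=true
crm configure primitive stonith-node-1 stonith:external/ipmi \
    params hostname=node-1 ipaddr=10.20.0.11 userid=admin passwd=secret interface=lan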

Dmitry Pyzhov (dpyzhov) on 2014-07-28
no longer affects: fuel/5.1.x
Vladimir Kuklin (vkuklin) wrote :

The workaround is to set no-quorum-policy to 'stop' on the cluster.
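
For example (a cluster-wide property, run on any one controller):

crm configure property no-quorum-policy=stop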

tags: added: release-notes
Changed in fuel:
assignee: Fuel Library Team (fuel-library) → Bogdan Dobrelya (bogdando)
Pavel Vaylov (pvaylov) wrote :

Colleagues, one more question:

Should we set quorum to "2"?

We have the variable expected-quorum-votes="3", but it looks like it is not really the quorum; it shows how many nodes are in the cluster.
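
For illustration only (not a value to set by hand): a partition is quorate when it holds a strict majority of the expected votes, e.g. in shell:

expected_votes=3
required=$(( expected_votes / 2 + 1 ))   # 2 out of 3 for this cluster
echo "quorum needs ${required} of ${expected_votes} votes"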

Meg McRoberts (dreidellhasa) wrote :

http://docs.openstack.org/high-availability-guide/content/_setting_basic_cluster_properties.html says: "Setting no-quorum-policy="ignore" is required in 2-node Pacemaker clusters for the following reason: if quorum enforcement is enabled, and one of the two nodes fails, then the remaining node can not establish a majority of quorum votes necessary to run services, and thus it is unable to take over any resources. The appropriate workaround is to ignore loss of quorum in the cluster. This is safe and necessary only in 2-node clusters. Do not set this property in Pacemaker clusters with more than two nodes."

Fix proposed to branch: master
Review: https://review.openstack.org/110240

Changed in fuel:
status: Triaged → In Progress
Bogdan Dobrelya (bogdando) wrote :

A better solution would be to deploy with an explicit 'ignore' policy and set it to 'stop' at the post-deployment orchestration stage.
That would also allow the corosync cluster to be scaled without issues.
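
A rough sketch of such a post-deployment step (not the actual Astute task; assumes the Pacemaker CLI tools are available on a controller):

# Keep 'ignore' for 1-2 controllers, switch to 'stop' once there are 3 or more
members=$(crm_node -l | wc -l)
if [ "$members" -ge 3 ]; then
    crm configure property no-quorum-policy=stop
fi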

Change abandoned by Bogdan Dobrelya (<email address hidden>) on branch: master
Review: https://review.openstack.org/110240

Bogdan Dobrelya (bogdando) wrote :

Pavel, we should not manipulate the expected quorum votes; let it be driven by the corosync cluster management logic.

Changed in fuel:
assignee: Bogdan Dobrelya (bogdando) → Dmitry Borodaenko (dborodaenko)

Reviewed: https://review.openstack.org/110602
Committed: https://git.openstack.org/cgit/stackforge/fuel-astute/commit/?id=234bcd189bf2666ece892e5c56e72d8376fc7f84
Submitter: Jenkins
Branch: master

commit 234bcd189bf2666ece892e5c56e72d8376fc7f84
Author: Bogdan Dobrelya <email address hidden>
Date: Wed Jul 30 13:45:49 2014 +0300

    Set no quorum policy stop at post deployment

    The corosync cluster should be deployed with the 'ignore' policy,
    and puppet ensures that. Later, if there are 3 or more controllers
    in the cluster, its policy should be updated to 'stop'
    (or 'suicide', if fencing is enabled) in order to handle split-brain
    scenarios as appropriate. Otherwise, if there are 2 or fewer
    controllers in the cluster, the quorum policy should not be changed.

    Closes-bug: #1348548

    Change-Id: Iae2df7736992edcef4ae72f967af9f504d56612b
    Signed-off-by: Bogdan Dobrelya <email address hidden>

Changed in fuel:
status: In Progress → Fix Committed
Bogdan Dobrelya (bogdando) wrote :

We have to verify the 'freeze' policy as well (there was an update on this issue from Miroslav Anashkin) and check whether the corosync cluster will recover from partitioning with the 'stop' policy, or whether we should use 'freeze' instead.

Changed in fuel:
assignee: Dmitry Borodaenko (dborodaenko) → Bogdan Dobrelya (bogdando)
Changed in fuel:
status: Fix Committed → Confirmed
Changed in fuel:
status: Confirmed → In Progress
Bogdan Dobrelya (bogdando) wrote :

I verified the 'stop' policy, and I can confirm that without fencing enabled the cluster remains in a broken/partitioned state even after connectivity is recovered. But if a hard-reset fencing action is issued, the cluster exits the partitioned state after the fenced node has been rebooted. Details (logs and spreadsheet): http://goo.gl/Xd4KhQ
I will put more info about the 'freeze' policy once ready.

tags: added: ha
Bogdan Dobrelya (bogdando) wrote :

Related bug for the RabbitMQ cluster: https://bugs.launchpad.net/fuel/+bug/1354319

Bogdan Dobrelya (bogdando) wrote :

I was inaccurate in the conclusion made at https://bugs.launchpad.net/fuel/+bug/1348548/comments/14.
Both the 'stop' and the 'freeze' policy showed no problems with the corosync cluster in either the partitioned or the recovered state.
The only caveat is that some nodes will end up with a stopped corosync service once the cluster recovers from partitioning, and the only way to handle that is either to reboot them manually or to configure fencing. Please see http://goo.gl/Xd4KhQ for details. The spreadsheet linked inside was updated and double-checked :-)
I will provide the 3rd case as well, the 'ignore' policy; it would be nice to have it...

Changed in fuel:
status: In Progress → Fix Committed
tags: added: split-brain
Changed in fuel:
status: Fix Committed → Triaged
Bogdan Dobrelya (bogdando) wrote :

Due to the Corosync quorum bug described at http://lists.corosync.org/pipermail/discuss/2013-October/002834.html, no-quorum-policy has no effect at all, and the research doc linked above confirms that as well.

In order to fix this we have to:
1) resolve the quorum bug, so that committed no-quorum-policy changes take effect;
2) use fencing (that would work around the quorum evaluation problem and fix all split-brain related issues as well);
3) disable corosync/pacemaker service autostart in order to protect the cluster from fencing loops (see the sketch below).
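
For item 3, a minimal sketch assuming the CentOS 6 / SysV init base used by Fuel at the time:

# Keep the cluster stack from starting automatically after a fencing reboot,
# so an operator can inspect the node before it rejoins the cluster
chkconfig corosync off
chkconfig pacemaker off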

Vladimir Kuklin (vkuklin) wrote :

According to Sergii Golovatiuk, this bug was fixed in corosync 1.x; we just need to find the patch.

Bogdan Dobrelya (bogdando) wrote :

Yes, the next action item is to try this patch and update our corosync packages, if it fixes the quorum evaluation bug.

Bogdan Dobrelya (bogdando) wrote :

The patch did not fix the issue; Vladimir will try a corosync 1.4.7 build to reproduce the quorum evaluation bug.

Changed in fuel:
assignee: Bogdan Dobrelya (bogdando) → Vladimir Kuklin (vkuklin)
Vladimir Kuklin (vkuklin) wrote :

According to this discussion, http://lists.corosync.org/pipermail/discuss/2014-August/003292.html, we should not use ifdown for tests, but turn the port off on the bridge instead.

I tested it manually on my virtual environment by turning off the VM's port on the management bridge, and it works like a charm.
Closing the bug finally.
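
For example, from the hypervisor host (vnet0 is a placeholder for the VM's port on the management bridge):

ip link set dev vnet0 down   # emulate the management network failure
# ... observe cluster behaviour, then restore connectivity ...
ip link set dev vnet0 up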

Changed in fuel:
status: Triaged → Fix Committed
no longer affects: fuel/6.0.x
Changed in fuel:
assignee: Vladimir Kuklin (vkuklin) → Fuel QA Team (fuel-qa)
status: Fix Committed → In Progress
Vladimir Kuklin (vkuklin) wrote :

This bug should also be verified on a bare-metal environment, as the result may differ between virtual and physical environments: 'ip l down' on the host node does not result in a NO-CARRIER status on the interface inside the VM.

Dmitry Borodaenko (angdraug) wrote :

Based on comments #23 and #24, this should be Fix Committed for 5.1 and assigned to QA to verify on bare metal.

Artem Panchenko (apanchenko-8) wrote :

I have verified this bug on bare metal and confirm that the issue is already fixed. Here you can find the output of the 'pcs status' command on the controllers after shutting down the switch port connected to node-5/br-mgmt (eth0):

http://paste.openstack.org/show/97199/

As you can see, all services on the offline controller (node-5) are down and have been started on the other controllers. The OpenStack cluster was also fully operable after that.

Changed in fuel:
status: In Progress → Fix Released
Artem Panchenko (apanchenko-8) wrote :

It was verified on

api: '1.0'
astute_sha: 8e1db3926b2320b30b23d7a772122521b0d96166
auth_required: true
build_id: 2014-08-18_11-13-09
build_number: '449'
feature_groups:
- mirantis
fuellib_sha: 2c9ad4aec9f3b6fc060cb5a394733607f07063c1
fuelmain_sha: 08f04775dcfadd8f5b438a31c63e81f29276b7d3
nailgun_sha: bc9e377dbe010732bc2ba47161ed9d433998e07b
ostf_sha: d2a894d228c1f3c22595a77f04b1e00d09d8e463
production: docker
release: '5.1'
