keepalived brings the VIP up in multiple places on a controller reboot

Bug #1367742 reported by jan grant
This bug affects 4 people
Affects: tripleo
Status: Expired
Importance: High
Assigned to: Unassigned

Bug Description

keepalived may be racing network reconfiguration on reboot. In combination with the default quorum setting, this leads to a split-brain situation.

One trace of this is the "Netlink: filter function error" report from keepalived in syslog; I think this is down to OVS reconfiguring the network asynchronously, which seems to confuse keepalived (version 1.2.13-1).
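
For reference, a quick way to check whether a rebooted controller has hit this (assuming syslog lands in /var/log/syslog on these images):

$ grep 'Netlink: filter function error' /var/log/syslog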

The _consequence_ of this is that occasionally, when a controller node that's hosting the VIP is rebooted, the VIP moves correctly - but then the original node comes back up and (for some reason) cannot see the remainder of the cluster, so it brings the VIP back up on itself as well.

I think there are two problems here. One is the keepalived race (if you SIGHUP keepalived on the rebooted host, it recovers correctly). The second is that keepalived should probably be configured with a quorum greater than the default of 1 for this situation, which would prevent a split-brained keepalived from hosing the working half of the cluster.
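
To put the second suggestion in concrete terms: keepalived's quorum directive applies to LVS virtual_server blocks rather than VRRP instances, so for a VRRP-managed VIP the closest equivalent is a tracked peer check plus nopreempt. A minimal sketch of keepalived.conf along those lines (the interface, router id and check script are hypothetical; the VIP and instance name are taken from this report):

vrrp_script chk_peers {
    script "/usr/local/bin/chk_peers.sh"   # hypothetical check: exit 0 only if a majority of peers respond
    interval 2
}

vrrp_instance VI_CONTROL {
    state BACKUP
    interface br-ex                        # assumption: the OVS bridge carrying the VIP
    virtual_router_id 51                   # hypothetical
    priority 100
    nopreempt                              # don't reclaim the VIP straight after a reboot
    virtual_ipaddress {
        10.22.170.105                      # the VIP from the transcript below
    }
    track_script {
        chk_peers                          # hold the instance in FAULT while the peer check fails
    }
}

With state BACKUP and nopreempt, a rebooted node does not immediately reclaim MASTER, and the tracked script can keep the instance down until the node can actually see its peers.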

A reproducer (which only works some of the time, alas):

$ for ii in 107 109 104; do ssh heat-admin@10.22.170.$ii "hostname; hostname -I"; done
overcloud-controller0-zjzuh6rff6n3
10.22.170.107 10.22.170.105
overcloud-controller1-pbxzr2p3ydyj
10.22.170.109
overcloud-controller2-yf6sbwoho5sm
10.22.170.104

$ nova stop 3d37ae90-0e0f-47e2-94f8-08482e8bd255

$ for ii in 107 109 104; do ssh -i .ssh/seed_id_rsa heat-admin@10.22.170.$ii "hostname; hostname -I"; done
ssh: connect to host 10.22.170.107 port 22: No route to host
overcloud-controller1-pbxzr2p3ydyj
10.22.170.109
overcloud-controller2-yf6sbwoho5sm
10.22.170.104

$ for ii in 109 104; do ssh -i .ssh/seed_id_rsa heat-admin@10.22.170.$ii "hostname; hostname -I"; done
overcloud-controller1-pbxzr2p3ydyj
10.22.170.109 10.22.170.105
overcloud-controller2-yf6sbwoho5sm
10.22.170.104

$ nova start 3d37ae90-0e0f-47e2-94f8-08482e8bd255

$ for ii in 107 109 104; do ssh -i .ssh/seed_id_rsa heat-admin@10.22.170.$ii "hostname; hostname -I"; done
overcloud-controller0-zjzuh6rff6n3
10.22.170.107 10.22.170.105
overcloud-controller1-pbxzr2p3ydyj
10.22.170.109 10.22.170.105
overcloud-controller2-yf6sbwoho5sm
10.22.170.104

The rebooted controller0 shows the keepalived "Netlink: filter function error" trace, which is often harmless but does at least show that OVS has been having a tinker under the hood.
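
For the record, the SIGHUP recovery mentioned above is just the following (the pid file path is the stock default and may differ on these images):

$ sudo kill -HUP $(cat /var/run/keepalived.pid)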

jan grant (jan-grant) wrote:

https://review.openstack.org/120443 is a really clunky workaround.

Changed in tripleo:
status: New → Triaged
importance: Undecided → High
Julia Kreger (juliaashleykreger) wrote:

It appears that the issue the workaround was targeted at is still present in upstream TripleO. I'm presently able to reproduce this reliably with tripleo as of 20150121, when performing an upgrade sequence with tripleo-ansible.

keepalived 1.2.7 on Ubuntu 14.04.1 LTS.

Since the netlink interface is used to add and remove the addresses, the HUP only seems to trigger the notify logic; the VIP address(es), if they already exist, are not removed from the host.
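
Using the VIP from the reproducer above as an example, this is easy to confirm on an affected host, since the address survives the HUP:

$ ip addr show | grep 10.22.170.105    # the stale VIP is still configured after the HUP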

Tripleo-ansible brings down controller0, brings it back up to an operational state, and then repeats the process with controllers 1 and 2. Upon reboot the address can flip-flop because of these settings, and in the process one of the vrrp child instances of keepalived can crash, which leaves the IP address present on the host, never to be removed unless keepalived is fully restarted. Fully restarting keepalived resolves it, but that is not really an option because of races that can occur with cluster startup sequences.
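
The full restart that clears the stale address is just the usual (assuming the stock init script on 14.04), though as noted it races with cluster startup:

$ sudo service keepalived restart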

Changed in tripleo:
assignee: nobody → Julia Kreger (juliaashleykreger)
Julia Kreger (juliaashleykreger) wrote:

On Ubuntu 14.04.1 with keepalived 1.2.7, this is what we see:

Jan 27 22:05:01 overcloud-controller2-jylbes275u3w Keepalived_vrrp[10679]: Using LinkWatch kernel netlink reflector...
Jan 27 22:05:01 overcloud-controller2-jylbes275u3w Keepalived_vrrp[10679]: cant do IP_DROP_MEMBERSHIP errno=Bad file descriptor (9)
Jan 27 22:05:01 overcloud-controller2-jylbes275u3w Keepalived_vrrp[10679]: VRRP_Instance(VI_CONTROL) Entering BACKUP STATE

The failure at IP_DROP_MEMBERSHIP is a result of the file descriptor for the netlink interface within keepalived having been closed, so it cannot be used to remove the address. Keepalived has no internal mechanism to correct this, so the error message is logged but the address is never removed.

Interpreting the commit message for https://github.com/acassen/keepalived/commit/afea07bd94384c8ac8125e8cdbfd18bc4a46b14e, which appears related, we seem to be losing the socket because we need to reload to prevent the VIP from being taken down during controller initialization. In fact, we may need to revert the previous fix, as it may just be compounding the problem further.

Changed in tripleo:
assignee: Julia Kreger (juliaashleykreger) → nobody
Steven Hardy (shardy) wrote: potentially EOL bug

This bug was reported against an old version of TripleO, and may no longer be valid.

Since it was reported before the start of the liberty cycle (and our oldest stable branch is stable/liberty), I'm marking this incomplete.

Please reopen this (change the status from incomplete) if the bug is still valid on a currently supported (stable/liberty, stable/mitaka or trunk) version of TripleO, thanks!

Changed in tripleo:
status: Triaged → Incomplete
Launchpad Janitor (janitor) wrote:

[Expired for tripleo because there has been no activity for 60 days.]

Changed in tripleo:
status: Incomplete → Expired