Just wanted to clarify the behaviour described in my last comment. It is not related to this issue, but to keepalived's VRRP implementation, which preempts equal-priority BACKUP nodes when a higher IP address node comes online again (see https://github.com/acassen/keepalived/issues/107).
To avoid this unnecessary failback (both L2 and L3 services can run on either node), I've configured one of my two nodes with a higher priority, as suggested here: http://serverfault.com/a/579979 . Nonetheless, if I reboot the higher-priority node, it still preempts the other node when it comes back online.
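For reference, this is roughly what that workaround looks like in keepalived.conf — a minimal sketch, where the instance name, interface, router ID, priorities and VIP are all illustrative, not taken from the generated config:

```
# Node A — preferred MASTER (higher priority)
vrrp_instance VI_1 {
    state BACKUP          # both nodes start as BACKUP
    interface eth0
    virtual_router_id 51
    priority 150          # higher than node B's
    advert_int 1
    virtual_ipaddress {
        10.0.0.100/24
    }
}

# Node B — identical block, except:
#     priority 100
```

In principle, adding `nopreempt` (only valid together with `state BACKUP`) is keepalived's knob for suppressing this kind of failback; whether it actually helps here may depend on the keepalived version (see the linked issue).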
This is obviously a keepalived limitation, but I wanted to make it clear that with the default generated keepalived.conf we are experiencing VIP flapping (and extra downtime).