Comment 15 for bug 1529937

Jerzy Mikolajczak (jmikolajczak-b) wrote:

I'm posting an update here from https://review.openstack.org/323797

@Bogdan,

I'm guessing that you needed "dedicated IPs" for the peers sync option.
I propose not to use the peers option and to keep the mysqld stick table local.

So we have a 3-controller deployment, which results in an /etc/haproxy/conf.d/110-mysqld.cfg file like this:

server node-2 192.168.0.3:3307 check port 49000 inter 20s fastinter 2s downinter 2s rise 3 fall 3
server node-3 192.168.0.6:3307 backup check port 49000 inter 20s fastinter 2s downinter 2s rise 3 fall 3
server node-4 192.168.0.4:3307 backup check port 49000 inter 20s fastinter 2s downinter 2s rise 3 fall 3
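
For context, those server lines sit inside a listen block roughly like the one below. This is only a sketch: the bind address/port and the extra options are assumptions, not a copy of the Fuel template; the point is the stick-table and stick directives that keep the table local:

  listen mysqld
    bind 192.168.0.2:3306        # management VIP (port is an assumption)
    mode tcp
    stick-table type ip size 1   # one-row table, local (no "peers" argument)
    stick on dst                 # keyed on the VIP, hence a single entry
    timeout client 48h
    timeout server 48h
    <the three server lines shown above>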

node-2 is the current owner of vip__management.

With my settings enabled, the mysqld stick table at node-2 looks like this:
# table: mysqld, type: ip, size:1, used:1
0x2951104: key=192.168.0.2 use=0 exp=0 server_id=1
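
For reference, such a dump can be pulled from the HAProxy admin socket with the runtime API's "show table" command (the socket path below is deployment-specific):

  echo "show table mysqld" | socat stdio /var/lib/haproxy/stats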

and for the rest of the nodes it's empty, as they are backup servers and receive no traffic.

So what happens when node-2 goes down?
1) vip__management is moved to node-3
2) HAProxy on node-3 detects the failure of node-2 and routes traffic to node-3
3) the mysqld stick table at node-3 is updated and now looks like this:

# table: mysqld, type: ip, size:1, used:1
0x2951104: key=192.168.0.2 use=0 exp=0 server_id=2

So what happens when node-2 comes back?
1) vip__management stays on node-3
2) the mysqld stick table at node-3 still shows server_id=2, so traffic keeps going to node-3

So what happens when node-3 goes down?
1) vip__management is moved to node-2
2) HAProxy on node-2 detects the failure of node-3, but node-3 is just one of the backup servers
3) the mysqld stick table at node-2 is updated to:

# table: mysqld, type: ip, size:1, used:1
0x2951104: key=192.168.0.2 use=0 exp=0 server_id=1

To sum up, I propose not to use:
- the nopurge option, which would prevent further updates of the one-row mysqld stick table
- the peers option, as there is no need to sync the mysqld table across nodes
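
In config terms, the proposal boils down to declaring the table with nothing but its type and size (a sketch; "mypeers" is a made-up peers section name, used only to illustrate the rejected variants):

  stick-table type ip size 1                  # proposed: plain local table
  stick on dst

  # rejected variants:
  # stick-table type ip size 1 nopurge        # would freeze the one existing row
  # stick-table type ip size 1 peers mypeers  # needless cross-node sync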