DDL statements against slave_master_info and slave_relay_log_info tables are replicated
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
Percona XtraDB Cluster (moved to https://jira.percona.com/projects/PXC) | Status tracked in 5.6 | | |
5.6 | Confirmed | Undecided | Unassigned |
Bug Description
In normal MySQL async replication, the slave_master_info and slave_relay_log_info tables are excluded from replication.
In Galera replication this is true only for DML statements, while DDL statements are replicated. Even though I wasn't able to break async replication by, for example, truncating these tables from other nodes, this can potentially lead to problems.
To stay consistent with MySQL, Galera should also exclude these tables from replication for DDL statements.
Example:
-- async slave node:
percona1 mysql> select * from mysql.slave_master_info\G
*************************** 1. row ***************************
...
Ssl_verify_server_cert: ...
    Ignored_server_ids: ...
 Enabled_auto_position: ...
1 row in set (0.00 sec)
percona1 mysql> select * from mysql.slave_relay_log_info\G
*************************** 1. row ***************************
Number_of_lines: 7
Relay_log_name: ./percona1-
Relay_log_pos: 4
Master_log_name: binlog.000004
Master_log_pos: 4113
Sql_delay: 0
Number_of_workers: 0
Id: 1
1 row in set (0.00 sec)
-- other node in same cluster:
percona3 mysql> truncate mysql.slave_master_info;
Query OK, 0 rows affected (0.04 sec)
percona3 mysql> truncate mysql.slave_relay_log_info;
Query OK, 0 rows affected (0.04 sec)
-- async slave node:
percona1 mysql> select * from mysql.slave_master_info\G
Empty set (0.00 sec)
percona1 mysql> select * from mysql.slave_relay_log_info\G
Empty set (0.00 sec)
percona1 mysql> show slave status\G
*************************** 1. row ***************************
...
(These tables are fortunately re-created on async slave on next transaction received from async master or after service restart.)
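Where the tables do not repopulate on their own, a possible workaround (a hedged sketch, assuming the standard MySQL 5.6 TABLE repositories are in use; not verified in this report) is to restart the slave threads, which makes MySQL flush its replication state back into these tables:

```sql
-- On the async slave node. Assumes master_info_repository=TABLE and
-- relay_log_info_repository=TABLE, so slave state is stored in these tables.
STOP SLAVE;
START SLAVE;

-- Check that the replication metadata tables are populated again:
SELECT Master_log_name, Master_log_pos FROM mysql.slave_master_info;
SELECT Relay_log_name, Relay_log_pos FROM mysql.slave_relay_log_info;
```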
percona1 mysql> select @@version, @@version_comment;
+--------------------+----------------------------------------------------------------------------------------------+
| @@version          | @@version_comment                                                                            |
+--------------------+----------------------------------------------------------------------------------------------+
| 5.6.26-74.0-56-log | Percona XtraDB Cluster (GPL), Release rel74.0, Revision 1, WSREP version 25.12, wsrep_25.12 |
+--------------------+----------------------------------------------------------------------------------------------+
1 row in set (0.00 sec)
I was able to verify this with:
mysql> select @@version;
+--------------------+
| @@version          |
+--------------------+
| 5.6.26-74.0-56-log |
+--------------------+
1 row in set (0.00 sec)
I've bootstrapped the cluster from node 1 (flc-node1), which is an async replica of an external MySQL server. Node 2 (flc-node2) did an SST with node 1 as donor when it joined the cluster, so it got the contents of the mysql.slave_relay_log_info table from it:
flc-node2> select * from mysql.slave_relay_log_info\G
*************************** 1. row ***************************
Number_of_lines: 7
Relay_log_name: ./flc-node1-relay-bin.000001
Relay_log_pos: 4
Master_log_name:
Master_log_pos: 0
Sql_delay: 0
Number_of_workers: 0
Id: 1
1 row in set (0.19 sec)
Note Relay_log_name, which is unique to node 1. I then truncated the slave_relay_log_info table on node 2, affecting node 1 as well. When node 3 was started, it didn't get any contents for that table from SST. The table was later re-populated on node 1 by incoming async replication events:
flc-node1> select * from mysql.slave_relay_log_info\G
*************************** 1. row ***************************
Number_of_lines: 7
Relay_log_name: ./flc-node1-relay-bin.000002
Relay_log_pos: 1161
Master_log_name: percona-bin.000003
Master_log_pos: 2236
Sql_delay: 0
Number_of_workers: 0
Id: 1
1 row in set (0.00 sec)
but this has not affected the contents of the same table on nodes 2 and 3. That is, until one of those nodes needs to do an SST again and node 1 is chosen as donor.
However, IMHO nodes should be identical; if by design there is no place for having a Galera node act as an async replica, then this limitation should rather be documented.
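To see how far the nodes have diverged on these tables, one can compare checksums on each node (a quick diagnostic sketch; the statement is standard MySQL, but the exact divergence observed will depend on the cluster's history):

```sql
-- Run on every node; differing results confirm that the replication
-- metadata tables are no longer identical across the cluster.
CHECKSUM TABLE mysql.slave_master_info, mysql.slave_relay_log_info;
```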