DDL statements against slave_master_info and slave_relay_log_info tables are replicated

Bug #1523850 reported by Przemek
This bug affects 2 people
Affects: Percona XtraDB Cluster (moved to https://jira.percona.com/projects/PXC); status tracked in 5.6
Series 5.6: Status: Confirmed, Importance: Undecided, Assigned to: Unassigned

Bug Description

In normal MySQL async replication, the slave_master_info and slave_relay_log_info tables are excluded from being written to the binlog, so changes to them are not replicated for either DML or DDL statements.
In Galera replication this holds only for DML statements, while DDL statements against these tables are replicated. Even though I wasn't able to break async replication by, for example, truncating these tables from another node, this can potentially lead to problems.
To stay consistent with MySQL, Galera should exclude these tables from replication for DDL statements as well.
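
Until that happens, a possible session-level workaround (my suggestion, not part of the report, and to be used with care since it deliberately lets the local node diverge) is to switch off wsrep replication before issuing such a DDL statement:

-- run only on the node that should be affected; wsrep_on is a
-- standard Galera/PXC session variable:
SET SESSION wsrep_on = OFF;
TRUNCATE mysql.slave_relay_log_info;
SET SESSION wsrep_on = ON;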

Example:
-- async slave node:
percona1 mysql> select * from mysql.slave_master_info\G
*************************** 1. row ***************************
       Number_of_lines: 23
       Master_log_name: binlog.000004
        Master_log_pos: 4113
                  Host: 192.168.88.44
             User_name: repl
         User_password: repl
                  Port: 3306
         Connect_retry: 60
           Enabled_ssl: 0
                Ssl_ca:
            Ssl_capath:
              Ssl_cert:
            Ssl_cipher:
               Ssl_key:
Ssl_verify_server_cert: 0
             Heartbeat: 1800
                  Bind:
    Ignored_server_ids: 0
                  Uuid: 12446bf7-3219-11e5-9434-080027079e3d
           Retry_count: 86400
               Ssl_crl:
           Ssl_crlpath:
 Enabled_auto_position: 0
1 row in set (0.00 sec)

percona1 mysql> select * from mysql.slave_relay_log_info\G
*************************** 1. row ***************************
  Number_of_lines: 7
   Relay_log_name: ./percona1-relay-bin.000021
    Relay_log_pos: 4
  Master_log_name: binlog.000004
   Master_log_pos: 4113
        Sql_delay: 0
Number_of_workers: 0
               Id: 1
1 row in set (0.00 sec)

-- other node in same cluster:
percona3 mysql> truncate mysql.slave_relay_log_info;
Query OK, 0 rows affected (0.04 sec)

percona3 mysql> truncate mysql.slave_master_info\G
Query OK, 0 rows affected (0.04 sec)

-- async slave node:

percona1 mysql> select * from mysql.slave_relay_log_info\G
Empty set (0.00 sec)

percona1 mysql> select * from mysql.slave_master_info\G
Empty set (0.00 sec)

percona1 mysql> show slave status\G
*************************** 1. row ***************************
               Slave_IO_State: Waiting for master to send event
                  Master_Host: 192.168.88.44
                  Master_User: repl
                  Master_Port: 3306
                Connect_Retry: 60
              Master_Log_File: binlog.000004
          Read_Master_Log_Pos: 4113
               Relay_Log_File: percona1-relay-bin.000022
                Relay_Log_Pos: 351
        Relay_Master_Log_File: binlog.000004
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
...

(Fortunately, these tables are re-created on the async slave on the next transaction received from the async master, or after a service restart.)
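
One way to force repopulation without waiting for new transactions (my assumption based on standard table-repository behaviour, not something tested in this report) is to restart the replication threads, which flushes the in-memory coordinates back to the tables:

-- on the async slave; assumes master_info_repository=TABLE and
-- relay_log_info_repository=TABLE are in effect:
STOP SLAVE;
START SLAVE;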

percona1 mysql> select @@version,@@version_comment;
+--------------------+---------------------------------------------------------------------------------------------+
| @@version          | @@version_comment                                                                           |
+--------------------+---------------------------------------------------------------------------------------------+
| 5.6.26-74.0-56-log | Percona XtraDB Cluster (GPL), Release rel74.0, Revision 1, WSREP version 25.12, wsrep_25.12 |
+--------------------+---------------------------------------------------------------------------------------------+
1 row in set (0.00 sec)
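
For context (my assumption, not stated in the report): Galera replicates DDL through total order isolation (TOI) by default, which does not go through the binlog filtering that screens out these tables, whereas DML reaches other nodes as row events and is filtered. The schema upgrade method in use can be checked with:

-- standard Galera/PXC variable; TOI is the default:
SHOW GLOBAL VARIABLES LIKE 'wsrep_OSU_method';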

Revision history for this message
Fernando Laudares Camargos (fernando-laudares) wrote:

I was able to verify this with:

mysql> select @@version;
+--------------------+
| @@version          |
+--------------------+
| 5.6.26-74.0-56-log |
+--------------------+
1 row in set (0.00 sec)

I've bootstrapped the cluster out of node 1 (flc-node1), which is an async replica of an external MySQL server. Node 2 (flc-node2) did an SST with node 1 as donor when it joined the cluster, so it got the contents of the mysql.slave_relay_log_info table from it:

flc-node2> select * from mysql.slave_relay_log_info\G
*************************** 1. row ***************************
  Number_of_lines: 7
   Relay_log_name: ./flc-node1-relay-bin.000001
    Relay_log_pos: 4
  Master_log_name:
   Master_log_pos: 0
        Sql_delay: 0
Number_of_workers: 0
               Id: 1
1 row in set (0.19 sec)

Note Relay_log_name, which is unique to node 1. I then truncated the slave_relay_log_info table on node 2, which affected node 1 as well. When node 3 was started it didn't get any contents in that table from SST. The table has since been re-populated on node 1 by subsequent replication events:

flc-node1> select * from mysql.slave_relay_log_info\G
*************************** 1. row ***************************
  Number_of_lines: 7
   Relay_log_name: ./flc-node1-relay-bin.000002
    Relay_log_pos: 1161
  Master_log_name: percona-bin.000003
   Master_log_pos: 2236
        Sql_delay: 0
Number_of_workers: 0
               Id: 1
1 row in set (0.00 sec)

but this has not affected the contents of the same table on nodes 2 and 3. That is, until one of those nodes needs to do SST again and node 1 is chosen as donor.

However, IMHO nodes should be identical; by design there's no provision for having a Galera node act as an async replica, and this should rather be documented.
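
A quick way to spot this kind of divergence (my own suggestion, not from the report) is to run the same query on every node and compare the results; a Relay_log_name that embeds another node's hostname indicates state that leaked in via SST:

-- run on each node and diff the output across the cluster:
SELECT Relay_log_name, Master_log_name, Master_log_pos
  FROM mysql.slave_relay_log_info;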

Revision history for this message
Shahriyar Rzayev (rzayev-sehriyar) wrote:

Percona now uses JIRA for bug reports, so this bug report has been migrated to: https://jira.percona.com/browse/PXC-1869
