Slave with MTS enabled may get broken after restart
| Affects | Status | Importance | Assigned to | Milestone |
|---|---|---|---|---|
| Percona Server (moved to https://jira.percona.com/projects/PS) | Fix Released | High | Unassigned | |
| 5.6 | Fix Released | High | Unassigned | |
Bug Description
If you restart or shut down a MySQL slave with MTS enabled, the slave can break. Please also take a look at report #1420606. Version 5.6.24-72.2 was reported to fix this bug, but the problem still exists, at least in our case. Since we became aware of this behavior, we always run "STOP SLAVE;" before shutting down or restarting the server.
For privacy reasons we cannot provide the relay logs. In addition, we are running row-based replication, so I don't know whether the logs would even help.
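The workaround mentioned above can be sketched as follows. This is only an illustration, not part of the original report; the exact shutdown command depends on your service manager:

```sql
-- Workaround sketch: run on the slave before any planned restart.
-- Stopping replication first lets the MTS worker threads finish their
-- queued transactions, so no execution gaps are left behind.
STOP SLAVE;

-- ... shut down or restart mysqld here, e.g. via your service manager ...

-- After the server is back up, resume replication manually
-- (the configs below set skip-slave-start = 1, so the slave does
-- not start replicating on its own):
START SLAVE;
```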
Configuration on the master:
[mysqld]
port = 6033
user = mysql
basedir = /usr/local
datadir = /zdata/db
slave_load_tmpdir = /zdata/tmp
tmpdir = /ramdisk
bind-address = 0.0.0.0
socket = /var/run/
slow_query_log_file = /var/log/
slow_query_log = 1
slow_query_
long_query_time = 4
log_slow_verbosity = full
log_queries_
log-slow-
min_examined_
server-id = 2
innodb_
innodb_
innodb_
innodb_
innodb_
table_open_
innodb_
innodb_
innodb_doublewrite = 0
innodb_
innodb_
innodb_flush_method = O_DIRECT
innodb_
innodb_
innodb_status_file = 1
innodb_
innodb_
innodb_
innodb_
innodb_io_capacity = 10000
innodb_
innodb_
innodb_
thread_handling = pool-of-threads
slave-parallel-
skip-slave-start = 1
log_slave_updates = 1
sync_binlog = 0
max_connections = 10000
log_bin = /zdata/
max_binlog_size = 512M
binlog_format = ROW
binlog_row_image = MINIMAL
expire_logs_days = 5
relay-log = master-relay-bin
relay_log_
relay_log_recovery = 1
master_
slave_net_timeout = 60
read_only = 0
table_open_cache = 4096
table_definitio
query_cache_size = 0
query_cache_type = 0
key_buffer_size = 256M
join_buffer_size = 256K
sort_buffer_size = 256K
max_heap_table_size = 1G
tmp_table_size = 1G
max_allowed_packet = 64M
slave-pending-
innodb_
innodb_
log_warnings = 1
skip-host-cache
skip-name-resolve
performance_schema = 0
slave_compresse
Configuration on the slave:
[mysqld]
port = 6033
user = mysql
basedir = /usr/local
datadir = /zdata/db
slave_load_tmpdir = /zdata/db
tmpdir = /zdata/db
bind-address = 0.0.0.0
socket = /var/run/
slow_query_log_file = /var/log/
slow_query_log = 1
slow_query_
long_query_time = 4
log_slow_verbosity = full
log_queries_
log-slow-
min_examined_
server-id = 4
innodb_
innodb_
innodb_
innodb_
innodb_
table_open_
innodb_
innodb_
innodb_doublewrite = 0
innodb_
innodb_
innodb_flush_method = O_DIRECT
innodb_
innodb_
innodb_status_file = 1
innodb_
innodb_
innodb_io_capacity = 2000
innodb_
innodb_
thread_handling = pool-of-threads
innodb_
slave-parallel-
skip-slave-start = 1
log_slave_updates = 0
sync_binlog = 0
max_connections = 1000
log_bin = /zdata/
max_binlog_size = 512M
binlog_format = ROW
binlog_row_image = MINIMAL
expire_logs_days = 3
relay-log = /zdata/
relay_log_
relay_log_recovery = 1
master_
slave_net_timeout = 60
read_only = 1
table_open_cache = 4096
table_definitio
query_cache_size = 0
query_cache_type = 0
key_buffer_size = 256M
join_buffer_size = 256K
sort_buffer_size = 256K
max_heap_table_size = 256M
tmp_table_size = 256M
max_allowed_packet = 64M
slave-pending-
innodb_
innodb_
log_warnings = 1
skip-host-cache
skip-name-resolve
performance_schema = 0
slave_compresse
tags: added: regression
Please provide some more details about the way the slave is "broken". Do you get errors when the slave starts? Can you share any error messages from the error log, or the output of
show slave status\G
that demonstrates the problem?