Memory leak in multi-source replication when binlog_rows_query_log_events is enabled
| Affects | Status | Importance | Assigned to | Milestone |
|---|---|---|---|---|
| MySQL Server | Unknown | Unknown | | |
| Percona Server (moved to https://jira.percona.com/projects/PS) | Status tracked in 5.7 | | | |
| 5.5 | Invalid | Undecided | Unassigned | |
| 5.6 | Invalid | Undecided | Unassigned | |
| 5.7 | Fix Released | High | Vlad Lesin | |
Bug Description
This is reproducible on both MySQL Community and Percona Server 5.7.17.
The slave SQL thread leaks memory when replicating over two channels from a master-master pair; with binlog_rows_query_log_events disabled, the leak does not appear. Restarting the SQL thread does not release the memory.
Steps to reproduce using MySQL Sandbox are below. The more sysbench requests are issued, the larger the memory footprint grows, seemingly without bound.
$ ./multi_source.sh percona5.7.17 mysql ALL-MASTERS
installing node 1
installing node 2
installing node 3
installing node 4
group directory installed in $HOME/sandboxes
...
# Setting topology ALL-MASTERS
# node node1
...
cd multi_msb_
node1 [localhost] {msandbox} ((none)) > stop slave for channel 'node3'; reset slave all for channel 'node3';
node1 [localhost] {msandbox} ((none)) > stop slave for channel 'node4'; reset slave all for channel 'node4';
node2 [localhost] {msandbox} ((none)) > stop slave for channel 'node3'; reset slave all for channel 'node3';
node2 [localhost] {msandbox} ((none)) > stop slave for channel 'node4'; reset slave all for channel 'node4';
node3 [localhost] {msandbox} ((none)) > stop slave for channel 'node4'; reset slave all for channel 'node4';
node4 [localhost] {msandbox} ((none)) > stop slave for channel 'node3'; reset slave all for channel 'node3';
node4 [localhost] {msandbox} ((none)) > stop slave for channel 'node2'; reset slave all for channel 'node2';
node4 [localhost] {msandbox} ((none)) > stop slave for channel 'node1'; reset slave all for channel 'node1';
$ for i in {1..3}; do echo "binlog_
$ for i in {1..3}; do echo "performance-
$ for i in {1..3}; do echo "log_slave_
$ ./restart_all
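The three `echo` loops above are truncated in this copy. Judging from the bug title and the memory-instrumentation queries later in the report, the settings they append to each node's config were presumably along these lines (a reconstruction, not verbatim; variable spellings are assumptions):

```ini
# assumed my.sandbox.cnf additions (reconstruction, not the original lines)
binlog_rows_query_log_events = ON
performance-schema-instrument = 'memory/%=ON'
log_slave_updates = ON
```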
$ ./use_all "SELECT * FROM performance_
# (output truncated in this copy: each server prints COUNT_RECEIVED_, LAST_HEARTBEAT, RECEIVED_ and LAST_ fields per channel)
$ ./use_all "select * from sys.memory_
# server: 1:
total_allocated
332.57 MiB
# server: 2:
total_allocated
332.56 MiB
# server: 3:
total_allocated
332.64 MiB
# server: 4:
total_allocated
332.18 MiB
$ ps aux|grep multi_msb_
vsz: 712.836 MB rss: 245.57 MB --port=14418
vsz: 712.836 MB rss: 230.875 MB --port=14419
vsz: 712.836 MB rss: 237.129 MB --port=14420
node1 [localhost] {msandbox} ((none)) > create database sbtest1;
Query OK, 1 row affected (0.00 sec)
$ sysbench --num-threads=16 --max-requests=
sysbench 0.5: multi-threaded system evaluation benchmark
Creating table 'sbtest1'...
Inserting 1000000 records into 'sbtest1'
...
$ ps aux|grep multi_msb_
vsz: 756.836 MB rss: 383.094 MB --port=14418
vsz: 752.836 MB rss: 376.918 MB --port=14419
vsz: 1176.84 MB rss: 754.984 MB --port=14420
$ sysbench --num-threads=16 --max-requests=
sysbench 0.5: multi-threaded system evaluation benchmark
Running the test with following options:
Number of threads: 16
...
$ ./use_all "select * from sys.memory_
# server: 1:
total_allocated
360.05 MiB
# server: 2:
total_allocated
343.64 MiB
# server: 3:
total_allocated
1.35 GiB
$ ps aux|grep multi_msb_
vsz: 852.898 MB rss: 421 MB --port=14418
vsz: 752.836 MB rss: 375.461 MB --port=14419
vsz: 1988.84 MB rss: 1615.76 MB --port=14420
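node3's RSS grows from roughly 237 MB before the first sysbench run to about 1616 MB after the second, while the other nodes stay nearly flat. A small helper (hypothetical, not part of the sandbox scripts) makes the comparison explicit:

```shell
# rss_growth: difference between two RSS samples, in MB (hypothetical helper)
rss_growth() { awk -v a="$1" -v b="$2" 'BEGIN { printf "%.2f\n", b - a }'; }

rss_growth 237.129 1615.76   # node3 (port 14420): grows by 1378.63 MB
rss_growth 230.875 375.461   # node2 (port 14419): grows by 144.59 MB
```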
node3 [localhost] {msandbox} ((none)) > select event_name, high_number_
(table truncated in this copy: 10 per-event memory counter rows; the one legible value is memory/sql/XID at 19.00152588)
10 rows in set (0.00 sec)
node3 [localhost] {msandbox} ((none)) > select thread_id tid, user, current_count_used ccu, current_allocated ca, current_avg_alloc caa, current_max_alloc cma, total_allocated from sys.memory_
+-----+--------------------+---------+------------+-----------+------------+-----------------+
| tid | user               | ccu     | ca         | caa       | cma        | total_allocated |
+-----+--------------------+---------+------------+-----------+------------+-----------------+
|  28 | sql/slave_sql      | 3810486 | 1.01 GiB   | 285 bytes | 1.00 GiB   | 17.68 GiB       |
|   1 | sql/main           |    3306 | 192.17 MiB | 59.52 KiB | 133.25 MiB | 227.30 MiB      |
|  27 | sql/slave_sql      |  392833 | 26.10 MiB  | 70 bytes  | 17.83 MiB  | 27.50 GiB       |
|  33 | msandbox@localhost |      83 | 774.75 KiB | 9.33 KiB  | 256.00 KiB | 2.81 MiB        |
|  26 | sql/slave_io       |     542 | 614.54 KiB | 1.13 KiB  | 528.01 KiB | 10.71 MiB       |
|  29 | sql/slave_io       |     533 | 611.96 KiB | 1.15 KiB  | 528.01 KiB | 10.71 MiB       |
|  25 | innodb/            | (rest of row truncated)
|  24 | innodb/            | (rest of row truncated)
|  30 | sql/signal_handler |       0 | 0 bytes    | 0 bytes   | 0 bytes    | 0 bytes         |
|  32 | sql/compress_      | (rest of row truncated)
+-----+--------------------+---------+------------+-----------+------------+-----------------+
10 rows in set (0.10 sec)
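The sys views print human-readable sizes, which makes cross-row comparison awkward; thread 28 holds 1.01 GiB currently allocated against 17.68 GiB total allocated. A small converter (hypothetical helper, not part of the sys schema) normalizes the units:

```shell
# to_mib: convert a sys-schema size ("1.01 GiB", "285 bytes", ...) to MiB (hypothetical helper)
to_mib() { awk -v n="$1" -v u="$2" 'BEGIN {
  f["bytes"] = 1 / 1048576; f["KiB"] = 1 / 1024; f["MiB"] = 1; f["GiB"] = 1024;
  printf "%.2f\n", n * f[u] }'; }

to_mib 1.01 GiB    # thread 28 current_allocated -> 1034.24 MiB
to_mib 26.10 MiB   # thread 27 current_allocated -> 26.10 MiB
```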
node3 [localhost] {msandbox} ((none)) > stop slave; start slave;
Query OK, 0 rows affected (0.00 sec)
Query OK, 0 rows affected (0.00 sec)
node3 [localhost] {msandbox} ((none)) > select * from sys.memory_
+-----------------+
| total_allocated |
+-----------------+
| 1.35 GiB |
+-----------------+
1 row in set (0.00 sec)
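Note that total_allocated reads 1.35 GiB both before the stop slave; start slave cycle and after it, i.e. restarting the SQL threads releases nothing. A trivial check over the two samples from this report:

```shell
# Compare the sys.memory_global_total samples taken before and after the SQL thread restart
before="1.35 GiB"; after="1.35 GiB"   # values copied from this report
if [ "$before" = "$after" ]; then echo "restart released nothing"; fi
```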
node3 [localhost] {msandbox} ((none)) > select @@innodb_
+--------------+
| @@innodb_    |
+--------------+
| 128.00000000 |
+--------------+
1 row in set (0.01 sec)
tags: added: upstream
Related upstream bug: https://bugs.mysql.com/bug.php?id=85034