Percona Server 5.7 memory leak when performance_schema and thread pool are enabled

Bug #1712240 reported by yejr
Affects: Percona Server (moved to https://jira.percona.com/projects/PS)
Status: New
Importance: Undecided
Assigned to: Unassigned

Bug Description

When I enable performance_schema and the thread pool, Percona Server 5.7 leaks memory and is eventually OOM-killed.
Upgrading from 5.7.17-11 to 5.7.18-16 did not make the problem go away.

My settings:
innodb_buffer_pool_size = 128M
innodb_buffer_pool_instances = 1
...
performance_schema = 1
performance_schema_instrument = '%=on'
...
thread_handling = "pool-of-threads"
thread_pool_max_threads = 10
thread_pool_size = 2
thread_pool_oversubscribe = 2
thread_pool_stall_limit = 100
extra_port = 3307

The memory of the mysqld process grows from 173 MB to 903 MB within 20 minutes.

ps -ef | grep mysqld
> 925624 ./bin/mysqld --defaults-file=/etc/my.cnf-5.7
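
(For reference, the growth can be watched with a simple loop like this; a rough sketch, assuming pidof mysqld returns a single PID and a 60-second interval is fine:)

while true; do
    # print a timestamp and the resident set size (KB) of mysqld
    echo "$(date '+%F %T') $(ps -o rss= -p "$(pidof mysqld)") KB"
    sleep 60
done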

mysql -e "select event_name,SUM_NUMBER_OF_BYTES_ALLOC from memory_summary_global_by_event_name order by SUM_NUMBER_OF_BYTES_ALLOC desc LIMIT 10;" performance_schema
event_name SUM_NUMBER_OF_BYTES_ALLOC
memory/innodb/mem0mem 17933218707
memory/memory/HP_PTRS 3030236768
memory/sql/thd::main_mem_root 1078226976
memory/sql/Filesort_buffer::sort_keys 538117200
memory/sql/String::value 454065560
memory/sql/TABLE 425176683
memory/mysys/IO_CACHE 261595752
memory/mysys/MY_DIR 237420552
memory/performance_schema/events_statements_summary_by_thread_by_event_name 162555904
memory/innodb/buf_buf_pool 139722752
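
Note that SUM_NUMBER_OF_BYTES_ALLOC is cumulative and never decreases; to see what is still held, the same table can also be sorted on CURRENT_NUMBER_OF_BYTES_USED, for example:

mysql -e "select event_name, CURRENT_NUMBER_OF_BYTES_USED from memory_summary_global_by_event_name order by CURRENT_NUMBER_OF_BYTES_USED desc LIMIT 10;" performance_schema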

mysql -e "select thread_id, event_name, SUM_NUMBER_OF_BYTES_ALLOC from memory_summary_by_thread_by_event_name order by SUM_NUMBER_OF_BYTES_ALLOC desc limit 20" performance_schema
>>
thread_id event_name SUM_NUMBER_OF_BYTES_ALLOC
1 memory/innodb/buf_buf_pool 139722752
1 memory/sql/THD::Session_tracker 34062822
1 memory/sql/thd::main_mem_root 32576208
1 memory/sql/NET::buff 32148279
2206 memory/innodb/mem0mem 26965020
2518 memory/innodb/mem0mem 26963324
1988 memory/innodb/mem0mem 26938940
3775 memory/innodb/mem0mem 26938940
2692 memory/innodb/mem0mem 26853572
328 memory/innodb/mem0mem 26827964
104 memory/innodb/mem0mem 26102225
1920 memory/innodb/mem0mem 21829092
1 memory/sql/XID 19924544
1 memory/innodb/log0log 16865840
1 memory/sql/THD::transactions::mem_root 16217728
1209 memory/innodb/mem0mem 14509563
1 memory/innodb/hash0hash 11613000
1988 memory/sql/Filesort_buffer::sort_keys 9087712
2518 memory/sql/Filesort_buffer::sort_keys 9087712
2206 memory/sql/Filesort_buffer::sort_keys 9087712
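
To map the busiest thread_ids back to connections, the per-thread summary can be joined against performance_schema.threads (a sketch; it only works while those threads are still connected):

mysql -e "select t.thread_id, t.processlist_user, t.processlist_host, m.event_name, m.SUM_NUMBER_OF_BYTES_ALLOC from memory_summary_by_thread_by_event_name m join threads t using (thread_id) order by m.SUM_NUMBER_OF_BYTES_ALLOC desc limit 20;" performance_schema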

Sveta Smirnova (svetasmirnova) wrote :

Thank you for the report.

Do you experience this on an idle server or after some load? If it is after some load, could you please provide details on what kind of load. Please also send us the output of mysql -e "select thread_id, event_name, SUM_NUMBER_OF_BYTES_ALLOC from memory_summary_by_thread_by_event_name order by SUM_NUMBER_OF_BYTES_ALLOC desc limit 20" performance_schema after running the same load, but with the Thread Pool plugin not used.
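
For that test, disabling the thread pool means switching back to the default handler, roughly these my.cnf lines (one-thread-per-connection is the built-in default):

[mysqld]
thread_handling = one-thread-per-connection
# the thread_pool_* settings can stay in place; they are not used with this handler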

Changed in percona-server:
status: New → Incomplete
Kenn Takara (kenn-takara) wrote :

This is the same bug (with repro steps):

https://bugs.launchpad.net/percona-server/+bug/1693511

yejr (imysql) wrote :

Hi Sveta Smirnova (svetasmirnova),

I did not monitor the output of memory_summary_by_thread_by_event_name while the total memory of the mysqld process was still below 900 MB.

To Kenn Takara (kenn-takara),

It seems to be the same bug as 1693511.
I have enabled jemalloc now and will keep monitoring memory usage.

Thanks, all.

yejr (imysql) wrote :

Hi Kenn Takara (kenn-takara),

I have enabled jemalloc, but memory still leaks :(

lsof -p `pidof mysqld` | grep -i malloc
mysqld 17487 mysql mem REG 253,0 210024 17828 /usr/lib64/libjemalloc.so.1
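
(For reference, jemalloc can be preloaded for mysqld either with LD_PRELOAD or with malloc-lib in the [mysqld_safe] section; a sketch using the library path from the lsof output above:)

[mysqld_safe]
malloc-lib = /usr/lib64/libjemalloc.so.1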

The memory of mysqld grew from 251480 KB (freshly restarted) to more than 921344 KB, at which point the OOM killer was triggered.

Right after a fresh restart:
mysql> select event_name,SUM_NUMBER_OF_BYTES_ALLOC from memory_summary_global_by_event_name order by SUM_NUMBER_OF_BYTES_ALLOC desc LIMIT 10;
event_name SUM_NUMBER_OF_BYTES_ALLOC
memory/innodb/buf_buf_pool 139722752
memory/innodb/mem0mem 104210359
memory/sql/Filesort_buffer::sort_keys 40001048
memory/sql/thd::main_mem_root 32235400
memory/memory/HP_PTRS 30778736
memory/sql/TABLE_SHARE::mem_root 20833800
memory/sql/XID 19924544
memory/sql/TABLE 19739726
memory/sql/String::value 18222136
memory/sql/Log_event 17724522

mysql> select thread_id, event_name, SUM_NUMBER_OF_BYTES_ALLOC from memory_summary_by_thread_by_event_name order by SUM_NUMBER_OF_BYTES_ALLOC desc limit 20;
thread_id event_name SUM_NUMBER_OF_BYTES_ALLOC
1 memory/innodb/buf_buf_pool 139722752
46 memory/innodb/mem0mem 28610178
48 memory/innodb/mem0mem 26623196
1 memory/sql/XID 19924544
1 memory/sql/Log_event 17713990
1 memory/innodb/log0log 16865840
1 memory/innodb/hash0hash 11613000
46 memory/sql/Filesort_buffer::sort_keys 9462232
48 memory/sql/Filesort_buffer::sort_keys 9087712
48 memory/memory/HP_PTRS 6777144
46 memory/memory/HP_PTRS 6777144
38 memory/innodb/mem0mem 5155883
1 memory/innodb/os0file 4908586
50 memory/innodb/mem0mem 4560020
31 memory/innodb/mem0mem 4449398
1 memory/mysys/KEY_CACHE 4196576
1 memory/innodb/ut0pool 4194568
1 memory/innodb/fil0fil 4022008
1 memory/innodb/parallel_doublewrite 3966944
1 memory/innodb/mem0mem 2586413

Alberto Pastore (bpress) wrote :

I have exactly the same problem.

I've just set up a fresh installation of version 5.7.19 on a test machine (a small VM with 6 GB RAM).
The server is totally idle (no connections, no queries running), but after one hour both RAM and swap are full and the OOM killer is triggered.

As soon as I disable either performance_schema or the thread pool, the problem no longer occurs.
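
Until this is fixed, the workaround therefore comes down to one of these two my.cnf changes (a sketch; setting names as used in the report above):

# either turn performance_schema off
performance_schema = 0

# or keep performance_schema and go back to the default thread handler
thread_handling = one-thread-per-connection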

Changed in percona-server:
status: Incomplete → New
Shahriyar Rzayev (rzayev-sehriyar) wrote :

Percona now uses JIRA for bug reports, so this bug report has been migrated to: https://jira.percona.com/browse/PS-3734
