2017-05-25 13:40:15 |
Kenn Takara |
bug |
|
|
added bug |
2017-05-25 13:40:15 |
Kenn Takara |
attachment added |
|
config file https://bugs.launchpad.net/bugs/1693511/+attachment/4883401/+files/Conf-2.txt |
|
2017-05-26 10:08:22 |
Kenn Takara |
description |
Ran across this issue when running PMM with PXC 5.7 (reproduced with Percona Server 5.7.17 on CentOS 7).
The server has no activity except for PMM. The memory usage of the mysqld process grows until it is killed.
Repro case (does not require PMM):
(1) Start the server
(2) Run the test script
#!/bin/bash
# Repeatedly query a performance_schema summary table over the local socket.
while true
do
    ./bin/mysql -Sdata/socket.sock -uroot -e "SELECT EVENT_NAME FROM performance_schema.file_summary_by_event_name;" > /dev/null
done
What happens:
The memory usage keeps growing until the process or the script is killed.
Expected:
The memory usage stabilizes at some point.
=======================
This appears to be caused by the combination of the performance schema and the thread pool. Here's the output from running top.
Output after starting the test script
performance_schema=ON
thread-handling=pool-of-threads
TIMESTAMP PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
2017-05-25-05:43:05 20260 kennt 20 0 4545832 393380 20408 S 0.0 10.2 0:02.07 mysqld
2017-05-25-05:43:08 20260 kennt 20 0 4545832 393380 20408 S 0.7 10.2 0:02.09 mysqld
>> started the test script
2017-05-25-05:43:11 20260 kennt 20 0 4562732 393908 20436 S 9.3 10.2 0:02.37 mysqld
2017-05-25-05:43:14 20260 kennt 20 0 4564292 434824 20436 S 43.3 11.2 0:03.67 mysqld
2017-05-25-05:43:17 20260 kennt 20 0 4564292 516656 20436 S 43.2 13.4 0:04.97 mysqld
2017-05-25-05:43:20 20260 kennt 20 0 4564292 557572 20436 S 42.7 14.4 0:06.25 mysqld
2017-05-25-05:43:23 20260 kennt 20 0 4564292 598488 20436 S 42.9 15.5 0:07.54 mysqld
2017-05-25-05:43:26 20260 kennt 20 0 4564292 639404 20436 S 42.9 16.5 0:08.83 mysqld
2017-05-25-05:43:29 20260 kennt 20 0 4564292 680320 20436 S 42.3 17.6 0:10.10 mysqld
(the RES value keeps growing; later samples:)
2017-05-25-05:44:05 20260 kennt 20 0 5023044 988.9m 20548 S 0.3 26.2 0:20.43 mysqld
2017-05-25-05:44:08 20260 kennt 20 0 5023044 991.2m 20728 S 0.3 26.3 0:20.44 mysqld
Output after starting the script with performance_schema=OFF
performance_schema=OFF
thread-handling=pool-of-threads
TIMESTAMP PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
2017-05-25-05:46:14 24348 kennt 20 0 4394072 260772 20308 S 0.3 6.7 0:01.87 mysqld
2017-05-25-05:46:20 24348 kennt 20 0 4394072 260772 20308 S 0.0 6.7 0:01.87 mysqld
>> started the test script
2017-05-25-05:46:20 24348 kennt 20 0 4412532 261300 20340 S 9.6 6.8 0:02.16 mysqld
2017-05-25-05:46:23 24348 kennt 20 0 4412532 261300 20340 S 11.0 6.8 0:02.49 mysqld
2017-05-25-05:46:26 24348 kennt 20 0 4412532 261300 20340 S 11.3 6.8 0:02.83 mysqld
2017-05-25-05:46:29 24348 kennt 20 0 4412532 261300 20340 S 11.3 6.8 0:03.17 mysqld
2017-05-25-05:46:32 24348 kennt 20 0 4412532 261300 20340 S 10.6 6.8 0:03.49 mysqld
Output after starting the script with thread-handling set to default
performance_schema=ON
#thread-handling=pool-of-threads
TIMESTAMP PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
2017-05-25-05:48:12 27277 kennt 20 0 4537668 393232 20424 S 0.3 10.2 0:02.00 mysqld
2017-05-25-05:48:15 27277 kennt 20 0 4537668 393232 20424 S 0.0 10.2 0:02.00 mysqld
>> started the test script
2017-05-25-05:48:18 27277 kennt 20 0 4537668 393480 20572 S 4.7 10.2 0:02.14 mysqld
2017-05-25-05:48:21 27277 kennt 20 0 4537668 393480 20572 S 43.0 10.2 0:03.43 mysqld
2017-05-25-05:48:24 27277 kennt 20 0 4537668 393480 20572 S 42.5 10.2 0:04.71 mysqld
2017-05-25-05:48:27 27277 kennt 20 0 4537668 393480 20572 S 42.5 10.2 0:05.99 mysqld
2017-05-25-05:48:30 27277 kennt 20 0 4537668 393480 20572 S 43.0 10.2 0:07.28 mysqld |
Ran across this issue when running PMM with PXC 5.7 (reproduced with Percona Server 5.7.17 on CentOS 7, with THP disabled, and both with and without jemalloc).
The server has no activity except for PMM. The memory usage of the mysqld process grows until it is killed.
Repro case (does not require PMM):
(1) Start the server
(2) Run the following test script
#!/bin/bash
# Repeatedly query a performance_schema summary table over the local socket.
while true
do
    mysql -Spath/to/socket.sock -uroot -e "SELECT EVENT_NAME FROM performance_schema.file_summary_by_event_name;" > /dev/null
done
What happens:
The memory usage keeps growing until the process or the script is killed.
Expected:
The memory usage stabilizes at some point.
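One way to check whether the growth is visible to the server's own accounting is to poll 5.7's performance schema memory instrumentation while the loop runs. A minimal sketch, reusing the socket path from the script above; note that by default 5.7 enables only the memory/performance_schema/% instruments, so allocations outside the performance schema may not show up here unless the memory instruments are switched on:
#!/bin/bash
# Show the ten instrumented allocation sites currently holding the most memory.
mysql -Spath/to/socket.sock -uroot -e "SELECT EVENT_NAME, CURRENT_NUMBER_OF_BYTES_USED FROM performance_schema.memory_summary_global_by_event_name ORDER BY CURRENT_NUMBER_OF_BYTES_USED DESC LIMIT 10;"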
=======================
This appears to be caused by the combination of the performance schema and the thread pool. Here's the output from running top.
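The timestamped rows below look like single-PID samples from top in batch mode; a minimal sampler along those lines (the pidof lookup and the 3-second interval are guesses, not taken from the report) would be:
#!/bin/bash
# Print one timestamped top row for mysqld every 3 seconds.
PID=$(pidof mysqld)
while true
do
    # -b batch mode, -n 1 single iteration, -p restrict output to one PID
    echo "$(date +%Y-%m-%d-%H:%M:%S) $(top -b -n 1 -p "$PID" | tail -n 1)"
    sleep 3
done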
Output after starting the test script
performance_schema=ON
thread-handling=pool-of-threads
TIMESTAMP PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
2017-05-25-05:43:05 20260 kennt 20 0 4545832 393380 20408 S 0.0 10.2 0:02.07 mysqld
2017-05-25-05:43:08 20260 kennt 20 0 4545832 393380 20408 S 0.7 10.2 0:02.09 mysqld
>> started the test script
2017-05-25-05:43:11 20260 kennt 20 0 4562732 393908 20436 S 9.3 10.2 0:02.37 mysqld
2017-05-25-05:43:14 20260 kennt 20 0 4564292 434824 20436 S 43.3 11.2 0:03.67 mysqld
2017-05-25-05:43:17 20260 kennt 20 0 4564292 516656 20436 S 43.2 13.4 0:04.97 mysqld
2017-05-25-05:43:20 20260 kennt 20 0 4564292 557572 20436 S 42.7 14.4 0:06.25 mysqld
2017-05-25-05:43:23 20260 kennt 20 0 4564292 598488 20436 S 42.9 15.5 0:07.54 mysqld
2017-05-25-05:43:26 20260 kennt 20 0 4564292 639404 20436 S 42.9 16.5 0:08.83 mysqld
2017-05-25-05:43:29 20260 kennt 20 0 4564292 680320 20436 S 42.3 17.6 0:10.10 mysqld
(the RES value keeps growing; later samples:)
2017-05-25-05:44:05 20260 kennt 20 0 5023044 988.9m 20548 S 0.3 26.2 0:20.43 mysqld
2017-05-25-05:44:08 20260 kennt 20 0 5023044 991.2m 20728 S 0.3 26.3 0:20.44 mysqld
Output after starting the script with performance_schema=OFF
performance_schema=OFF
thread-handling=pool-of-threads
TIMESTAMP PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
2017-05-25-05:46:14 24348 kennt 20 0 4394072 260772 20308 S 0.3 6.7 0:01.87 mysqld
2017-05-25-05:46:20 24348 kennt 20 0 4394072 260772 20308 S 0.0 6.7 0:01.87 mysqld
>> started the test script
2017-05-25-05:46:20 24348 kennt 20 0 4412532 261300 20340 S 9.6 6.8 0:02.16 mysqld
2017-05-25-05:46:23 24348 kennt 20 0 4412532 261300 20340 S 11.0 6.8 0:02.49 mysqld
2017-05-25-05:46:26 24348 kennt 20 0 4412532 261300 20340 S 11.3 6.8 0:02.83 mysqld
2017-05-25-05:46:29 24348 kennt 20 0 4412532 261300 20340 S 11.3 6.8 0:03.17 mysqld
2017-05-25-05:46:32 24348 kennt 20 0 4412532 261300 20340 S 10.6 6.8 0:03.49 mysqld
Output after starting the script with thread-handling commented out
performance_schema=ON
#thread-handling=pool-of-threads
TIMESTAMP PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
2017-05-25-05:48:12 27277 kennt 20 0 4537668 393232 20424 S 0.3 10.2 0:02.00 mysqld
2017-05-25-05:48:15 27277 kennt 20 0 4537668 393232 20424 S 0.0 10.2 0:02.00 mysqld
>> started the test script
2017-05-25-05:48:18 27277 kennt 20 0 4537668 393480 20572 S 4.7 10.2 0:02.14 mysqld
2017-05-25-05:48:21 27277 kennt 20 0 4537668 393480 20572 S 43.0 10.2 0:03.43 mysqld
2017-05-25-05:48:24 27277 kennt 20 0 4537668 393480 20572 S 42.5 10.2 0:04.71 mysqld
2017-05-25-05:48:27 27277 kennt 20 0 4537668 393480 20572 S 42.5 10.2 0:05.99 mysqld
2017-05-25-05:48:30 27277 kennt 20 0 4537668 393480 20572 S 43.0 10.2 0:07.28 mysqld |
|
2017-05-30 11:31:37 |
Przemek |
tags |
|
i187595 |
|
2017-05-30 11:32:09 |
Przemek |
bug |
|
|
added subscriber Przemek |
2017-05-31 09:47:06 |
Przemek |
nominated for series |
|
percona-server/5.7 |
|
2017-05-31 09:47:06 |
Przemek |
bug task added |
|
percona-server/5.7 |
|
2017-05-31 09:47:15 |
Przemek |
percona-server/5.7: status |
New |
Confirmed |
|
2017-06-09 15:32:12 |
Zach Moazeni |
bug |
|
|
added subscriber Zach Moazeni |
2017-08-29 07:03:10 |
Laurynas Biveinis |
percona-server/5.7: importance |
Undecided |
High |
|
2017-08-29 07:03:13 |
Laurynas Biveinis |
percona-server/5.7: status |
Confirmed |
Triaged |
|
2017-08-29 07:03:24 |
Laurynas Biveinis |
tags |
i187595 |
i187595 threadpool |
|
2018-01-18 06:29:14 |
Michael Wang |
bug |
|
|
added subscriber Michael Wang |