Comment 0 for bug 909774

Elena Stepanova (elenst) wrote : Thread pool: new threads are created too slowly even with maximum tuning

If there are at least a couple of active threads in a group, it takes a long time to create a new one.

In the provided test case (1), two threads are running a SELECT on a big table, while a third one tries to send a very fast query that does not depend on the table at all (e.g. SELECT NULL FROM DUAL). The timer in the test case starts right before this query and ends right after it, so the time reported by MTR relates to this query only.

In the traditional case, without the thread pool, the query takes a few milliseconds (1-3 ms in my tests).
With thread-pool-size=1 it takes 1.685 seconds.

I'm using thread-pool-size=1 to make the measurements more obvious; in fact the issue scales, it just takes more client threads to reproduce it with a bigger pool size. For example, with thread-pool-size=4 and 10 threads querying the big table, it again takes about 1.7 seconds. Even with the minimum value of thread-pool-stall-limit=60 it still takes 200+ ms, which is 100-200 times more than with one-thread-per-connection.
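
For reference, a minimal sketch of what I change between runs (the pool itself is assumed to be already enabled; both variables can be set at runtime, just as in the test case below):

# settings varied between the measurements above
SET GLOBAL thread_pool_size=1;          # or 4 for the run with 10 threads
SET GLOBAL thread_pool_stall_limit=60;  # the minimum; the default is higher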

If you run the same test with count=10 (the number of threads querying the table), thread-pool-size=1, and the default thread-pool-stall-limit, you can see in the processlist output that threads are created at roughly one per second, and it takes ~10 seconds to process the SELECT that we are timing.
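
A crude way to watch the threads appearing one by one is to poll the status and the processlist from another shell while the test runs (a sketch only; the client path and the counter pattern follow test case 2 and the status output in test case 1, adjust to your layout):

while true
do
  debug/client/mysql -uroot -ss -e "SHOW GLOBAL STATUS LIKE 'threadpool%threads'; SHOW PROCESSLIST"
  sleep 1
done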

Such a delay can impact real-life scenarios in various ways. To name one, assume you have a job (or just user inflow) that opens a new connection to the database every 0.5 second. Each connection runs a 5-second query and exits.

With the traditional thread handling (one thread per connection), this is a tiny load even on a 1-core machine and won't cause any problem. After the initial growth to 10-12 open connections or so (roughly 2 connections per second times 5 seconds per query), the count stabilizes and stays at that level for as long as needed.
With the artificial delay in request processing, the connections start piling up, since requests arrive faster than they are processed. With default parameters, on the same 1-core machine it is only a matter of a minute or so before the max_connections limit is hit.

The second test case is an example of a script that can be used to observe the problem. It's not pretty, and it might also stop working the same way and need adjustments after bug#909537 is fixed, but it reflects the idea; the scenario is difficult to implement nicely using standard tools.

# Test case 1
# At the moment it could be implemented more easily with sleeps,
# but I want to keep it separate from bug#909537.

--enable_connect_log

SET @tp_size=@@thread_pool_size;
SET @tp_idle=@@thread_pool_idle_timeout;
SET GLOBAL thread_pool_size=1;
SET GLOBAL thread_pool_idle_timeout=1;

--disable_warnings
DROP TABLE IF EXISTS t1;
--enable_warnings
CREATE TABLE t1 ( i INT );
INSERT INTO t1 VALUES (1),(2),(3),(4);
INSERT INTO t1 SELECT a.i FROM
 t1 a, t1 b, t1 c, t1 d, t1 e,
 t1 f, t1 g, t1 h, t1 i, t1 j;

--let $count=2

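# Open $count connections; they will later occupy the thread group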
--let $id=$count
while ($id)
{
  --connect(con$id,localhost,root,,)
  --dec $id
}

# Wait till old idle threads are gone
--sleep 2

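# From each extra connection, send a long SELECT without waiting for the result,
# so that the group's threads stay busy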
--let $id=$count
while ($id)
{
  --connection con$id
  --send
    eval SELECT COUNT(*) FROM t1 WHERE i=$id;
  --dec $id
}

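# Time a trivial query from the default connection; with the group busy,
# it has to wait for a new thread to be created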
--connection default
--start_timer
SELECT NULL FROM DUAL;
--end_timer
SHOW GLOBAL STATUS LIKE 'threadpool%threads';
SHOW PROCESSLIST;

DROP TABLE t1;

SET GLOBAL thread_pool_size=@tp_size;
SET GLOBAL thread_pool_idle_timeout=@tp_idle;

--exit

# End of test case 1

# Test case 2
# It assumes you already have the server running
# Tune the sleep time depending on your threadpool parameters

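# Lock test.t and hold it for a long time; with lock_wait_timeout=5,
# every client that touches the table waits ~5 seconds and gives up,
# simulating the 5-second queries from the scenario above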
debug/client/mysql -uroot -e "SET GLOBAL lock_wait_timeout=5; DROP TABLE IF EXISTS test.t; CREATE TABLE test.t ( i INT ); LOCK TABLE test.t WRITE; SELECT SLEEP(300)" &

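# Start a new client every 0.5 seconds; each one runs a quick query,
# then waits on the locked table until lock_wait_timeout expires and exits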
while true
do
  debug/client/mysql -uroot -ss --force -e "SELECT COUNT(*) FROM information_schema.processlist; SELECT 1 FROM test.t" &
  sleep 0.5
done

# End of test case 2