performance regression from fix for lp:1433432
| Affects | Status | Importance | Assigned to | Milestone |
| --- | --- | --- | --- | --- |
| Percona Server (moved to https://jira.percona.com/projects/PS) | Status tracked in 5.7 | | | |
| 5.5 | Invalid | Undecided | Unassigned | |
| 5.6 | Fix Released | Medium | Nickolay Ihalainen | |
| 5.7 | Fix Released | Medium | Nickolay Ihalainen | |
Bug Description
The fix for lp:1433432 in Percona Server 5.6 introduces a severe performance regression. Sysbench OLTP read-only throughput drops by 50% (single thread) up to 95% (40 threads on a 16-core machine).
To reproduce the regression, the database's active set must be slightly larger than the buffer pool, and the workload must access rows in a fairly random pattern.
Here are some numbers. Command line:
sysbench-0.4.12 --test=oltp --oltp-read-only --oltp-
5.6.27: 660 tps
5.6.28: 36 tps
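The command line above is truncated in the original report (it is cut off after `--oltp-`), and the missing options are left as-is. Purely for illustration, a typical sysbench 0.4 read-only OLTP invocation of this shape might look like the sketch below; the table size, thread count, duration, and connection options are assumptions, not the reporter's actual values:

```shell
# Hypothetical sketch of a sysbench 0.4 read-only OLTP run.
# All values below are illustrative assumptions; the reporter's exact
# options are truncated in the bug report and remain unknown.
sysbench --test=oltp --oltp-read-only \
         --oltp-table-size=10000000 \
         --mysql-user=root --mysql-db=test \
         --num-threads=1 --max-requests=0 --max-time=60 \
         run
```

With an active set just above the buffer pool size, a run like this would spend most of its time on page evictions and re-reads, which is where the reported 660 tps vs. 36 tps gap shows up.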
This does not change when using multiple tables, and performance returns to normal when the buffer pool is enlarged. I think this regression is too severe, and too likely to be hit by real-world users, to be ignored: it is quite common for a database's active set not to fit into the buffer pool.
Suggestion: roll back commit 6532572a783ea5a
tags: added: performance
What is the buffer pool size in your experiment?
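One way to answer that question is to report the configured pool size alongside the on-disk size of the benchmark tables, so the two can be compared directly. A minimal sketch, assuming a local server and that the sysbench data lives in a database named `test` (both are placeholders):

```shell
# Configured InnoDB buffer pool size, in bytes.
# Connection flags (host, user, password) are omitted and would need
# to match the actual test environment.
mysql -e "SHOW VARIABLES LIKE 'innodb_buffer_pool_size';"

# Approximate data + index size of the benchmark tables, in MiB,
# to compare against the buffer pool. 'test' is a placeholder schema name.
mysql -e "SELECT table_name,
                 ROUND((data_length + index_length) / 1024 / 1024) AS size_mb
          FROM information_schema.tables
          WHERE table_schema = 'test';"
```

If `size_mb` only slightly exceeds the buffer pool size, that matches the reproduction conditions described in the bug.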