bcache: Performance degradation when querying priority_stats
| Affects | Status | Importance | Assigned to | Milestone |
|---|---|---|---|---|
| Linux | Fix Released | Undecided | Unassigned | |
| linux (Ubuntu) | Fix Released | Undecided | Heitor Alves de Siqueira | |
| Xenial | Fix Released | Undecided | Heitor Alves de Siqueira | |
| Bionic | Fix Released | Undecided | Heitor Alves de Siqueira | |
| Disco | Fix Released | Undecided | Heitor Alves de Siqueira | |
| Eoan | Fix Released | Undecided | Heitor Alves de Siqueira | |
Bug Description
[Impact]
Querying bcache's priority_stats attribute in sysfs causes severe performance degradation for read/write workloads and occasional system stalls
[Test Case]
Note: As the sorting step has the most noticeable performance impact, the test case below pins the workload and the sysfs query to the same CPU. CPU contention issues still occur without any pinning; pinning just removes the scheduling factor of the query landing on different CPUs and affecting other tasks.
1) Start a read/write workload on the bcache device with e.g. fio or dd, pinned to a certain CPU:
# taskset 0x10 dd if=/dev/zero of=/dev/bcache0 bs=4k status=progress
2) Start a sysfs query loop for the priority_stats attribute pinned to the same CPU:
# for i in {1..100000}; do taskset 0x10 cat /sys/fs/
3) Monitor the read/write workload for any performance impact
[Fix]
To fix the CPU contention and its performance impact, a cond_resched() call is introduced in the comparison function used to sort the priority_stats array.
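As a rough illustration of the fix, here is a minimal sketch of that comparison helper with the added cond_resched() call, following the upstream suggestion; the function name (__bch_cache_cmp) is taken from the upstream bcache sysfs code and may differ between kernel series.

```c
/*
 * Sketch of the fix (kernel context, bcache sysfs code): the comparator
 * handed to sort() yields the CPU periodically, so a long-running
 * priority_stats sort no longer starves tasks scheduled on the same CPU.
 */
static int __bch_cache_cmp(const void *l, const void *r)
{
	cond_resched();
	return *((uint16_t *)r) - *((uint16_t *)l);
}
```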
[Regression Potential]
Regression potential is low, as the change is confined to the priority_stats sysfs query. In setups where bcache priority_stats is queried frequently (e.g. by node_exporter), the impact is more noticeable, as those queries may now take slightly longer to complete. A regression due to this patch would most likely show up as a performance degradation in bcache-focused workloads.
--
[Description]
In the latest bcache drivers, there's a sysfs attribute that calculates bucket priority statistics in /sys/fs/
This is due to the way the driver calculates the stats: the bcache buckets are locked and iterated through, collecting information about each individual bucket. An array with one element per bucket (nbuckets elements) is then built and sorted, which can cause very high CPU contention on larger bcache setups.
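For illustration, the sketch below outlines that collect-and-sort flow; this is not the verbatim kernel code, and the structure fields (sb.nbuckets, buckets[].prio, set->bucket_lock) are approximations of the bcache data structures.

```c
/*
 * Simplified sketch of the priority_stats computation described above
 * (kernel context). 'ca' is the per-cache structure; field names are
 * approximations, not the exact bcache definitions.
 */
size_t n = ca->sb.nbuckets;
uint16_t *p = vmalloc(n * sizeof(uint16_t));
size_t i;

/* Collect every bucket's priority while holding the bucket lock. */
mutex_lock(&ca->set->bucket_lock);
for (i = 0; i < n; i++)
	p[i] = ca->buckets[i].prio;
mutex_unlock(&ca->set->bucket_lock);

/*
 * Sort all nbuckets entries to derive the quantiles; on large caches
 * this O(n log n) step is what monopolizes the CPU.
 */
sort(p, n, sizeof(uint16_t), __bch_cache_cmp, NULL);
```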
From our tests, the sorting step of the priority_stats query causes the most pronounced performance reduction, as it can hinder tasks that are not doing any bcache IO at all. If a task is unlucky enough to be scheduled on the same CPU as the sysfs query, its performance is harshly reduced while both compete for CPU time. We've had users report system stalls of up to ~6s due to this, caused by monitoring tools that query priority_stats periodically (e.g. Prometheus Node Exporter from [0]). These stalls have triggered several other issues, such as ceph-mon re-elections, problems in percona-cluster and general network stalls, so the impact is not isolated to bcache IO workloads.
An example benchmark can be seen in [1], where the read performance on a bcache device suffered quite heavily (going from ~40k IOPS to ~4k IOPS due to priority_stats). Other comparison charts are found under [2].
[0] https:/
[1] https:/
[2] https:/
description: updated
tags: added: canonical-bootstack
description: updated
Changed in linux (Ubuntu Disco):
  assignee: nobody → Heitor Alves de Siqueira (halves)
Changed in linux (Ubuntu Bionic):
  assignee: nobody → Heitor Alves de Siqueira (halves)
Changed in linux (Ubuntu Xenial):
  assignee: nobody → Heitor Alves de Siqueira (halves)
Changed in linux (Ubuntu Xenial):
  status: New → Fix Committed
Changed in linux (Ubuntu Bionic):
  status: New → Fix Committed
Changed in linux (Ubuntu Disco):
  status: New → Fix Committed
Changed in linux (Ubuntu Eoan):
  status: In Progress → Fix Committed
Changed in linux:
  status: Fix Committed → Fix Released
This has been reported upstream as well, with a tentative patch in https://lkml.org/lkml/2019/3/7/8
I've done some tests in fio to get some more data on this issue, with the following scenarios:
- "raw" test -> fio test without any sysfs queries
- "sysfs" test -> fio + scripted sysfs queries
- "mutex" test -> fio + sysfs + mutex patch
- "resched" test -> fio + sysfs + cond_resched() patch
The "mutex patch" removed bucket locking from the sysfs query. This caused the stats to be computed all wrong of course, but we weren't interested in the stats themselves for now (just on the performance impact of the bucket locking).
The "cond_resched()" patch was suggested upstream by Shile Zhang, and introduces a cond_resched() call in the comparison function used for sorting.
Tests were run on an NVMe-backed bcache device with writeback enabled. The full logs and graphs are available in the bcache-results.tar.gz attachment.