OpenZFS write stalls under load
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
Native ZFS for Linux | New | Unknown | |
zfs-linux (Ubuntu) | Fix Released | Medium | Unassigned |
Bug Description
Using a QNAP 4-drive USB enclosure with a set of SSDs on a Raspberry Pi 8GB. ZFS deduplication and LZJB compression are enabled.
This issue occurs intermittently after some time (with both SMB access via Samba and when interacting with the system via SSH). It never occurred until a few months ago, and I sometimes have to force a reboot of the system (at the cost of some data loss) in order to use it again.
The "dmesg" log reports:
[25375.911590] z_wr_iss_h D 0 2161 2 0x00000028
[25375.911606] Call trace:
[25375.911627] __switch_
[25375.911639] __schedule+
[25375.911647] schedule+0x3c/0xb8
[25375.911655] io_schedule+
[25375.911668] rq_qos_
[25375.911677] wbt_wait+0xb4/0xf0
[25375.911687] __rq_qos_
[25375.911700] blk_mq_
[25375.911712] generic_
[25375.911722] submit_
[25375.911960] vdev_disk_
[25375.912181] zio_vdev_
[25375.912400] zio_nowait+
[25375.912617] vdev_mirror_
[25375.912839] zio_vdev_
[25375.913057] zio_execute+
[25375.913096] taskq_thread+
[25375.913108] kthread+0xfc/0x128
[25375.913119] ret_from_
[25375.913149] INFO: task txg_sync:2333 blocked for more than 120 seconds.
[25375.919916] Tainted: P C OE 5.4.0-1018-raspi #20-Ubuntu
[25375.926848] "echo 0 > /proc/sys/
[25375.934835] txg_sync D 0 2333 2 0x00000028
[25375.934850] Call trace:
[25375.934869] __switch_
[25375.934879] __schedule+
[25375.934887] schedule+0x3c/0xb8
[25375.934899] schedule_
[25375.934908] io_schedule_
[25375.934946] __cv_timedwait_
[25375.934982] __cv_timedwait_
[25375.935205] zio_wait+
[25375.935423] dsl_pool_
[25375.935650] spa_sync+
[25375.935867] txg_sync_
[25375.935911] thread_
[25375.935924] kthread+0xfc/0x128
[25375.935935] ret_from_
[25375.936017] INFO: task zbackup:75339 blocked for more than 120 seconds.
[25375.942780] Tainted: P C OE 5.4.0-1018-raspi #20-Ubuntu
[25375.949710] "echo 0 > /proc/sys/
[25375.957702] zbackup D 0 75339 5499 0x00000000
[25375.957716] Call trace:
[25375.957732] __switch_
[25375.957742] __schedule+
[25375.957750] schedule+0x3c/0xb8
[25375.957789] cv_wait_
[25375.957823] __cv_wait+0x30/0x40 [spl]
[25375.958045] zil_commit_
[25375.958263] zil_commit+
[25375.958481] zfs_create+
[25375.958698] zpl_create+
[25375.958711] lookup_
[25375.958721] do_last+0x260/0x8c0
[25375.958730] path_openat+
[25375.958739] do_filp_
[25375.958752] do_sys_
[25375.958763] __arm64_
[25375.958773] el0_svc_
[25375.958781] el0_svc_
[25375.958791] el0_svc+0x10/0x2cc
[25375.958801] INFO: task zbackup:95187 blocked for more than 120 seconds.
[25375.965564] Tainted: P C OE 5.4.0-1018-raspi #20-Ubuntu
[25375.972492] "echo 0 > /proc/sys/
[25375.980479] zbackup D 0 95187 5499 0x00000000
[25375.980493] Call trace:
[25375.980514] __switch_
[25375.980525] __schedule+
[25375.980536] schedule+0x3c/0xb8
[25375.980578] cv_wait_
[25375.980612] __cv_wait+0x30/0x40 [spl]
[25375.980834] zil_commit_
[25375.981052] zil_commit+
[25375.981280] zfs_write+
[25375.981498] zpl_write_
[25375.981726] zpl_iter_
[25375.981766] new_sync_
[25375.981776] __vfs_write+
[25375.981784] vfs_write+
[25375.981794] ksys_write+
[25375.981803] __arm64_
[25375.981813] el0_svc_
[25375.981821] el0_svc_
[25375.981830] el0_svc+0x10/0x2cc
[25375.981842] INFO: task zbackup:95188 blocked for more than 120 seconds.
[25375.988615] Tainted: P C OE 5.4.0-1018-raspi #20-Ubuntu
[25375.995543] "echo 0 > /proc/sys/
[25376.003532] zbackup D 0 95188 5499 0x00000008
[25376.003545] Call trace:
[25376.003561] __switch_
[25376.003571] __schedule+
[25376.003579] schedule+0x3c/0xb8
[25376.003619] cv_wait_
[25376.003654] __cv_wait+0x30/0x40 [spl]
[25376.003875] zil_commit_
[25376.004092] zil_commit+
[25376.004312] zfs_write+
[25376.004530] zpl_write_
[25376.004748] zpl_iter_
[25376.004759] new_sync_
[25376.004768] __vfs_write+
[25376.004776] vfs_write+
[25376.004785] ksys_write+
[25376.004794] __arm64_
[25376.004803] el0_svc_
[25376.004811] el0_svc_
[25376.004820] el0_svc+0x10/0x2cc
[25376.004917] INFO: task zbackup:95231 blocked for more than 120 seconds.
[25376.011678] Tainted: P C OE 5.4.0-1018-raspi #20-Ubuntu
[25376.018587] "echo 0 > /proc/sys/
[25376.026554] zbackup D 0 95231 5499 0x00000000
[25376.026562] Call trace:
[25376.026575] __switch_
[25376.026582] __schedule+
[25376.026588] schedule+0x3c/0xb8
[25376.026615] cv_wait_
[25376.026636] __cv_wait+0x30/0x40 [spl]
[25376.026771] zil_commit_
[25376.026904] zil_commit+
[25376.027042] zfs_write+
[25376.027174] zpl_write_
[25376.027309] zpl_iter_
[25376.027317] new_sync_
[25376.027322] __vfs_write+
[25376.027332] vfs_write+
[25376.027338] ksys_write+
[25376.027344] __arm64_
[25376.027350] el0_svc_
[25376.027355] el0_svc_
[25376.027361] el0_svc+0x10/0x2cc
[25376.027374] INFO: task z_wr_int:95546 blocked for more than 120 seconds.
[25376.034203] Tainted: P C OE 5.4.0-1018-raspi #20-Ubuntu
[25376.041113] "echo 0 > /proc/sys/
[25376.049080] z_wr_int D 0 95546 2 0x00000028
[25376.049089] Call trace:
[25376.049100] __switch_
[25376.049106] __schedule+
[25376.049111] schedule+0x3c/0xb8
[25376.049116] io_schedule+
[25376.049123] rq_qos_
[25376.049129] wbt_wait+0xb4/0xf0
[25376.049135] __rq_qos_
[25376.049143] blk_mq_
[25376.049150] generic_
[25376.049156] submit_
[25376.049292] vdev_disk_
[25376.049423] zio_vdev_
[25376.049554] zio_nowait+
[25376.049693] vdev_queue_
[25376.049847] zio_vdev_
[25376.049986] zio_execute+
[25376.050010] taskq_thread+
[25376.050025] kthread+0xfc/0x128
[25376.050033] ret_from_
[25376.050041] INFO: task z_wr_int:95553 blocked for more than 120 seconds.
[25376.056873] Tainted: P C OE 5.4.0-1018-raspi #20-Ubuntu
[25376.063780] "echo 0 > /proc/sys/
[25376.071746] z_wr_int D 0 95553 2 0x00000028
[25376.071754] Call trace:
[25376.071764] __switch_
[25376.071770] __schedule+
[25376.071775] schedule+0x3c/0xb8
[25376.071780] io_schedule+
[25376.071787] rq_qos_
[25376.071792] wbt_wait+0xb4/0xf0
[25376.071798] __rq_qos_
[25376.071806] blk_mq_
[25376.071813] generic_
[25376.071819] submit_
[25376.071955] vdev_disk_
[25376.072086] zio_vdev_
[25376.072217] zio_nowait+
[25376.072346] vdev_queue_
[25376.072476] zio_vdev_
[25376.072606] zio_execute+
[25376.072629] taskq_thread+
[25376.072637] kthread+0xfc/0x128
[25376.072643] ret_from_
[25496.722315] INFO: task z_wr_iss_h:2161 blocked for more than 241 seconds.
[25496.729251] Tainted: P C OE 5.4.0-1018-raspi #20-Ubuntu
[25496.736195] "echo 0 > /proc/sys/
[25496.744197] z_wr_iss_h D 0 2161 2 0x00000028
[25496.744247] Call trace:
[25496.744268] __switch_
[25496.744279] __schedule+
[25496.744288] schedule+0x3c/0xb8
[25496.744296] io_schedule+
[25496.744308] rq_qos_
[25496.744317] wbt_wait+0xb4/0xf0
[25496.744327] __rq_qos_
[25496.744340] blk_mq_
[25496.744351] generic_
[25496.744362] submit_
[25496.744600] vdev_disk_
[25496.744821] zio_vdev_
[25496.745039] zio_nowait+
[25496.745256] vdev_mirror_
[25496.745473] zio_vdev_
[25496.745690] zio_execute+
[25496.745730] taskq_thread+
[25496.745742] kthread+0xfc/0x128
[25496.745753] ret_from_
[25496.745853] INFO: task zbackup:75339 blocked for more than 241 seconds.
[25496.752621] Tainted: P C OE 5.4.0-1018-raspi #20-Ubuntu
[25496.759557] "echo 0 > /proc/sys/
[25496.767547] zbackup D 0 75339 5499 0x00000000
[25496.767561] Call trace:
[25496.767585] __switch_
[25496.767595] __schedule+
[25496.767606] schedule+0x3c/0xb8
[25496.767646] cv_wait_
[25496.767682] __cv_wait+0x30/0x40 [spl]
[25496.767906] zil_commit_
[25496.768126] zil_commit+
[25496.768355] zfs_create+
[25496.768574] zpl_create+
[25496.768595] lookup_
[25496.768604] do_last+0x260/0x8c0
[25496.768614] path_openat+
[25496.768624] do_filp_
[25496.768636] do_sys_
[25496.768647] __arm64_
[25496.768657] el0_svc_
[25496.768665] el0_svc_
[25496.768674] el0_svc+0x10/0x2cc
tyson@ubuntu:~$ lsb_release -rd
Description: Ubuntu 20.04.1 LTS
Release: 20.04
tyson@ubuntu:~$ apt-cache policy zfs-dkms
zfs-dkms:
Installed: 0.8.3-1ubuntu12.4
Candidate: 0.8.3-1ubuntu12.4
Version table:
*** 0.8.3-1ubuntu12.4 500
500 http://
500 http://
100 /var/lib/
0.
500 http://
500 http://
ProblemType: Bug
DistroRelease: Ubuntu 20.04
Package: zfs-dkms 0.8.3-1ubuntu12.4
ProcVersionSign
Uname: Linux 5.4.0-1018-raspi aarch64
NonfreeKernelMo
ApportVersion: 2.20.11-0ubuntu27.8
Architecture: arm64
CasperMD5CheckR
Date: Sat Oct 10 08:50:34 2020
ImageMediaBuild: 20200423.1
PackageArchitec
ProcEnviron:
TERM=xterm-
PATH=(custom, no user)
XDG_RUNTIME_
LANG=C.UTF-8
SHELL=/bin/bash
SourcePackage: zfs-linux
UpgradeStatus: No upgrade log present (probably fresh install)
Changed in zfs:
status: Unknown → New
Changed in zfs-linux (Ubuntu):
assignee: Colin Ian King (colin-king) → nobody
It is worth noting that deduplication consumes a lot of memory. Deduplication tables consume memory and eventually spill over onto disk; this causes extra read and write operations for every block of data on which deduplication is attempted, and hence more I/O waits.
A system with large pools and little memory does not perform deduplication well.
So, if you run the following command on your pool (pool-ssd in this example):
sudo zdb -s pool-ssd
...you will see something like the following:
pool: pool-ssd
state: ONLINE
scan: scrub repaired 0B in 0 days 00:31:14 with 0 errors on Wed Sep 2 14:35:52 2020
config:
NAME                                            STATE     READ WRITE CKSUM
pool-ssd                                        ONLINE       0     0     0
  mirror-0                                      ONLINE       0     0     0
    ata-INTEL_SSDSC2BW480H6_CVTR608104NW480EGN  ONLINE       0     0     0
    ata-INTEL_SSDSC2BW480H6_CVTR552500S0480EGN  ONLINE       0     0     0
errors: No known data errors
dedup: DDT entries 4258028, size 448B on disk, 306B in core
bucket              allocated                       referenced
______   ______________________________   ______________________________
refcnt   blocks   LSIZE   PSIZE   DSIZE   blocks   LSIZE   PSIZE   DSIZE
------   ------   -----   -----   -----   ------   -----   -----   -----
1 3.63M 464G 226G 226G 3.63M 464G 226G 226G
2 376K 46.1G 24.3G 24.3G 822K 100G 53.2G 53.2G
4 51.3K 5.60G 2.51G 2.51G 240K 26.3G 11.8G 11.8G
8 7.49K 925M 544M 544M 77.9K 9.41G 5.47G 5.47G
16 3.32K 415M 261M 261M 68.8K 8.41G 5.35G 5.35G
32 657 78.7M 62.1M 62.1M 29.3K 3.51G 2.79G 2.79G
64 389 46.2M 42.7M 42.7M 34.0K 4.03G 3.78G 3.78G
128 248 30.1M 29.5M 29.5M 44.8K 5.44G 5.32G 5.32G
256 339 42.1M 40.8M 40.8M 123K 15.4G 14.9G 14.9G
512 374 46.8M 46.1M 46.1M 271K 33.9G 33.5G 33.5G
1K 254 31.8M 31.3M 31.3M 355K 44.4G 43.8G 43.8G
2K 5 640K 513K 513K 12.9K 1.61G 1.20G 1.20G
4K 4 512K 512K 512K 22.7K 2.84G 2.84G 2.84G
8K 8 1M 897K 897K 89.3K 11.2G 10.1G 10.1G
64K 2 256K 6K 6K 226K 28.3G 679M 679M
Total 4.06M 517G 254G 254G 5.99M 759G 421G 421G
---
Looking at the "dedup: DDT entries 4258028, size 448B on disk, 306B in core" line, we have 4258028 dedup entries, each using 306 bytes in core, so that's ~1.2GB of memory for just the dedup table. The DDT is capped at 1/4 the size of the ARC, so on a small-memory system such as a Raspberry Pi much of the dedup table cannot be held in memory and has to stay on disk, which increases the I/O waits.
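As a quick sanity check on that ~1.2GB figure, the in-core footprint is just entry count times in-core bytes per entry (both numbers taken from the "dedup:" line above):

```shell
# Rough in-core DDT footprint, using the figures from the zdb output above.
entries=4258028        # "DDT entries 4258028"
bytes_per_entry=306    # "306B in core"
total=$((entries * bytes_per_entry))
echo "DDT in core: ${total} bytes (~$((total / 1024 / 1024)) MiB)"
```

That comes to roughly 1242 MiB, consistent with the ~1.2GB estimate, and well over a quarter of any plausible ARC on an 8GB Raspberry Pi.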
It may be worth disabling dedup to see if that resolves your issue; with large pools on a comparatively slow and memory-constrained system such as a Raspberry Pi, dedup is expensive.
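If you want to try that, dedup can be switched off on the pool's root dataset (a sketch using pool-ssd from this report). Note that this only affects newly written blocks; existing data stays deduplicated, and the DDT only shrinks as that data is rewritten or deleted:

```shell
# Turn off dedup for the pool's root dataset (inherited by child datasets).
# Only newly written blocks are affected; existing DDT entries remain until
# the deduplicated data is rewritten or removed.
sudo zfs set dedup=off pool-ssd

# Confirm the property took effect.
zfs get dedup pool-ssd
```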