I hit this bug as well on Ubuntu 22.04 with kernel 5.15.0-67-generic. We have a single RAID 5 array across 3 drives, 28T total. I'm switching to the workaround from comment #5.

Jun 4 12:48:11 server1 kernel: [1622699.548591] INFO: task md0_raid5:406 blocked for more than 120 seconds.
Jun 4 12:48:11 server1 kernel: [1622699.556202]       Tainted: G           OE     5.15.0-67-generic #74-Ubuntu
Jun 4 12:48:11 server1 kernel: [1622699.564101] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Jun 4 12:48:11 server1 kernel: [1622699.573063] task:md0_raid5       state:D stack:    0 pid:  406 ppid:     2 flags:0x00004000
Jun 4 12:48:11 server1 kernel: [1622699.573077] Call Trace:
Jun 4 12:48:11 server1 kernel: [1622699.573081]  <TASK>
Jun 4 12:48:11 server1 kernel: [1622699.573087]  __schedule+0x24e/0x590
Jun 4 12:48:11 server1 kernel: [1622699.573103]  schedule+0x69/0x110
Jun 4 12:48:11 server1 kernel: [1622699.573115]  raid5d+0x3d9/0x5f0 [raid456]
Jun 4 12:48:11 server1 kernel: [1622699.573140]  ? wait_woken+0x70/0x70
Jun 4 12:48:11 server1 kernel: [1622699.573151]  md_thread+0xad/0x170
Jun 4 12:48:11 server1 kernel: [1622699.573162]  ? wait_woken+0x70/0x70
Jun 4 12:48:11 server1 kernel: [1622699.573169]  ? md_write_inc+0x60/0x60
Jun 4 12:48:11 server1 kernel: [1622699.573176]  kthread+0x12a/0x150
Jun 4 12:48:11 server1 kernel: [1622699.573187]  ? set_kthread_struct+0x50/0x50
Jun 4 12:48:11 server1 kernel: [1622699.573197]  ret_from_fork+0x22/0x30
Jun 4 12:48:11 server1 kernel: [1622699.573212]  </TASK>
Jun 4 12:48:11 server1 kernel: [1622699.573231] INFO: task jbd2/dm-0-8:1375 blocked for more than 120 seconds.
Jun 4 12:48:11 server1 kernel: [1622699.581119]       Tainted: G           OE     5.15.0-67-generic #74-Ubuntu
Jun 4 12:48:11 server1 kernel: [1622699.589004] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Jun 4 12:48:11 server1 kernel: [1622699.597959] task:jbd2/dm-0-8     state:D stack:    0 pid: 1375 ppid:     2 flags:0x00004000
Jun 4 12:48:11 server1 kernel: [1622699.597968] Call Trace:
Jun 4 12:48:11 server1 kernel: [1622699.597970]  <TASK>
Jun 4 12:48:11 server1 kernel: [1622699.597973]  __schedule+0x24e/0x590
Jun 4 12:48:11 server1 kernel: [1622699.597984]  schedule+0x69/0x110
Jun 4 12:48:11 server1 kernel: [1622699.597992]  md_write_start.part.0+0x174/0x220
Jun 4 12:48:11 server1 kernel: [1622699.598002]  ? wait_woken+0x70/0x70
Jun 4 12:48:11 server1 kernel: [1622699.598024]  md_write_start+0x14/0x30
Jun 4 12:48:11 server1 kernel: [1622699.598032]  raid5_make_request+0x77/0x540 [raid456]
Jun 4 12:48:11 server1 kernel: [1622699.598051]  ? wait_woken+0x70/0x70
Jun 4 12:48:11 server1 kernel: [1622699.598058]  md_handle_request+0x12d/0x1b0
Jun 4 12:48:11 server1 kernel: [1622699.598065]  ? __blk_queue_split+0xfe/0x200
Jun 4 12:48:11 server1 kernel: [1622699.598075]  md_submit_bio+0x71/0xc0
Jun 4 12:48:11 server1 kernel: [1622699.598082]  __submit_bio+0x1a5/0x220
Jun 4 12:48:11 server1 kernel: [1622699.598091]  ? mempool_alloc_slab+0x17/0x20
Jun 4 12:48:11 server1 kernel: [1622699.598102]  __submit_bio_noacct+0x85/0x200
Jun 4 12:48:11 server1 kernel: [1622699.598110]  ? kmem_cache_alloc+0x1ab/0x2f0
Jun 4 12:48:11 server1 kernel: [1622699.598122]  submit_bio_noacct+0x4e/0x120
Jun 4 12:48:11 server1 kernel: [1622699.598131]  submit_bio+0x4a/0x130
Jun 4 12:48:11 server1 kernel: [1622699.598139]  submit_bh_wbc+0x18d/0x1c0
Jun 4 12:48:11 server1 kernel: [1622699.598151]  submit_bh+0x13/0x20
Jun 4 12:48:11 server1 kernel: [1622699.598160]  jbd2_journal_commit_transaction+0x861/0x17a0
Jun 4 12:48:11 server1 kernel: [1622699.598170]  ? __update_idle_core+0x93/0x120
Jun 4 12:48:11 server1 kernel: [1622699.598184]  kjournald2+0xa9/0x280
Jun 4 12:48:11 server1 kernel: [1622699.598190]  ? wait_woken+0x70/0x70
Jun 4 12:48:11 server1 kernel: [1622699.598197]  ? load_superblock.part.0+0xc0/0xc0
Jun 4 12:48:11 server1 kernel: [1622699.598202]  kthread+0x12a/0x150
Jun 4 12:48:11 server1 kernel: [1622699.598210]  ? set_kthread_struct+0x50/0x50
Jun 4 12:48:11 server1 kernel: [1622699.598218]  ret_from_fork+0x22/0x30
Jun 4 12:48:11 server1 kernel: [1622699.598229]  </TASK>