System Hangs Running RT Kernel with Disk Stress-ng

Bug #1998536 reported by Michael Reed
This bug affects 1 person
Affects: ubuntu-realtime
Status: Incomplete
Importance: High
Assigned to: Joseph Salisbury

Bug Description

While running the disk stress-ng tests, the system tends to hang. I had to skip the tests to complete the certification run. This is where the test hangs:

stress-ng: info: [9295] dentry: removing 2048 entries
stress-ng: info: [9254] dentry: 5787691 dentries allocated
stress-ng: info: [9254] dentry: removing 2048 entries
stress-ng: info: [9255] dentry: removing 2048 entries
stress-ng: info: [9279] dentry: removing 2048 entries
stress-ng: info: [9285] dentry: removing 2048 entries
stress-ng: info: [9269] dentry: removing 2048 entries
stress-ng: info: [9267] dentry: removing 2048 entries
stress-ng: info: [9287] dentry: removing 2048 entries
stress-ng: info: [9298] dentry: removing 2048 entries
stress-ng: info: [9291] dentry: removing 2048 entries
stress-ng: info: [9258] dentry: removing 2048 entries
stress-ng: info: [9253] successful run completed in 244.32s (4 mins, 4.32 secs)
stress-ng: info: [9306] setting to a 240 second (4 mins, 0.00 secs) run per stressor
stress-ng: info: [9306] dispatching hogs: 48 dir
stress-ng: info: [9306] successful run completed in 247.98s (4 mins, 7.98 secs)
stress-ng: info: [9412] setting to a 240 second (4 mins, 0.00 secs) run per stressor
stress-ng: info: [9412] dispatching hogs: 48 fallocate
stress-ng: info: [9412] successful run completed in 240.02s (4 mins, 0.02 secs)
stress-ng: info: [9465] setting to a 240 second (4 mins, 0.00 secs) run per stressor
stress-ng: info: [9465] dispatching hogs: 48 fiemap
stress-ng: info: [9786] setting to a 240 second (4 mins, 0.00 secs) run per stressor
stress-ng: info: [9786] dispatching hogs: 48 filename
stress-ng: info: [9881] setting to a 240 second (4 mins, 0.00 secs) run per stressor
stress-ng: info: [9881] dispatching hogs: 48 flock
stress-ng: info: [9945] setting to a 240 second (4 mins, 0.00 secs) run per stressor
stress-ng: info: [9945] dispatching hogs: 48 fstat

Steps to reproduce
1. Install canonical-certification-server
2. Run storage certification
    a. test-storage

OS: 22.04.1
Kernel: $ uname -a
Linux torchtusk 5.15.0-1025-realtime #28-Ubuntu SMP PREEMPT_RT Fri Oct 28 23:19:16 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux

Revision history for this message
Michael Reed (mreed8855) wrote (last edit ):

Disk stress-ng does not have any issues on the normal Jammy 5.15 kernel. I have seen this behavior on a few servers:

HPE DL110
HPE DL380 Gen10
Dell PowerEdge R7525
Dell PowerEdge R7515

Revision history for this message
Michael Reed (mreed8855) wrote :

$ sudo apt-cache policy canonical-certification-server
canonical-certification-server:
  Installed: 0.58.0~ppa~ubuntu22.04.1
  Candidate: 0.58.0~ppa~ubuntu22.04.1
  Version table:
 *** 0.58.0~ppa~ubuntu22.04.1 500
        500 https://ppa.launchpadcontent.net/hardware-certification/public/ubuntu jammy/main amd64 Packages
        100 /var/lib/dpkg/status

Revision history for this message
Joseph Salisbury (jsalisbury) wrote :

I'd like to try and reproduce this bug. Are there any specific parameters you pass to kick off the disk stress-ng test?

To reproduce, I would install the certification test suite:
sudo apt-get install canonical-certification-server

Then what is the command line invocation you use for stress-ng?

Changed in ubuntu-realtime:
importance: Undecided → High
status: New → Triaged
assignee: nobody → Joseph Salisbury (jsalisbury)
Revision history for this message
Joseph Salisbury (jsalisbury) wrote :

I have one additional request. I just created version -1029 of the realtime kernel, which has the latest upstream RT patches. This kernel can be downloaded from:

https://people.canonical.com/~jsalisbury/lp1998536/

Can you try this kernel and see if it also exhibits the bug?

Revision history for this message
Colin Ian King (colin-king) wrote (last edit ):

It may also be useful to add the --klog-check option to stress-ng and re-run the stressors manually. This will pick up the kernel log messages during the run and dump them to the output of stress-ng, so one can see which stressor has triggered the issue.
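
For example, to re-run one of the disk-related stressors with kernel log checking (the stressor choice and run time here are only illustrative):

sudo stress-ng --hdd 0 -t 240 --klog-check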

Revision history for this message
Joseph Salisbury (jsalisbury) wrote :

Hey, Colin! Do you know the option to pass to stress-ng that will trigger the 'test-storage' or disk-specific tests? I ran 'stress-ng -h', but I didn't see anything disk-specific.

Maybe I need to run the entire canonical-certification-server suite to trigger this.

Revision history for this message
Joseph Salisbury (jsalisbury) wrote :

I found how to get the certification packages. I performed the following:

sudo add-apt-repository ppa:hardware-certification/public
sudo apt-get update
sudo apt-get install canonical-certification-server

However, I get a message that the suite is now deprecated:
canonical-certification-server
DEPRECATED: /usr/bin/canonical-certification-server

*** ATTENTION! ***
This command is deprecated and no longer functions
For certification testing, you should now use the version specific
commands that initiate certification runs.

certify-18.04: launches the 18.04 certification suite
certify-16.04: launches the 16.04 certification suite

Additionally there are other commands for running the smaller test
plans for targeted testing of Network, Storage, USB, and other
areas.

Please refer to the current Self-Test Guide for more information:
https://certification.canonical.com/

Revision history for this message
Michael Reed (mreed8855) wrote :

Hi Joseph,

Yes, the certify-18.04 and certify-16.04 commands have been deprecated. If you want to run the full certification on a 22.04 system, use:
certify-22.04

If you just want to run the storage tests, then use:
test-storage

If you would like to run our stress-ng script:

cd /usr/lib/checkbox-provider-base/bin/
sudo ./stress_ng_test.py disk --device sdb --base-time 240

Change sdb to your target device.

All three options will run the stress-ng test case.

Revision history for this message
Joseph Salisbury (jsalisbury) wrote :

Thanks for the info, Michael. I'll see if I can reproduce this issue and do some debugging.

Changed in ubuntu-realtime:
status: Triaged → In Progress
Revision history for this message
Colin Ian King (colin-king) wrote (last edit ):

@Joseph,

Try:

stress-ng --hdd 0 -t 60
stress-ng --iomix 0 -t 60

or

stress-ng --seq 0 -t 60 --class io
stress-ng --seq 0 -t 60 --class filesystem

Revision history for this message
Joseph Salisbury (jsalisbury) wrote :

Running the test with the following options does not exhibit the bug:
$ sudo ./stress_ng_test.py disk --device sdb --base-time 240

07 Dec 21:39: Running stress-ng xattr stressor for 240 seconds...
stress-ng: info: [3968515] setting to a 240 second (4 mins, 0.00 secs) run per stressor
stress-ng: info: [3968515] dispatching hogs: 96 xattr
stress-ng: info: [3968515] successful run completed in 240.57s (4 mins, 0.57 secs)

retval is 0
**************************************************************
* stress-ng test passed!
**************************************************************

I will try the suggestions posted by Colin as well as running the full certification test suite.

Revision history for this message
Joseph Salisbury (jsalisbury) wrote :

Also, this could be hardware specific. Do you have other hardware to run the certification test on, to see if the bug is happening there as well?

For the system that currently exhibits the bug, can you run the following to gather more detailed info:

apport-collect 1998536
or to a file:
apport-bug --save /tmp/report.1998536 linux

If apport can't be run:
1) uname -a > uname-a.log
2) dmesg > dmesg.log
3) sudo lspci -vvnn > lspci-vvnn.log
4) cat /proc/version_signature > version.log

Revision history for this message
Joseph Salisbury (jsalisbury) wrote (last edit ):

I was not able to reproduce this bug by running the individual tests suggested in comments #11 and #13. However, it seems I can reproduce the hang by running the full test-storage suite ('test-storage') and keeping all menu options enabled.

It appears some type of deadlock is happening in the kernel, since my existing terminals still accept a carriage return, but then lock up if I run a command such as top. Here is the output I see from the test when the hang happens:

-------------[ Running job 67 / 97. Estimated time left: unknown ]--------------
------------------[ Disk stress_ng test for PERC_H355_Front ]-------------------
ID: com.canonical.certification::disk/disk_stress_ng_sda
Category: com.canonical.plainbox::disk
... 8< -------------------------------------------------------------------------
STRESS_NG_DISK_TIME env var is not found, stress_ng disk running time is default value
stress-ng: info: [3340418] setting to a 240 second (4 mins, 0.00 secs) run per stressor
stress-ng: info: [3340418] dispatching hogs: 96 aio
stress-ng: info: [3340418] successful run completed in 240.43s (4 mins, 0.43 secs)
stress-ng: info: [3340616] setting to a 240 second (4 mins, 0.00 secs) run per stressor
stress-ng: info: [3340616] dispatching hogs: 96 aiol
stress-ng: info: [3340616] successful run completed in 240.14s (4 mins, 0.14 secs)
stress-ng: info: [3342731] setting to a 240 second (4 mins, 0.00 secs) run per stressor
stress-ng: info: [3342731] dispatching hogs: 96 chdir
stress-ng: info: [3342731] successful run completed in 251.67s (4 mins, 11.67 secs)
stress-ng: info: [3342828] setting to a 240 second (4 mins, 0.00 secs) run per stressor
stress-ng: info: [3342828] dispatching hogs: 96 chmod
stress-ng: info: [3342828] successful run completed in 240.01s (4 mins, 0.01 secs)
stress-ng: info: [3342931] setting to a 240 second (4 mins, 0.00 secs) run per stressor
stress-ng: info: [3342931] dispatching hogs: 96 chown
stress-ng: info: [3342931] successful run completed in 240.01s (4 mins, 0.01 secs)
stress-ng: info: [3343029] setting to a 240 second (4 mins, 0.00 secs) run per stressor
stress-ng: info: [3343029] dispatching hogs: 96 dentry
stress-ng: info: [3343030] stress-ng-dentry: 5597426 dentries allocated
stress-ng: info: [3343029] successful run completed in 246.89s (4 mins, 6.89 secs)
stress-ng: info: [3343131] setting to a 240 second (4 mins, 0.00 secs) run per stressor
stress-ng: info: [3343131] dispatching hogs: 96 dir
stress-ng: info: [3343131] successful run completed in 245.69s (4 mins, 5.69 secs)
stress-ng: info: [3343230] setting to a 240 second (4 mins, 0.00 secs) run per stressor
stress-ng: info: [3343230] dispatching hogs: 96 fallocate
stress-ng: info: [3343230] successful run completed in 240.04s (4 mins, 0.04 secs)
stress-ng: info: [3343328] setting to a 240 second (4 mins, 0.00 secs) run per stressor
stress-ng: info: [3343328] dispatching hogs: 96 fiemap
stress-ng: info: [3343815] setting to a 240 second (4 mins, 0.00 secs) run per stressor
stress-ng: info: [3343815] dispatching hogs: 96 filename
stress-ng: info: [3343921] setting to a 240 second (4 mins, 0.00 secs) run per stressor
...


Revision history for this message
Joseph Salisbury (jsalisbury) wrote :

An additional data point: I cannot reproduce the bug using a VM and the same steps.

Specific details from the test suite where the bug happens:
-------------[ Running job 67 / 97. Estimated time left: unknown ]--------------
------------------[ Disk stress_ng test for PERC_H355_Front ]-------------------
ID: com.canonical.certification::disk/disk_stress_ng_sda
Category: com.canonical.plainbox::disk

Revision history for this message
Joseph Salisbury (jsalisbury) wrote :

This bug also happens with the v6.1-rc7 kernel and the rt5 real-time patch. I also confirmed the bug/system hang does not happen with just v6.1-rc7 without the real-time patch applied.

I will bisect through the RT patches to see if I can find the specific patch(es) that causes the bug. I will use the v5.15 kernel and its RT patches, since v5.15 has the patches broken out individually, and they can be applied one at a time with 'git am'.
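
A rough sketch of that workflow, assuming the broken-out v5.15 RT series has been unpacked into ../rt-patches (the directory name is illustrative; the tarball's quilt-style 'series' file gives the apply order):

cd linux
git checkout v5.15
# apply the RT patches one at a time, rebuilding and retesting along
# the way to narrow down which patch introduces the hang
grep -v '^#' ../rt-patches/series | while read -r p; do
    git am "../rt-patches/$p" || break
done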

Revision history for this message
Joseph Salisbury (jsalisbury) wrote :

I've been bisecting this bug down. It appears the bug has existed back to v5.15-rc3-rt5. I'll test older versions, but it may be that this bug has always existed with PREEMPT_RT enabled.

Revision history for this message
Michael Reed (mreed8855) wrote :

Hi Joseph,

Thank you for the update.

Revision history for this message
Joseph Salisbury (jsalisbury) wrote :

The bisect has not provided much more detail. This issue does not appear to be a regression and appears to have always existed.

I was able to get additional debug info and a call trace by enabling CONFIG_DEBUG_PREEMPT, CONFIG_PROVE_LOCKING and CONFIG_JBD2_DEBUG. It also shows a circular locking issue. I will analyze this data for the next steps. I'll post the logs in the next comments.
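
For reference, those options can be toggled in the kernel build tree with scripts/config before rebuilding (a minimal sketch; lockdep pulls in its own dependencies):

scripts/config --enable DEBUG_PREEMPT --enable PROVE_LOCKING --enable JBD2_DEBUG
make olddefconfig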

Revision history for this message
Joseph Salisbury (jsalisbury) wrote :

LOCKING ISSUE:

Feb 24 00:24:34 taycet kernel: [ 482.097302] ======================================================
Feb 24 00:24:34 taycet kernel: [ 482.097302] WARNING: possible circular locking dependency detected
Feb 24 00:24:34 taycet kernel: [ 482.097303] 5.15.0-1033-realtime #36~lp1998536DEBUG Not tainted
Feb 24 00:24:34 taycet kernel: [ 482.097305] ------------------------------------------------------
Feb 24 00:24:34 taycet kernel: [ 482.097305] sos/6644 is trying to acquire lock:
Feb 24 00:24:34 taycet kernel: [ 482.097306] ff385579af220758 ((softirq_ctrl.lock)){+.+.}-{2:2}, at: __local_bh_disable_ip+0xf3/0x1a0
Feb 24 00:24:34 taycet kernel: [ 482.097313]
Feb 24 00:24:34 taycet kernel: [ 482.097313] but task is already holding lock:
Feb 24 00:24:34 taycet kernel: [ 482.097314] ffffffffac6ad488 (raw_v6_hashinfo.lock){++.+}-{2:2}, at: raw_seq_start+0x2a/0x80
Feb 24 00:24:34 taycet kernel: [ 482.097319]
Feb 24 00:24:34 taycet kernel: [ 482.097319] which lock already depends on the new lock.
Feb 24 00:24:34 taycet kernel: [ 482.097319]
Feb 24 00:24:34 taycet kernel: [ 482.097319]
Feb 24 00:24:34 taycet kernel: [ 482.097319] the existing dependency chain (in reverse order) is:
Feb 24 00:24:34 taycet kernel: [ 482.097319]
Feb 24 00:24:34 taycet kernel: [ 482.097319] -> #1 (raw_v6_hashinfo.lock){++.+}-{2:2}:
Feb 24 00:24:34 taycet kernel: [ 482.097321] __lock_acquire+0x4c1/0x990
Feb 24 00:24:34 taycet kernel: [ 482.097324] lock_acquire+0xce/0x240
Feb 24 00:24:34 taycet kernel: [ 482.097325] rt_write_lock+0x38/0xc0
Feb 24 00:24:34 taycet kernel: [ 482.097328] raw_hash_sk+0x49/0xe0
Feb 24 00:24:34 taycet kernel: [ 482.097329] inet6_create.part.0+0x296/0x540
Feb 24 00:24:34 taycet kernel: [ 482.097331] inet6_create+0x16/0x30
Feb 24 00:24:34 taycet kernel: [ 482.097332] __sock_create+0x19c/0x360
Feb 24 00:24:34 taycet kernel: [ 482.097334] sock_create_kern+0x14/0x20
Feb 24 00:24:34 taycet kernel: [ 482.097336] inet_ctl_sock_create+0x37/0x90
Feb 24 00:24:34 taycet kernel: [ 482.097338] icmpv6_sk_init+0x67/0xf0
Feb 24 00:24:34 taycet kernel: [ 482.097340] ops_init+0x3f/0x1f0
Feb 24 00:24:34 taycet kernel: [ 482.097341] register_pernet_operations+0x100/0x1f0
Feb 24 00:24:34 taycet kernel: [ 482.097342] register_pernet_subsys+0x29/0x50
Feb 24 00:24:34 taycet kernel: [ 482.097343] icmpv6_init+0x15/0x57
Feb 24 00:24:34 taycet kernel: [ 482.097346] inet6_init+0x121/0x3af
Feb 24 00:24:34 taycet kernel: [ 482.097348] do_one_initcall+0x49/0x180
Feb 24 00:24:34 taycet kernel: [ 482.097351] do_initcalls+0xcc/0xf0
Feb 24 00:24:34 taycet kernel: [ 482.097354] kernel_init_freeable+0x1df/0x233
Feb 24 00:24:34 taycet kernel: [ 482.097355] kernel_init+0x1a/0x130
Feb 24 00:24:34 taycet kernel: [ 482.097357] ret_from_fork+0x1f/0x30
Feb 24 00:24:34 taycet kernel: [ 482.097359]
Feb 24 00:24:34 taycet kernel: [ 482.097359] -> #0 ((softirq_ctrl.lock)){+.+.}-{2:2}:
Feb 24 00:24:34 taycet kernel: [ 482.097361] check_prev_add+0x93/0xc10
Feb 24 00:24:34 taycet kernel: [ 482.097362]...

Revision history for this message
Joseph Salisbury (jsalisbury) wrote :

CALL TRACE FROM TEST:

[ 4461.908558] INFO: task stress-ng:17065 blocked for more than 622 seconds.
[ 4461.908559] Not tainted 5.15.0-1033-realtime #36~lp1998536DEBUG
[ 4461.908559] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[ 4461.908560] task:stress-ng state:D stack: 0 pid:17065 ppid: 17059 flags:0x00004006
[ 4461.908561] Call Trace:
[ 4461.908562] <TASK>
[ 4461.908564] __schedule+0x28c/0x5f0
[ 4461.908566] schedule+0x7c/0x170
[ 4461.908568] jbd2_log_wait_commit+0x15e/0x200
[ 4461.908571] ? wait_woken+0x70/0x70
[ 4461.908573] jbd2_complete_transaction+0x7d/0xb0
[ 4461.908575] ext4_fc_commit+0x1a0/0x1d0
[ 4461.908578] ext4_sync_file+0xde/0x2d0
[ 4461.908581] vfs_fsync_range+0x46/0x90
[ 4461.908583] ? up_write+0x44/0x50
[ 4461.908584] ext4_buffered_write_iter+0x132/0x180
[ 4461.908587] ext4_file_write_iter+0x43/0x60
[ 4461.908589] new_sync_write+0x11a/0x1b0
[ 4461.908593] vfs_write+0x2bc/0x3d0
[ 4461.908595] ksys_write+0x70/0xf0
[ 4461.908598] __x64_sys_write+0x19/0x20
[ 4461.908600] do_syscall_64+0x59/0xc0
[ 4461.908603] ? irqentry_exit_to_user_mode+0x27/0x40
[ 4461.908605] ? irqentry_exit+0x6b/0x90
[ 4461.908606] ? exc_page_fault+0xa6/0x120
[ 4461.908609] entry_SYSCALL_64_after_hwframe+0x61/0xcb
[ 4461.908611] RIP: 0033:0x7f58ba4eca37
[ 4461.908612] RSP: 002b:00007fffecc50228 EFLAGS: 00000246 ORIG_RAX: 0000000000000001
[ 4461.908613] RAX: ffffffffffffffda RBX: 00007fffecc51370 RCX: 00007f58ba4eca37
[ 4461.908614] RDX: 0000000000000001 RSI: 00007fffecc51297 RDI: 0000000000000004
[ 4461.908615] RBP: 0000000000000004 R08: 00000000000042bf R09: 00007fffecc50030
[ 4461.908615] R10: 00007fffecc501e0 R11: 0000000000000246 R12: 00007f58b7416000
[ 4461.908616] R13: 0000000000000004 R14: 0000000000000001 R15: 00007fffecc51297

Revision history for this message
Joseph Salisbury (jsalisbury) wrote :

sysrq-w during hang.

Revision history for this message
Joseph Salisbury (jsalisbury) wrote :

output of top during hang.

Revision history for this message
Joseph Salisbury (jsalisbury) wrote :

ps showing priorities while test is running.

Revision history for this message
Colin Ian King (colin-king) wrote (last edit ):

One way to debug RT Ubuntu kernels is to start with the stable kernel, apply the RT patches (i.e. a non-Ubuntu kernel), and build with the Ubuntu config. Then see if the bug occurs. If it does not, then it must be some extra addition in the Ubuntu kernel that is causing the issue (e.g. AppArmor, etc.).

In the past, when working on the Ubuntu RT kernels, I found that the stable kernel + RT patches work well, and only when the Ubuntu config and/or Ubuntu extras are added do we get RT kernel issues.
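
A minimal sketch of that approach (the kernel version is illustrative, and the matching patch-5.15-rt<N>.patch.xz would come from https://cdn.kernel.org/pub/linux/kernel/projects/rt/5.15/):

wget https://cdn.kernel.org/pub/linux/kernel/v5.x/linux-5.15.tar.xz
tar xf linux-5.15.tar.xz && cd linux-5.15
xz -dc ../patch-5.15-rt<N>.patch.xz | patch -p1      # apply the RT patch set
cp /boot/config-"$(uname -r)" .config                # reuse the running Ubuntu config
make olddefconfig
make -j"$(nproc)" bindeb-pkg                         # build installable .debs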

Revision history for this message
Joseph Salisbury (jsalisbury) wrote :

I stepped back from the bisect/debugging and looked at the higher-level stats. The stress-ng test is started with one process per core, and there are 96 of them. I looked at top[3] during a hang, and many of the stress-ng processes are running ('R'). However, sysrq-w[2] also shows many stress-ng processes are 'D', in uninterruptible sleep. What also sticks out to me is that all the stress-ng processes are running as root with a priority of 20. Looking back at one of the call traces[1], I see jbd2 stuck in an uninterruptible state:
...
[ 4461.908213] task:journal-offline state:D stack: 0 pid:17541 ppid: 1 flags:0x00000226
...

The jbd2 kernel thread is also running with a priority of 20[4]. When the hang happens, jbd2 is also stuck in an uninterruptible state (as well as systemd-journal):
...
1521 root 20 0 0 0 0 D 0.0 0.0 4:10.48 jbd2/sda2-8
1593 root 19 -1 64692 15832 14512 D 0.0 0.1 0:01.54 systemd-journal
...

I am pinning all of the stress-ng threads to cores 1-95 and the kernel threads to a housekeeping cpu, 0.

Output from cmdline:
"BOOT_IMAGE=/boot/vmlinuz-5.15.0-1033-realtime root=UUID=3583d8c4-d539-439f-9d50-4341675268cc ro console=tty0 console=ttyS0,115200 skew_tick=1 isolcpus=managed_irq,domain,1-95 intel_pstate=disable nosoftlockup tsc=nowatchdog crashkernel=0M-2G:128M,2G-6G:256M,6G-8G:512M,8G-:768M"

However, even with this pinning, stress-ng ends up running on CPU 0, per the ps output[4]. This appears to be causing a deadlock between jbd2 and the stress-ng processes, since they share the same priority/niceness.

To confirm this idea, I started test-storage / stress-ng so they had a lower priority than jbd2. I used the following:
sudo nice -10 test-storage

This causes jbd2 to continue to run with a priority of 20, but all the stress-ng threads are run with a priority of 30:

PSR TID PID COMMAND %CPU PRI NI
0 1517 1517 jbd2/sda2-8 5.0 20 0
0 125875 125875 stress-ng 15.5 30 10
0 125882 125882 stress-ng 4.4 30 10
0 125925 125925 stress-ng 4.4 30 10
...

By adding 'nice -10', the test completes without hanging. It appears the system hang was the system waiting for I/O to complete, which would never happen since the jbd2 threads cannot preempt stress-ng, resulting in a deadlock.

Michael, could you also try running with the following command to confirm the results:
sudo nice -10 test-storage

If this resolves the bug, there are several options:
1. Run the cert suite with a nice value for real-time tests.
2. Change the tests so they do not run as root.
3. Tune the real-time system so stress-ng threads are pinned to isolated cores and kernel threads are on a housekeeping-only core.

I'm going to investigate option 3. I am assigning cores 1-95 as the isolated cores, so stress-ng should not run on core 0, but it is. I'm going to figure out why this is happening.
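
For anyone reproducing this, one way to watch the relative priorities and CPU placement while the test runs (the column selection is just an example):

ps -eo psr,tid,pid,comm,pcpu,pri,ni | grep -E 'jbd2|stress-ng'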

[0] https://launchpadlibrarian.net/653810449/locking_issue.txt
[1] https://launchpadlibrarian.net/653810490/call_trace.txt
[2] https://launchpadlibrarian.net/655372944/sysrq-w.txt
[3] https://launchpadlibrarian.net/655374168/top-during-hang.txt
[4] https://lau...


Revision history for this message
Joseph Salisbury (jsalisbury) wrote :

I read that stress-ng has a taskset option: --taskset list.

Is there a way to pass options like this to the certification tests, like test-storage? Or is there a way to edit the code within the certification suite to set options for stress-ng?
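
For reference, when invoking stress-ng directly the --taskset option takes a CPU list, so something like the following (the stressor and CPU range are only illustrative) would keep the stressors off the housekeeping CPU:

sudo stress-ng --hdd 0 -t 240 --taskset 1-95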

Changed in ubuntu-realtime:
status: In Progress → Incomplete
Revision history for this message
Michael Reed (mreed8855) wrote :

I ran this (test-storage) on an HPE ProLiant DL380 Gen10 Plus system with the 5.15.0-1050-realtime kernel and I do not see this error.

Revision history for this message
Michael Reed (mreed8855) wrote :

Passing Disk test results

Revision history for this message
Joseph Salisbury (jsalisbury) wrote :

Thanks for the feedback, Michael. Did you have to modify the test for it to pass, or does the 1050 kernel pass without having to change any priorities?
