These observations were made on a system in our lab called "seuss". My suggestions for next steps would be:
1) Determine how reliable this failure is.
a) Deploy xenial/hwe on seuss, and downgrade the kernel to the version in the crash log (4.13.0-36-generic #40~16.04.1-Ubuntu). A rough install sequence is sketched after this list.
b) Run the com.canonical.certification::disk/disk_stress_ng_sda test ~10 times, and see how frequently we hit this failure (a simple rerun loop is sketched below).
2) Test another CRB1S system just like in 1). Does it also hit this issue?
3) Test with the latest upstream kernel (https://wiki.ubuntu.com/Kernel/MainlineBuilds). Is it still reproducible? (See the mainline install sketch below.)
4) If it fails reliably in #1, but never fails with the latest mainline kernel in #3, it may be a candidate for bisection. I'd suggest bisecting with upstream git, first verifying that v4.13 fails and master does not (to rule out Ubuntu-specific patches). Remember this will be backwards from a typical bisect: "good" here means it fails, "bad" means it does not fail. Therefore, the first "bad" commit would be the one that fixes it. The sketch at the end shows the inverted marking.
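For 1a, something like the following should pin the failing kernel. The package names are an assumption derived from the version string in the crash log, so confirm them before installing:

    # Install the specific HWE kernel from the crash log (package names
    # assumed from the 4.13.0-36 version string; confirm first with
    # "apt-cache search linux-image-4.13.0-36").
    sudo apt-get update
    sudo apt-get install linux-image-4.13.0-36-generic linux-headers-4.13.0-36-generic
    sudo reboot
    # If a newer kernel is still installed, pick 4.13.0-36 from the GRUB
    # menu, then confirm the running version:
    uname -r    # expect 4.13.0-36-generic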
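For 1b, a minimal pass/fail tally. Here ./run_disk_stress.sh is a hypothetical stand-in for however the disk/disk_stress_ng_sda job gets launched on the rig (e.g. via checkbox):

    # Rerun the stress test ~10 times and count how often it fails.
    # ./run_disk_stress.sh is a hypothetical wrapper that runs the
    # com.canonical.certification::disk/disk_stress_ng_sda job and
    # exits non-zero on failure.
    pass=0; fail=0
    for i in $(seq 1 10); do
        if ./run_disk_stress.sh; then
            pass=$((pass + 1))
        else
            fail=$((fail + 1))
            dmesg > dmesg-fail-$i.log    # keep the kernel log from each failure
        fi
    done
    echo "passed: $pass  failed: $fail"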
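For 3, the mainline builds are plain .deb packages, so installation is roughly as follows (exact file names vary per release; grab the generic amd64 set for the tag you want from the MainlineBuilds page):

    # After downloading the generic amd64 .debs for the tag under test
    # from https://kernel.ubuntu.com/~kernel-ppa/mainline/ :
    sudo dpkg -i linux-*.deb
    sudo reboot
    uname -r    # confirm the mainline version actually booted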
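For 4, the inverted bisect would look roughly like this ("good" deliberately marks commits that still fail):

    git clone git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
    cd linux
    git bisect start
    git bisect good v4.13     # "good" = still hits the stress failure
    git bisect bad master     # "bad"  = no longer fails (fix is present)
    # For each commit git checks out: build, install, boot, rerun the
    # disk stress test, then mark the result with the inverted terms:
    #   git bisect good   -> this commit still fails
    #   git bisect bad    -> this commit passes

If the inversion gets confusing, git bisect also supports custom terms, e.g. "git bisect start --term-old=broken --term-new=fixed", which avoids overloading "good" and "bad" entirely.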