Comment 15 for bug 1849682

dann frazier (dannf) wrote:

In Comment #3, I noted that it was mysterious that we were seeing this at all on the reported system - but, after staring at the log, I think I now have an explanation.

This system is supposed to be configured with 8 identical NVMe drives in a raid0 mounted at /raid, plus 2 other NVMes whose partitions are supposed to form a raid1 for /. At least, that is what was *supposed* to be the case.

A filtered version of the log in comment #1 shows:
[ 16.757165] md/raid0:md0: cannot assemble multi-zone RAID0 with default_layout setting
[ 16.757165] md/raid0: please set raid.default_layout to 1 or 2
[ 16.757166] md: pers->run() failed ...
[ 19.051379] md1: detected capacity change from 0 to 30724962910208
[ 72.720232] md/raid0:md0: cannot assemble multi-zone RAID0 with default_layout setting
[ 72.728149] md/raid0: please set raid.default_layout to 1 or 2
[ 72.733979] md: pers->run() failed ...
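
(An aside on the message itself: the suggested knob appears to be mis-printed - the parameter belongs to the raid0 module, so as far as I know it is spelled raid0.default_layout, not raid.default_layout. A sketch of how one would set it, assuming the array's data was written with the member ordering that 3.14-and-later kernels use, i.e. value 2 - picking the wrong value risks data corruption, so verify against the kernel that created the array before booting with this:

  # on the kernel command line (e.g. via GRUB_CMDLINE_LINUX in /etc/default/grub):
  raid0.default_layout=2

  # or as a module option, e.g. in a file under /etc/modprobe.d/:
  options raid0 default_layout=2

To my recollection, 1 selects the pre-3.14 ordering and 2 the 3.14-and-later ordering.)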

Back to the log: while it doesn't say so explicitly, we can see that md1 is ginormous - matching the capacity we'd expect for the 8-drive raid0 that's supposed to be mounted at /raid. However, the md/raid0 driver is actually complaining about md*0*. My guess is that md0 is the array of 2 partitions that was supposed to be a raid1 mounted at /, but was misconfigured as a raid0. That would also explain it being a multi-zone array, as we see that only one NVMe seems to be partitioned:

[ 16.541847] nvme1n1: p1 p2
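
If that guess is right, the on-disk superblocks should confirm it (a quick sketch - the device names are my assumption based on the log above; since md0 never assembled, examining the members directly is more reliable than mdadm --detail on the array):

  mdadm --examine /dev/nvme0n1     # per-member superblock; 'Raid Level' should read raid0
  mdadm --examine /dev/nvme1n1p2   # ditto for the partition member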

Presumably nvme1n1p1 is used as the EFI System Partition, and nvme1n1p2 was combined with the full nvme0 block device to form a heterogeneous raid0, which would therefore be multi-zone.
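
For context on why heterogeneous members make it multi-zone: raid0 stripes across all members only up to the size of the smallest one (the first zone); the leftover space on any larger member becomes an additional zone striped across fewer devices. As a hypothetical example, a 3.0T partition plus a 3.2T whole disk yields a 6.0T zone striped across both, plus a trailing 0.2T zone on the whole disk alone - two zones. The 8 identical drives behind md1 produce a single zone, which would explain why md1 assembled without complaint.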