Comment 11 for bug 990913

Peter Koster (i41bktobiu5-launchpad-net) wrote :

I am having the same symptoms, though if my issue is the same one, it is not sufficiently described here.

What is happening is that the kernel registers the md personalities, and the init scripts then continue before all controllers and disks are up. This happens about 5 seconds into the initramfs boot process.
In my case the message says 'degraded' because one of the disks in the RAID is on an onboard SATA port, while the rest are on two LSI mptsas cards. I suspect the check would simply not see any RAID devices at that point if I moved that disk to a SAS card port as well.
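
Based on that timing, the workaround I am experimenting with is a hook that makes the initramfs wait for the expected member disks before the degraded check runs. This is only a sketch under my own assumptions: the EXPECTED count of 12 disks, the 60-second ceiling, and the file name wait-for-disks are all mine, not anything from the packages. The boot log below shows init-premount runs before the check in local-premount, so that is where I put it (as /etc/initramfs-tools/scripts/init-premount/wait-for-disks):

#!/bin/sh
# Standard initramfs-tools hook boilerplate.
PREREQ=""
prereqs() { echo "$PREREQ"; }
case "$1" in
    prereqs) prereqs; exit 0 ;;
esac

EXPECTED=12   # assumption: total number of sd? member disks in this box
TIMEOUT=60    # assumption: ceiling, in seconds, for the LSI cards to settle
# Poll until enough whole disks have appeared or we give up.
while [ "$(ls /dev/sd[a-z] 2>/dev/null | wc -l)" -lt "$EXPECTED" ] && [ "$TIMEOUT" -gt 0 ]; do
    sleep 1
    TIMEOUT=$((TIMEOUT - 1))
done

The script has to be executable and baked into the image with 'update-initramfs -u' before it does anything.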

An extract of the boot messages (copied by hand, so they may contain errors):
md: raid6 personality registered for level 6
md: raid5 personality registered for level 5
md: raid4 personality registered for level 4
md: raid10 personality registered for level 10
done.
Begin: Running /scripts/init-premount ... done.
Begin: Mounting root file system ... Begin: Running /scripts/local-top ... done.
Begin: Running /scripts/local-premount ... ** WARNING: There appears to be one or more degraded RAID devices **
<snip text asking for continue yes/no with instructions how to set bootdegraded=true>
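
For reference, making bootdegraded permanent goes through the kernel command line. A sketch assuming a stock GRUB2 setup; the rootdelay value of 30 is just my guess from the timings below, not something the prompt's instructions mention:

In /etc/default/grub:
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash bootdegraded=true rootdelay=30"
then regenerate the config:
sudo update-grub

bootdegraded=true lets the boot continue with a degraded array instead of prompting, and rootdelay, as I understand it, tells the initramfs to wait up to that many seconds for the root device. That papers over the slow controllers but does not fix the underlying race.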

Then, while I am sitting at the initramfs shell, the boot process continues in the background:
ioc0: LSISAS1068E B3: Capabilities={Initiator}
scsi6 : ioc1: LSI53C1020A A1, FwRev=01032700h, Ports=1, MaxQ=255, IRQ=16
scsi7 : ioc0: LSISAS1068E B3, FwRev=01210000h, Ports=1, MaxQ=483, IRQ=16
Etcetera.

This continues until about 30 seconds into the boot process, when the kernel brings up the md devices:
md/raid:md0 raid level 6 active with 8 out of 8 devices, algorithm 2
<snip>
md/raid:md1 raid level 5 active with 4 out of 4 devices, algorithm 2
<snip>

Running cat /proc/mdstat at this point shows the arrays are clean and healthy.
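
So once the controllers have settled, the situation looks recoverable by hand from the (initramfs) prompt. A sketch of what I mean (assembling by scan is my generic suggestion, not something the prompt tells you to do):

cat /proc/mdstat          # confirm all members have appeared
mdadm --assemble --scan   # start any arrays the early check gave up on
exit                      # leave the shell; the boot carries on from here, at least for me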