Comment 34 for bug 290885

Dustin Kirkland (kirkland) wrote: Re: [Bug 290885] Re: SRU: Backport of Boot Degraded RAID functionality from Intrepid to Hardy

Paul-

I think I understand the wrinkle you're seeing here...

mdadm will only fail to assemble a "newly" degraded array.

So the *first* time you boot with a missing disk, mdadm expects the
RAID to be fully operational, notices a disk is missing, and the new
code we have in the initramfs takes over: it checks the configuration
value in that etc file and interactively prompts you ("Do you want to
boot degraded?").
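
For reference, here's roughly what that configuration looks like. The
path and variable name below are from the Intrepid implementation, so
double-check them against the backported package:

  $ cat /etc/initramfs-tools/conf.d/mdadm
  # Set to "true" to boot a degraded array automatically,
  # or "false" to prompt (or fail, on unattended boots).
  BOOT_DEGRADED=false

The Intrepid code also honors a kernel command line override
(bootdegraded=true), if the backport carries that bit over.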

If you choose not to boot, mdadm doesn't flag the array as running
degraded, and the next time you reboot, you will see the same
question about the degraded RAID.

If you do choose to boot the RAID degraded, mdadm will mark the
array, and "degraded" becomes the expected mode of operation.
Subsequent boots will proceed without prompting, since you have
chosen to boot degraded.
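
You can verify which mode the array is in with mdadm itself; the
device name here is just an example, and your output may differ
slightly:

  $ sudo mdadm --detail /dev/md0 | grep -i state
       State : clean, degraded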

To continue testing, you can reboot your test machine with the second
disk present. It will boot into the degraded array, even with the
second disk attached (as mdadm doesn't know the state of this
additional disk). You can then add the disk back to the array with
"mdadm /dev/md0 --add /dev/sdb1" or some such. You'll want to wait
until it's fully sync'd again ("watch -n1 cat /proc/mdstat"). Reboot,
and you should boot with both disks in the array. Disconnect one
again; this creates a new degraded-RAID event, and on reboot the
initramfs will see that it's missing a disk it expects.
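
Putting that together, the whole test loop looks roughly like this
(assuming /dev/md0 and /dev/sdb1 as above; substitute your own
devices):

  # re-add the returned disk to the array
  $ sudo mdadm /dev/md0 --add /dev/sdb1
  # watch the resync; wait until the recovery line disappears
  $ watch -n1 cat /proc/mdstat
  # reboot, pull a disk, and reboot again to trigger a fresh
  # "newly degraded" event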

We pondered different verbiage during the development cycle, like
"newly degraded RAID", but decided that was too wordy. A RAID admin
should understand (or come to understand) this workflow.

:-Dustin