missing --incremental on array that is part of another array (nested)
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
mdadm (Ubuntu) | Confirmed | Undecided | Unassigned |
Bug Description
http://
> when booting, the system dumps
> into initramfs shell with the raid array in an inactive state.
There are two problems here.
Firstly, the fact that the array doesn't assemble completely should not cause
the boot to fail. A degraded raid1 is perfectly sufficient for booting.
What is happening is that the initrd is relying on udev to assemble the array
by passing each new device to "mdadm --incremental $DEVNAME".
This will assemble the array as soon as all devices are present, but not
before. If a device failed before shutdown, that failure is recorded in the
metadata and "mdadm --incremental" will not wait for it. But if the device
simply disappears during the reboot, mdadm will still expect it to turn up,
and the array stays inactive.
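To illustrate the behaviour described above: the initrd's udev rule effectively runs one "mdadm --incremental" per member device as it appears. The device names below are the ones from this report; the comments describe the expected outcome, not captured output.

```shell
# Each member device is fed to mdadm as udev discovers it.
mdadm --incremental /dev/sdb1   # array created but left inactive: members missing
mdadm --incremental /dev/sdc1   # still inactive, still waiting for more members
mdadm --incremental /dev/sde1   # last expected member seen -> array is started
# If one of these devices never appears, the array is never started.
```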
To deal with this issue, the initrd should run
mdadm --incremental --scan --run
which means "look for all arrays that are being incrementally assembled, and
start them".
[Better: only degrade the arrays that are actually required, not all of them!]
This should be called after running "udevadm settle" and before mounting the
root filesystem.
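A minimal sketch of where this would sit in an initramfs boot script. The script path and surrounding framework are assumptions (Debian/Ubuntu-style initramfs-tools layout); the two commands are the ones given above.

```shell
#!/bin/sh
# Hypothetical initramfs script, e.g. /scripts/local-top/mdadm.
# Runs after udev has seen all devices, before the root fs is mounted.

udevadm settle                     # wait for udev to finish processing events

# Force-start any arrays that are partially assembled, accepting a
# degraded (e.g. degraded RAID1) array rather than failing the boot.
mdadm --incremental --scan --run

# ...the initrd then proceeds to mount the root filesystem as usual.
```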
However, fixing this won't solve your problem [with a nested array]; it will
just change it. The udev rules file that calls "mdadm --incremental" does so
for /dev/sdb1, /dev/sdc1 and /dev/sde1, but apparently not for /dev/md0.
If at the initrd shell prompt you run
mdadm -I /dev/md0
it should finish assembling md1 for you. For some reason udev isn't doing
that.
Have a look in /lib/udev/rules.d or /etc/udev/rules.d for a file that runs
"mdadm --incremental" or "mdadm -I" and see how it works.
Maybe post it.
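For reference, such a rule typically looks roughly like the fragment below. This is a hedged sketch, not the exact rule shipped by any particular distro; the match keys and the RUN command are the parts worth checking in your own rules file. Note that a rule matching only partitions would explain why /dev/md0 (itself a RAID member of md1) is never passed to mdadm.

```shell
# Sketch of a typical incremental-assembly udev rule
# (compare against the real file in /lib/udev/rules.d or /etc/udev/rules.d).
SUBSYSTEM=="block", ACTION=="add", ENV{ID_FS_TYPE}=="linux_raid_member", \
    RUN+="/sbin/mdadm --incremental $env{DEVNAME}"
```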
BTW what distro are you using?
NeilBrown
summary:
- missing --incremental on array that is part of another arrays (nested)
+ missing --incremental on array that is part of another array (nested)
Status changed to 'Confirmed' because the bug affects multiple users.