Comment 39 for bug 942106

Josh Bendavid (josh-bendavid) wrote :

Hi,
This fix does not work for me. I have a system with a single md RAID 5 array of 5 disks, and root on a separate non-RAID disk. With the default bootdegraded=false my system frequently drops to the initramfs prompt with the usual timed-out message. With BOOT_DEGRADED=true set in the mdadm initramfs config, the system frequently drops to the initramfs prompt complaining that 2 of 5, 3 of 5, or 4 of 5 disks failed to be added to the array.
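
For reference, the only related configuration change is BOOT_DEGRADED=true. On Ubuntu this normally lives in /etc/initramfs-tools/conf.d/mdadm (the exact path is from memory, so treat it as an assumption), followed by regenerating the initramfs:

        # /etc/initramfs-tools/conf.d/mdadm (assumed standard location)
        BOOT_DEGRADED=true

        # then rebuild the initramfs so the setting takes effect
        update-initramfs -u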

I was able to work around the problem with a horrible hack: adding a hard sleep after the udevadm settle call in wait_for_udev (in /usr/share/initramfs-tools/scripts/functions):
# Wait for queued kernel/udev events
wait_for_udev()
{
        command -v udevadm >/dev/null 2>&1 || return 0
        udevadm settle ${1:+--timeout=$1}
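        # hard sleep: give the slow-to-appear SATA disks time to show up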
        sleep 15
}
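
A slightly less brutal variant (just a sketch, not something I have tested; the 6-disk count and the 30-second cap are assumptions based on my setup) would be to poll until the disks have actually appeared instead of always sleeping 15 seconds:

# Wait for queued kernel/udev events
wait_for_udev()
{
        command -v udevadm >/dev/null 2>&1 || return 0
        udevadm settle ${1:+--timeout=$1}
        # Poll for up to 30 seconds until all 6 SATA disks are visible,
        # rather than sleeping unconditionally (the disk count is
        # specific to this box).
        i=0
        while [ "$(ls /dev/sd? 2>/dev/null | wc -l)" -lt 6 ] && [ "$i" -lt 30 ]; do
                sleep 1
                i=$((i + 1))
        done
}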

With this change the system reliably boots (albeit slightly slower), with the RAID properly detected with all 5 drives.

This suggests to me that udevadm settle is not doing what it's supposed to be doing (waiting for the relevant controller/drives to fully come up).
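
For anyone who wants to check the same thing, the state can be inspected directly from the (initramfs) prompt when the boot fails, e.g.:

        # how many SATA disks has the kernel actually seen so far?
        ls /dev/sd?
        # and what state are the md arrays in?
        cat /proc/mdstat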

I am running Ubuntu 13.10 on an Asus M2N-E motherboard with 6 disks attached to the onboard SATA ports (one holding the root partition and the other 5 making up the RAID 5 array).

This board uses an older Nvidia chipset for AMD CPUs and does not support AHCI, so the SATA drives run through the sata_nv module.