Comment 66 for bug 75681

Ian Jackson (ijackson) wrote : Re: [Bug 75681] Re: boot-time race condition initializing md

Aurelien Naldi writes ("Re: [Bug 75681] Re: boot-time race condition initializing md"):
> With older versions (of mdadm, I think) the RAID was assembled, but
> degraded, or kind-of assembled but not startable (when only 1 or 2
> disks were present at first). A "--no-degraded" option was added to
> the mdadm script to avoid this. Now the RAID does not try to start
> when drive(s) are missing (i.e. it is never assembled in degraded
> mode), but weird things are still happening.

OK, so the main symptom in the log that I'm looking at was that you
got an initramfs prompt?
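
(For context, a minimal sketch of the sort of invocation the
initramfs local-top/mdadm script makes; the exact command line here
is an assumption, but --assemble, --scan and --no-degraded are all
real mdadm options:

    # assemble every array mdadm can find, but refuse to start any
    # array that is still missing member devices
    mdadm --assemble --scan --no-degraded

With --no-degraded, an array whose members have not all shown up yet
is left unassembled instead of being started with disks missing.)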

> I do not recall which devices were in the "assembled" array when I
> launched this one, but the RAID was _not_ degraded. But I guess you
> have it right: I sometimes had enough time to see only a few of my
> drives in /proc/partitions, and the array was assembled with these.
> Once all the devices are there, running /scripts/local-top/mdadm
> again fixes it all.

Right. Did you have to rerun vgscan?
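
(Background on the vgscan question: when the root fs is LVM on top
of the md array, assembling the array late means the volume group
may never have been activated either. A sketch of the full manual
recovery, assuming the LVM2 tools are in the initramfs:

    /scripts/local-top/mdadm    # re-run the md assembly script
    vgscan                      # rescan for volume groups on the new md device
    vgchange -ay                # activate the logical volumes

vgscan and "vgchange -ay" are standard LVM2 commands; whether they
are needed depends on whether lvm already ran before the array
existed.)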

> After running udevd and udevtrigger, I checked that things were
> properly broken (it worked fine on my first two attempts) and then
> all I had to do was run "/scripts/local-top/mdadm" once more and
> the array was correctly assembled.

Right.
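
(Roughly, the sequence being described from the initramfs shell;
udevd and udevtrigger are the udev tools of that era, and the
ordering is the point:

    udevd --daemon              # start the udev daemon by hand
    udevtrigger                 # replay kernel uevents for existing devices
    cat /proc/partitions        # some member disks may still be absent here
    /scripts/local-top/mdadm    # once they appear, this assembles the array

This is a sketch, assuming the shell was entered before udev had
been started by the init script.)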

> I did not start LVM by hand; I copied the udevd.out to one of my
> normal partitions, then exited the shell and it finished booting
> fine.

You mounted your root fs by hand, though?

> One more comment: I have tested some of the other suggested
> workarounds:
> - removing the mdrun script did not help (the first thing this
> script does is check for the presence of the mdadm script and then
> exit)

I think you must have a newer version than the one I was looking at.
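
(The guard being described presumably amounts to something like this
at the top of /scripts/local-top/mdrun; the exact test is a guess:

    # defer to the mdadm script if it is installed
    [ -x /scripts/local-top/mdadm ] && exit 0

which would explain why removing mdrun changes nothing either way.)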

> - adding a timeout in "/usr/share/initramfs-tools/init" works fine for
> me: 5 seconds are not enough but 8 are OK ;)

Hmm.
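
(That workaround amounts to something like the following in
/usr/share/initramfs-tools/init, placed before the local-top scripts
run; the 8 is simply what happened to be long enough on this
machine:

    sleep 8    # crude: give the kernel/udev time to discover all the disks

It papers over the race rather than closing it, which is presumably
why the required value is timing-sensitive.)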

Ian.