[feisty] non-working initramfs: failed to activate RAID

Bug #75052 reported by Aurelien Naldi on 2006-12-08
This bug affects 1 person
Affects: initramfs-tools (Ubuntu)
Importance: Undecided
Assigned to: Unassigned

Bug Description

Binary package hint: initramfs-tools

I have just upgraded my system to feisty and it cannot boot with the new kernel. My old kernels are still fine.
After some checking, it looks like the mdadm scripts in the initramfs are not functional.

By adding the "break=mount" kernel option I got a shell in the initramfs, where I get the error: "mdadm: ' not identified in config file".
The config file looks fine, even the one in the initramfs; only /dev/md0 is defined.

Running the RAID by hand works fine.

Aurelien Naldi (aurelien.naldi) wrote :

The above error is fixed: I had edited my /etc/mdadm/mdadm.conf and left a space at the end of the HOMEHOST line.
Replacing my config file with the one generated by mkconf solves that problem, but the system still does not boot.

The new error message is: "mdadm: no devices listed in config file were found"

The weird thing is that simply running /scripts/local-top/mdadm by hand works!
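
A quick way to spot the kind of trailing-whitespace problem described above is to grep for lines ending in whitespace. This is a sketch using a temporary file; the HOMEHOST value and ARRAY line are illustrative, not copied from the reporter's config:

```shell
# Sketch: detect trailing whitespace in an mdadm.conf-style file.
# A stray space after the HOMEHOST value was enough to break parsing here.
conf=$(mktemp)
printf 'HOMEHOST myhost \nARRAY /dev/md0 level=raid1 num-devices=2\n' > "$conf"

# Print any line that ends in whitespace, with its line number:
grep -nE '[[:space:]]$' "$conf"

rm -f "$conf"
```

Run the same grep against the real /etc/mdadm/mdadm.conf (and the copy inside the initramfs) to check both places mdadm reads from.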

Jonathan Hudson (jh+lpd) wrote :

Fails for me too (does that make it confirmed?). Adding break=mount to the boot line and, in the initramfs shell, running

mdadm --assemble --scan --auto=yes

spews out messages about /dev/md{0,1,2} not being found or some such (sorry, scrolled away), and then the system boots.

Kees Cook (kees) wrote :

Sounds like my problem as well; however, I don't even need to run the mdadm scripts. I just exit out of the shell and it correctly runs ./scripts/local-top/mdadm

Seems like it's just not being run under "normal" conditions?

Jonathan Hudson (jh+lpd) wrote :

Yeah! fixed today by one or more of linux-image-2.6.20-2-generic / mdadm (2.5.6-7ubuntu1)

Jonathan Hudson (jh+lpd) wrote :

Cancel the above, it's still broken ... wishful thinking.

Yagisan (yagisan) wrote :

I appear to have the same bug. It basically makes edgy -> feisty upgrades on RAID systems infeasible.
I have to use an edgy kernel on an otherwise feisty system if I don't want to stuff around with the boot process.

Changed in initramfs-tools:
status: Unconfirmed → Confirmed
harry (harald-dumdey) wrote :

I can confirm that bug here, too. System boots without an error after adding "break=mount" at boot-time and running the "./scripts/local-top/mdadm" script.

Jonathan Hudson (jh+lpd) wrote :

It boots -- really it boots. Two months on and it boots.
(after the 2007-02-09 updates).

Thanks for all the dev effort.

-jonathan

Simone Lazzaris (s-lazzaris) wrote :

Same here; fresh install from the 7.04 beta2 server CD; md RAID and SCSI disks.
The default install (even after upgrading the system with dist-upgrade) can't boot.
I can work around this with the "break=mount" + "./scripts/local-top/mdadm" trick.

Simone Lazzaris (s-lazzaris) wrote :

I've manually tweaked the init scripts, adding a

sleep 10
scripts/local-top/mdadm

before the mount of the partition, and the system now boots like a charm. The sleep is needed: without it, it doesn't work. I suspect that the boot process is too fast and doesn't give udev the time to create the devices.
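
The tweak above can be expressed as a standalone local-top script rather than an edit to the generated init. This is a sketch of the workaround from this thread, not an official fix; the script name 00wait-mdadm is hypothetical, and the 10-second delay is the value Simone reports working:

```shell
#!/bin/sh
# Hypothetical /etc/initramfs-tools/scripts/local-top/00wait-mdadm
# Workaround from this thread: give udev time to create the component
# devices, then assemble the arrays before local-top tries to mount root.
sleep 10
/scripts/local-top/mdadm
```

After creating the script, mark it executable and rebuild the initramfs (update-initramfs -u) so it is included.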

David Monro (davidm-ub) wrote :

Booting without the 'quiet' option, I can see that the attempt to run the md bits happens well before my SCSI controller has detected its disks. Creating /etc/initramfs-tools/scripts/local-top/00sleep containing just:

#!/bin/bash
sleep 15

gets me to the point where it will find the drives before attempting to assemble the md device.

Is there some way of checking if all drivers are fully loaded before getting into the local-top scripts?

(What still doesn't work even with this fix is the initrd finding the lvm inside the md device :( )
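
On the question above about waiting for drivers instead of sleeping: one option is to wait for udev's event queue to drain rather than use a fixed delay. This is a sketch, not what the package shipped; on current systems the command is `udevadm settle`, while feisty-era udev shipped a standalone `udevsettle` binary:

```shell
#!/bin/sh
# Hypothetical replacement for the fixed-delay 00sleep script:
# block until udev has finished processing queued device events,
# with a timeout so a missing device cannot hang the boot forever.
if command -v udevadm >/dev/null 2>&1; then
    udevadm settle --timeout=30
elif command -v udevsettle >/dev/null 2>&1; then
    udevsettle --timeout=30
else
    sleep 15   # last resort: the fixed delay from the comment above
fi
```

This still only waits for events udev has already queued; if the SCSI driver has not yet probed the bus, even settle can return too early, which is why a timeout plus mdadm's own retries is the safer combination.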
