Comment 31 for bug 120375

Ken (ksemple) wrote:

Hi,

Thanks everybody for your help. I have now fixed this on my machine using code similar to that suggested by Peter Haight.

Edit "/usr/share/initramfs-tools/scripts/local" and find the following comment "# We've given up, but we'll let the user fix matters if they can".

Just before this comment, add the following code:

 # The following code was added to allow degraded RAID arrays to start
 if [ ! -e "${ROOT}" ] || ! /lib/udev/vol_id "${ROOT}" >/dev/null 2>&1; then
  # Try mdadm and allow degraded arrays to start in case a drive has failed
  log_begin_msg "Attempting to start RAID arrays and allow degraded arrays"
  /sbin/mdadm --assemble --scan
  log_end_msg
 fi

Peter's suggestion of using just [ ! -e "${ROOT}" ] as the condition test didn't work for me, so I reused the condition from the "if" block just above this point in the script, and that worked fine.
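If you want to see what that combined test checks, you can run the same condition by hand from a normal shell. This is just a sketch: /dev/md0 is a hypothetical device name, so substitute your actual root device, and run it as root since vol_id reads the raw device.

 # Manual check of the same condition (hypothetical device /dev/md0)
 ROOT=/dev/md0
 if [ ! -e "${ROOT}" ] || ! /lib/udev/vol_id "${ROOT}" >/dev/null 2>&1; then
  echo "root missing or unidentifiable: the mdadm step would run"
 else
  echo "root present and identifiable: the mdadm step would be skipped"
 fi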

To rebuild the boot image, use "sudo update-initramfs -u" as suggested by Plnt. This script calls the "mkinitramfs" script mentioned by Peter and is easier to use because you don't have to supply the image name and other options.
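For example (the verification step is just a sketch and assumes a gzip-compressed image, which is the default here):

 # Rebuild the initramfs for the currently running kernel
 sudo update-initramfs -u

 # Optional: confirm the edited script made it into the new image
 zcat /boot/initrd.img-$(uname -r) | cpio -it | grep scripts/local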

I have tested this a couple of times, with and without my other drives plugged in, and had no problems. Just make sure you have a cron job set up to run "mdadm --monitor --oneshot" so that the system administrator gets an email when an array is running degraded.
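As a sketch, an /etc/cron.d entry along these lines would do it. The file name and the hourly schedule are arbitrary choices of mine, I've added --scan so mdadm picks the arrays up from mdadm.conf, and mail delivery assumes a working MTA plus a MAILADDR line in /etc/mdadm/mdadm.conf.

 # /etc/cron.d/mdadm-degraded-check (hypothetical file name)
 # Once an hour, scan all arrays listed in mdadm.conf and mail the
 # configured address if any array is degraded or has failed.
 0 * * * * root /sbin/mdadm --monitor --scan --oneshot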

This has worked on my machine, and I think it is a sound solution. Please let me know whether it solves your problems too.

This bug has the title "Bug #120375 is not in Ubuntu"; does this mean it is not considered to be an Ubuntu bug? I believe it is one, because it involves the way the startup scripts are configured (it is definitely not a bug in mdadm). How do we get this escalated to an Ubuntu bug so that it will be fixed in future releases?

Good luck,
Ken