OK, found my solution... but probably not one for the folks who are actually using a RAID configuration.

Essentially, whenever a drive has been in a RAID configuration at some point in its life, the controller card writes metadata to the last sectors of the drive. These sectors are beyond the user-data space, but they can be read by the controller and/or the controller driver... and apparently by the new kernels!

I had a former RAID 1 (mirror) configuration on two identical drives. Even though there were no md0, md1 (RAID) partitions on either drive, both drives were presented to the system via a /dev/mapper device. /dev/mapper holds virtual devices that map the physical drives to the fake-RAID driver for the card(s). The prefix of the device name, in my case "pdc", just tells the system it is a Promise IDE card and to use the corresponding Promise fake-RAID driver. The rest of the device's name is just some ungodly unique identifier for the set.

In order to mount the drives, Linux likes to know the unique identifier of the filesystem and doesn't really care about the physical device name (/dev/sde, /dev/sdf, etc.). The unique identifier Linux uses is a big hexadecimal code referred to as the UUID (Universally Unique IDentifier). Well, it turned out that I had two old RAID drives with the same UUID. Since they were mapped to the system via the pdc fake-RAID driver, they were supposed to present themselves under a single UUID, because they were a mirror (i.e. Linux doesn't care which physical drive gets the data, the mapper handles that; all it cares about is that the /dev/sde and /dev/sdf partitions are mounted via the mapper).

In the old configuration, even though both drives had the same UUID, only one of them had data and was mounted; the other was empty and unmounted. The old system didn't seem to care that there was another drive with the same UUID because it never attempted to mount it. The new kernel, however, reads the RAID metadata and incorrectly identifies the disk as part of a RAID set. It then tries to mount it, but of course there are no RAID md partitions, so it can't... hence enter the wonderful world of BusyBox. It looks like the new kernel must have changed the way it identifies whether a drive is in a RAID set. Maybe before it looked for md partitions of the Linux RAID type, and now it looks for the existence of metadata or a /dev/mapper/pdcxxxxx device? Either way, during boot it would time out because it couldn't initialize the RAID array... well, because there wasn't one.

Here is how I fixed my system:

List the RAID devices in your system:

    sudo dmraid -r

Verify you don't really have an array (no Linux RAID md partitions):

    sudo fdisk -l

(that's an "el" above)

Remove the metadata from the old RAID drives (be really sure you don't have a RAID, otherwise say bye-bye to your data):

    sudo dmraid -r -E

Create a new partition if required on the empty disk (the non boot/root disk, sdf in my case):

    sudo fdisk /dev/sdf

(partition as required; in my case one ext3 partition as sdf1)

Format the new partition (in my case ext3):

    sudo mke2fs -j /dev/sdf1

Check the UUIDs to make sure they are unique:

    sudo blkid

If they aren't, change them (you may have to reboot here: since the drive was associated with a mapper, the system may still think it is in use):

    sudo tune2fs -U random /dev/sdf1

Check them again:

    sudo blkid

And of course reboot to see if you've fixed the problem:

    sudo reboot

Hope this helps someone.
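For anyone who wants the whole procedure in one place, here is a rough shell sketch of the steps above. The device names (/dev/sdf, /dev/sdf1) are from my machine and are only assumptions you must adjust for yours; run the first two checks and read their output before erasing anything, because removing the metadata from a disk that really is part of an array will destroy it.

    #!/bin/sh
    # Rough sketch of the recovery steps above -- adjust the device names
    # before running anything. Erasing RAID metadata is irreversible.
    set -e

    DISK=/dev/sdf     # the empty former RAID member (NOT the boot/root disk)
    PART=${DISK}1     # the partition that will hold the new ext3 filesystem

    # 1. List any fake-RAID metadata dmraid can see on the disks
    sudo dmraid -r

    # 2. Confirm there are no Linux RAID (md) partitions anywhere
    sudo fdisk -l

    # 3. Erase the stale metadata (only do this if steps 1 and 2 prove
    #    there is no real array -- otherwise say bye-bye to your data)
    sudo dmraid -r -E

    # 4. Partition the disk interactively (one ext3 partition), then format it
    sudo fdisk "$DISK"
    sudo mke2fs -j "$PART"

    # 5. Make sure every filesystem UUID is unique; regenerate if needed.
    #    If tune2fs says the device is in use, reboot and run this part again.
    sudo blkid
    sudo tune2fs -U random "$PART"
    sudo blkid

After that, one final reboot should get you past the BusyBox prompt.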