[jaunty] Root on nvidia raid 1 mirror does not boot

Bug #358255 reported by Sachin Garg
This bug affects 3 people
Affects: dmraid (Ubuntu)
Status: Fix Released
Importance: High
Assigned to: Unassigned

Bug Description

Description: Ubuntu jaunty (development branch)
Release: 9.04

I have a Phenom X3 on a GeForce7050M-M running Jaunty Beta.

2x 160 GB Seagate SATA HDDs configured as a RAID 1 mirror using the NVIDIA RAID tools.

Unable to boot the system using the initrd created by default for the 2.6.28-11 kernel. The system waits for the root device, fails with "could not find /dev/mapper/nvidia_gdcfaafd2", and drops me to an initramfs (busybox) shell.

The system boots up fine with the initrd created for the 2.6.27-13 kernel.

/boot/grub/menu.lst:

title Ubuntu jaunty (development branch), kernel 2.6.28-11-generic
root (hd0,1)
kernel /boot/vmlinuz-2.6.28-11-generic root=/dev/mapper/nvidia_gdcfaafd2 iommu=noaperture ro splash
#initrd /boot/initrd.img-2.6.28-11-generic
initrd /boot/initrd.img-2.6.27-13-generic

The initramfs created for kernel 2.6.28-11 is missing the following modules (compared with the one created for 2.6.27-13):

/lib/modules/<kernel-version>/kernel/drivers/md/dm-log.ko
/lib/modules/<kernel-version>/kernel/drivers/md/dm-mirror.ko
/lib/modules/<kernel-version>/kernel/drivers/md/dm-mod.ko
/lib/modules/<kernel-version>/kernel/drivers/md/dm-multipath.ko
/lib/modules/<kernel-version>/kernel/drivers/md/dm-round-robin.ko
/lib/modules/<kernel-version>/kernel/drivers/md/dm-snapshot.ko

That is, *all* dm modules except dm-crypt.ko and dm-zero.ko are missing (a quick way to check this is sketched below).
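
For reference, one way to verify which dm modules each image actually contains is to list the initramfs contents directly. This is a sketch, assuming the images are gzip-compressed cpio archives (as they are on Jaunty):

# List every dm-* module packed into each initramfs image
zcat /boot/initrd.img-2.6.27-13-generic | cpio -t 2>/dev/null | grep 'drivers/md/dm-'
zcat /boot/initrd.img-2.6.28-11-generic | cpio -t 2>/dev/null | grep 'drivers/md/dm-'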

$ grep CONFIG_DM_MIRROR /boot/config-2.6.27-13-generic
CONFIG_DM_MIRROR=m
$ grep CONFIG_DM_MIRROR /boot/config-2.6.28-11-generic
CONFIG_DM_MIRROR=y

So mirroring is built into the 2.6.28-11 kernel. Why, then, is my mirrored RAID not working?
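
A built-in target should still register with device-mapper even without a module. One way to confirm this (assuming the dmsetup binary is available on the booted system or in the initramfs shell):

dmsetup targets

A working kernel should list "mirror" among the registered targets.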

Also, see https://bugs.launchpad.net/ubuntu/+source/initramfs-tools/+bug/353256

Revision history for this message
Sachin Garg (sgarg-bugreporter) wrote :
affects: ubuntu → linux (Ubuntu)
Revision history for this message
Sachin Garg (sgarg-bugreporter) wrote :
Changed in linux (Ubuntu):
importance: Undecided → High
status: New → Triaged
Revision history for this message
Sachin Garg (sgarg-bugreporter) wrote :

Output of "cat /proc/modules"

Revision history for this message
Sachin Garg (sgarg-bugreporter) wrote :

Still happening with: Ubuntu 2.6.28-11.41-generic

Revision history for this message
Luke Yelavich (themuso) wrote :

Until proven otherwise, this is very likely a dmraid bug.

If you can boot into the system, could you please run the following command and report back with its output?

sudo dmraid -s

In addition, when attempting to boot 2.6.28-11-generic, do you get dropped to a prompt? If so, please run the command "dmraid -ddd -ay" and report back with the output.

Thanks.

affects: linux (Ubuntu) → dmraid (Ubuntu)
Changed in dmraid (Ubuntu):
status: Triaged → Incomplete
Revision history for this message
Sachin Garg (sgarg-bugreporter) wrote :

sudo dmraid -s:

/dev/sdb: "sil" and "nvidia" formats discovered (using nvidia)!
/dev/sda: "sil" and "nvidia" formats discovered (using nvidia)!
*** Active Set
name : nvidia_gdcfaafd
size : 312581760
stride : 128
type : mirror
status : ok
subsets: 0
devs : 2
spares : 0

Revision history for this message
Sachin Garg (sgarg-bugreporter) wrote :

When attempting to boot 2.6.28-11-generic, I get dropped to an initramfs (busybox) prompt.

Output of command "dmraid -ddd -ay":

/dev/sdb: "sil" and "nvidia" formats discovered (using nvidia)
/dev/sda: "sil" and "nvidia" formats discovered (using nvidia)
DEBUG: _find_set: Searching nvidia_gdcfaafd
DEBUG: _find_set: not found nvidia_gdcfaafd
DEBUG: _find_set: Searching nvidia_gdcfaafd
DEBUG: _find_set: not found nvidia_gdcfaafd
DEBUG: _find_set: Searching nvidia_gdcfaafd
DEBUG: _find_set: found nvidia_gdcfaafd
DEBUG: _find_set: Searching nvidia_gdcfaafd
DEBUG: _find_set: found nvidia_gdcfaafd
DEBUG: checking nvidia device "/dev/sda"
DEBUG: checking nvidia device "/dev/sdb"
set status of set "nvidia" to 16
[53.564257] device-mapper: table: 252:7: mirror: Device lookup failure
[53.564339] device-mapper: ioctl: error adding target to table
RAID set "nvidia_gdcfaafd" was not activated
DEBUG: freeing devices of RAID at "nvidia_gdcfaafd"
DEBUG: freeing device "nvidia", path /dev/sda
DEBUG: freeing device "nvidia", path /dev/sdb

Revision history for this message
Sammy Brence (sammyboy405) wrote :

Output of dmraid -s

*** Active Set
name : nvidia_befgecib
size : 960486400
stride : 128
type : stripe
status : ok
subsets: 0
devs : 4
spares : 0

Output of dmraid -ddd -ay

dmraid -ddd -ay
DEBUG: _find_set: searching nvidia_befgecib
DEBUG: _find_set: not found nvidia_befgecib
DEBUG: _find_set: searching nvidia_befgecib
DEBUG: _find_set: not found nvidia_befgecib
DEBUG: _find_set: searching nvidia_befgecib
DEBUG: _find_set: found nvidia_befgecib
DEBUG: _find_set: searching nvidia_befgecib
DEBUG: _find_set: found nvidia_befgecib
DEBUG: _find_set: searching nvidia_befgecib
DEBUG: _find_set: found nvidia_befgecib
DEBUG: _find_set: searching nvidia_befgecib
DEBUG: _find_set: found nvidia_befgecib
DEBUG: _find_set: searching nvidia_befgecib
DEBUG: _find_set: found nvidia_befgecib
DEBUG: _find_set: searching nvidia_befgecib
DEBUG: _find_set: found nvidia_befgecib
DEBUG: checking nvidia device "/dev/sda"
DEBUG: checking nvidia device "/dev/sdb"
DEBUG: checking nvidia device "/dev/sdc"
DEBUG: checking nvidia device "/dev/sdd"
DEBUG: set status of set "nvidia_befgecib" to 16
RAID set "nvidia_befgecib" already active
DEBUG: _find_set: searching nvidia_befgecib1
DEBUG: _find_set: not found nvidia_befgecib1
DEBUG: _find_set: searching nvidia_befgecib5
DEBUG: _find_set: not found nvidia_befgecib5
RAID set "nvidia_befgecib1" already active
RAID set "nvidia_befgecib5" already active
DEBUG: freeing devices of RAID set "nvidia_befgecib"
DEBUG: freeing device "nvidia_befgecib", path "/dev/sda"
DEBUG: freeing device "nvidia_befgecib", path "/dev/sdb"
DEBUG: freeing device "nvidia_befgecib", path "/dev/sdc"
DEBUG: freeing device "nvidia_befgecib", path "/dev/sdd"
DEBUG: freeing devices of RAID set "nvidia_befgecib1"
DEBUG: freeing device "nvidia_befgecib1", path "/dev/mapper/nvidia_befgecib"
DEBUG: freeing devices of RAID set "nvidia_befgecib5"
DEBUG: freeing device "nvidia_befgecib5", path "/dev/mapper/nvidia_befgecib"

It boots fine once I get to BusyBox and type "dmraid -ay" and "exit", but this is very annoying.

Revision history for this message
Luke Yelavich (themuso) wrote : Re: [Bug 358255] Re: [jaunty] Root on nvidia raid 1 mirror does not boot

After talking with the bug filer on IRC, it turns out that the dmraid arrays are being used with LVM on top. This could be an issue in terms of lvm/dmraid/udev/device-mapper interaction. I need to attempt to reproduce this with a dmraid setup.

 affects ubuntu/dmraid
 status new

Changed in dmraid (Ubuntu):
status: Incomplete → New
Revision history for this message
Luke Yelavich (themuso) wrote :

sammyboy405, please file another bug. Your issue is different to what is being discussed in this bug.

Thanks.

Revision history for this message
SilveRaid (silveraid) wrote :

I have the same issue, but with Intel software RAID.
I dist-upgraded from 8.10 to 9.04 RC yesterday.

Revision history for this message
Ian Colley (ian-colley) wrote :

I also have experienced exactly the same issue. The message 'Gave up waiting for root device, no block devices found (x4)' is displayed and I am then dropped to a BusyBox prompt.

Typing:

dmraid -ay
exit

Then allows the system to boot correctly.

I am running the 2.6.28-11-server kernel.

Revision history for this message
Nigel Pegram (ndpegram) wrote :

I have to add my 2c. I just got a new system and installed Ubuntu with fakeraid. As above, I have to type "dmraid -ay" at the prompt and then exit to boot.

System otherwise runs fine.

Intrepid on 2.6.27-14 generic kernel.

I also had to manually load the modules during the installation process (alternate installer CD). The SATA RAID was detected but could not be enabled by the installer. Going into a shell, loading the modules, and activating the RAID myself let me continue (roughly the steps sketched below).
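
For reference, the manual steps from the installer shell would look roughly like this. This is a sketch; the module names are taken from the list earlier in this report, and modprobe accepts hyphens or underscores interchangeably:

# Load the device-mapper core and mirror target, then activate the fakeraid sets
modprobe dm-mod
modprobe dm-mirror
dmraid -ay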

Revision history for this message
ronzo (ronaldw) wrote :

Same issue here (on Jaunty).

Linux almighty 2.6.28-11-generic #42-Ubuntu SMP Fri Apr 17 01:58:03 UTC 2009 x86_64 GNU/Linux

dmraid version: 1.0.0.rc15 (2008-09-17) shared
dmraid library version: 1.0.0.rc15 (2008.09.17)

Revision history for this message
Stas Sușcov (sushkov) wrote :

The bug persists in released Ubuntu Jaunty.
Is there any chance to get it fixed?

Thanks.

Revision history for this message
Francisco Mesa (franciscomesa) wrote :

I have the same problem on an ML110, like in Sachin Garg's post.
uname -a: ... 2.6.28-11-server #42-Ubuntu ....

dmraid --version:
   dmraid version: 1.0.0.rc15 (2008-09-17) shared
   dmraid library version: 1.0.0.rc15 (2008.09.17)
   device-mapper version: 4.14.0

Revision history for this message
Deric Crago (deric.crago) wrote :

I'm also experiencing the same type of problem. I have found a work-around, although I'm unsure of any implications.

$ uname -r
2.6.28-11-server

$ lspci | grep RAID
00:1f.2 RAID bus controller: Intel Corporation 82801FR/FRW (ICH6R/ICH6RW) SATA Controller (rev 03)

~$ dmraid --version
dmraid version: 1.0.0.rc15 (2008-09-17) shared
dmraid library version: 1.0.0.rc15 (2008.09.17)
device-mapper version: unknown

$ sudo dmraid -s
*** Group superset .ddf1_disks
--> Active Subset
name : ddf1_OS
size : 1464973440
stride : 128
type : mirror
status : ok
subsets: 0
devs : 2
spares : 0

$ sudo dmraid -ddd -ay
DEBUG: _find_set: searching .ddf1_disks
DEBUG: _find_set: not found .ddf1_disks
DEBUG: _find_set: searching ddf1_OS
DEBUG: _find_set: searching ddf1_OS
DEBUG: _find_set: not found ddf1_OS
DEBUG: _find_set: not found ddf1_OS
DEBUG: _find_set: searching .ddf1_disks
DEBUG: _find_set: found .ddf1_disks
DEBUG: _find_set: searching ddf1_OS
DEBUG: _find_set: searching ddf1_OS
DEBUG: _find_set: found ddf1_OS
DEBUG: _find_set: found ddf1_OS
DEBUG: checking ddf1 device "/dev/sda"
DEBUG: checking ddf1 device "/dev/sdb"
DEBUG: set status of set "ddf1_OS" to 16
DEBUG: set status of set ".ddf1_disks" to 16
RAID set "ddf1_OS" already active
DEBUG: _find_set: searching ddf1_OS1
DEBUG: _find_set: not found ddf1_OS1
DEBUG: _find_set: searching ddf1_OS5
DEBUG: _find_set: not found ddf1_OS5
RAID set "ddf1_OS1" already active
RAID set "ddf1_OS5" already active
DEBUG: freeing devices of RAID set "ddf1_OS"
DEBUG: freeing device "ddf1_OS", path "/dev/sda"
DEBUG: freeing device "ddf1_OS", path "/dev/sdb"
DEBUG: freeing devices of RAID set ".ddf1_disks"
DEBUG: freeing device ".ddf1_disks", path "/dev/sdb"
DEBUG: freeing device ".ddf1_disks", path "/dev/sda"
DEBUG: freeing devices of RAID set "ddf1_OS1"
DEBUG: freeing device "ddf1_OS1", path "/dev/mapper/ddf1_OS"
DEBUG: freeing devices of RAID set "ddf1_OS5"
DEBUG: freeing device "ddf1_OS5", path "/dev/mapper/ddf1_OS"

Here's the work-around I'm using:

Modified /usr/share/initramfs-tools/scripts/local-top/dmraid to read:

#!/bin/sh

# local-top script for dmraid.

PREREQS=""
prereqs()
{
        echo $PREREQS
}

case $1 in
# get pre-requisites
prereqs)
        prereqs
        exit 0
        ;;
esac

# Activate any dmraid arrays that were not identified by udev and vol_id.
#for dev in $(dmraid -r -c); do
# dmraid-activate $dev
#done

# Adding this seems to work.
sleep 2
dmraid -ay

I then ran `update-initramfs -u`, and after a reboot the system booted right up and no longer dropped to BusyBox.
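
To confirm that the rebuilt image actually picked up the edited script, one option (assuming a gzip-compressed initramfs, which Jaunty uses) is to extract the embedded copy and inspect it:

# Print the dmraid local-top script as packed into the current initramfs
zcat /boot/initrd.img-$(uname -r) | cpio -i --to-stdout 'scripts/local-top/dmraid' 2>/dev/null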

Revision history for this message
quequotion (quequotion) wrote :

I am having this problem, but I can't get anywhere because BusyBox doesn't recognize any input. In fact, the keyboard seems to be disabled when BusyBox appears (Num, Caps, and Scroll Lock do nothing). Any ideas?

I can get into the system from a live CD via chroot, and my Jaunty installation looks fully functional apart from the fact that dmraid will not activate at boot.

Another strange symptom: dmraid is looking for libdl.so.2 and can't find it:

"dmraid: error while loading shared libraries: libdl.so.2"

libdl.so.2, part of libc6, is most certainly there.
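
One thing worth checking, assuming the error occurs inside the initramfs rather than on the installed system, is whether libdl.so.2 was actually copied into the image:

# See whether the library made it into the initramfs
zcat /boot/initrd.img-2.6.28-11-generic | cpio -t 2>/dev/null | grep libdl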

Revision history for this message
Tormod Volden (tormodvolden) wrote :

quequotion, please don't post the same information in different bug reports. You either experience the exact same problem as in one of these reports, or you should file your own bug report.

Revision history for this message
Tobias Krais (tux-spam) wrote :
Revision history for this message
Tobias Krais (tux-spam) wrote :

This workaround works for me. Hours of searching are now over! Thanks a lot!

Revision history for this message
Phillip Susi (psusi) wrote :

No, this is not a dup of bug #356503 because that bug is specifically about raid 5. This is raid 1.

Revision history for this message
Phillip Susi (psusi) wrote :

Is this still an issue in Karmic or Lucid?

Changed in dmraid (Ubuntu):
status: New → Incomplete
Revision history for this message
Simon (simonebeling) wrote :

Yes, I've got it in Lucid!

Revision history for this message
Sachin Garg (sgarg-bugreporter) wrote :

The system works fine in Lucid. I would like to close this bug.

Phillip Susi (psusi)
Changed in dmraid (Ubuntu):
status: Incomplete → Fix Released