Raidset stays inactive due to wrong # of devices

Bug #292302 reported by Juel
This bug affects 6 people
Affects: Debian | Status: Fix Released | Importance: Unknown
Affects: dmraid (Ubuntu) | Status: Fix Released | Importance: Medium | Assigned to: Unassigned
Nominated for Intrepid by Rich.T.
Nominated for Jaunty by Rich.T.

Bug Description

Binary package hint: dmraid

Hi,
I get the following error with dmraid 1.0.0.rc14-2ubuntu12 and libdmraid1.0.0.rc14:

root@ubuntu:/home/ubuntu# dmraid -ay
ERROR: isw device for volume "XenOS" broken on /dev/sdb in RAID set "isw_cgbcheahia_XenOS"
ERROR: isw: wrong # of devices in RAID set "isw_cgbcheahia_XenOS" [4/2] on /dev/sdb
ERROR: isw device for volume "Data" broken on /dev/sdb in RAID set "isw_cgbcheahia_XenOS"
ERROR: isw: wrong # of devices in RAID set "isw_cgbcheahia_XenOS" [4/2] on /dev/sdb
ERROR: isw device for volume "XenOS" broken on /dev/sdc in RAID set "isw_cgbcheahia_XenOS"
ERROR: isw: wrong # of devices in RAID set "isw_cgbcheahia_XenOS" [4/2] on /dev/sdc
ERROR: isw device for volume "Data" broken on /dev/sdc in RAID set "isw_cgbcheahia_XenOS"
ERROR: isw: wrong # of devices in RAID set "isw_cgbcheahia_XenOS" [4/2] on /dev/sdc
RAID set "sil_adbhbicddebi" already active

My isw RAID set contains two striped RAID volumes (XenOS and Data):

- XenOS contains 3 primary partitions (XenOS1, XenOS2, XenOS3) and the extended partition
XenOS4, which contains the logical partitions XenOS5 and XenOS6
- Data contains just one primary partition, Data1

With the earlier version from hardy, 1.0.0.rc14-0ubuntu3, the configuration works fine:

RAID set "isw_cgbcheahia_XenOS" already active
RAID set "isw_cgbcheahia_XenOS1" already active
RAID set "isw_cgbcheahia_XenOS2" already active
RAID set "isw_cgbcheahia_XenOS3" already active
RAID set "isw_cgbcheahia_XenOS4" already active
RAID set "isw_cgbcheahia_XenOS5" already active
RAID set "isw_cgbcheahia_XenOS6" already active
RAID set "isw_cgbcheahia_Data" already active
RAID set "isw_cgbcheahia_Data1" already active
RAID set "sil_adbhbicddebi" already active

Revision history for this message
Giuseppe Iuculano (giuseppe-iuculano) wrote :

I think this is related to Debian bug #494278; it seems that 07_isw-raid10-nested.dpatch causes this issue.

Revision history for this message
Juel (juel-juels-world) wrote :

That's probably it, thanks!
Downgrading to 1.0.0.rc14-0ubuntu3 from hardy solves the problem for now.
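
(One way to do that downgrade, for reference; the mirror URL is an assumption, and the matching libdmraid package may need the same version if apt asks for it:)

echo "deb http://archive.ubuntu.com/ubuntu hardy main" | sudo tee -a /etc/apt/sources.list
sudo apt-get update
sudo apt-get install dmraid=1.0.0.rc14-0ubuntu3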

Revision history for this message
Phillip Susi (psusi) wrote :

Good catch, it appears this is caused by this:

dmraid (1.0.0.rc14-2ubuntu9) intrepid; urgency=low

  * debian/control: dmraid and dmraid-udeb should depend on dmsetup and
    dmsetup-udeb respectively, to ensure UUID symlinks are correctly
    created.
  * debian/patches/07_isw-raid10-nested.dpatch: Re-add this patch as a user
    is not able to make use of his RAID array without it. Yes, it's known
    to break other RAID configurations, however there have been no Ubuntu
    bugs filed about this issue. (LP: #276095)

Changed in dmraid:
importance: Undecided → Medium
status: New → Triaged
Revision history for this message
Phillip Susi (psusi) wrote :

I have looked at the patch and the problem appears to be the changes it makes to name(). Originally name() is passed the isw_dev it should operate on, which corresponds to the RAID volume. The patch changes it to be passed the raid_dev, from which it then looks up the isw_dev itself, only it always uses the first one. If you have more than one RAID volume then they all get assigned the same name, which leads to a single volume looking like it contains 4 disks.

I think name() just needs to be fixed to take the isw_dev parameter again instead of looking up the first entry itself. I will try to fix this tomorrow and upload it to my PPA for testing.

Revision history for this message
Juel (juel-juels-world) wrote :

That's great news, I will be happy to test it as soon as I get some spare time.

Revision history for this message
Phillip Susi (psusi) wrote :

Ok, to use my test package add the following to your sources.list:

deb http://ppa.launchpad.net/psusi/ubuntu intrepid main
deb-src http://ppa.launchpad.net/psusi/ubuntu intrepid main

Then, when you install or upgrade dmraid (don't forget to apt-get update after changing sources.list), you should get -ubuntu13 and hopefully it will work.
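
(For example, assuming both the tool and its library package are wanted:)

sudo apt-get update
sudo apt-get install dmraid libdmraid1.0.0.rc14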

Changed in dmraid:
status: Triaged → In Progress
Revision history for this message
Juel (juel-juels-world) wrote :

Cheers mate, well done!
Works nice and stable :)

Get:1 http://ppa.launchpad.net intrepid/main libdmraid1.0.0.rc14 1.0.0.rc14-2ubuntu13 [80.9kB]
Get:2 http://ppa.launchpad.net intrepid/main dmraid 1.0.0.rc14-2ubuntu13 [28.5kB]
Fetched 109kB in 0s (178kB/s)
Selecting previously deselected package libdmraid1.0.0.rc14.
(Reading database ... 100008 files and directories currently installed.)
Unpacking libdmraid1.0.0.rc14 (from .../libdmraid1.0.0.rc14_1.0.0.rc14-2ubuntu13_amd64.deb) ...
Selecting previously deselected package dmraid.
Unpacking dmraid (from .../dmraid_1.0.0.rc14-2ubuntu13_amd64.deb) ...
Processing triggers for man-db ...
Setting up libdmraid1.0.0.rc14 (1.0.0.rc14-2ubuntu13) ...

Setting up dmraid (1.0.0.rc14-2ubuntu13) ...
update-initramfs is disabled since running on a live CD

Processing triggers for libc6 ...
ldconfig deferred processing now taking place
root@ubuntu:/etc/apt# dmraid -ay
RAID set "isw_cgbcheahia_XenOS" already active
RAID set "isw_cgbcheahia_Data" already active
RAID set "sil_adbhbicddebi" already active
RAID set "isw_cgbcheahia_XenOS1" already active
RAID set "isw_cgbcheahia_XenOS2" already active
RAID set "isw_cgbcheahia_XenOS3" already active
RAID set "isw_cgbcheahia_XenOS5" already active
RAID set "isw_cgbcheahia_XenOS6" already active
RAID set "isw_cgbcheahia_Data1" already active

Revision history for this message
indy2718 (indy2718) wrote :

Hello, I was the user that the RAID 10 patch was re-added for. I tried this new dmraid using apt-get from your repository and it doesn't work for me. I am running a custom 2.6.27 kernel with the latest Ubuntu intrepid, on a Core 2.

The screenshot was taken during bootup; it bails out to the initramfs prompt.
My RAID setup is Intel RAID 10:
4 disks of 500 GB each, for 1000 GB of total data space.

I would buy a raid controller, but I don't have a free PCI-E slot. I'm not going to upgrade for a while.

Revision history for this message
Phillip Susi (psusi) wrote :

Could you attach the generated output files of dmraid -rD?

Revision history for this message
indy2718 (indy2718) wrote :

I installed the package but didn't reboot.

root@thermal:/home/x# dmraid -rD
/dev/sdd: isw, "isw_jceibccac", GROUP, ok, 976773165 sectors, data@ 0
/dev/sdc: isw, "isw_jceibccac", GROUP, ok, 976773165 sectors, data@ 0
/dev/sdb: isw, "isw_jceibccac", GROUP, ok, 976773165 sectors, data@ 0
/dev/sda: isw, "isw_jceibccac", GROUP, ok, 976773165 sectors, data@ 0
root@thermal:/home/x# dmraid --version
dmraid version: 1.0.0.rc14 (2006.11.08) shared
dmraid library version: 1.0.0.rc14 (2006.11.08)
device-mapper version: 4.14.0
root@thermal:/home/x# dpkg -l | grep dmraid
ii dmraid 1.0.0.rc14-2ubuntu13 Device-Mapper Software RAID support tool
ii libdmraid1.0.0.rc14 1.0.0.rc14-2ubuntu13 Device-Mapper Software RAID support tool - s
root@thermal:/home/x# dmraid -rD -d -d -d -v -v -v
WARN: locking /var/lock/dmraid/.lock
NOTICE: skipping removable device /dev/sde
NOTICE: /dev/sdd: asr discovering
NOTICE: /dev/sdd: ddf1 discovering
NOTICE: /dev/sdd: hpt37x discovering
NOTICE: /dev/sdd: hpt45x discovering
NOTICE: /dev/sdd: isw discovering
NOTICE: writing metadata file "sdd_isw.dat"
NOTICE: writing offset to file "sdd_isw.offset"
NOTICE: writing size to file "sdd_isw.size"
NOTICE: /dev/sdd: isw metadata discovered
NOTICE: /dev/sdd: jmicron discovering
NOTICE: /dev/sdd: lsi discovering
NOTICE: /dev/sdd: nvidia discovering
NOTICE: /dev/sdd: pdc discovering
NOTICE: /dev/sdd: sil discovering
NOTICE: /dev/sdd: via discovering
NOTICE: /dev/sdc: asr discovering
NOTICE: /dev/sdc: ddf1 discovering
NOTICE: /dev/sdc: hpt37x discovering
NOTICE: /dev/sdc: hpt45x discovering
NOTICE: /dev/sdc: isw discovering
NOTICE: writing metadata file "sdc_isw.dat"
NOTICE: writing offset to file "sdc_isw.offset"
NOTICE: writing size to file "sdc_isw.size"
NOTICE: /dev/sdc: isw metadata discovered
NOTICE: /dev/sdc: jmicron discovering
NOTICE: /dev/sdc: lsi discovering
NOTICE: /dev/sdc: nvidia discovering
NOTICE: /dev/sdc: pdc discovering
NOTICE: /dev/sdc: sil discovering
NOTICE: /dev/sdc: via discovering
NOTICE: /dev/sdb: asr discovering
NOTICE: /dev/sdb: ddf1 discovering
NOTICE: /dev/sdb: hpt37x discovering
NOTICE: /dev/sdb: hpt45x discovering
NOTICE: /dev/sdb: isw discovering
NOTICE: writing metadata file "sdb_isw.dat"
NOTICE: writing offset to file "sdb_isw.offset"
NOTICE: writing size to file "sdb_isw.size"
NOTICE: /dev/sdb: isw metadata discovered
NOTICE: /dev/sdb: jmicron discovering
NOTICE: /dev/sdb: lsi discovering
NOTICE: /dev/sdb: nvidia discovering
NOTICE: /dev/sdb: pdc discovering
NOTICE: /dev/sdb: sil discovering
NOTICE: /dev/sdb: via discovering
NOTICE: /dev/sda: asr discovering
NOTICE: /dev/sda: ddf1 discovering
NOTICE: /dev/sda: hpt37x discovering
NOTICE: /dev/sda: hpt45x discovering
NOTICE: /dev/sda: isw discovering
NOTICE: writing metadata file "sda_isw.dat"
NOTICE: writing offset to file "sda_isw.offset"
NOTICE: writing size to file "sda_isw.size"
NOTICE: /dev/sda: isw metadata discovered
NOTICE: /dev/sda: jmicron discovering
NOTICE: /dev/sda: lsi discovering
N...


Revision history for this message
indy2718 (indy2718) wrote :

Adding dmraid -rD outputs

Revision history for this message
Rich.T. (rich.t.) wrote :

Hello!

 The new version (dmraid 1.0.0.rc14-2ubuntu13) works fine for me now (following the steps above):

ubuntu@ubuntu:~$ sudo dmraid -ay
RAID set "isw_dejcdcjhf_Storage" already active
RAID set "isw_dejcdcjhf_Video_Storage" already active
RAID set "isw_ecbdhhhfe_Linux" already active
RAID set "isw_ecbdhhhfe_Windows" already active
RAID set "isw_dejcdcjhf_Storage1" already active
RAID set "isw_dejcdcjhf_Video_Storage1" already active
ubuntu@ubuntu:~$

 However, I was counting on being able to install with LVM on RAID using the Alternate CD.
 I tried substituting the new files into the /pool/main/d/dmraid folder in the .iso, re-summing the MD5s, substituting hashes and paths in md5sum.txt and burning, but on booting from the CD I got an integrity error.
 I know that this fix doesn't help indy2718, but this must be affecting quite a few people who would benefit from having the updated files on the disk image. Maybe an update soon?

 Thanks.
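
(A rough sketch of that remastering procedure, for reference; the ISO and directory names are assumptions, and the installer's verification may also cover the APT Packages indices under dists/, which would then need regenerating for substituted packages to pass:)

mkdir iso newcd
sudo mount -o loop ubuntu-8.10-alternate-amd64.iso iso
cp -rT iso newcd
chmod -R u+w newcd
cp dmraid_1.0.0.rc14-2ubuntu13_amd64.deb libdmraid1.0.0.rc14_1.0.0.rc14-2ubuntu13_amd64.deb newcd/pool/main/d/dmraid/
( cd newcd && find . -type f ! -name md5sum.txt -exec md5sum {} \; > md5sum.txt )
mkisofs -r -J -l -b isolinux/isolinux.bin -c isolinux/boot.cat -no-emul-boot -boot-load-size 4 -boot-info-table -o ubuntu-8.10-alternate-dmraid.iso newcd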

Revision history for this message
Phillip Susi (psusi) wrote :

indy, those files do not appear to contain metadata for some reason. Try this instead:

sudo dd if=/dev/sda of=sda_isw.dat skip=976773165 bs=512

Repeat for each disk.
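
(Or, as a loop over the four disks, since dmraid -rD reported the same size for each; adjust the device list if yours differ:)

for d in sda sdb sdc sdd; do sudo dd if=/dev/$d of=${d}_isw.dat skip=976773165 bs=512; done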

Revision history for this message
indy2718 (indy2718) wrote :

metadata attached

Revision history for this message
Phillip Susi (psusi) wrote :

Hrm... strange, can you post the output of sudo fdisk -lu /dev/sda?

Revision history for this message
indy2718 (indy2718) wrote :

# sudo fdisk -lu /dev/sda
Warning: invalid flag 0x0000 of partition table 5 will be corrected by w(rite)

Disk /dev/sda: 500.1 GB, 500107862016 bytes
255 heads, 63 sectors/track, 60801 cylinders, total 976773168 sectors
Units = sectors of 1 * 512 = 512 bytes
Disk identifier: 0x7e7498c6

   Device Boot      Start         End      Blocks  Id  System
/dev/sda1    *       2048   512002047   256000000   7  HPFS/NTFS
/dev/sda2       512002048  1560586229   524292091   5  Extended
/dev/sda3      1560586230  1953536129   196474950  83  Linux

Revision history for this message
yonish (silver83) wrote :

I tried downgrading and I can't tell whether the downgrade didn't work or whether it's just not working:
yoni@yoniBuntu:~$ sudo dmraid -ay
/dev/sdb: "sil" and "isw" formats discovered (using isw)!
ERROR: isw device for volume "Volume0" broken on /dev/sdb in RAID set "isw_baiacbfgeh_Volume0"
ERROR: isw: wrong # of devices in RAID set "isw_baiacbfgeh_Volume0" [1/2] on /dev/sdb
RAID set "nvidia_fghcaafc" already active

When I restart my computer after intrepid was running (even without dmraid installed) I see a BIOS diagnostic message telling me one of my two RAIDed hard disks has "failed". This is solved by a complete shutdown and startup sequence (instead of a reboot).

After downgrading, in Synaptic I see the installed version is 1.0.0.rc14-2ubuntu13.

The sequence of operations I performed in order to "downgrade":
1. added the two lines from one of the replies above to my sources.list for apt.
2. sudo apt-get update
3. sudo apt-get upgrade

I saw the installation log and everything looks fine.

Help?

Revision history for this message
Phillip Susi (psusi) wrote :

yonish, your issue does not appear to be related to this one. It looks like your sdb has both sil and isw metadata on it and dmraid is using the isw, but the other disk is presumably sil. If you aren't using an Intel Matrix Storage controller then you need to erase the isw metadata with sudo dmraid -E /dev/sdb -f isw.
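
(For reference, dmraid -r first shows which metadata format each disk carries, so you can confirm what is there before erasing it with the command above:)

sudo dmraid -r
sudo dmraid -E /dev/sdb -f isw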

Revision history for this message
snowgarden (anmeldung-snowgarden-deactivatedaccount) wrote :

This solution worked for me -> https://bugs.launchpad.net/ubuntu/+source/dmraid/+bug/292302/comments/6

But now I have a problem with mounting my NTFS filesystem. Does anybody have the same problem?

I get these errors:
$MFT has invalid magic.
Failed to load $MFT: Input/output error
Failed to mount '/dev/mapper/isw_degfibeaad_data1': Input/output error
NTFS is either inconsistent, or you have hardware faults, or you have a
SoftRAID/FakeRAID hardware. In the first case run chkdsk /f on Windows
then reboot into Windows TWICE. The usage of the /f parameter is very
important! If you have SoftRAID/FakeRAID then first you must activate
it and mount a different device under the /dev/mapper/ directory, (e.g.
/dev/mapper/nvidia_eahaabcc1). Please see the 'dmraid' documentation
for the details.

And:
NTFS signature is missing.
Failed to mount '/dev/mapper/isw_degfibeaad_data2': Invalid argument
The device '/dev/mapper/isw_degfibeaad_data2' doesn't have a valid NTFS.
Maybe you selected the wrong device? Or the whole disk instead of a
partition (e.g. /dev/hda, not /dev/hda1)? Or the other way around?

Revision history for this message
Phillip Susi (psusi) wrote :

Unfortunately, fixing the raid10 patch is a lot more complicated than I thought, so I have given up. The isw raid10 support apparently was implemented differently in rc15 and works properly, so I suggest just backporting that.

Revision history for this message
indy2718 (indy2718) wrote : Re: [Bug 292302] Re: Raidset stays inactive due to wrong # of devices

Phillip Susi wrote:
> Unfortunately fixing the raid10 patch is a lot more complicated than I
> thought so I have given up. The isw raid10 support apparently was
> implemented differently in rc15 and works properly so I suggest just
> backporting that.
>
>
Hello, thank you for the attempt.

I tried rc15 before and it didn't work, and I just tried the jaunty dmraid
rc15: 'Could not find metadata' when I boot and try dmraid -ay at the initramfs prompt.

https://bugs.launchpad.net/ubuntu/+source/dmraid/+bug/276095

At worst case, I can keep a local copy of a dmraid package that works,
and just install it whenever I upgrade.

Revision history for this message
Phillip Susi (psusi) wrote :

indy2718 wrote:
> I tried rc15 before and it didn't work, and I just tried jaunty dmraid
> rc15. 'Could not find metadata' when I boot and try dmraid -ay at initram.
>
> https://bugs.launchpad.net/ubuntu/+source/dmraid/+bug/276095
>
> At worst case, I can keep a local copy of a dmraid package that works,
> and just install it whenever I upgrade.

It would be better if we could figure out what is wrong and get dmraid
fixed in time for Jaunty. If you can do some testing with the Jaunty
daily builds that would be helpful. Since you say it reported no
metadata found, that sounds like a different issue, so please file a new
bug report.

Revision history for this message
indy2718 (indy2718) wrote :

These are the bugs that I have open or messaged to. 276095 explains
my experience with rc15. I can open another bug for rc15 if you want.

http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=494278
https://bugs.launchpad.net/bugs/276095
https://bugs.launchpad.net/bugs/292302

On Tue, Dec 2, 2008 at 3:50 PM, Phillip Susi <email address hidden> wrote:
> indy2718 wrote:
>> I tried rc15 before and it didn't work, and I just tried jaunty dmraid
>> rc15. 'Could not find metadata' when I boot and try dmraid -ay at initram.
>>
>> https://bugs.launchpad.net/ubuntu/+source/dmraid/+bug/276095
>>
>> At worst case, I can keep a local copy of a dmraid package that works,
>> and just install it whenever I upgrade.
>
> It would be better if we could figure out what is wrong and get dmraid
> fixed in time for Jaunty. If you can do some testing with the Jaunty
> daily builds that would be helpful. since you say it reported no
> metadata found, that sounds like a different issue so please file a new
> bug report.
>

Revision history for this message
Phillip Susi (psusi) wrote :

Since the other bug was marked as fixed, and its subject was the removal of the raid10 patch from rc14, I'd say file a new bug with details on what goes wrong with rc15 in Jaunty.

Revision history for this message
Giuseppe Iuculano (giuseppe-iuculano) wrote :

Hi,

I've prepared a package, can you try it please?

echo "deb http://ppa.launchpad.net/giuseppe-iuculano/ubuntu intrepid main" >> /etc/apt/sources.list
apt-get update
apt-get install dmraid=1.0.0.rc14-2ubuntu12.2

Giuseppe.

Revision history for this message
indy2718 (indy2718) wrote :

It works fine; I did an apt-get install. I also updated libdmraid.
There were no initramfs triggers, so I generated it myself. It boots
and I can use the disk.
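
(For reference, one way to regenerate it by hand for the running kernel:)

sudo update-initramfs -u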

x@thermal:/home/x$ dpkg -l | grep dmraid
ii dmraid 1.0.0.rc14-2ubuntu12.2 Device-Mapper Software RAID support tool
ii libdmraid1.0.0.rc14 1.0.0.rc14-2ubuntu12.2 Device-Mapper Software RAID support tool - s

On Thu, Dec 4, 2008 at 10:13 AM, Giuseppe Iuculano <email address hidden> wrote:
> Hi,
>
> I've prepared a package, can you try it please?
>
> echo "deb http://ppa.launchpad.net/giuseppe-iuculano/ubuntu intrepid main" >> /etc/apt/sources.list
> apt-get update
> apt-get install dmraid=1.0.0.rc14-2ubuntu12.2
>
>
> Giuseppe.
>

Revision history for this message
Giuseppe Iuculano (giuseppe-iuculano) wrote :

Ok, so we need an ack from Juel.

Revision history for this message
Juel (juel-juels-world) wrote :

Nice!
I can confirm that everything is still OK with your new package here.
All RAID sets become active and are working.

Well done, Juel

Revision history for this message
Giuseppe Iuculano (giuseppe-iuculano) wrote :

Great! debdiff attached.

Giuseppe

Changed in dmraid:
status: In Progress → Fix Committed
Revision history for this message
Peter Hong (peter-hong) wrote :

Hi,

I have a big problem.

I created a RAID 1 (mirror) and installed Fedora on it.
Now I want to change the OS to Ubuntu 8.10 and change the RAID setting to RAID 0.

When I download the dmraid version (1.0.0.rc14-2ubuntu13) and run "dmraid -ay",
the RAID doesn't work.

ubuntu@ubuntu:~$ sudo dmraid -ay
ERROR: isw device for volume "Volume0" broken on /dev/sdb in RAID set "isw_bcdagehgbe_Volume0"
ERROR: isw: wrong # of devices in RAID set "isw_bcdagehgbe_Volume0" [1/2] on /dev/sdb
ERROR: isw device for volume "SDD3" broken on /dev/sda in RAID set "isw_cjgfhdfgic_SDD3"
ERROR: isw: wrong # of devices in RAID set "isw_cjgfhdfgic_SDD3" [1/2] on /dev/sda
ERROR: no mapping possible for RAID set isw_cjgfhdfgic_SDD3

ubuntu@ubuntu:~$ sudo dmraid -s
ERROR: isw device for volume "Volume0" broken on /dev/sdb in RAID set "isw_bcdagehgbe_Volume0"
ERROR: isw: wrong # of devices in RAID set "isw_bcdagehgbe_Volume0" [1/2] on /dev/sdb
ERROR: isw device for volume "SDD3" broken on /dev/sda in RAID set "isw_cjgfhdfgic_SDD3"
ERROR: isw: wrong # of devices in RAID set "isw_cjgfhdfgic_SDD3" [1/2] on /dev/sda
*** Group superset isw_bcdagehgbe
--> Subset
name : isw_bcdagehgbe_Volume0
size : 625134848
stride : 256
type : stripe
status : broken
subsets: 0
devs : 1
spares : 0
*** Group superset isw_cjgfhdfgic
--> Subset
name : isw_cjgfhdfgic_SDD3
size : 625137664
stride : 128
type : mirror
status : broken
subsets: 0
devs : 1
spares : 0
ubuntu@ubuntu:~$

How can I remove the isw_cjgfhdfgic_SDD3 setting?

Thanks!

Revision history for this message
Giuseppe Iuculano (giuseppe-iuculano) wrote :

Peter Hong wrote:
> Hi,
>
> I have a big problem.
>
> I create a RAID 1(mirror) and install Fedora on it.
> Now, I want change the OS to ubuntu 8.10. and change the RAID setting to RAID 0.
>
> When I download the dmraid version(1.0.0.rc14-2ubuntu13) and do" dmraid -ay"

ubuntu13? Where did you find that version? Can you try my version please?

echo "deb http://ppa.launchpad.net/giuseppe-iuculano/ubuntu intrepid main" >>
/etc/apt/sources.list
apt-get update
apt-get install dmraid=1.0.0.rc14-2ubuntu12.2

Giuseppe.

Revision history for this message
Juel (juel-juels-world) wrote :

Probably from Phillip Susi here: https://bugs.launchpad.net/ubuntu/+source/dmraid/+bug/292302/comments/6
But you should try the latest one from Giuseppe...

Juel

Revision history for this message
Peter Hong (peter-hong) wrote :

Sorry, ubuntu13 was my mistake.

I tried version 1.0.0.rc14-2ubuntu12.2.
It still doesn't work.

The output is as follows:
====================================
root@ubuntu:/etc/apt# dmraid --version
dmraid version: 1.0.0.rc14 (2006.11.08) shared
dmraid library version: 1.0.0.rc14 (2006.11.08)
device-mapper version: 4.14.0
root@ubuntu:/etc/apt# dpkg -l | grep dmraid
ii dmraid 1.0.0.rc14-2ubuntu12.2 Device-Mapper Software RAID support tool
ii libdmraid1.0.0.rc14 1.0.0.rc14-2ubuntu12.2 Device-Mapper Software RAID support tool - s
root@ubuntu:/etc/apt# dmraid -rD -d -d -d -v -v -v
WARN: locking /var/lock/dmraid/.lock
NOTICE: /dev/sdb: asr discovering
NOTICE: /dev/sdb: ddf1 discovering
NOTICE: /dev/sdb: hpt37x discovering
NOTICE: /dev/sdb: hpt45x discovering
NOTICE: /dev/sdb: isw discovering
NOTICE: writing metadata file "sdb_isw.dat"
NOTICE: writing offset to file "sdb_isw.offset"
NOTICE: writing size to file "sdb_isw.size"
NOTICE: /dev/sdb: isw metadata discovered
NOTICE: /dev/sdb: jmicron discovering
NOTICE: /dev/sdb: lsi discovering
NOTICE: /dev/sdb: nvidia discovering
NOTICE: /dev/sdb: pdc discovering
NOTICE: /dev/sdb: sil discovering
NOTICE: /dev/sdb: via discovering
NOTICE: /dev/sda: asr discovering
NOTICE: /dev/sda: ddf1 discovering
NOTICE: /dev/sda: hpt37x discovering
NOTICE: /dev/sda: hpt45x discovering
NOTICE: /dev/sda: isw discovering
NOTICE: writing metadata file "sda_isw.dat"
NOTICE: writing offset to file "sda_isw.offset"
NOTICE: writing size to file "sda_isw.size"
NOTICE: /dev/sda: isw metadata discovered
NOTICE: /dev/sda: jmicron discovering
NOTICE: /dev/sda: lsi discovering
NOTICE: /dev/sda: nvidia discovering
NOTICE: /dev/sda: pdc discovering
NOTICE: /dev/sda: sil discovering
NOTICE: /dev/sda: via discovering
INFO: RAID devices discovered:

/dev/sdb: isw, "isw_bcdagehgbe", GROUP, ok, 625142446 sectors, data@ 0
/dev/sda: isw, "isw_cjgfhdfgic", GROUP, ok, 625142445 sectors, data@ 0
WARN: unlocking /var/lock/dmraid/.lock
root@ubuntu:/etc/apt#
root@ubuntu:/etc/apt# dmraid -ay
ERROR: isw device for volume "Volume0" broken on /dev/sdb in RAID set "isw_bcdagehgbe_Volume0"
ERROR: isw: wrong # of devices in RAID set "isw_bcdagehgbe_Volume0" [1/2] on /dev/sdb
ERROR: isw device for volume "SDD3" broken on /dev/sda in RAID set "isw_cjgfhdfgic_SDD3"
ERROR: isw: wrong # of devices in RAID set "isw_cjgfhdfgic_SDD3" [1/2] on /dev/sda
ERROR: no mapping possible for RAID set isw_cjgfhdfgic_SDD3
root@ubuntu:/etc/apt#
====================================

When I try to install Fedora 9 again,
Fedora detects the correct RAID setting (only isw_bcdagehgbe_Volume0).

Note:
I only created one RAID 0 array (isw_bcdagehgbe_Volume0) in my BIOS settings.
isw_cjgfhdfgic_SDD3 was deleted in the BIOS, but dmraid can still find it.

Revision history for this message
Giuseppe Iuculano (giuseppe-iuculano) wrote :

Peter Hong wrote:

> Note:
> I only create a raid 0 array(isw_bcdagehgbe_Volume0) in my BIOS setting.
> isw_cjgfhdfgic_SDD3 << This setting was be delete on BIOS,but it still can find in dmraid.

Try dmraid -rE /dev/sda and dmraid -rE /dev/sdb

Note that this command will erase *all* raid metadata.

Giuseppe.

Revision history for this message
Peter Hong (peter-hong) wrote :

Thanks!

After running dmraid -rE /dev/sda and dmraid -rE /dev/sdb,
I created a new RAID array in the BIOS,
booted from the Live CD and installed dmraid.
The RAID array works!

Revision history for this message
bottkars (karsten-bott) wrote :

I tested various versions for installing Ubuntu on my fakeraid system with Intel isw and RAID 1.
I went through nearly every single error described here ...
Neither intrepid nor jaunty seemed to work.
My solution was to use dmraid 1.0.0.rc14-2ubuntu13 as Phil Susi described, and then pin that version in Synaptic.

Will there be a fix that fits for all, or is this an open issue?
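
(A minimal pin for that, as one possible sketch; it assumes /etc/apt/preferences is used, and a matching stanza would be needed for libdmraid1.0.0.rc14:)

Package: dmraid
Pin: version 1.0.0.rc14-2ubuntu13
Pin-Priority: 1001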

Revision history for this message
Colin Watson (cjwatson) wrote :

(Fix Committed -> Triaged since this has not yet been committed somewhere that would result in it being in the next Ubuntu upload; though I've drawn this bug to Luke's attention.)

Changed in dmraid:
status: Fix Committed → Triaged
Revision history for this message
P3P (p3p) wrote :

My system: intel ICH9R, 4 hard disks, two raid arrays (raid0 and raid5).

Ubuntu 8.04 installed on the raid0 array works well with dmraid 1.0.0.rc14-0ubuntu3.1, and can read/write both the raid0 and raid5 arrays.

Ubuntu 8.10 intrepid could not boot because dmraid prints this error 8 times:
ERROR: isw device for volume "zerovol" broken on /dev/sda in RAID set "isw_baeaijeeda_zerovol"
ERROR: isw: wrong # of devices in RAID set "isw_baeaijeeda_zerovol" [8/4] on /dev/sda

I have followed instructions in another bug to change the libata HPA option and updated the initramfs, but dmraid shows the same error message.

Now I have updated the dmraid version to 1.0.0.rc14-2ubuntu13 (Phillip Susi's), and the "wrong # of devices" error has disappeared. When I chroot into Ubuntu 8.10 from Ubuntu 8.04 everything in dmraid seems OK, but during boot the system cannot access the root partition on the raid5 array and an initramfs console appears.

Thanks in advance.

Revision history for this message
P3P (p3p) wrote :

Now I have tested 3 dmraid versions: ubuntu repository, Phillip Susi, and Giuseppe Iuculano.

When I chroot from Ubuntu 8.04 into the Ubuntu 8.10 root partition everything seems OK; dmraid -ay has normal output (the same behaviour with all 3 dmraid versions).

But when I try to boot Ubuntu 8.10 these errors appear:
==================
Unable to enumerate USB device on port 1.
Gave up waiting for root device. Common problems:
     -Boot args.
     -Missing modules.
ALERT! /dev/mapper/isw_baeaijeeda_cinco5 does not exist. Dropping to a shell.
==================

[With Giuseppe Iuculano's version, a "No block devices found" error also appears 4 times.]

In the initramfs console I tried executing dmraid -s, dmraid -r and dmraid -ay; the output is the same as when I chroot, BUT with dmraid -ay the output adds an ERROR line:
============
# dmraid -ay
RAID set "isw_baeaijeeda_cero" already active
RAID set "isw_baeaijeeda_cinco" already active
ERROR: adding /dev/mapper/isw_baeaijeeda_cinco5 to raid set
RAID set "isw_baeaijeeda_cero2" already active
RAID set "isw_baeaijeeda_cero5" already active
RAID set "isw_baeaijeeda_cero6" already active
RAID set "isw_baeaijeeda_cero7" already active
RAID set "isw_baeaijeeda_cero8" already active
RAID set "isw_baeaijeeda_cinco1" already active
RAID set "isw_baeaijeeda_cinco2" already active
RAID set "isw_baeaijeeda_cinco5" already active
RAID set "isw_baeaijeeda_cinco6" already active
RAID set "isw_baeaijeeda_cinco7" already active
============

NOTE: I am using amd64 Ubuntu version.

Regards.

Revision history for this message
P3P (p3p) wrote :

Sorry, the output of dmraid -ay in the initramfs console was incorrect. The correct output is:
============
# dmraid -ay
RAID set "isw_baeaijeeda_cero" already active
RAID set "isw_baeaijeeda_cinco" already active
ERROR: adding /dev/mapper/isw_baeaijeeda_cinco to raid set
RAID set "isw_baeaijeeda_cero2" already active
RAID set "isw_baeaijeeda_cero5" already active
RAID set "isw_baeaijeeda_cero6" already active
RAID set "isw_baeaijeeda_cero7" already active
RAID set "isw_baeaijeeda_cero8" already active
RAID set "isw_baeaijeeda_cinco1" already active
RAID set "isw_baeaijeeda_cinco2" already active
============

Modest clues from a tedious user:
* Note that only the primary partitions of the RAID 5 appear (cinco1 and cinco2); the logical partitions (cinco5, cinco6, cinco7) are not mentioned. The Ubuntu 8.10 root partition is cinco5.
* Remember that when I chroot from Ubuntu 8.04 everything seems OK in the dmraid output, so could the problem be initramfs module related?
* Ubuntu 8.04 boots well, so I installed the dmraid version from the hardy repositories, but the device-mapper target type "raid45" is not in the intrepid kernel, so the initramfs could not boot (the available targets can be checked as shown below).
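
(For reference, the device-mapper target types known to the running kernel can be listed with:)

sudo dmsetup targets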

Regards.

Revision history for this message
David Futcher (bobbo) wrote :

This was fixed in a Debian release which appeared in Ubuntu Jaunty. Seeing as this bug has not been touched in well over a year, I will assume that it was fixed in the Debian release and mark this as Fix Released. Of course, if this is still causing anyone problems in recent releases, please re-open this bug. Thank you!

Changed in dmraid (Ubuntu):
status: Triaged → Fix Released
Revision history for this message
Kluth (kluth-weas) wrote :

I am using Ubuntu 10.04, kernel 2.6.32-22-generic
Asus K8V SE Deluxe, BIOS: AMIBIOS Version 08.00.09, ID: A0058002
Promise controller deactivated
4 IDE hard disks (Samsung SP1634N) configured as RAID 0, connected via the VIA VT8237 controller
All hard disks are shown identically in the BIOS

I created the RAID with the partitioning tool included on the Ubuntu 10.04 64-bit minimal installation CD
The system worked fine for two weeks or so

After changing /etc/fstab by adding

tmpfs /tempFileSystem tmpfs noexec,default,noatime,nodiratime 0 0

and removing the line that mounts the floppy

/dev/fd0 /media/floppy auto rw,noauto,user,sync 0 0 # This line is an example because I can't read my hard disk files...

the system hangs after the message
JBD: barrier-based sync failed on md1-8 - disabling barriers

(The message before the grub screen that the system can't find a floppy still appears (the floppy controller is deactivated in the BIOS))

I can switch to tty7 and back to tty1, but not to the ones in between (I have not disabled them; tty7 shows just a blinking cursor)
If I add a USB CD-ROM it is found and a message is printed out -> the system does not hang totally
I cannot connect via SSH (I don't know if I have configured the SSH daemon yet)
If I hit Ctrl-Alt-Del the system shuts down

I can look at some earlier messages by using Shift-PageUp. There is a message: "md0: unknown partition table", but I think I remember that this message has been there all the time since installation
And two lines later comes:

EXT4-fs (md1): mounted filesystem with ordered data mode
Begin: Running /scripts/local-bottom ...
Done.
Done.
Begin: Running /scripts/init-bottom ...
Done.

The grub entry is:
normal:

recordfail
insmod raid
insmod mdraid
insmod ext2
set root='(md1)'
search --no-floppy --fs-uuid --set 88d5917f-fdb3-4673-a3bc-82e29469467a
linux /boot/vmlinuz-2.6.32-22-generic root=UUID=88d5917f-fdb3-4673-a3bc-82e29469467a ro splash
initrd /boot/initrd.img-2.6.32-22-generic

recovery:

recordfail
insmod raid
insmod mdraid
insmod ext2
set root='(md1)'
search --no-floppy --fs-uuid --set 88d5917f-fdb3-4673-a3bc-82e29469467a
echo 'Loading Linux 2.6.32-22-generic ...'
linux /boot/vmlinuz-2.6.32-22-generic root=UUID=88d5917f-fdb3-4673-a3bc-82e29469467a ro single
echo 'Loading the initial ramdisk ...'
initrd /boot/initrd.img-2.6.32-22-generic

I tried the following:

- Booting via "Recovery mode" -> nothing changed
- Booting with the added boot option nodmraid gives me the message "dmraid-activate: WARNING: dmraid disable by boot option" -> hangs after the same message
- Booting the Ubuntu 10.04 kernel 2.6.32-21-generic Live CD

fdisk -l

Disk /dev/sda: 160.0 GB, 160041885696 bytes
255 heads, 63 sectors/track, 19457 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x3870bf41

   Device Boot      Start         End      Blocks  Id  System
/dev/sda1    *          1         244     1951744  fd  Linux raid autodetect
Partition 1 does not end on cylinder boundary.
/dev/sda2             244       19458   154336257   5  Extended
/dev/sda5             244       19458       15433...


Changed in debian:
status: Unknown → Fix Released