Raidset stays inactive due to wrong # of devices
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
Debian | Fix Released | Unknown | |
dmraid (Ubuntu) | Fix Released | Medium | Unassigned |
Bug Description
Binary package hint: dmraid
Hi,
I get the following error with dmraid 1.0.0.rc14-
root@ubuntu:
ERROR: isw device for volume "XenOS" broken on /dev/sdb in RAID set "isw_cgbcheahia
ERROR: isw: wrong # of devices in RAID set "isw_cgbcheahia
ERROR: isw device for volume "Data" broken on /dev/sdb in RAID set "isw_cgbcheahia
ERROR: isw: wrong # of devices in RAID set "isw_cgbcheahia
ERROR: isw device for volume "XenOS" broken on /dev/sdc in RAID set "isw_cgbcheahia
ERROR: isw: wrong # of devices in RAID set "isw_cgbcheahia
ERROR: isw device for volume "Data" broken on /dev/sdc in RAID set "isw_cgbcheahia
ERROR: isw: wrong # of devices in RAID set "isw_cgbcheahia
RAID set "sil_adbhbicddebi" already active
My isw RAID set contains two striped RAID volumes (XenOS and Data):
- XenOS contains 3 primary partitions (XenOS1, XenOS2, XenOS3) and the extended partition XenOS4, containing the logical partitions XenOS5 and XenOS6
- Data contains just one primary partition, Data1
With the earlier version from hardy, 1.0.0.rc14-0ubuntu3, the configuration works fine:
RAID set "isw_cgbcheahia
RAID set "isw_cgbcheahia
RAID set "isw_cgbcheahia
RAID set "isw_cgbcheahia
RAID set "isw_cgbcheahia
RAID set "isw_cgbcheahia
RAID set "isw_cgbcheahia
RAID set "isw_cgbcheahia
RAID set "isw_cgbcheahia
RAID set "sil_adbhbicddebi" already active
Giuseppe Iuculano (giuseppe-iuculano) wrote : | #1 |
Juel (juel-juels-world) wrote : | #2 |
That's probably it, thanks!
Downgrading to 1.0.0.rc14-0ubuntu3 from hardy solves the problem for now.
Phillip Susi (psusi) wrote : | #3 |
Good catch, it appears this is caused by this:
dmraid (1.0.0.
* debian/control: dmraid and dmraid-udeb should depend on dmsetup and
dmsetup-udeb respectively, to ensure UUID symlinks are correctly
created.
* debian/
is not able to make use of his RAID array without it. Yes, it's known
to break other RAID configurations, however there have been no Ubuntu
bugs filed about this issue. (LP: #276095)
Changed in dmraid: | |
importance: | Undecided → Medium |
status: | New → Triaged |
Phillip Susi (psusi) wrote : | #4 |
I have looked at the patch, and the problem appears to be the changes it makes to name(). Originally, name() is passed the isw_dev it should operate on, which corresponds to the RAID volume. The patch changes it to be passed the raid_dev, and name() then finds the isw_dev itself from the raid_dev, only it always uses the first one. If you have more than one RAID volume, they all get assigned the same name, which makes a single volume look like it contains 4 disks.
I think name() just needs to be fixed to take the isw_dev parameter again instead of looking up the first entry itself. I will try to fix this tomorrow and upload it to my PPA for testing.
Juel (juel-juels-world) wrote : | #5 |
That's great news; I'll be happy to test it as soon as I get some spare time.
Phillip Susi (psusi) wrote : | #6 |
Ok, to use my test package add the following to your sources.list:
deb http://
deb-src http://
Then when you install or upgrade dmraid (don't forget to apt-get update after changing sources.list) you should get -ubuntu13, and hopefully it will work.
Changed in dmraid: | |
status: | Triaged → In Progress |
Juel (juel-juels-world) wrote : | #7 |
Cheers mate, well done!
Works nice and stable :)
Get:1 http://
Get:2 http://
Fetched 109kB in 0s (178kB/s)
Selecting previously deselected package libdmraid1.
(Reading database ... 100008 files and directories currently installed.)
Unpacking libdmraid1.0.0.rc14 (from .../libdmraid1.
Selecting previously deselected package dmraid.
Unpacking dmraid (from .../dmraid_
Processing triggers for man-db ...
Setting up libdmraid1.0.0.rc14 (1.0.0.
Setting up dmraid (1.0.0.
update-initramfs is disabled since running on a live CD
Processing triggers for libc6 ...
ldconfig deferred processing now taking place
root@ubuntu:
RAID set "isw_cgbcheahia
RAID set "isw_cgbcheahia
RAID set "sil_adbhbicddebi" already active
RAID set "isw_cgbcheahia
RAID set "isw_cgbcheahia
RAID set "isw_cgbcheahia
RAID set "isw_cgbcheahia
RAID set "isw_cgbcheahia
RAID set "isw_cgbcheahia
indy2718 (indy2718) wrote : | #8 |
- picture of Linux during boot and bailing out (902.6 KiB, image/jpeg)
Hello, I was the user the raid 10 patch was re-added for. I tried the new dmraid via apt-get from your repository and it doesn't work for me. I am running a custom 2.6.27 kernel with the latest Ubuntu Intrepid, on a Core 2.
The screenshot is during bootup, it bails to the initram prompt.
My raid setup is Intel raid 10.
4 disks of 500 GB each. Total 1000 GB data space.
I would buy a raid controller, but I don't have a free PCI-E slot. I'm not going to upgrade for a while.
Phillip Susi (psusi) wrote : | #9 |
Could you attach the generated output files of dmraid -rD?
indy2718 (indy2718) wrote : | #10 |
I installed the package but didn't reboot.
root@thermal:
/dev/sdd: isw, "isw_jceibccac", GROUP, ok, 976773165 sectors, data@ 0
/dev/sdc: isw, "isw_jceibccac", GROUP, ok, 976773165 sectors, data@ 0
/dev/sdb: isw, "isw_jceibccac", GROUP, ok, 976773165 sectors, data@ 0
/dev/sda: isw, "isw_jceibccac", GROUP, ok, 976773165 sectors, data@ 0
root@thermal:
dmraid version: 1.0.0.rc14 (2006.11.08) shared
dmraid library version: 1.0.0.rc14 (2006.11.08)
device-mapper version: 4.14.0
root@thermal:
ii dmraid 1.0.0.rc14-
ii libdmraid1.0.0.rc14 1.0.0.rc14-
root@thermal:
WARN: locking /var/lock/
NOTICE: skipping removable device /dev/sde
NOTICE: /dev/sdd: asr discovering
NOTICE: /dev/sdd: ddf1 discovering
NOTICE: /dev/sdd: hpt37x discovering
NOTICE: /dev/sdd: hpt45x discovering
NOTICE: /dev/sdd: isw discovering
NOTICE: writing metadata file "sdd_isw.dat"
NOTICE: writing offset to file "sdd_isw.offset"
NOTICE: writing size to file "sdd_isw.size"
NOTICE: /dev/sdd: isw metadata discovered
NOTICE: /dev/sdd: jmicron discovering
NOTICE: /dev/sdd: lsi discovering
NOTICE: /dev/sdd: nvidia discovering
NOTICE: /dev/sdd: pdc discovering
NOTICE: /dev/sdd: sil discovering
NOTICE: /dev/sdd: via discovering
NOTICE: /dev/sdc: asr discovering
NOTICE: /dev/sdc: ddf1 discovering
NOTICE: /dev/sdc: hpt37x discovering
NOTICE: /dev/sdc: hpt45x discovering
NOTICE: /dev/sdc: isw discovering
NOTICE: writing metadata file "sdc_isw.dat"
NOTICE: writing offset to file "sdc_isw.offset"
NOTICE: writing size to file "sdc_isw.size"
NOTICE: /dev/sdc: isw metadata discovered
NOTICE: /dev/sdc: jmicron discovering
NOTICE: /dev/sdc: lsi discovering
NOTICE: /dev/sdc: nvidia discovering
NOTICE: /dev/sdc: pdc discovering
NOTICE: /dev/sdc: sil discovering
NOTICE: /dev/sdc: via discovering
NOTICE: /dev/sdb: asr discovering
NOTICE: /dev/sdb: ddf1 discovering
NOTICE: /dev/sdb: hpt37x discovering
NOTICE: /dev/sdb: hpt45x discovering
NOTICE: /dev/sdb: isw discovering
NOTICE: writing metadata file "sdb_isw.dat"
NOTICE: writing offset to file "sdb_isw.offset"
NOTICE: writing size to file "sdb_isw.size"
NOTICE: /dev/sdb: isw metadata discovered
NOTICE: /dev/sdb: jmicron discovering
NOTICE: /dev/sdb: lsi discovering
NOTICE: /dev/sdb: nvidia discovering
NOTICE: /dev/sdb: pdc discovering
NOTICE: /dev/sdb: sil discovering
NOTICE: /dev/sdb: via discovering
NOTICE: /dev/sda: asr discovering
NOTICE: /dev/sda: ddf1 discovering
NOTICE: /dev/sda: hpt37x discovering
NOTICE: /dev/sda: hpt45x discovering
NOTICE: /dev/sda: isw discovering
NOTICE: writing metadata file "sda_isw.dat"
NOTICE: writing offset to file "sda_isw.offset"
NOTICE: writing size to file "sda_isw.size"
NOTICE: /dev/sda: isw metadata discovered
NOTICE: /dev/sda: jmicron discovering
NOTICE: /dev/sda: lsi discovering
N...
indy2718 (indy2718) wrote : | #11 |
Rich.T. (rich.t.) wrote : | #12 |
Hello!
The new version (dmraid 1.0.0.rc14-
ubuntu@ubuntu:~$ sudo dmraid -ay
RAID set "isw_dejcdcjhf_
RAID set "isw_dejcdcjhf_
RAID set "isw_ecbdhhhfe_
RAID set "isw_ecbdhhhfe_
RAID set "isw_dejcdcjhf_
RAID set "isw_dejcdcjhf_
ubuntu@ubuntu:~$
However, I was counting on being able to install as LVM on RAID using the Alternate CD.
I tried substituting the new files into the /pool/main/d/dmraid folder in the .iso, re-summing the MD5s, substituting hashes and paths in md5sum.txt, and burning, but on booting from the CD I got an integrity error.
I know that this fix doesn't help indy2718, but this must be affecting quite a few people who would benefit from having the updated files on the disk image. Maybe an update soon?
Thanks.
Phillip Susi (psusi) wrote : | #13 |
indy, those files do not appear to contain metadata for some reason. Try this instead:
sudo dd if=/dev/sda of=sda_isw.dat skip=976773165 bs=512
Repeat for each disk.
indy2718 (indy2718) wrote : | #14 |
Phillip Susi (psusi) wrote : | #15 |
Hrm... strange, can you post the output of sudo fdisk -lu /dev/sda?
indy2718 (indy2718) wrote : | #16 |
# sudo fdisk -lu /dev/sda
Warning: invalid flag 0x0000 of partition table 5 will be corrected by w(rite)
Disk /dev/sda: 500.1 GB, 500107862016 bytes
255 heads, 63 sectors/track, 60801 cylinders, total 976773168 sectors
Units = sectors of 1 * 512 = 512 bytes
Disk identifier: 0x7e7498c6
Device Boot Start End Blocks Id System
/dev/sda1 * 2048 512002047 256000000 7 HPFS/NTFS
/dev/sda2 512002048 1560586229 524292091 5 Extended
/dev/sda3 1560586230 1953536129 196474950 83 Linux
yonish (silver83) wrote : | #17 |
I tried downgrading, and I can't tell whether the downgrade didn't work or it's just not working:
yoni@yoniBuntu:~$ sudo dmraid -ay
/dev/sdb: "sil" and "isw" formats discovered (using isw)!
ERROR: isw device for volume "Volume0" broken on /dev/sdb in RAID set "isw_baiacbfgeh
ERROR: isw: wrong # of devices in RAID set "isw_baiacbfgeh
RAID set "nvidia_fghcaafc" already active
When I restart my computer after Intrepid was running (even without dmraid installed), I see a BIOS diagnostic message telling me one of my two RAIDed hard disks has "failed". This is solved by a complete shutdown/startup sequence (instead of a reboot).
After downgrading, in Synaptic I see the installed version is 1.0.0.rc14-
The sequence of operations I performed in order to "downgrade":
1. added the two lines from one of the replies above to my sources.list for apt.
2. sudo apt-get update
3. sudo apt-get upgrade
I saw the installation log and everything looks fine.
Help ?
Phillip Susi (psusi) wrote : | #18 |
yonish, your issue does not appear to be related to this one. It looks like your sdb has both sil and isw metadata on it and dmraid is using the isw, but the other disk is presumably sil. If you aren't using an Intel Matrix Storage controller then you need to erase the isw metadata with sudo dmraid -E /dev/sdb -f isw.
snowgarden (anmeldung-snowgarden-deactivatedaccount) wrote : | #19 |
- My output of "dmraid -rD -d -vvv" (1.4 KiB, text/plain)
This solution worked for me -> https:/
But now I have a problem mounting my NTFS filesystem. Does anybody have the same problem?
I get these errors:
$MFT has invalid magic.
Failed to load $MFT: Input/output error
Failed to mount '/dev/mapper/
NTFS is either inconsistent, or you have hardware faults, or you have a
SoftRAID/FakeRAID hardware. In the first case run chkdsk /f on Windows
then reboot into Windows TWICE. The usage of the /f parameter is very
important! If you have SoftRAID/FakeRAID then first you must activate
it and mount a different device under the /dev/mapper/ directory, (e.g.
/dev/mapper/
for the details.
And:
NTFS signature is missing.
Failed to mount '/dev/mapper/
The device '/dev/mapper/
Maybe you selected the wrong device? Or the whole disk instead of a
partition (e.g. /dev/hda, not /dev/hda1)? Or the other way around?
Phillip Susi (psusi) wrote : | #20 |
Unfortunately fixing the raid10 patch is a lot more complicated than I thought so I have given up. The isw raid10 support apparently was implemented differently in rc15 and works properly so I suggest just backporting that.
indy2718 (indy2718) wrote : Re: [Bug 292302] Re: Raidset stays inactive due to wrong # of devices | #21 |
Phillip Susi wrote:
> Unfortunately fixing the raid10 patch is a lot more complicated than I
> thought so I have given up. The isw raid10 support apparently was
> implemented differently in rc15 and works properly so I suggest just
> backporting that.
>
>
Hello, thank you for the attempt.
I tried rc15 before and it didn't work, and I just tried jaunty dmraid
rc15. 'Could not find metadata' when I boot and try dmraid -ay at initram.
https:/
At worst case, I can keep a local copy of a dmraid package that works,
and just install it whenever I upgrade.
Phillip Susi (psusi) wrote : | #22 |
indy2718 wrote:
> I tried rc15 before and it didn't work, and I just tried jaunty dmraid
> rc15. 'Could not find metadata' when I boot and try dmraid -ay at initram.
>
> https:/
>
> At worst case, I can keep a local copy of a dmraid package that works,
> and just install it whenever I upgrade.
It would be better if we could figure out what is wrong and get dmraid
fixed in time for Jaunty. If you can do some testing with the Jaunty
daily builds, that would be helpful. Since you say it reported no
metadata found, that sounds like a different issue so please file a new
bug report.
indy2718 (indy2718) wrote : | #23 |
These are the bugs that I have open or messaged to. 276095 explains
my experience with rc15. I can open another bug for rc15 if you want.
http://
https:/
https:/
On Tue, Dec 2, 2008 at 3:50 PM, Phillip Susi <email address hidden> wrote:
> indy2718 wrote:
>> I tried rc15 before and it didn't work, and I just tried jaunty dmraid
>> rc15. 'Could not find metadata' when I boot and try dmraid -ay at initram.
>>
>> https:/
>>
>> At worst case, I can keep a local copy of a dmraid package that works,
>> and just install it whenever I upgrade.
>
> It would be better if we could figure out what is wrong and get dmraid
> fixed in time for Jaunty. If you can do some testing with the Jaunty
> daily builds, that would be helpful. Since you say it reported no
> metadata found, that sounds like a different issue so please file a new
> bug report.
>
> --
> Raidset stays inactive due to wrong # of devices
> https:/
> You received this bug notification because you are a direct subscriber
> of the bug.
>
Phillip Susi (psusi) wrote : | #24 |
Since the other bug was marked as fixed, and its subject was the removal of the raid10 patch from rc14, I'd say file a new bug with details on what goes wrong with rc15 in Jaunty.
Giuseppe Iuculano (giuseppe-iuculano) wrote : | #25 |
Hi,
I've prepared a package, can you try it please?
echo "deb http://
apt-get update
apt-get install dmraid=
Giuseppe.
indy2718 (indy2718) wrote : | #26 |
It works fine, I did an apt-get install. I also updated libdmraid
There were no initramfs triggers, so I generated it myself. It boots
and I can use the disk.
x@thermal:/home/x$ dpkg -l | grep dmraid
ii dmraid 1.0.0.rc14-
ii libdmraid1.0.0.rc14 1.0.0.rc14-
On Thu, Dec 4, 2008 at 10:13 AM, Giuseppe Iuculano <email address hidden> wrote:
> Hi,
>
> I've prepared a package, can you try it please?
>
> echo "deb http://
> apt-get update
> apt-get install dmraid=
>
>
> Giuseppe.
>
> --
> Raidset stays inactive due to wrong # of devices
> https:/
> You received this bug notification because you are a direct subscriber
> of the bug.
>
Giuseppe Iuculano (giuseppe-iuculano) wrote : | #27 |
Ok, so we need an ack from Juel.
Juel (juel-juels-world) wrote : | #28 |
Nice!
I can confirm that everything is still OK with your new package here.
All raidsets become active and are working.
Well done, Juel
Giuseppe Iuculano (giuseppe-iuculano) wrote : | #29 |
Changed in dmraid: | |
status: | In Progress → Fix Committed |
Peter Hong (peter-hong) wrote : | #30 |
Hi,
I have a big problem.
I created a RAID 1 (mirror) and installed Fedora on it.
Now I want to change the OS to Ubuntu 8.10 and change the RAID setting to RAID 0.
When I download the dmraid version(
The RAID doesn't work.
ubuntu@ubuntu:~$ sudo dmraid -ay
ERROR: isw device for volume "Volume0" broken on /dev/sdb in RAID set "isw_bcdagehgbe
ERROR: isw: wrong # of devices in RAID set "isw_bcdagehgbe
ERROR: isw device for volume "SDD3" broken on /dev/sda in RAID set "isw_cjgfhdfgic
ERROR: isw: wrong # of devices in RAID set "isw_cjgfhdfgic
ERROR: no mapping possible for RAID set isw_cjgfhdfgic_SDD3
ubuntu@ubuntu:~$ sudo dmraid -s
ERROR: isw device for volume "Volume0" broken on /dev/sdb in RAID set "isw_bcdagehgbe
ERROR: isw: wrong # of devices in RAID set "isw_bcdagehgbe
ERROR: isw device for volume "SDD3" broken on /dev/sda in RAID set "isw_cjgfhdfgic
ERROR: isw: wrong # of devices in RAID set "isw_cjgfhdfgic
*** Group superset isw_bcdagehgbe
--> Subset
name : isw_bcdagehgbe_
size : 625134848
stride : 256
type : stripe
status : broken
subsets: 0
devs : 1
spares : 0
*** Group superset isw_cjgfhdfgic
--> Subset
name : isw_cjgfhdfgic_SDD3
size : 625137664
stride : 128
type : mirror
status : broken
subsets: 0
devs : 1
spares : 0
ubuntu@ubuntu:~$
How can I remove the isw_cjgfhdfgic_SDD3 setting?
Thanks!
Giuseppe Iuculano (giuseppe-iuculano) wrote : | #31 |
Peter Hong ha scritto:
> Hi,
>
> I have a big problem.
>
> I create a RAID 1(mirror) and install Fedora on it.
> Now, I want change the OS to ubuntu 8.10. and change the RAID setting to RAID 0.
>
> When I download the dmraid version(
ubuntu13? Where did you find that version? Can you try my version please?
echo "deb http://
/etc/apt/
apt-get update
apt-get install dmraid=
Giuseppe.
Juel (juel-juels-world) wrote : | #32 |
Probably from Phillip Susi here: https:/
But you should try the latest one from Giuseppe...
Juel
Peter Hong (peter-hong) wrote : | #33 |
Sorry, ubuntu13 was my mistake.
I tried the version(
It still doesn't work.
The output is as follows:
=======
root@ubuntu:
dmraid version: 1.0.0.rc14 (2006.11.08) shared
dmraid library version: 1.0.0.rc14 (2006.11.08)
device-mapper version: 4.14.0
root@ubuntu:
ii dmraid 1.0.0.rc14-
ii libdmraid1.0.0.rc14 1.0.0.rc14-
root@ubuntu:
WARN: locking /var/lock/
NOTICE: /dev/sdb: asr discovering
NOTICE: /dev/sdb: ddf1 discovering
NOTICE: /dev/sdb: hpt37x discovering
NOTICE: /dev/sdb: hpt45x discovering
NOTICE: /dev/sdb: isw discovering
NOTICE: writing metadata file "sdb_isw.dat"
NOTICE: writing offset to file "sdb_isw.offset"
NOTICE: writing size to file "sdb_isw.size"
NOTICE: /dev/sdb: isw metadata discovered
NOTICE: /dev/sdb: jmicron discovering
NOTICE: /dev/sdb: lsi discovering
NOTICE: /dev/sdb: nvidia discovering
NOTICE: /dev/sdb: pdc discovering
NOTICE: /dev/sdb: sil discovering
NOTICE: /dev/sdb: via discovering
NOTICE: /dev/sda: asr discovering
NOTICE: /dev/sda: ddf1 discovering
NOTICE: /dev/sda: hpt37x discovering
NOTICE: /dev/sda: hpt45x discovering
NOTICE: /dev/sda: isw discovering
NOTICE: writing metadata file "sda_isw.dat"
NOTICE: writing offset to file "sda_isw.offset"
NOTICE: writing size to file "sda_isw.size"
NOTICE: /dev/sda: isw metadata discovered
NOTICE: /dev/sda: jmicron discovering
NOTICE: /dev/sda: lsi discovering
NOTICE: /dev/sda: nvidia discovering
NOTICE: /dev/sda: pdc discovering
NOTICE: /dev/sda: sil discovering
NOTICE: /dev/sda: via discovering
INFO: RAID devices discovered:
/dev/sdb: isw, "isw_bcdagehgbe", GROUP, ok, 625142446 sectors, data@ 0
/dev/sda: isw, "isw_cjgfhdfgic", GROUP, ok, 625142445 sectors, data@ 0
WARN: unlocking /var/lock/
root@ubuntu:
root@ubuntu:
ERROR: isw device for volume "Volume0" broken on /dev/sdb in RAID set "isw_bcdagehgbe
ERROR: isw: wrong # of devices in RAID set "isw_bcdagehgbe
ERROR: isw device for volume "SDD3" broken on /dev/sda in RAID set "isw_cjgfhdfgic
ERROR: isw: wrong # of devices in RAID set "isw_cjgfhdfgic
ERROR: no mapping possible for RAID set isw_cjgfhdfgic_SDD3
root@ubuntu:
=======
When I try to install Fedora 9 again,
Fedora can detect the correct raid setting( only isw_bcdagehgbe_
Note:
I only created a raid 0 array(isw_
isw_cjgfhdfgic_SDD3 << This setting was deleted in the BIOS, but dmraid can still find it.
Giuseppe Iuculano (giuseppe-iuculano) wrote : | #34 |
Peter Hong ha scritto:
> Note:
> I only create a raid 0 array(isw_
> isw_cjgfhdfgic_SDD3 << This setting was deleted in the BIOS, but dmraid can still find it.
Try dmraid -rE /dev/sda and dmraid -rE /dev/sdb
Note that this command will erase *all* raid metadata.
Giuseppe.
Peter Hong (peter-hong) wrote : | #35 |
Thanks!
After running dmraid -rE /dev/sda and dmraid -rE /dev/sdb, I created a new RAID array in the BIOS, booted from the Live CD, and installed dmraid. The RAID array works!
bottkars (karsten-bott) wrote : | #36 |
I tested various versions for installing Ubuntu on my fakeraid system with Intel isw and RAID1.
I went through nearly every single error described here...
Neither Intrepid nor Jaunty seemed to work.
My solution was to use dmraid 1.0.0.rc14-
Will there be a fix that fits for all, or is this an open issue?
Colin Watson (cjwatson) wrote : | #37 |
(Fix Committed -> Triaged since this has not yet been committed somewhere that would result in it being in the next Ubuntu upload; though I've drawn this bug to Luke's attention.)
Changed in dmraid: | |
status: | Fix Committed → Triaged |
- dmraid outputs (5.1 KiB, text/plain)
My system: intel ICH9R, 4 hard disks, two raid arrays (raid0 and raid5).
Ubuntu 8.04 installed on raid0 array works well with dmraid 1.0.0.rc14-
Ubuntu 8.10 intrepid could not boot because dmraid prints this error 8 times:
ERROR: isw device for volume "zerovol" broken on /dev/sda in RAID set "isw_baeaijeeda
ERROR: isw: wrong # of devices in RAID set "isw_baeaijeeda
I have followed instructions in another bug to change the libata HPA option and updated the initramfs, but dmraid gives the same error message.
Now I have updated dmraid version to 1.0.0.rc14-
Thanks in advance.
Now I have tested 3 dmraid versions: ubuntu repository, Phillip Susi, and Giuseppe Iuculano.
When I chroot from Ubuntu 8.04 to Ubuntu 8.10 root partition everything seems ok, dmraid -ay has normal output (the same behavior with the 3 dmraid versions).
But when I try to boot Ubuntu 8.10 these errors appear:
==================
Unable to enumerate USB device on port 1.
Gave up waiting for root device. Common problems:
-Boot args.
-Missing modules.
ALERT! /dev/mapper/
==================
[With Giuseppe Iuculano version an "No block devices found" error appears 4 times too.]
In the initramfs console I try to execute dmraid -s, dmraid -r and dmraid -ay and the output is like when I chroot BUT with dmraid -ay the output adds an ERROR line:
============
# dmraid -ay
RAID set "isw_baeaijeeda
RAID set "isw_baeaijeeda
ERROR: adding /dev/mapper/
RAID set "isw_baeaijeeda
RAID set "isw_baeaijeeda
RAID set "isw_baeaijeeda
RAID set "isw_baeaijeeda
RAID set "isw_baeaijeeda
RAID set "isw_baeaijeeda
RAID set "isw_baeaijeeda
RAID set "isw_baeaijeeda
RAID set "isw_baeaijeeda
RAID set "isw_baeaijeeda
============
NOTE: I am using amd64 Ubuntu version.
Regards.
Sorry, the output of dmraid -ay in the initramfs console was incorrect. The correct output is:
============
# dmraid -ay
RAID set "isw_baeaijeeda
RAID set "isw_baeaijeeda
ERROR: adding /dev/mapper/
RAID set "isw_baeaijeeda
RAID set "isw_baeaijeeda
RAID set "isw_baeaijeeda
RAID set "isw_baeaijeeda
RAID set "isw_baeaijeeda
RAID set "isw_baeaijeeda
RAID set "isw_baeaijeeda
============
Modest clues from a tedious user:
* Note that only the primary partitions of the RAID5 appear (cinco1 and cinco2); the logical partitions (cinco5, cinco6, cinco7) are not mentioned. The Ubuntu 8.10 root partition is cinco5.
* Remember that when I chroot from Ubuntu 8.04 everything seems ok in dmraid outputs, so could the problem be initramfs module related?
* Ubuntu 8.04 boots well, so I installed the dmraid version from the hardy repositories, but the device-mapper target type "raid45" is not in the intrepid kernel, so the initramfs could not boot.
Regards.
David Futcher (bobbo) wrote : | #41 |
This was fixed in a Debian release which appeared in Ubuntu Jaunty. Seeing as this bug has not been touched in well over a year, I will assume that it was fixed in the Debian release and mark this as Fix Released. Of course, if this is still causing anyone problems in recent releases, please re-open this bug. Thank you!
Changed in dmraid (Ubuntu): | |
status: | Triaged → Fix Released |
Kluth (kluth-weas) wrote : | #42 |
I am using Ubuntu 10.04, kernel 2.6.32-22-generic
Asus K8V SE Deluxe, BIOS: AMIBIOS version 08.00.09, ID: A0058002
Promise controller deactivated
4 IDE hard disks (Samsung SP1634N) configured as RAID-0, connected via the VIA VT8237 controller
All hard disks are shown identically in the BIOS
I created the RAID with the partitioning tool included on the Ubuntu 10.04 64-bit minimal installation CD
The system worked fine for two weeks or so
after changing the /etc/fstab by adding
tmpfs /tempFileSystem tmpfs noexec,
and removing the line that mounts the floppy
/dev/fd0 /media/floppy auto rw,noauto,user,sync 0 0 # This line is an example because I can't read my hard disk's files...
the system hangs after the message
JDB: barrier-based sync failed on md1-8 - disabling barriers
(The message before the GRUB screen that the system can't find a floppy still appears (the floppy controller in the BIOS is deactivated))
I can switch to tty7 and back to tty1, but not to the ones in between (I have not disabled them; tty7 shows just a blinking cursor)
If I add a USB CD-ROM it is found and a message is printed out, so the system does not hang totally
I cannot connect via SSH (I don't know if I have configured the SSH daemon yet)
If I hit Ctrl-Alt-Del the system shuts down
I can look at some earlier messages using Shift-PageUp. There is a message "md0: unknown partition table", but I seem to remember that this message has been there all the time since installation
And two lines later comes:
EXT4-fs (md1): mounted filesystem with ordered data mode
Begin: Running /scripts/
Done.
Done.
Begin: Running /scripts/
Done.
The grub entry is:
normal:
recordfail
insmod raid
insmod mdraid
insmod ext2
set rot='(md1)'
search --no-floppy --fs-uuid --set 88d5917f-
linux /boot/vmlinuz-
initrd /boot/initrd.
recovery:
recordfail
insmod raid
insmod mdraid
insmod ext2
set rot='(md1)'
search --no-floppy --fs-uuid --set 88d5917f-
echo 'Linux 2.6.32-22-generic wird geladen ...'
linux /boot/vmlinuz-
echo ' Initiale Ramdisk wird geladen ...'
initrd /boot/initrd.
I tried following:
-Booting via "Recovery mode" -> nothing changed
-Booting with added bootoption nodmraid gives me the message "dmraid-activate: WARNING: dmraid disable by boot option" -> hangs after the same message
-Booting Ubuntu 10.04 Kernel 2.6.32-21-generic Live-CD
fdisk -l
Disk /dev/sda: 160.0 GB, 160041885696 bytes
255 heads, 63 sectors/track, 19457 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x3870bf41
Device Boot Start End Blocks Id System
/dev/sda1 * 1 244 1951744 fd Linux raid autodetect
Partition 1 does not end on cylinder boundary.
/dev/sda2 244 19458 154336257 5 Extended
/dev/sda5 244 19458 15433...
Changed in debian: | |
status: | Unknown → Fix Released |
I think this is related to Debian bug #494278; it seems that 07_isw-raid10-nested.dpatch causes this issue.