vol_id run for partitions on RAID disks
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
casper (Ubuntu) | Fix Released | Undecided | Unassigned |
udev (Ubuntu) | Fix Released | Medium | Unassigned |
Bug Description
Binary package hint: dmraid
Hello,
My hardware setup is as follows: an EVGA 680i SLI motherboard and two 36 GB Western Digital Raptors (SATA). I went into the BIOS and created a RAID 0 array using the two disks. Then I popped in the Ubuntu 7.04 Live CD and downloaded dmraid. It installs, and then I run:
root@ubuntu:
/dev/sda: nvidia, "nvidia_dijifjhj", stripe, ok, 72303838 sectors, data@ 0
I notice that it does not show both physical drives, but it does show the 72 GB they would have together. Next I look at the partitions (fdisk) and notice something:
Disk /dev/mapper/
255 heads, 63 sectors/track, 4500 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
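As a side check (an editor's note, not part of the original report), the fdisk geometry above multiplies out to a single Raptor's capacity, confirming that only one drive's worth of space was mapped:

```shell
# fdisk reported 4500 cylinders of 16065 * 512 = 8225280 bytes each.
bytes=$(( 4500 * 16065 * 512 ))
echo "$bytes bytes"                    # 37013760000 bytes
echo "$(( bytes / 1000000000 )) GB"    # ~37 GB: one drive, not the ~72 GB stripe
```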
That shows one drive's worth of space. Next:
root@ubuntu:
*** Active Set
name : nvidia_dijifjhj
size : 72303744
stride : 128
type : stripe
status : ok
subsets: 0
devs : 1
spares : 0
Everything here looks right except for devs: 1; I think it should be 2. One other thing: if I use GParted, /dev/mapper does not show up, just /dev/sda and /dev/sdb.
Thanks in advance,
Keven
WORKAROUND: Run "sudo swapoff -a" before installing dmraid
Related branches
- No reviews requested
Keven (hicotton02) wrote : | #1 |
Steve (igloocentral) wrote : | #2 |
Please run:
sudo dmraid -tay -vvvv -dddd -f nvidia
sudo dmraid -n
Keven (hicotton02) wrote : | #3 |
I waited a couple of days, then downloaded the alternate CD and made a software RAID. But as you requested, I ran the commands above.
Does this seem to be a bug or user error?
root@######:~# dmraid -tay -vvvv -dddd -f nvidia
NOTICE: checking format identifier nvidia
NOTICE: creating directory /var/lock/dmraid
WARN: locking /var/lock/
NOTICE: skipping removable device /dev/hdb
NOTICE: /dev/sda: nvidia discovering
NOTICE: /dev/sda: nvidia metadata discovered
NOTICE: /dev/sdb: nvidia discovering
DEBUG: _find_set: searching nvidia_dhhfcfca
DEBUG: _find_set: not found nvidia_dhhfcfca
DEBUG: _find_set: searching nvidia_dhhfcfca
DEBUG: _find_set: not found nvidia_dhhfcfca
NOTICE: added /dev/sda to RAID set "nvidia_dhhfcfca"
DEBUG: checking nvidia device "/dev/sda"
DEBUG: set status of set "nvidia_dhhfcfca" to 16
nvidia_dhhfcfca: 0 72303744 linear /dev/sda 0
INFO: Activating stripe RAID set "nvidia_dhhfcfca"
NOTICE: discovering partitions on "nvidia_dhhfcfca"
NOTICE: /dev/.static/
NOTICE: /dev/.static/
DEBUG: _find_set: searching nvidia_dhhfcfca1
DEBUG: _find_set: not found nvidia_dhhfcfca1
DEBUG: _find_set: searching nvidia_dhhfcfca2
DEBUG: _find_set: not found nvidia_dhhfcfca2
DEBUG: _find_set: searching nvidia_dhhfcfca3
DEBUG: _find_set: not found nvidia_dhhfcfca3
NOTICE: created partitioned RAID set(s) for /dev/.static/
nvidia_dhhfcfca1: 0 337302 linear /dev/.static/
INFO: Activating partition RAID set "nvidia_dhhfcfca1"
nvidia_dhhfcfca2: 0 70493220 linear /dev/.static/
INFO: Activating partition RAID set "nvidia_dhhfcfca2"
nvidia_dhhfcfca3: 0 1461915 linear /dev/.static/
INFO: Activating partition RAID set "nvidia_dhhfcfca3"
WARN: unlocking /var/lock/
DEBUG: freeing devices of RAID set "nvidia_dhhfcfca"
DEBUG: freeing device "nvidia_dhhfcfca", path "/dev/sda"
DEBUG: freeing devices of RAID set "nvidia_dhhfcfca1"
DEBUG: freeing device "nvidia_dhhfcfca1", path "/dev/.
DEBUG: freeing devices of RAID set "nvidia_dhhfcfca2"
DEBUG: freeing device "nvidia_dhhfcfca2", path "/dev/.
DEBUG: freeing devices of RAID set "nvidia_dhhfcfca3"
DEBUG: freeing device "nvidia_dhhfcfca3", path "/dev/.
root@######:~# dmraid -n
/dev/sda (nvidia):
0x000 NVIDIA
0x008 size: 30
0x00c chksum: 946633451
0x010 version: 100
0x012 unitNumber: 0
0x013 reserved: 255
0x014 capacity: 144603392
0x018 sectorSize: 512
0x01c productID: STRIPE 68.95G
0x02c productRevision: 100
0x030 unitFlags: 0
0x034 array->version: 6553668
0x038 array->
0x03c array->
0x040 array->
0x044 array->
0x048 array->raidJobCode: 0
0x049 array->stripeWidth: 2
0x04a array->
0x04b array->
0x04c array->raidLevel: 128
0x050 array->
0x054 array->
0x058 array->
0x05c array->st...
Phillip Susi (psusi) wrote : | #4 |
Wow, something is fubar there. dmraid appears to think that the RAID is only 36 gigs and is made of only one drive. What versions of Ubuntu and dmraid are you running? Can you blank the disks ( sudo dd if=/dev/zero of=/dev/sda bs=1M, and again for /dev/sdb ) and start from scratch, recreating the array in the BIOS?
Keven (hicotton02) wrote : | #5 |
theskaz@kevenpc:~$ dmraid --version
dmraid version: 1.0.0.rc13 (2006.10.11)
dmraid library version: 1.0.0.rc13 (2006.10.11)
device-mapper version: unknown
Ubuntu 7.04(Feisty Fawn)
Umm... as for starting from scratch, that's going to have to wait a few days, about 2-3. But yeah, I'll wipe this system and start over as soon as my other system gets up and running :)
elkekas (jchusillos-deactivatedaccount) wrote : | #6 |
I have the same problem. My hardware configuration is two Raptors in RAID 0 on the same kind of board (an Asus A8N-SLI main board), but with Mandriva 2008 and Fedora 8 the RAID is recognized and detected correctly. I also tried dmraid release RC14, but with the same result: with Ubuntu it is not possible, only one Raptor device shows up in the RAID, while with Mandriva and Fedora everything is OK and install and boot work fine.
3vi1 (launchpad-net-eternaldusk) wrote : | #7 |
I noticed this same thing yesterday while trying to set up a system using an Asus Striker II Formula motherboard and two 750 GB drives as RAID 1. dmraid -ay produces no errors, and dmraid -r (and -s) has the right output, but the /dev/mapper device never gets created.
I just went ahead and used softraid for now, but that's not going to be acceptable for the purposes of some dual-booters.
3vi1 (launchpad-net-eternaldusk) wrote : | #8 |
btw - I was using 8.04 and the latest dmraid from the repos.
Alan Ferrier (alan-ferrier) wrote : | #9 |
This has been driving me slightly nuts since Feisty's release (it has persisted through Gutsy and doesn't seem to be fixed in the Hardy betas (up to 2.6.24-16) either). And yes, I know I should have raised it as a bug earlier, but I've just worked around it by booting into 2.6.20-12, which was the last kernel where it worked properly. My bad. I have an ASUS P5N-E SLI board (which I think uses the 680i chipset). Anyway, I've configured RAID 1 across two WD SATA drives for dual-booting into XP. Up to 2.6.20-12 I was able to detect this fakeraid device with dmraid. Since 2.6.22, however, dmraid -ay gives the following message in /var/log/messages:
device-mapper: ioctl: error adding target to table
dmraid -tay works, however, so something's being detected:
nvidia_acihdbbb: 0 976773166 mirror core 2 131072 nosync 2 /dev/sda 0 /dev/sdb 0
I've Googled around a bit and some people are talking about it being a problem with duplicate UUIDs, which indeed there are on my system. blkid gives:
/dev/sda3: UUID="4EECF9B0E
/dev/sdb3: UUID="4EECF9B0E
but as the type is NTFS, there's no easy way to change this UUID.
I tried the same dmraid -ay on both the new Fedora 9 (which failed with the same error) and Knoppix 5.3.1, which uses a 2.6.24 kernel - and it succeeded! So my very tentative assumption that it's a kernel bug introduced after 2.6.20 is wrong. udev? libdevmapper?
Would be very pleased to get any input or ideas. Any further info required, I'll be happy to post it here.
Phillip Susi (psusi) wrote : | #10 |
Both drives have the same UUID because you have them mirrored, so of course they are the same. That is not an issue.
Please try running with -dddd -vvvv for maximum debug and verbose output, and check the last few lines of dmesg afterwards for kernel errors.
Alan Ferrier (alan-ferrier) wrote : | #11 |
I thought the UUID stuff was a red herring. Here's the output from 2.6.20-12, where everything works ok:
$ sudo dmraid -ay -vvvv -dddd
WARN: locking /var/lock/
NOTICE: /dev/sda: asr discovering
NOTICE: /dev/sda: ddf1 discovering
NOTICE: /dev/sda: hpt37x discovering
NOTICE: /dev/sda: hpt45x discovering
NOTICE: /dev/sda: isw discovering
NOTICE: /dev/sda: jmicron discovering
NOTICE: /dev/sda: lsi discovering
NOTICE: /dev/sda: nvidia discovering
NOTICE: /dev/sda: nvidia metadata discovered
NOTICE: /dev/sda: pdc discovering
NOTICE: /dev/sda: sil discovering
NOTICE: /dev/sda: via discovering
NOTICE: /dev/sdb: asr discovering
NOTICE: /dev/sdb: ddf1 discovering
NOTICE: /dev/sdb: hpt37x discovering
NOTICE: /dev/sdb: hpt45x discovering
NOTICE: /dev/sdb: isw discovering
NOTICE: /dev/sdb: jmicron discovering
NOTICE: /dev/sdb: lsi discovering
NOTICE: /dev/sdb: nvidia discovering
NOTICE: /dev/sdb: nvidia metadata discovered
NOTICE: /dev/sdb: pdc discovering
NOTICE: /dev/sdb: sil discovering
NOTICE: /dev/sdb: via discovering
NOTICE: /dev/sdc: asr discovering
NOTICE: /dev/sdc: ddf1 discovering
NOTICE: /dev/sdc: hpt37x discovering
NOTICE: /dev/sdc: hpt45x discovering
NOTICE: /dev/sdc: isw discovering
NOTICE: /dev/sdc: jmicron discovering
NOTICE: /dev/sdc: lsi discovering
NOTICE: /dev/sdc: nvidia discovering
NOTICE: /dev/sdc: pdc discovering
NOTICE: /dev/sdc: sil discovering
NOTICE: /dev/sdc: via discovering
DEBUG: _find_set: searching nvidia_acihdbbb
DEBUG: _find_set: not found nvidia_acihdbbb
DEBUG: _find_set: searching nvidia_acihdbbb
DEBUG: _find_set: not found nvidia_acihdbbb
NOTICE: added /dev/sda to RAID set "nvidia_acihdbbb"
DEBUG: _find_set: searching nvidia_acihdbbb
DEBUG: _find_set: found nvidia_acihdbbb
DEBUG: _find_set: searching nvidia_acihdbbb
DEBUG: _find_set: found nvidia_acihdbbb
NOTICE: added /dev/sdb to RAID set "nvidia_acihdbbb"
DEBUG: checking nvidia device "/dev/sda"
DEBUG: checking nvidia device "/dev/sdb"
DEBUG: set status of set "nvidia_acihdbbb" to 16
INFO: Activating mirror RAID set "nvidia_acihdbbb"
NOTICE: discovering partitions on "nvidia_acihdbbb"
NOTICE: /dev/mapper/
NOTICE: /dev/mapper/
DEBUG: _find_set: searching nvidia_acihdbbb1
DEBUG: _find_set: not found nvidia_acihdbbb1
DEBUG: _find_set: searching nvidia_acihdbbb2
DEBUG: _find_set: not found nvidia_acihdbbb2
DEBUG: _find_set: searching nvidia_acihdbbb3
DEBUG: _find_set: not found nvidia_acihdbbb3
NOTICE: created partitioned RAID set(s) for /dev/mapper/
INFO: Activating partition RAID set "nvidia_acihdbbb1"
INFO: Activating partition RAID set "nvidia_acihdbbb2"
INFO: Activating partition RAID set "nvidia_acihdbbb3"
WARN: unlocking /var/lock/
DEBUG: freeing devices of RAID set "nvidia_acihdbbb"
DEBUG: freeing device "nvidia_acihdbbb", path "/dev/sda"
DEBUG: freeing device "nvidia_acihdbbb", path "/dev/sdb"
DEBUG: freeing devices of RAID set "nvidia_acihdbbb1"
DEBUG: freeing device "nvidia_acihdbbb1", path "/dev/mapper/
DEBUG: freeing devices of RAID set "n...
Phillip Susi (psusi) wrote : | #12 |
It looks like you don't have the dm-mirror kernel module installed/loaded. Try modprobe dm-mirror before dmraid -ay.
Alan Ferrier (alan-ferrier) wrote : | #13 |
Good thinking, Batman - but it ain't that:
$ lsmod | grep dm
dm_crypt 15364 0
dm_mirror 24832 0
dm_mod 62660 2 dm_crypt,dm_mirror
Also, dmraid -tay returns:
$ sudo dmraid -tay
nvidia_acihdbbb: 0 976773166 mirror core 2 131072 nosync 2 /dev/sda 0 /dev/sdb 0
so the mirror is being detected ok, and dmraid -n returns:
$ sudo dmraid -n
/dev/sdb (nvidia):
0x000 NVIDIA
0x008 size: 30
0x00c chksum: 4196806261
0x010 version: 100
0x012 unitNumber: 1
0x013 reserved: 0
0x014 capacity: 976773120
0x018 sectorSize: 512
0x01c productID: MIRROR 465.76G
0x02c productRevision: 100
0x030 unitFlags: 0
0x034 array->version: 6553668
0x038 array->
0x03c array->
0x040 array->
0x044 array->
0x048 array->raidJobCode: 0
0x049 array->stripeWidth: 1
0x04a array->
0x04b array->
0x04c array->raidLevel: 129
0x050 array->
0x054 array->
0x058 array->
0x05c array->stripeMask: 127
0x060 array->stripeSize: 128
0x064 array->
0x068 array->raidJobMark 0
0x06c array->
0x070 array->
0x074 array->flags 0x1
/dev/sda (nvidia):
0x000 NVIDIA
0x008 size: 30
0x00c chksum: 4196871797
0x010 version: 100
0x012 unitNumber: 0
0x013 reserved: 0
0x014 capacity: 976773120
0x018 sectorSize: 512
0x01c productID: MIRROR 465.76G
0x02c productRevision: 100
0x030 unitFlags: 0
0x034 array->version: 6553668
0x038 array->
0x03c array->
0x040 array->
0x044 array->
0x048 array->raidJobCode: 0
0x049 array->stripeWidth: 1
0x04a array->
0x04b array->
0x04c array->raidLevel: 129
0x050 array->
0x054 array->
0x058 array->
0x05c array->stripeMask: 127
0x060 array->stripeSize: 128
0x064 array->
0x068 array->raidJobMark 0
0x06c array->
0x070 array->
0x074 array->flags 0x1
However, dmraid -ay fails silently:
$ sudo dmraid -ay
$
but with the following in /var/log/messages:
May 5 19:25:45 ***-home kernel: [ 757.141136] device-mapper: ioctl: error adding target to table
Alan Ferrier (alan-ferrier) wrote : | #14 |
Just to confirm, the problem still exists in the recently released 2.6.24-17 kernel.
Also, dmraid -ay occasionally returns:
"device-mapper: table 253:0: mirror: Device lookup failure"
at the command line, on all failing kernels.
Patrick Lowry (lowry) wrote : | #15 |
Similar here. dmraid -ay silently fails and no /dev/mapper devices are created. I can confirm that it works in Gentoo with 2.6.23, but not with the Ubuntu Hardy live CD.
misc. output:
ubuntu@ubuntu:~$ sudo dmraid -ay -vvvv -dddd
WARN: locking /var/lock/
NOTICE: /dev/sdb: asr discovering
NOTICE: /dev/sdb: ddf1 discovering
NOTICE: /dev/sdb: hpt37x discovering
NOTICE: /dev/sdb: hpt45x discovering
NOTICE: /dev/sdb: isw discovering
NOTICE: /dev/sdb: jmicron discovering
NOTICE: /dev/sdb: lsi discovering
NOTICE: /dev/sdb: nvidia discovering
NOTICE: /dev/sdb: nvidia metadata discovered
NOTICE: /dev/sdb: pdc discovering
NOTICE: /dev/sdb: sil discovering
NOTICE: /dev/sdb: via discovering
NOTICE: /dev/sda: asr discovering
NOTICE: /dev/sda: ddf1 discovering
NOTICE: /dev/sda: hpt37x discovering
NOTICE: /dev/sda: hpt45x discovering
NOTICE: /dev/sda: isw discovering
NOTICE: /dev/sda: jmicron discovering
NOTICE: /dev/sda: lsi discovering
NOTICE: /dev/sda: nvidia discovering
NOTICE: /dev/sda: nvidia metadata discovered
NOTICE: /dev/sda: pdc discovering
NOTICE: /dev/sda: sil discovering
NOTICE: /dev/sda: via discovering
DEBUG: _find_set: searching nvidia_cecdfbfh
DEBUG: _find_set: not found nvidia_cecdfbfh
DEBUG: _find_set: searching nvidia_cecdfbfh
DEBUG: _find_set: not found nvidia_cecdfbfh
NOTICE: added /dev/sdb to RAID set "nvidia_cecdfbfh"
DEBUG: _find_set: searching nvidia_cecdfbfh
DEBUG: _find_set: found nvidia_cecdfbfh
DEBUG: _find_set: searching nvidia_cecdfbfh
DEBUG: _find_set: found nvidia_cecdfbfh
NOTICE: added /dev/sda to RAID set "nvidia_cecdfbfh"
DEBUG: checking nvidia device "/dev/sda"
DEBUG: checking nvidia device "/dev/sdb"
DEBUG: set status of set "nvidia_cecdfbfh" to 16
WARN: unlocking /var/lock/
DEBUG: freeing devices of RAID set "nvidia_cecdfbfh"
DEBUG: freeing device "nvidia_cecdfbfh", path "/dev/sda"
DEBUG: freeing device "nvidia_cecdfbfh", path "/dev/sdb"
ubuntu@ubuntu:~$ sudo dmraid -n
/dev/sdb (nvidia):
0x000 NVIDIA
0x008 size: 30
0x00c chksum: 3487021370
0x010 version: 100
0x012 unitNumber: 1
0x013 reserved: 131
0x014 capacity: 781422720
0x018 sectorSize: 512
0x01c productID: MIRROR 372.61G
0x02c productRevision: 100
0x030 unitFlags: 0
0x034 array->version: 6553668
0x038 array->
0x03c array->
0x040 array->
0x044 array->
0x048 array->raidJobCode: 0
0x049 array->stripeWidth: 1
0x04a array->
0x04b array->
0x04c array->raidLevel: 129
0x050 array->
0x054 array->
0x058 array->
0x05c array->stripeMask: 127
0x060 array->stripeSize: 128
0x064 array->
0x068 array->raidJobMark 0
0x06c array->
0x070 array->
0x074 array->flags 0x0
/dev/sda (nvidia):
0x000 NVIDIA
0x008 size: 30
0x00c chksum: 3487086906
0x010 version: 100
0x012 unitNumber: 0
0x013 reserved: 131
0x014 capacity: 781422720
0x018 sectorSize: 512
0x01c productID: MIRROR 372.61G
0x02c productRevision: 100
0x030 unitFlags: 0
0x034 array->version:...
Marko Novak (marko-novak) wrote : | #16 |
Hello! I'm experiencing almost exactly the same problem as Alan and Patrick with my RAID controller (Promise FastTrak TX2300). In Ubuntu 7.10 this worked fine; however, since the upgrade to Ubuntu 8.04 (2.6.24-16 kernel), dmraid is not able to find the "/dev/mapper/pdc_*" devices. As mentioned, the symptoms are practically the same as Alan's and Patrick's:
1) ubuntu@ubuntu:~$ sudo dmraid -ay -vvvv -dddd
WARN: locking /var/lock/
NOTICE: /dev/sdb: asr discovering
NOTICE: /dev/sdb: ddf1 discovering
NOTICE: /dev/sdb: hpt37x discovering
NOTICE: /dev/sdb: hpt45x discovering
NOTICE: /dev/sdb: isw discovering
NOTICE: /dev/sdb: jmicron discovering
NOTICE: /dev/sdb: lsi discovering
NOTICE: /dev/sdb: nvidia discovering
NOTICE: /dev/sdb: pdc discovering
NOTICE: /dev/sdb: pdc metadata discovered
NOTICE: /dev/sdb: sil discovering
NOTICE: /dev/sdb: via discovering
NOTICE: /dev/sda: asr discovering
NOTICE: /dev/sda: ddf1 discovering
NOTICE: /dev/sda: hpt37x discovering
NOTICE: /dev/sda: hpt45x discovering
NOTICE: /dev/sda: isw discovering
NOTICE: /dev/sda: jmicron discovering
NOTICE: /dev/sda: lsi discovering
NOTICE: /dev/sda: nvidia discovering
NOTICE: /dev/sda: pdc discovering
NOTICE: /dev/sda: pdc metadata discovered
NOTICE: /dev/sda: sil discovering
NOTICE: /dev/sda: via discovering
DEBUG: _find_set: searching pdc_bgidgcdjhb
DEBUG: _find_set: not found pdc_bgidgcdjhb
DEBUG: _find_set: searching pdc_bgidgcdjhb
DEBUG: _find_set: not found pdc_bgidgcdjhb
NOTICE: added /dev/sdb to RAID set "pdc_bgidgcdjhb"
DEBUG: _find_set: searching pdc_bgidgcdjhb
DEBUG: _find_set: found pdc_bgidgcdjhb
DEBUG: _find_set: searching pdc_bgidgcdjhb
DEBUG: _find_set: found pdc_bgidgcdjhb
NOTICE: added /dev/sda to RAID set "pdc_bgidgcdjhb"
DEBUG: checking pdc device "/dev/sda"
DEBUG: checking pdc device "/dev/sdb"
DEBUG: set status of set "pdc_bgidgcdjhb" to 16
DEBUG: checking pdc device "/dev/sda"
DEBUG: checking pdc device "/dev/sdb"
DEBUG: set status of set "pdc_bgidgcdjhb" to 16
WARN: unlocking /var/lock/
DEBUG: freeing devices of RAID set "pdc_bgidgcdjhb"
DEBUG: freeing device "pdc_bgidgcdjhb", path "/dev/sda"
DEBUG: freeing device "pdc_bgidgcdjhb", path "/dev/sdb"
2) ubuntu@ubuntu:~$ sudo dmraid -n
/dev/sdb (pdc):
0x000 promise_id: "Promise Technology, Inc."
0x018 unknown_0: 0x20000
0x01c magic_0: 0x645a1023
0x020 unknown_1: 0x1000e
0x024 magic_1: 0x645a1023
0x028 unknown_2: 0xe
0x200 raid.flags: 0xfdfeffc0
0x204 raid.unknown_0: 0x7
0x205 raid.disk_number: 1
0x206 raid.channel: 1
0x207 raid.device: 0
0x208 raid.magic_0: 0x4a7f1023
0x20c raid.unknown_1: 0x1000e
0x210 raid.unknown_2: 0x0
0x214 raid.disk_secs: 625011376
0x218 raid.unknown_3: 0xffffffff
0x21c raid.unknown_4: 0x1
0x21e raid.status: 0xf
0x21f raid.type: 0x1
0x220 raid.total_disks: 2
0x221 raid.raid0_shift: 7
0x222 raid.raid0_disks: 1
0x223 raid.array_number: 0
0x224 raid.total_secs: 625011328
0x228 raid.cylinders: 38904
0x22a raid.heads: 254
0x22b raid.sectors: 63
0x22c raid.magic_1: 0x645a1023
0x230 raid.unknown_5: 0x100000e
0x234 raid.disk[
0x236 raid.disk[
0x237 raid.disk[0].dev...
Alan Ferrier (alan-ferrier) wrote : | #17 |
Confirming this problem still exists after today's kernel update to 2.6.24-18-generic.
Marko Novak (marko-novak) wrote : | #18 |
Hello guys!
Today I used the Knoppix 5.1 live CD, in which "dmraid" works OK (Alan, thanks for the tip... ;) ). I managed to mount my RAID partitions and repair the "grub/menu.lst" file (I came across this dmraid issue while trying to fix the "menu.lst" file, which became broken during the upgrade from Ubuntu 7.10 to Ubuntu 8.04).
Now here comes an interesting part: as soon as I fixed the "menu.lst" file, I was able to boot my upgraded Ubuntu 8.04 system without a problem! The "dmraid" works OK, even though I'm using Ubuntu's "2.6.24-16-generic" kernel (i.e. the same kernel version as is present on the Ubuntu 8.04 live CD).
However, I would still like to figure out why dmraid doesn't work on the Ubuntu 8.04 live CD. Now that I have two configurations (one working and one not), can you perhaps advise me which printouts/log files to compare to spot the source of the problem? I can of course upload them to this page: you can probably find the anomalies much faster than I can. :)
Vojta Grec (vojtagrec) wrote : | #19 |
Hi there!
Recently I installed Hardy x86_64 with nvraid configured as RAID 1 (mirroring), with no problems with dmraid during the install. Then I installed Windows in a different partition of the array, and then booted the LiveCD to reinstall grub, and dmraid failed just like in the comments above.
I have solved this issue and it is dead simple: if one of your RAID partitions is formatted as swap, the LiveCD mounts it improperly while booting, and that prevents dmraid from functioning.
Described in detail: I'm mirroring two SATA discs, /dev/sda and /dev/sdb. Together they make an array (/dev/nvidia_
Vojta Grec (vojtagrec) wrote : | #20 |
Oh my, of course this should be /dev/mapper/
And I have to add that dmraid only worked without problems when the disk was not partitioned (or rather, didn't contain any partition marked as swap).
Tiger!P (ubuntu-tigerp) wrote : | #21 |
Vojta: Thank you very much for this comment. This also helps in my situation (using a VIA SATA mirror RAID).
Marko Novak (marko-novak) wrote : | #22 |
Hello Vojta!
Yes indeed! Your advice also solves my issue!
If I execute the "swapoff -a" command after Ubuntu 8.04 has started, the dmraid package works fine and I'm able to see all of my RAID partitions in the "/dev/mapper/" directory!
Thank you very much for solving this one!!!
Changed in dmraid: | |
assignee: | nobody → vojtagrec |
status: | New → Confirmed |
Phillip Susi (psusi) wrote : Re: livecd auto mounting swap partitions detected on dmraid volume members | #23 |
If swapoff fixes the issue for you, please post the output of blkid. My guess is that blkid is not correctly identifying the disks as raid members; correct identification would prevent them from being auto-mounted.
Changed in dmraid: | |
importance: | Undecided → Medium |
TonyH (5-launchpad-trog-bofh-org-za) wrote : | #24 |
Right, I've just tripped over this issue after re-installing my Windows partition. "swapoff -a" solved it. Here's the output of blkid:
ubuntu@ubuntu:~$ sudo blkid
/dev/sda1: UUID="040C25400
/dev/sda2: UUID="1a74b366-
/dev/sda5: TYPE="swap" UUID="02ebdbc8-
/dev/sdb1: UUID="040C25400
/dev/sdb2: UUID="1a74b366-
/dev/sdb5: TYPE="swap" UUID="02ebdbc8-
/dev/loop0: TYPE="squashfs"
/dev/mapper/
/dev/mapper/
/dev/mapper/
ubuntu@ubuntu:~$
This box was originally installed using Gutsy.
Tiger!P (ubuntu-tigerp) wrote : | #25 |
I have run blkid three times: first after booting the live CD, second after `swapoff -a`, and third after `apt-get install dmraid`.
The first output (after booting the live CD):
/dev/sda1: UUID="D4DC7D78D
/dev/sda5: UUID="3A30D91E3
/dev/sda6: TYPE="swap" UUID="d6d263ec-
/dev/sda7: UUID="59d50497-
/dev/sdb1: UUID="D4DC7D78D
/dev/sdb5: UUID="3A30D91E3
/dev/sdb6: TYPE="swap" UUID="d6d263ec-
/dev/sdb7: UUID="59d50497-
/dev/loop0: TYPE="squashfs"
The second output (after swapoff -a):
/dev/sda1: UUID="D4DC7D78D
/dev/sda5: UUID="3A30D91E3
/dev/sda6: TYPE="swap" UUID="d6d263ec-
/dev/sda7: UUID="59d50497-
/dev/sdb1: UUID="D4DC7D78D
/dev/sdb5: UUID="3A30D91E3
/dev/sdb6: TYPE="swap" UUID="d6d263ec-
/dev/sdb7: UUID="59d50497-
/dev/loop0: TYPE="squashfs"
The third output (apt-get install dmraid):
/dev/sda1: UUID="D4DC7D78D
/dev/sda5: UUID="3A30D91E3
/dev/sda6: TYPE="swap" UUID="d6d263ec-
/dev/sda7: UUID="59d50497-
/dev/sdb1: UUID="D4DC7D78D
/dev/sdb5: UUID="3A30D91E3
/dev/sdb6: TYPE="swap" UUID="d6d263ec-
/dev/sdb7: UUID="59d50497-
/dev/loop0: TYPE="squashfs"
/dev/mapper/
/dev/mapper/
/dev/mapper/
/dev/mapper/
I don't know if blkid should identify reiserfs partitions (via_bfdbdagceh8), but I don't see that one here.
ubuntu@ubuntu:~$ ls /dev/mapper/
control via_bfdbdagceh1 via_bfdbdagceh6 via_bfdbdagceh8
via_bfdbdagceh via_bfdbdagceh5 via_bfdbdagceh7
ubuntu@ubuntu:~$
Tiger!P (ubuntu-tigerp) wrote : | #26 |
It might be that my /dev/mapper/
Phillip Susi (psusi) wrote : | #27 |
Yeah, I think the bug is in blkid... if you run blkid /dev/sda it should identify it as a disk that is part of a raid set. The problem is that it still identifies the partitions detected on it as valid partitions, when they actually are not.
Marko Novak (marko-novak) wrote : | #28 |
Hello Phillip!
Here is also the printout of my "sudo blkid":
1) The first output (after booting the live CD):
/dev/sda1: UUID="9470295F7
/dev/sda2: UUID="36D4ED0DD
/dev/sda5: UUID="e5bae0bd-
/dev/sda6: UUID="7386cb36-
/dev/sda7: UUID="721ebea7-
/dev/sdb1: UUID="9470295F7
/dev/sdb2: UUID="36D4ED0DD
/dev/sdb5: UUID="e5bae0bd-
/dev/sdb6: UUID="7386cb36-
/dev/sdb7: UUID="721ebea7-
/dev/loop0: TYPE="squashfs"
2) The second output (after swapoff -a):
/dev/sda1: UUID="9470295F7
/dev/sda2: UUID="36D4ED0DD
/dev/sda5: UUID="e5bae0bd-
/dev/sda6: TYPE="swap" UUID="7386cb36-
/dev/sda7: UUID="721ebea7-
/dev/sdb1: UUID="9470295F7
/dev/sdb2: UUID="36D4ED0DD
/dev/sdb5: UUID="e5bae0bd-
/dev/sdb6: TYPE="swap" UUID="7386cb36-
/dev/sdb7: UUID="721ebea7-
/dev/loop0: TYPE="squashfs"
3) The third output (apt-get install dmraid):
/dev/sda1: UUID="9470295F7
/dev/sda2: UUID="36D4ED0DD
/dev/sda5: UUID="e5bae0bd-
/dev/sda6: TYPE="swap" UUID="7386cb36-
/dev/sda7: UUID="721ebea7-
/dev/sdb1: UUID="9470295F7
/dev/sdb2: UUID="36D4ED0DD
/dev/sdb5: UUID="e5bae0bd-
/dev/sdb6: TYPE="swap" UUID="7386cb36-
/dev/sdb7: UUID="721ebea7-
/dev/loop0: TYPE="squashfs"
/dev/mapper/
/dev/mapper/
/dev/mapper/
/dev/mapper/
/dev/mapper/
The printout is very similar to Tiger!P's. Even the reiserfs partition was detected.
Marko Novak (marko-novak) wrote : | #29 |
Hello Phillip!
One more thing: "sudo blkid /dev/sda" doesn't output anything, even if I execute it once the RAID is working.
Tormod Volden (tormodvolden) wrote : | #30 |
It seems from /scripts/
Phillip Susi (psusi) wrote : | #31 |
Hrm... it should definitely be respecting the volume identifiers udev gets from blkid rather than probing every block device it can find for that signature. At any rate, blkid should not be identifying partitions on a block device that is identified as a raid member. Or maybe dmraid should be overriding that identification?
Tormod Volden (tormodvolden) wrote : | #32 |
How can the initrd identify it as a raid member, when it doesn't have dmraid? Has blkid become fakeraid-aware? Also, last time I checked, udev used vol_id and not blkid, and they do not share the partition probing code for some reason.
Vojta Grec (vojtagrec) wrote : | #33 |
I think Tormod hit the nail on the head there (but I don't know the blkid internals either). The systematic solution would be to include dmraid on the live CD; it's a very small package, so it should fit. If that's not possible, I propose the following ugly hack: modify the dmraid init script (or maybe the .deb install script) to detect whether it is running on the live CD and, if so, turn off the swap partitions that are part of a raid (I think it should be possible to detect whether a swap partition is part of a raid before setting up the array). I'm not on my computer right now; I'll look into it when I'm home.
Phillip Susi (psusi) wrote : | #34 |
Oops, I mixed up blkid and vol_id. Please try running vol_id /dev/sda and see if it identifies it as a raid member. Last I checked, vol_id has bits of the signature detection code from dmraid built into it, so it recognizes the disk device as a whole as a raid member and exports that info to udev. But then udev ends up running vol_id on the individual partitions, which happily identifies them as normal, usable partitions, and this is not the case.
Once vol_id identifies the entire disk as a raid member, the partitions on it should be ignored...
Tiger!P (ubuntu-tigerp) wrote : | #35 |
The output I get is the same every time I run it (just after booting the live CD, after swapoff -a, and after installing dmraid).
ubuntu@ubuntu:~$ sudo vol_id /dev/sda
ID_FS_USAGE=raid
ID_FS_TYPE=
ID_FS_VERSION=1
ID_FS_UUID=
ID_FS_UUID_ENC=
ID_FS_LABEL=
ID_FS_LABEL_ENC=
ID_FS_LABEL_SAFE=
ubuntu@ubuntu:~$
So it seems that vol_id knows that I have raid.
Marko Novak (marko-novak) wrote : | #36 |
Hello!
Yes, I also get very similar output to Tiger!P's in all three stages when invoking "sudo vol_id /dev/sda":
ID_FS_USAGE=raid
ID_FS_TYPE=
ID_FS_VERSION=
ID_FS_UUID=
ID_FS_UUID_ENC=
ID_FS_LABEL=
ID_FS_LABEL_ENC=
ID_FS_LABEL_SAFE=
Tormod Volden (tormodvolden) wrote : | #37 |
We could add this to the mentioned 13swap script:
vol_id ${device%%[0-9]*} | grep -q "^ID_FS_TYPE=raid" && continue
(unless the script gets fixed up to be less brute force)
Tormod Volden (tormodvolden) wrote : | #38 |
D'oh, that should have been ID_FS_USAGE and not ID_FS_TYPE. Anyway, I have tested that it works here, and I will try to get it pushed to casper. To test this out you can 1) modify the initrd.gz file on the live CD, or 2) boot with break=casper-bottom and hand-edit the script file with "ed", or 3) boot with break=casper-bottom and copy the modified script file over from the stick/CD.
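To make the proposed guard concrete, here is a small sketch of its two moving parts, separate from the actual 13swap script (the helper names are illustrative, not casper code): stripping the partition number to get the parent disk, and checking vol_id-style output for ID_FS_USAGE=raid.

```shell
# Illustrative helpers only; the real guard runs vol_id on the parent disk.

# /dev/sda5 -> /dev/sda: the ${device%%[0-9]*} expansion from the guard
# strips the trailing partition number (simple sdXN names only).
parent_disk() {
  printf '%s\n' "${1%%[0-9]*}"
}

# Succeeds if vol_id-style output on stdin marks the device as a raid
# member; in the script this would be fed by `vol_id $disk`.
is_raid_member() {
  grep -q '^ID_FS_USAGE=raid'
}

parent_disk /dev/sda5                          # -> /dev/sda
printf 'ID_FS_USAGE=raid\n' | is_raid_member && echo "skip this swap device"
```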
Changed in udev: | |
assignee: | nobody → tormodvolden |
status: | New → In Progress |
Tormod Volden (tormodvolden) wrote : | #39 |
Changed in dmraid: | |
assignee: | vojtagrec → nobody |
status: | Confirmed → Invalid |
description: | updated |
Tormod Volden (tormodvolden) wrote : | #40 |
Did anyone experience data loss or desync of the RAID because of this bug? We should try to get this fixed in 8.04.1 or at least 8.04.2 since it must be fixed on the CD itself.
Changed in casper: | |
assignee: | tormodvolden → nobody |
status: | In Progress → Confirmed |
Phillip Susi (psusi) wrote : | #41 |
It seems to me that if vol_id reports the usage of the entire disk device as being raid, then that should carry over to all partitions on it, and a usage type of raid should preclude its use by anything other than raid.
Tormod Volden (tormodvolden) wrote : | #42 |
Yes, that would make sense for udev or anything else that enumerates devices and partitions. But I think vol_id is just a simple one-block-device sniffer: it looks at a given block device like /dev/sda1 without looking at what would be the parent device. The utilities using vol_id need to do better checking. OTOH the man page says that vol_id "detects various raid setups to prevent the recognition of raid members as a volume with a filesystem", which it seems to fail to do here. Maybe we should open a separate bug for vol_id (udev) and leave this one for the Desktop CD issue.
Changed in casper: | |
status: | Confirmed → Fix Committed |
Launchpad Janitor (janitor) wrote : | #43 |
This bug was fixed in the package casper - 1.133
---------------
casper (1.133) intrepid; urgency=low
[ Tormod Volden ]
* Do not use swap on RAID raw devices (LP: #136804)
[ Agostino Russo ]
* Test if livemedia is a directory before trying to mount it as a
loopfile
* Reverted changes to casper-helpers as requested by Mithrandir since
replaying the journal on a hibernated system would lead to file system
corruption.
-- Evan Dandrea <email address hidden> Wed, 18 Jun 2008 12:34:58 -0400
Changed in casper: | |
status: | Fix Committed → Fix Released |
Luke Yelavich (themuso) wrote : Re: [Bug 136804] Re: livecd auto mounting swap partitions detected on dmraid volume members | #44 |
May I draw your attention to https:/
Phillip Susi (psusi) wrote : Re: livecd auto mounting swap partitions detected on dmraid volume members | #45 |
I am confused as to what this issue has to do with casper. The problem is that udev should not even be running vol_id on the partitions of a raid disk, since those partitions are not really valid... ideally the kernel should not even be detecting those partitions.
Tormod Volden (tormodvolden) wrote : | #46 |
I see what you mean: there shouldn't even exist a /dev/sda1 etc., only /dev/sda, if /dev/sda is a raid device. Yes, that would be a proper solution.
Changed in dmraid: | |
status: | Invalid → New |
Tormod Volden (tormodvolden) wrote : | #47 |
With the latest PATH changes for casper in intrepid, the released fix needs to be updated to use full paths.
Changed in casper: | |
status: | Fix Released → In Progress |
Changed in casper: | |
status: | In Progress → Fix Committed |
Tormod Volden (tormodvolden) wrote : | #48 |
- do not make device nodes for "partitions" on raid volumes (682 bytes, text/plain)
Now for the "proper" fix of udev. I am not sure how this works for other kinds of raids, but for my fakeraid, the disk volumes (/dev/sda and /dev/sdb) have ID_FS_USAGE = "raid", and all partitions on these should not be exposed (since they are not really partitions). So if udev detects a device of type "partition", and the parent volume is "raid", it should just ignore it and not make a device node.
This is implemented in the attached patch for the udev rules. Note that the test is located just after the ID_* for the parent are imported. Any ideas for setups that could be broken by this modification?
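In udev rules syntax, the idea could look roughly like this (a sketch of the approach only, not the attached patch; it assumes the ID_* variables of the parent device have already been imported at this point in the rules file, and that the udev of this era still supports the ignore_device option):

```
# Sketch only: hide partition nodes whose parent whole disk is a
# recognised RAID member (ENV{ID_FS_USAGE} imported from the parent).
KERNEL=="sd*[0-9]", ENV{ID_FS_USAGE}=="raid", OPTIONS+="ignore_device"
```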
Launchpad Janitor (janitor) wrote : | #49 |
This bug was fixed in the package casper - 1.138
---------------
casper (1.138) intrepid; urgency=low
[ Tormod Volden ]
* use full path for vol_id in swap-on-raid detection (LP: #136804)
[ Martin Pitt ]
* 33enable_
update-notifier gconf keys, to undo the corresponding change for disabling
apport right before the release.
* Add 45disable_
live system, since having guest sessions in a live session does not make
much sense. (See gdm-guest-login spec)
-- Martin Pitt <email address hidden> Thu, 31 Jul 2008 14:19:07 +0200
Changed in casper: | |
status: | Fix Committed → Fix Released |
Tormod Volden (tormodvolden) wrote : | #50 |
Did anyone try or look at my udev patch in comment 48? Or should this be fixed in a totally different way?
Scott James Remnant (Canonical) (canonical-scott) wrote : | #51 |
It's in conversation upstream, since the patch actually just reverses one we backed out due to other bugs ;)
Phillip Susi (psusi) wrote : | #52 |
Any update on that upstream conversation Scott?
Changed in udev: | |
status: | New → In Progress |
Scott James Remnant (Canonical) (canonical-scott) wrote : | #53 |
We dropped any such patch ages ago; if this is still broken, it's broken upstream?
Changed in udev: | |
status: | In Progress → Fix Released |
Tormod Volden (tormodvolden) wrote : | #54 |
I do not see this being fixed on Jaunty Alpha-6, when running from the Desktop CD at least. Is the dmraid library needed at boot to detect dmraid members that should be hidden?
$ sudo vol_id /dev/sda
ID_FS_USAGE=raid
ID_FS_TYPE=
ID_FS_VERSION=
ID_FS_UUID=
ID_FS_UUID_ENC=
ID_FS_LABEL=
ID_FS_LABEL_ENC=
Changed in udev: | |
status: | Fix Released → New |
Scott James Remnant (Canonical) (canonical-scott) wrote : | #55 |
Please don't simply reopen a bug without proving that it exists.
All you supplied is the output of running vol_id yourself on the command-line.
Please demonstrate that *udev* is running vol_id on the partitions.
Changed in udev: | |
status: | New → Fix Released |
Tormod Volden (tormodvolden) wrote : | #56 |
Well, what I forgot to point out is that /dev/sda1, /dev/sda2, etc. still exist. They should not exist, right? With my vol_id run above I just wanted to show that the parent device is a recognised raid raw device.
You mentioned earlier having dropped a patch, which my patch in comment 48 would just reverse. However, when I look through /lib/udev/rules.d I can't find anything dealing with raid members.
I don't know if the problem really is "that *udev* is running vol_id on the partitions" but the result is that the raw device "partitions" appear.
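For what it's worth, one way to check whether udev actually runs vol_id on a partition is to replay the rules for it (an illustrative command only; the device path is an example, and it assumes the udevadm of this era):

```
# Replay the udev rules for /dev/sda1 without creating device nodes;
# any vol_id invocation shows up in the rule-processing output.
udevadm test "$(udevadm info -q path -n /dev/sda1)" 2>&1 | grep vol_id
```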
Tormod Volden (tormodvolden) wrote : | #57 |
Tormod Volden (tormodvolden) wrote : | #58 |
However, today I see no /dev/sdaX (I have updated the live CD). They show up in ubiquity though, I guess partman does its own scan for partitions. So although the /dev/sdaX show up in the udev log they are not created and everything seems fine now.
Scott James Remnant (Canonical) (canonical-scott) wrote : Re: [Bug 136804] Re: vol_id run for partitions on RAID disks | #59 |
On Mon, 2009-03-16 at 09:00 +0000, Tormod Volden wrote:
> However, today I see no /dev/sdaX (I have updated the live CD). They
> show up in ubiquity though, I guess partman does its own scan for
> partitions. So although the /dev/sdaX show up in the udev log they are
> not created and everything seems fine now.
>
Glad to hear everything's working now :)
Scott
--
Scott James Remnant
<email address hidden>
I forgot to add this:
root@ubuntu:~# dmraid -b
/dev/sda: 72303840 total, "WD-WMAKE2048591"
/dev/sdb: 72303840 total, "WD-WMAKH1153782"