vol_id run for partitions on RAID disks

Bug #136804 reported by Keven
Affects: casper (Ubuntu) | Status: Fix Released | Importance: Undecided | Assigned to: Unassigned
  Nominated for Hardy by Tormod Volden
Affects: udev (Ubuntu) | Status: Fix Released | Importance: Medium | Assigned to: Unassigned
  Nominated for Hardy by Tormod Volden

Bug Description

Binary package hint: dmraid

Hello,
    My hardware setup is as follows: an EVGA 680i SLI motherboard and two 36 GB Western Digital Raptors (SATA). I went into the BIOS and created a RAID0 array using the two disks. Then I pop in the Ubuntu 7.04 Live CD and download dmraid. It installs, and then I run:

root@ubuntu:~/Desktop# dmraid -r
/dev/sda: nvidia, "nvidia_dijifjhj", stripe, ok, 72303838 sectors, data@ 0

I notice that it does not show both physical drives, but it does show the 72 GB they would have together. Next I go into fdisk, look at the partitions, and notice something:

Disk /dev/mapper/nvidia_dijifjhj: 37.0 GB, 37019516928 bytes
255 heads, 63 sectors/track, 4500 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

That shows only one drive's worth of space. Next:

root@ubuntu:~/Desktop# dmraid -s
*** Active Set
name : nvidia_dijifjhj
size : 72303744
stride : 128
type : stripe
status : ok
subsets: 0
devs : 1
spares : 0

Here everything is right except for "devs : 1"; it should be 2, I think. One other thing: if I use GParted, /dev/mapper will not show up, just /dev/sda and /dev/sdb.

Thanks in advance,
Keven

WORKAROUND: Run "sudo swapoff -a" before installing dmraid

Revision history for this message
Keven (hicotton02) wrote :

I forgot to add this:
root@ubuntu:~# dmraid -b
/dev/sda: 72303840 total, "WD-WMAKE2048591"
/dev/sdb: 72303840 total, "WD-WMAKH1153782"

Revision history for this message
Steve (igloocentral) wrote :

Please run:
sudo dmraid -tay -vvvv -dddd -f nvidia
sudo dmraid -n

Revision history for this message
Keven (hicotton02) wrote :

I waited a couple of days, then downloaded the alternate CD and made a software RAID. But as you requested, I ran the commands above.

Does this seem to be a bug or user error?

root@######:~# dmraid -tay -vvvv -dddd -f nvidia
NOTICE: checking format identifier nvidia
NOTICE: creating directory /var/lock/dmraid
WARN: locking /var/lock/dmraid/.lock
NOTICE: skipping removable device /dev/hdb
NOTICE: /dev/sda: nvidia discovering
NOTICE: /dev/sda: nvidia metadata discovered
NOTICE: /dev/sdb: nvidia discovering
DEBUG: _find_set: searching nvidia_dhhfcfca
DEBUG: _find_set: not found nvidia_dhhfcfca
DEBUG: _find_set: searching nvidia_dhhfcfca
DEBUG: _find_set: not found nvidia_dhhfcfca
NOTICE: added /dev/sda to RAID set "nvidia_dhhfcfca"
DEBUG: checking nvidia device "/dev/sda"
DEBUG: set status of set "nvidia_dhhfcfca" to 16
nvidia_dhhfcfca: 0 72303744 linear /dev/sda 0
INFO: Activating stripe RAID set "nvidia_dhhfcfca"
NOTICE: discovering partitions on "nvidia_dhhfcfca"
NOTICE: /dev/.static/dev/mapper/nvidia_dhhfcfca: dos discovering
NOTICE: /dev/.static/dev/mapper/nvidia_dhhfcfca: dos metadata discovered
DEBUG: _find_set: searching nvidia_dhhfcfca1
DEBUG: _find_set: not found nvidia_dhhfcfca1
DEBUG: _find_set: searching nvidia_dhhfcfca2
DEBUG: _find_set: not found nvidia_dhhfcfca2
DEBUG: _find_set: searching nvidia_dhhfcfca3
DEBUG: _find_set: not found nvidia_dhhfcfca3
NOTICE: created partitioned RAID set(s) for /dev/.static/dev/mapper/nvidia_dhhfcfca
nvidia_dhhfcfca1: 0 337302 linear /dev/.static/dev/mapper/nvidia_dhhfcfca 63
INFO: Activating partition RAID set "nvidia_dhhfcfca1"
nvidia_dhhfcfca2: 0 70493220 linear /dev/.static/dev/mapper/nvidia_dhhfcfca 337365
INFO: Activating partition RAID set "nvidia_dhhfcfca2"
nvidia_dhhfcfca3: 0 1461915 linear /dev/.static/dev/mapper/nvidia_dhhfcfca 70830585
INFO: Activating partition RAID set "nvidia_dhhfcfca3"
WARN: unlocking /var/lock/dmraid/.lock
DEBUG: freeing devices of RAID set "nvidia_dhhfcfca"
DEBUG: freeing device "nvidia_dhhfcfca", path "/dev/sda"
DEBUG: freeing devices of RAID set "nvidia_dhhfcfca1"
DEBUG: freeing device "nvidia_dhhfcfca1", path "/dev/.static/dev/mapper/nvidia_dhhfcfca"
DEBUG: freeing devices of RAID set "nvidia_dhhfcfca2"
DEBUG: freeing device "nvidia_dhhfcfca2", path "/dev/.static/dev/mapper/nvidia_dhhfcfca"
DEBUG: freeing devices of RAID set "nvidia_dhhfcfca3"
DEBUG: freeing device "nvidia_dhhfcfca3", path "/dev/.static/dev/mapper/nvidia_dhhfcfca"
root@######:~# dmraid -n
/dev/sda (nvidia):
0x000 NVIDIA

0x008 size: 30
0x00c chksum: 946633451
0x010 version: 100
0x012 unitNumber: 0
0x013 reserved: 255
0x014 capacity: 144603392
0x018 sectorSize: 512
0x01c productID: STRIPE 68.95G
0x02c productRevision: 100
0x030 unitFlags: 0
0x034 array->version: 6553668
0x038 array->signature[0]: 1900212879
0x03c array->signature[1]: 1648542816
0x040 array->signature[2]: 225094918
0x044 array->signature[3]: 1451584459
0x048 array->raidJobCode: 0
0x049 array->stripeWidth: 2
0x04a array->totalVolumes: 2
0x04b array->originalWidth: 2
0x04c array->raidLevel: 128
0x050 array->stripeBlockSize: 128
0x054 array->stripeBlockByteSize: 65536
0x058 array->stripeBlockPower: 7
0x05c array->st...


Revision history for this message
Phillip Susi (psusi) wrote :

Wow, something is fubar there. dmraid appears to think that the RAID is only 36 GB and is made of only one drive. What version of Ubuntu and dmraid are you running? Can you blank the disks (sudo dd if=/dev/zero of=/dev/sda bs=1MB, and again for /dev/sdb) and start from scratch, recreating the array in the BIOS?

Revision history for this message
Keven (hicotton02) wrote :

theskaz@kevenpc:~$ dmraid --version
dmraid version: 1.0.0.rc13 (2006.10.11)
dmraid library version: 1.0.0.rc13 (2006.10.11)
device-mapper version: unknown

Ubuntu 7.04(Feisty Fawn)

As far as starting from scratch goes, that's going to have to wait a few days, about 2-3. But yeah, I'll wipe this system and start over as soon as my other system is up and running :)

Revision history for this message
elkekas (jchusillos-deactivatedaccount) wrote :

I have the same problem. My hardware configuration is 2 Raptors in RAID 0 on an Asus A8N-SLI motherboard. With Mandriva 2008 and Fedora 8 the RAID is recognized and detected correctly. I tried dmraid release RC14, but I get the same problem: with Ubuntu it is not possible, only one Raptor device shows up in the RAID. With Mandriva and Fedora everything is OK, install and boot work very well.

Revision history for this message
3vi1 (launchpad-net-eternaldusk) wrote :

I noticed this same thing while trying to set up a system using an Asus Striker II Formula motherboard and two 750 GB drives as RAID1 yesterday. dmraid -ay produces no errors, and dmraid -r (and -s) has the right output, but the /dev/mapper device never gets created.

I just went ahead and used software RAID for now, but that's not going to be acceptable for some dual-booters.

Revision history for this message
3vi1 (launchpad-net-eternaldusk) wrote :

BTW, I was using 8.04 and the latest dmraid from the repos.

Revision history for this message
Alan Ferrier (alan-ferrier) wrote :

This has been driving me slightly nuts since Feisty's release (and it has persisted through Gutsy and doesn't seem to be fixed in the Hardy betas (up to 2.6.24-16) either). And yes, I know I should have raised it as a bug earlier, but I've just worked around it by booting into 2.6.20-12, which was the last kernel where it worked properly. My bad. I've an ASUS P5N-E SLI board (which I think uses the 680i chipset). Anyway, I've configured RAID-1 across two WD SATA drives for dual-booting into XP. Up to and including 2.6.20-12 I was able to detect this fakeraid device with dmraid. Since 2.6.22, however, dmraid -ay gives the following message in /var/log/messages:

device-mapper: ioctl: error adding target to table

dmraid -tay works, however, so something's being detected:

nvidia_acihdbbb: 0 976773166 mirror core 2 131072 nosync 2 /dev/sda 0 /dev/sdb 0

I've Googled around a bit and some people are talking about it being a problem with duplicate UUIDs, which indeed there are on my system. blkid gives:

/dev/sda3: UUID="4EECF9B0ECF99287" TYPE="ntfs"
/dev/sdb3: UUID="4EECF9B0ECF99287" TYPE="ntfs"

but as the type is NTFS, there's no easy way to change this UUID.

I tried the same dmraid -ay on both the new Fedora 9 (which failed with the same error) and Knoppix 5.3.1, which uses a 2.6.24 kernel - and it succeeded! So my very tentative assumption that it's a kernel bug introduced after 2.6.20 is wrong. udev? libdevmapper?

Would be very pleased to get any input or ideas. Any further info required, I'll be happy to post it here.

Revision history for this message
Phillip Susi (psusi) wrote :

Both drives have the same UUID because you have them mirrored, so of course they are the same. That is not an issue.

Please try running with -dddd -vvvv for maximum debug and verbose output, and check the last few lines of dmesg afterwards for kernel errors.

Revision history for this message
Alan Ferrier (alan-ferrier) wrote :

I thought the UUID stuff was a red herring. Here's the output from 2.6.20-12, where everything works ok:

$ sudo dmraid -ay -vvvv -dddd
WARN: locking /var/lock/dmraid/.lock
NOTICE: /dev/sda: asr discovering
NOTICE: /dev/sda: ddf1 discovering
NOTICE: /dev/sda: hpt37x discovering
NOTICE: /dev/sda: hpt45x discovering
NOTICE: /dev/sda: isw discovering
NOTICE: /dev/sda: jmicron discovering
NOTICE: /dev/sda: lsi discovering
NOTICE: /dev/sda: nvidia discovering
NOTICE: /dev/sda: nvidia metadata discovered
NOTICE: /dev/sda: pdc discovering
NOTICE: /dev/sda: sil discovering
NOTICE: /dev/sda: via discovering
NOTICE: /dev/sdb: asr discovering
NOTICE: /dev/sdb: ddf1 discovering
NOTICE: /dev/sdb: hpt37x discovering
NOTICE: /dev/sdb: hpt45x discovering
NOTICE: /dev/sdb: isw discovering
NOTICE: /dev/sdb: jmicron discovering
NOTICE: /dev/sdb: lsi discovering
NOTICE: /dev/sdb: nvidia discovering
NOTICE: /dev/sdb: nvidia metadata discovered
NOTICE: /dev/sdb: pdc discovering
NOTICE: /dev/sdb: sil discovering
NOTICE: /dev/sdb: via discovering
NOTICE: /dev/sdc: asr discovering
NOTICE: /dev/sdc: ddf1 discovering
NOTICE: /dev/sdc: hpt37x discovering
NOTICE: /dev/sdc: hpt45x discovering
NOTICE: /dev/sdc: isw discovering
NOTICE: /dev/sdc: jmicron discovering
NOTICE: /dev/sdc: lsi discovering
NOTICE: /dev/sdc: nvidia discovering
NOTICE: /dev/sdc: pdc discovering
NOTICE: /dev/sdc: sil discovering
NOTICE: /dev/sdc: via discovering
DEBUG: _find_set: searching nvidia_acihdbbb
DEBUG: _find_set: not found nvidia_acihdbbb
DEBUG: _find_set: searching nvidia_acihdbbb
DEBUG: _find_set: not found nvidia_acihdbbb
NOTICE: added /dev/sda to RAID set "nvidia_acihdbbb"
DEBUG: _find_set: searching nvidia_acihdbbb
DEBUG: _find_set: found nvidia_acihdbbb
DEBUG: _find_set: searching nvidia_acihdbbb
DEBUG: _find_set: found nvidia_acihdbbb
NOTICE: added /dev/sdb to RAID set "nvidia_acihdbbb"
DEBUG: checking nvidia device "/dev/sda"
DEBUG: checking nvidia device "/dev/sdb"
DEBUG: set status of set "nvidia_acihdbbb" to 16
INFO: Activating mirror RAID set "nvidia_acihdbbb"
NOTICE: discovering partitions on "nvidia_acihdbbb"
NOTICE: /dev/mapper/nvidia_acihdbbb: dos discovering
NOTICE: /dev/mapper/nvidia_acihdbbb: dos metadata discovered
DEBUG: _find_set: searching nvidia_acihdbbb1
DEBUG: _find_set: not found nvidia_acihdbbb1
DEBUG: _find_set: searching nvidia_acihdbbb2
DEBUG: _find_set: not found nvidia_acihdbbb2
DEBUG: _find_set: searching nvidia_acihdbbb3
DEBUG: _find_set: not found nvidia_acihdbbb3
NOTICE: created partitioned RAID set(s) for /dev/mapper/nvidia_acihdbbb
INFO: Activating partition RAID set "nvidia_acihdbbb1"
INFO: Activating partition RAID set "nvidia_acihdbbb2"
INFO: Activating partition RAID set "nvidia_acihdbbb3"
WARN: unlocking /var/lock/dmraid/.lock
DEBUG: freeing devices of RAID set "nvidia_acihdbbb"
DEBUG: freeing device "nvidia_acihdbbb", path "/dev/sda"
DEBUG: freeing device "nvidia_acihdbbb", path "/dev/sdb"
DEBUG: freeing devices of RAID set "nvidia_acihdbbb1"
DEBUG: freeing device "nvidia_acihdbbb1", path "/dev/mapper/nvidia_acihdbbb"
DEBUG: freeing devices of RAID set "n...


Revision history for this message
Phillip Susi (psusi) wrote :

It looks like you don't have the dm-mirror kernel module installed/loaded. Try modprobe dm-mirror before dmraid -ay.

Revision history for this message
Alan Ferrier (alan-ferrier) wrote :

Good thinking, Batman - but it ain't that:

$ lsmod | grep dm
dm_crypt 15364 0
dm_mirror 24832 0
dm_mod 62660 2 dm_crypt,dm_mirror

Also, dmraid -tay returns:

$ sudo dmraid -tay
nvidia_acihdbbb: 0 976773166 mirror core 2 131072 nosync 2 /dev/sda 0 /dev/sdb 0

so the mirror is being detected ok, and dmraid -n returns:

$ sudo dmraid -n
/dev/sdb (nvidia):
0x000 NVIDIA

0x008 size: 30
0x00c chksum: 4196806261
0x010 version: 100
0x012 unitNumber: 1
0x013 reserved: 0
0x014 capacity: 976773120
0x018 sectorSize: 512
0x01c productID: MIRROR 465.76G
0x02c productRevision: 100
0x030 unitFlags: 0
0x034 array->version: 6553668
0x038 array->signature[0]: 533272846
0x03c array->signature[1]: 1267463791
0x040 array->signature[2]: 1373640744
0x044 array->signature[3]: 1330779158
0x048 array->raidJobCode: 0
0x049 array->stripeWidth: 1
0x04a array->totalVolumes: 2
0x04b array->originalWidth: 1
0x04c array->raidLevel: 129
0x050 array->stripeBlockSize: 128
0x054 array->stripeBlockByteSize: 65536
0x058 array->stripeBlockPower: 7
0x05c array->stripeMask: 127
0x060 array->stripeSize: 128
0x064 array->stripeByteSize: 65536
0x068 array->raidJobMark 0
0x06c array->originalLevel 129
0x070 array->originalCapacity 976773120
0x074 array->flags 0x1

/dev/sda (nvidia):
0x000 NVIDIA

0x008 size: 30
0x00c chksum: 4196871797
0x010 version: 100
0x012 unitNumber: 0
0x013 reserved: 0
0x014 capacity: 976773120
0x018 sectorSize: 512
0x01c productID: MIRROR 465.76G
0x02c productRevision: 100
0x030 unitFlags: 0
0x034 array->version: 6553668
0x038 array->signature[0]: 533272846
0x03c array->signature[1]: 1267463791
0x040 array->signature[2]: 1373640744
0x044 array->signature[3]: 1330779158
0x048 array->raidJobCode: 0
0x049 array->stripeWidth: 1
0x04a array->totalVolumes: 2
0x04b array->originalWidth: 1
0x04c array->raidLevel: 129
0x050 array->stripeBlockSize: 128
0x054 array->stripeBlockByteSize: 65536
0x058 array->stripeBlockPower: 7
0x05c array->stripeMask: 127
0x060 array->stripeSize: 128
0x064 array->stripeByteSize: 65536
0x068 array->raidJobMark 0
0x06c array->originalLevel 129
0x070 array->originalCapacity 976773120
0x074 array->flags 0x1

However, dmraid -ay fails silently:

$ sudo dmraid -ay
$

but with the following in /var/log/messages:

May 5 19:25:45 ***-home kernel: [ 757.141136] device-mapper: ioctl: error adding target to table

Revision history for this message
Alan Ferrier (alan-ferrier) wrote :

Just to confirm, the problem still exists in the recently released 2.6.24-17 kernel.

Also, dmraid -ay occasionally returns:

"device-mapper: table 253:0: mirror: Device lookup failure"

at the command line on all failing kernels.

Revision history for this message
Patrick Lowry (lowry) wrote :

Similar here. dmraid -ay silently fails and no /dev/mapper devices are created. I can confirm that it works in Gentoo with 2.6.23, but not with the Ubuntu Hardy live CD.

misc. output:

ubuntu@ubuntu:~$ sudo dmraid -ay -vvvv -dddd
WARN: locking /var/lock/dmraid/.lock
NOTICE: /dev/sdb: asr discovering
NOTICE: /dev/sdb: ddf1 discovering
NOTICE: /dev/sdb: hpt37x discovering
NOTICE: /dev/sdb: hpt45x discovering
NOTICE: /dev/sdb: isw discovering
NOTICE: /dev/sdb: jmicron discovering
NOTICE: /dev/sdb: lsi discovering
NOTICE: /dev/sdb: nvidia discovering
NOTICE: /dev/sdb: nvidia metadata discovered
NOTICE: /dev/sdb: pdc discovering
NOTICE: /dev/sdb: sil discovering
NOTICE: /dev/sdb: via discovering
NOTICE: /dev/sda: asr discovering
NOTICE: /dev/sda: ddf1 discovering
NOTICE: /dev/sda: hpt37x discovering
NOTICE: /dev/sda: hpt45x discovering
NOTICE: /dev/sda: isw discovering
NOTICE: /dev/sda: jmicron discovering
NOTICE: /dev/sda: lsi discovering
NOTICE: /dev/sda: nvidia discovering
NOTICE: /dev/sda: nvidia metadata discovered
NOTICE: /dev/sda: pdc discovering
NOTICE: /dev/sda: sil discovering
NOTICE: /dev/sda: via discovering
DEBUG: _find_set: searching nvidia_cecdfbfh
DEBUG: _find_set: not found nvidia_cecdfbfh
DEBUG: _find_set: searching nvidia_cecdfbfh
DEBUG: _find_set: not found nvidia_cecdfbfh
NOTICE: added /dev/sdb to RAID set "nvidia_cecdfbfh"
DEBUG: _find_set: searching nvidia_cecdfbfh
DEBUG: _find_set: found nvidia_cecdfbfh
DEBUG: _find_set: searching nvidia_cecdfbfh
DEBUG: _find_set: found nvidia_cecdfbfh
NOTICE: added /dev/sda to RAID set "nvidia_cecdfbfh"
DEBUG: checking nvidia device "/dev/sda"
DEBUG: checking nvidia device "/dev/sdb"
DEBUG: set status of set "nvidia_cecdfbfh" to 16
WARN: unlocking /var/lock/dmraid/.lock
DEBUG: freeing devices of RAID set "nvidia_cecdfbfh"
DEBUG: freeing device "nvidia_cecdfbfh", path "/dev/sda"
DEBUG: freeing device "nvidia_cecdfbfh", path "/dev/sdb"

ubuntu@ubuntu:~$ sudo dmraid -n
/dev/sdb (nvidia):
0x000 NVIDIA

0x008 size: 30
0x00c chksum: 3487021370
0x010 version: 100
0x012 unitNumber: 1
0x013 reserved: 131
0x014 capacity: 781422720
0x018 sectorSize: 512
0x01c productID: MIRROR 372.61G
0x02c productRevision: 100
0x030 unitFlags: 0
0x034 array->version: 6553668
0x038 array->signature[0]: 1251801591
0x03c array->signature[1]: 132050278
0x040 array->signature[2]: 1018414995
0x044 array->signature[3]: 1056154375
0x048 array->raidJobCode: 0
0x049 array->stripeWidth: 1
0x04a array->totalVolumes: 2
0x04b array->originalWidth: 1
0x04c array->raidLevel: 129
0x050 array->stripeBlockSize: 128
0x054 array->stripeBlockByteSize: 65536
0x058 array->stripeBlockPower: 7
0x05c array->stripeMask: 127
0x060 array->stripeSize: 128
0x064 array->stripeByteSize: 65536
0x068 array->raidJobMark 0
0x06c array->originalLevel 129
0x070 array->originalCapacity 781422720
0x074 array->flags 0x0

/dev/sda (nvidia):
0x000 NVIDIA

0x008 size: 30
0x00c chksum: 3487086906
0x010 version: 100
0x012 unitNumber: 0
0x013 reserved: 131
0x014 capacity: 781422720
0x018 sectorSize: 512
0x01c productID: MIRROR 372.61G
0x02c productRevision: 100
0x030 unitFlags: 0
0x034 array->version:...


Revision history for this message
Marko Novak (marko-novak) wrote :

Hello! I'm experiencing almost exactly the same problem as Alan and Patrick with my RAID controller (Promise FastTrak TX2300). In Ubuntu 7.10 this worked fine; however, since the upgrade to Ubuntu 8.04 (2.6.24-16 kernel), dmraid is not able to find the "/dev/mapper/pdc_*" devices. As mentioned, the symptoms are practically the same as Alan's and Patrick's:

1) ubuntu@ubuntu:~$ sudo dmraid -ay -vvvv -dddd

WARN: locking /var/lock/dmraid/.lock
NOTICE: /dev/sdb: asr discovering
NOTICE: /dev/sdb: ddf1 discovering
NOTICE: /dev/sdb: hpt37x discovering
NOTICE: /dev/sdb: hpt45x discovering
NOTICE: /dev/sdb: isw discovering
NOTICE: /dev/sdb: jmicron discovering
NOTICE: /dev/sdb: lsi discovering
NOTICE: /dev/sdb: nvidia discovering
NOTICE: /dev/sdb: pdc discovering
NOTICE: /dev/sdb: pdc metadata discovered
NOTICE: /dev/sdb: sil discovering
NOTICE: /dev/sdb: via discovering
NOTICE: /dev/sda: asr discovering
NOTICE: /dev/sda: ddf1 discovering
NOTICE: /dev/sda: hpt37x discovering
NOTICE: /dev/sda: hpt45x discovering
NOTICE: /dev/sda: isw discovering
NOTICE: /dev/sda: jmicron discovering
NOTICE: /dev/sda: lsi discovering
NOTICE: /dev/sda: nvidia discovering
NOTICE: /dev/sda: pdc discovering
NOTICE: /dev/sda: pdc metadata discovered
NOTICE: /dev/sda: sil discovering
NOTICE: /dev/sda: via discovering
DEBUG: _find_set: searching pdc_bgidgcdjhb
DEBUG: _find_set: not found pdc_bgidgcdjhb
DEBUG: _find_set: searching pdc_bgidgcdjhb
DEBUG: _find_set: not found pdc_bgidgcdjhb
NOTICE: added /dev/sdb to RAID set "pdc_bgidgcdjhb"
DEBUG: _find_set: searching pdc_bgidgcdjhb
DEBUG: _find_set: found pdc_bgidgcdjhb
DEBUG: _find_set: searching pdc_bgidgcdjhb
DEBUG: _find_set: found pdc_bgidgcdjhb
NOTICE: added /dev/sda to RAID set "pdc_bgidgcdjhb"
DEBUG: checking pdc device "/dev/sda"
DEBUG: checking pdc device "/dev/sdb"
DEBUG: set status of set "pdc_bgidgcdjhb" to 16
DEBUG: checking pdc device "/dev/sda"
DEBUG: checking pdc device "/dev/sdb"
DEBUG: set status of set "pdc_bgidgcdjhb" to 16
WARN: unlocking /var/lock/dmraid/.lock
DEBUG: freeing devices of RAID set "pdc_bgidgcdjhb"
DEBUG: freeing device "pdc_bgidgcdjhb", path "/dev/sda"
DEBUG: freeing device "pdc_bgidgcdjhb", path "/dev/sdb"

2) ubuntu@ubuntu:~$ sudo dmraid -n

/dev/sdb (pdc):
0x000 promise_id: "Promise Technology, Inc."
0x018 unknown_0: 0x20000
0x01c magic_0: 0x645a1023
0x020 unknown_1: 0x1000e
0x024 magic_1: 0x645a1023
0x028 unknown_2: 0xe
0x200 raid.flags: 0xfdfeffc0
0x204 raid.unknown_0: 0x7
0x205 raid.disk_number: 1
0x206 raid.channel: 1
0x207 raid.device: 0
0x208 raid.magic_0: 0x4a7f1023
0x20c raid.unknown_1: 0x1000e
0x210 raid.unknown_2: 0x0
0x214 raid.disk_secs: 625011376
0x218 raid.unknown_3: 0xffffffff
0x21c raid.unknown_4: 0x1
0x21e raid.status: 0xf
0x21f raid.type: 0x1
0x220 raid.total_disks: 2
0x221 raid.raid0_shift: 7
0x222 raid.raid0_disks: 1
0x223 raid.array_number: 0
0x224 raid.total_secs: 625011328
0x228 raid.cylinders: 38904
0x22a raid.heads: 254
0x22b raid.sectors: 63
0x22c raid.magic_1: 0x645a1023
0x230 raid.unknown_5: 0x100000e
0x234 raid.disk[0].unknown_0: 0x7
0x236 raid.disk[0].channel: 0
0x237 raid.disk[0].dev...


Revision history for this message
Alan Ferrier (alan-ferrier) wrote :

Confirming this problem still exists after today's kernel update to 2.6.24-18-generic.

Revision history for this message
Marko Novak (marko-novak) wrote :

Hello guys!

Today I used the Knoppix 5.1 live CD in which "dmraid" works ok (Alan, thanks for the tip... ;) ). I managed to mount my raid partitions and repair the "grub/menu.lst" file (I have come across this dmraid issue while trying to fix the "menu.lst" file which became broken during the upgrade from Ubuntu 7.10 to Ubuntu 8.04).

Now, here comes the interesting part: as soon as I fixed the "menu.lst" file, I was able to boot my upgraded Ubuntu 8.04 system without a problem! dmraid works OK, even though I'm using Ubuntu's "2.6.24-16-generic" kernel (i.e. the same kernel version that is present on the Ubuntu 8.04 live CD).

However, I would still like to figure out why dmraid doesn't work on the Ubuntu 8.04 live CD. Now that I have two configurations (one working and one not), can you perhaps advise me which printouts/log files to compare to spot the source of the problem? I can of course upload them to this page: you can probably find the anomalies much faster than I can. :)

Revision history for this message
Vojta Grec (vojtagrec) wrote :

Hi there!

Recently I installed Hardy x86_64 with nvraid configured as RAID 1 (mirroring), with no problems with dmraid during the install. Then I installed Windows in a different partition of the array and then booted the LiveCD to reinstall GRUB, and dmraid failed just like in the comments above.

I have solved this issue and it is dead simple: if one of your RAID partitions is formatted as swap, the LiveCD mounts it improperly while booting, and that prevents dmraid from functioning.

Described in detail: I'm mirroring two SATA discs, /dev/sda and /dev/sdb. Together they make an array (/dev/nvidia_something). But because the LiveCD doesn't contain dmraid by default, these two discs are not detected as an array on boot, and both swap partitions (/dev/sda7 and /dev/sdb7 in my case) are mounted separately (instead of /dev/nvidia_something7, which of course does not exist on boot). And that prevents dmraid from setting up the array devices. So deactivate the swap partitions with "swapoff" and everything should go smoothly; a rough example sequence is sketched below.
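
For example, on the live CD the workaround sequence would look roughly like this (a sketch only; the install-then-activate steps follow this report, and the nvidia_* set name will differ per system):

sudo swapoff -a            # release the raw /dev/sdaX and /dev/sdbX swap partitions first
sudo apt-get install dmraid
sudo dmraid -ay            # activate the RAID set; /dev/mapper/nvidia_* and its partitions should now appear
ls /dev/mapper/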

Revision history for this message
Vojta Grec (vojtagrec) wrote :

Oh my, of course this should be /dev/mapper/nvidia_something, sorry!

And I should add that dmraid caused no problems only when the disk was not partitioned (i.e. did not contain any partition marked as swap).

Revision history for this message
Tiger!P (ubuntu-tigerp) wrote :

Vojta: Thank you very much for this comment. This also helps in my situation (using a VIA SATA mirror RAID).

Revision history for this message
Marko Novak (marko-novak) wrote :

Hello Vojta!

Yes indeed! Your advice also solves my issue!

If I execute the "swapoff -a" command after Ubuntu 8.04 has started, the dmraid package works fine and I'm able to see all of my RAID partitions in the "/dev/mapper/" directory!

Thank you very much for solving this one!!!

Vojta Grec (vojtagrec)
Changed in dmraid:
assignee: nobody → vojtagrec
status: New → Confirmed
Revision history for this message
Phillip Susi (psusi) wrote : Re: livecd auto mounting swap partitions detected on dmraid volume members

If swapoff fixes the issue for you, please post the output of blkid. My guess is that blkid is not correctly identifying the disks as raid members, which is what would otherwise prevent them from being auto-mounted.

Changed in dmraid:
importance: Undecided → Medium
Revision history for this message
TonyH (5-launchpad-trog-bofh-org-za) wrote :

Right, I've just tripped over this issue after re-installing my Windows partition. swapoff -a solved it. Here's the output of blkid:

ubuntu@ubuntu:~$ sudo blkid
/dev/sda1: UUID="040C25400C252E5C" TYPE="ntfs"
/dev/sda2: UUID="1a74b366-7af2-4d05-a8cd-ccef1ee0dc9b" SEC_TYPE="ext2" TYPE="ext3"
/dev/sda5: TYPE="swap" UUID="02ebdbc8-e3f0-4420-8b00-1df8c1bff90f"
/dev/sdb1: UUID="040C25400C252E5C" TYPE="ntfs"
/dev/sdb2: UUID="1a74b366-7af2-4d05-a8cd-ccef1ee0dc9b" SEC_TYPE="ext2" TYPE="ext3"
/dev/sdb5: TYPE="swap" UUID="02ebdbc8-e3f0-4420-8b00-1df8c1bff90f"
/dev/loop0: TYPE="squashfs"
/dev/mapper/nvidia_ceidadia1: UUID="040C25400C252E5C" TYPE="ntfs"
/dev/mapper/nvidia_ceidadia5: TYPE="swap" UUID="02ebdbc8-e3f0-4420-8b00-1df8c1bff90f"
/dev/mapper/nvidia_ceidadia2: UUID="1a74b366-7af2-4d05-a8cd-ccef1ee0dc9b" SEC_TYPE="ext2" TYPE="ext3"
ubuntu@ubuntu:~$

This box was originally installed using Gutsy.

Revision history for this message
Tiger!P (ubuntu-tigerp) wrote :

I have run blkid three times: first after booting the live CD, second after `swapoff -a`, and third after `apt-get install dmraid`.

The first output (after booting the live CD):
/dev/sda1: UUID="D4DC7D78DC7D55A8" LABEL="XP_c_ntfs" TYPE="ntfs"
/dev/sda5: UUID="3A30D91E30D8E1C5" LABEL="XP_d_ntfs" TYPE="ntfs"
/dev/sda6: TYPE="swap" UUID="d6d263ec-37b2-450b-af33-4d3821419b29"
/dev/sda7: UUID="59d50497-240f-4d3f-a6f7-912dc12771b5" TYPE="ext2"
/dev/sdb1: UUID="D4DC7D78DC7D55A8" LABEL="XP_c_ntfs" TYPE="ntfs"
/dev/sdb5: UUID="3A30D91E30D8E1C5" LABEL="XP_d_ntfs" TYPE="ntfs"
/dev/sdb6: TYPE="swap" UUID="d6d263ec-37b2-450b-af33-4d3821419b29"
/dev/sdb7: UUID="59d50497-240f-4d3f-a6f7-912dc12771b5" TYPE="ext2"
/dev/loop0: TYPE="squashfs"

The second output (after swapoff -a):
/dev/sda1: UUID="D4DC7D78DC7D55A8" LABEL="XP_c_ntfs" TYPE="ntfs"
/dev/sda5: UUID="3A30D91E30D8E1C5" LABEL="XP_d_ntfs" TYPE="ntfs"
/dev/sda6: TYPE="swap" UUID="d6d263ec-37b2-450b-af33-4d3821419b29"
/dev/sda7: UUID="59d50497-240f-4d3f-a6f7-912dc12771b5" TYPE="ext2"
/dev/sdb1: UUID="D4DC7D78DC7D55A8" LABEL="XP_c_ntfs" TYPE="ntfs"
/dev/sdb5: UUID="3A30D91E30D8E1C5" LABEL="XP_d_ntfs" TYPE="ntfs"
/dev/sdb6: TYPE="swap" UUID="d6d263ec-37b2-450b-af33-4d3821419b29"
/dev/sdb7: UUID="59d50497-240f-4d3f-a6f7-912dc12771b5" TYPE="ext2"
/dev/loop0: TYPE="squashfs"

The third output (after apt-get install dmraid):
/dev/sda1: UUID="D4DC7D78DC7D55A8" LABEL="XP_c_ntfs" TYPE="ntfs"
/dev/sda5: UUID="3A30D91E30D8E1C5" LABEL="XP_d_ntfs" TYPE="ntfs"
/dev/sda6: TYPE="swap" UUID="d6d263ec-37b2-450b-af33-4d3821419b29"
/dev/sda7: UUID="59d50497-240f-4d3f-a6f7-912dc12771b5" TYPE="ext2"
/dev/sdb1: UUID="D4DC7D78DC7D55A8" LABEL="XP_c_ntfs" TYPE="ntfs"
/dev/sdb5: UUID="3A30D91E30D8E1C5" LABEL="XP_d_ntfs" TYPE="ntfs"
/dev/sdb6: TYPE="swap" UUID="d6d263ec-37b2-450b-af33-4d3821419b29"
/dev/sdb7: UUID="59d50497-240f-4d3f-a6f7-912dc12771b5" TYPE="ext2"
/dev/loop0: TYPE="squashfs"
/dev/mapper/via_bfdbdagceh1: UUID="D4DC7D78DC7D55A8" LABEL="XP_c_ntfs" TYPE="ntfs"
/dev/mapper/via_bfdbdagceh7: UUID="59d50497-240f-4d3f-a6f7-912dc12771b5" TYPE="ext2"
/dev/mapper/via_bfdbdagceh6: TYPE="swap" UUID="d6d263ec-37b2-450b-af33-4d3821419b29"
/dev/mapper/via_bfdbdagceh5: UUID="3A30D91E30D8E1C5" LABEL="XP_d_ntfs" TYPE="ntfs"

I don't know if blkid should identify reiserfs partitions (via_bfdbdagceh8), but I don't see that one here.

ubuntu@ubuntu:~$ ls /dev/mapper/
control via_bfdbdagceh1 via_bfdbdagceh6 via_bfdbdagceh8
via_bfdbdagceh via_bfdbdagceh5 via_bfdbdagceh7
ubuntu@ubuntu:~$

Revision history for this message
Tiger!P (ubuntu-tigerp) wrote :

It might be that my /dev/mapper/via_bfdbdagceh8 didn't have reiserfs after all.

Revision history for this message
Phillip Susi (psusi) wrote :

Yea, I think the bug is in blkid... if you run blkid /dev/sda it should identify it as being a disk that is part of a raid set... the problem is that it still identifies the partitions detected on it as being valid partitions, when they are actually not.

Revision history for this message
Marko Novak (marko-novak) wrote :

Hello Phillip!

Here is also the printout of my "sudo blkid":

1) The first output (after booting the live CD):
/dev/sda1: UUID="9470295F702948F6" TYPE="ntfs"
/dev/sda2: UUID="36D4ED0DD4ECCFE3" LABEL="Local Disk" TYPE="ntfs"
/dev/sda5: UUID="e5bae0bd-02ff-4932-8dca-3f3e2a8d2bf4" SEC_TYPE="ext2" TYPE="ext3"
/dev/sda6: UUID="7386cb36-a15b-49c6-a874-317b42d8a7d9" TYPE="swap"
/dev/sda7: UUID="721ebea7-0443-4fc0-b09c-8978a98dbbff" TYPE="reiserfs"
/dev/sdb1: UUID="9470295F702948F6" TYPE="ntfs"
/dev/sdb2: UUID="36D4ED0DD4ECCFE3" LABEL="Local Disk" TYPE="ntfs"
/dev/sdb5: UUID="e5bae0bd-02ff-4932-8dca-3f3e2a8d2bf4" SEC_TYPE="ext2" TYPE="ext3"
/dev/sdb6: UUID="7386cb36-a15b-49c6-a874-317b42d8a7d9" TYPE="swap"
/dev/sdb7: UUID="721ebea7-0443-4fc0-b09c-8978a98dbbff" TYPE="reiserfs"
/dev/loop0: TYPE="squashfs"

2) The second output (after swapoff -a):
/dev/sda1: UUID="9470295F702948F6" TYPE="ntfs"
/dev/sda2: UUID="36D4ED0DD4ECCFE3" LABEL="Local Disk" TYPE="ntfs"
/dev/sda5: UUID="e5bae0bd-02ff-4932-8dca-3f3e2a8d2bf4" SEC_TYPE="ext2" TYPE="ext3"
/dev/sda6: TYPE="swap" UUID="7386cb36-a15b-49c6-a874-317b42d8a7d9"
/dev/sda7: UUID="721ebea7-0443-4fc0-b09c-8978a98dbbff" TYPE="reiserfs"
/dev/sdb1: UUID="9470295F702948F6" TYPE="ntfs"
/dev/sdb2: UUID="36D4ED0DD4ECCFE3" LABEL="Local Disk" TYPE="ntfs"
/dev/sdb5: UUID="e5bae0bd-02ff-4932-8dca-3f3e2a8d2bf4" SEC_TYPE="ext2" TYPE="ext3"
/dev/sdb6: TYPE="swap" UUID="7386cb36-a15b-49c6-a874-317b42d8a7d9"
/dev/sdb7: UUID="721ebea7-0443-4fc0-b09c-8978a98dbbff" TYPE="reiserfs"
/dev/loop0: TYPE="squashfs"

3) The third output (after apt-get install dmraid):
/dev/sda1: UUID="9470295F702948F6" TYPE="ntfs"
/dev/sda2: UUID="36D4ED0DD4ECCFE3" LABEL="Local Disk" TYPE="ntfs"
/dev/sda5: UUID="e5bae0bd-02ff-4932-8dca-3f3e2a8d2bf4" SEC_TYPE="ext2" TYPE="ext3"
/dev/sda6: TYPE="swap" UUID="7386cb36-a15b-49c6-a874-317b42d8a7d9"
/dev/sda7: UUID="721ebea7-0443-4fc0-b09c-8978a98dbbff" TYPE="reiserfs"
/dev/sdb1: UUID="9470295F702948F6" TYPE="ntfs"
/dev/sdb2: UUID="36D4ED0DD4ECCFE3" LABEL="Local Disk" TYPE="ntfs"
/dev/sdb5: UUID="e5bae0bd-02ff-4932-8dca-3f3e2a8d2bf4" SEC_TYPE="ext2" TYPE="ext3"
/dev/sdb6: TYPE="swap" UUID="7386cb36-a15b-49c6-a874-317b42d8a7d9"
/dev/sdb7: UUID="721ebea7-0443-4fc0-b09c-8978a98dbbff" TYPE="reiserfs"
/dev/loop0: TYPE="squashfs"
/dev/mapper/pdc_bgidgcdjhb6: TYPE="swap" UUID="7386cb36-a15b-49c6-a874-317b42d8a7d9"
/dev/mapper/pdc_bgidgcdjhb5: UUID="e5bae0bd-02ff-4932-8dca-3f3e2a8d2bf4" SEC_TYPE="ext2" TYPE="ext3"
/dev/mapper/pdc_bgidgcdjhb2: UUID="36D4ED0DD4ECCFE3" LABEL="Local Disk" TYPE="ntfs"
/dev/mapper/pdc_bgidgcdjhb1: UUID="9470295F702948F6" TYPE="ntfs"
/dev/mapper/pdc_bgidgcdjhb7: UUID="721ebea7-0443-4fc0-b09c-8978a98dbbff" TYPE="reiserfs"

The printout is very similar to that of Tiger!P. Even the reiserfs partition was detected.

Revision history for this message
Marko Novak (marko-novak) wrote :

Hello Phillip!

One more thing: "sudo blkid /dev/sda" doesn't output anything, even when I execute it once the RAID is working.

Revision history for this message
Tormod Volden (tormodvolden) wrote :

It seems from /scripts/casper-bottom/13swap on the live CD initrd that it uses neither blkid nor vol_id, but just looks for "SWAPSPACE" etc. 4086 bytes into the partition.
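
(For illustration, a probe along those lines could look roughly like this; the 4086-byte offset is where the "SWAPSPACE2"/"SWAP-SPACE" magic sits in a 4096-byte swap header, and /dev/sda6 is just an example partition from the outputs above:)

dd if=/dev/sda6 bs=1 skip=4086 count=10 2>/dev/null | grep -q "SWAPSPACE2" && echo "looks like swap"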

Revision history for this message
Phillip Susi (psusi) wrote :

Hrm... it should definitely be respecting the volume identifiers udev gets from blkid rather than probing every block device it can find for that signature. At any rate, blkid should not be identifying partitions on a block device that is identified as a raid member. Or maybe dmraid should be overriding that identification?

Revision history for this message
Tormod Volden (tormodvolden) wrote :

How can the initrd identify it as a raid member, when it doesn't have dmraid? Has blkid become fakeraid-aware? Also, last time I checked, udev used vol_id and not blkid, and they do not share the partition probing code for some reason.

Revision history for this message
Vojta Grec (vojtagrec) wrote :

I think Tormod hit the nail on the head there (but I don't know the blkid internals either). The systematic solution would be to include dmraid on the live CD; it's a very small package, so it should fit. If that's not possible, I propose the following ugly hack: modify the dmraid init script (or maybe the .deb install script) to detect whether it is being run on the live CD and, if so, turn off the swap partitions used for RAID (I think it should be possible to detect whether a swap partition is part of a RAID before setting up the array). I'm not at my computer right now; I'll look into it when I'm home.

Revision history for this message
Phillip Susi (psusi) wrote :

Oops, I mixed up blkid and vol_id. Please try running vol_id /dev/sda and see if it identifies it as a raid member. Last I checked, vol_id has bits of the signature detection code hacked out of dmraid built into it, so it recognizes the disk device as a whole as being a raid member and exports that info to udev. But then udev ends up running vol_id on the individual partitions, which happily identifies them as normal, usable partitions, which they are not.

Once vol_id identifies the entire disk as a raid member, the partitions on it should be ignored...

Revision history for this message
Tiger!P (ubuntu-tigerp) wrote :

The output I get is the same every time I run it (just after booting the live CD, after swapoff -a, and after installing dmraid).

ubuntu@ubuntu:~$ sudo vol_id /dev/sda
ID_FS_USAGE=raid
ID_FS_TYPE=via_raid_member
ID_FS_VERSION=1
ID_FS_UUID=
ID_FS_UUID_ENC=
ID_FS_LABEL=
ID_FS_LABEL_ENC=
ID_FS_LABEL_SAFE=
ubuntu@ubuntu:~$

So it seems that vol_id knows that I have raid.

Revision history for this message
Marko Novak (marko-novak) wrote :

Hello!

Yes, I also get very similar output to Tiger!P's in all three stages when invoking "sudo vol_id /dev/sda":

ID_FS_USAGE=raid
ID_FS_TYPE=promise_fasttrack_raid_member
ID_FS_VERSION=
ID_FS_UUID=
ID_FS_UUID_ENC=
ID_FS_LABEL=
ID_FS_LABEL_ENC=
ID_FS_LABEL_SAFE=

Revision history for this message
Tormod Volden (tormodvolden) wrote :

We could add this to the mentioned 13swap script:
 vol_id ${device%%[0-9]*} | grep -q "^ID_FS_TYPE=raid" && continue
(unless the script gets fixed up to be less brute force)

Revision history for this message
Tormod Volden (tormodvolden) wrote :

D'oh, that should have been ID_FS_USAGE and not ID_FS_TYPE. Anyway, I have tested that it works here, and I will try to get it pushed to casper. To test this out you can 1) modify the initrd.gz file on the live CD, or 2) boot with break=casper-bottom and hand-edit the script file with "ed", or 3) boot with break=casper-bottom and copy the modified script file over from the stick/CD.
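
With that correction, the line to add would read:
 vol_id ${device%%[0-9]*} | grep -q "^ID_FS_USAGE=raid" && continue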

Changed in udev:
assignee: nobody → tormodvolden
status: New → In Progress
Revision history for this message
Tormod Volden (tormodvolden) wrote :
Changed in dmraid:
assignee: vojtagrec → nobody
status: Confirmed → Invalid
description: updated
Revision history for this message
Tormod Volden (tormodvolden) wrote :

Did anyone experience data loss or desync of the RAID because of this bug? We should try to get this fixed in 8.04.1 or at least 8.04.2 since it must be fixed on the CD itself.

Changed in casper:
assignee: tormodvolden → nobody
status: In Progress → Confirmed
Revision history for this message
Phillip Susi (psusi) wrote :

It seems to me that if vol_id reports the usage of the entire disk device as being raid, then that should carry over to all partitions on it, and a usage type of raid should preclude its use by anything other than raid.

Revision history for this message
Tormod Volden (tormodvolden) wrote :

Yes, that would make sense for udev or anything else that enumerates through devices and partitions. But I think vol_id is just a simple one-block-device sniffer: it looks at a given block device like /dev/sda1 without looking at what would be the parent device. Those utilities using vol_id need to do better checking. OTOH the man page says that vol_id "detects various raid setups to prevent the recognition of raid members as a volume with a filesystem", which it seems to fail to do here. Maybe we should open a separate bug for vol_id (udev) and leave this one for the Desktop CD issue.

Evan (ev)
Changed in casper:
status: Confirmed → Fix Committed
Revision history for this message
Launchpad Janitor (janitor) wrote :

This bug was fixed in the package casper - 1.133

---------------
casper (1.133) intrepid; urgency=low

  [ Tormod Volden ]
  * Do not use swap on RAID raw devices (LP: #136804)

  [ Agostino Russo ]
  * Test if livemedia is a directory before trying to mount it as a
    loopfile
  * Reverted changes to casper-helpers as requested by Mithrandir since
    replaying the journal on a hibernated system would lead to file system
    corruption.

 -- Evan Dandrea <email address hidden> Wed, 18 Jun 2008 12:34:58 -0400

Changed in casper:
status: Fix Committed → Fix Released
Revision history for this message
Luke Yelavich (themuso) wrote : Re: [Bug 136804] Re: livecd auto mounting swap partitions detected on dmraid volume members

May I draw your attention to https://wiki.ubuntu.com/DmraidSupport. The issues with vol_id etc are documented in this spec.

Revision history for this message
Phillip Susi (psusi) wrote : Re: livecd auto mounting swap partitions detected on dmraid volume members

I am confused as to what this issue has to do with casper. The problem is that udev should not even be running vol_id on the partitions on a raid disk, since those partitions are not even really valid... ideally the kernel should not even be detecting those partitions.

Revision history for this message
Tormod Volden (tormodvolden) wrote :

I see what you mean, there shouldn't even exist /dev/sda1 etc, only /dev/sda if /dev/sda is a raid device. Yes, that would be a proper solution.

Changed in dmraid:
status: Invalid → New
Revision history for this message
Tormod Volden (tormodvolden) wrote :

With the latest PATH changes for casper in Intrepid, the released fix needs to be updated to use full paths.

Changed in casper:
status: Fix Released → In Progress
Changed in casper:
status: In Progress → Fix Committed
Revision history for this message
Tormod Volden (tormodvolden) wrote :

Now for the "proper" fix of udev. I am not sure how this works for other kinds of raids, but for my fakeraid, the disk volumes (/dev/sda and /dev/sdb) have ID_FS_USAGE = "raid", and all partitions on these should not be exposed (since they are not really partitions). So if udev detects a device of type "partition", and the parent volume is "raid", it should just ignore it and not make a device node.

This is implemented in the attached patch for the udev rules. Note that the test is located just after the ID_* values for the parent are imported; a rough sketch of the idea follows. Any ideas for setups that could be broken by this modification?
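
For illustration only (this is not the attached patch), the idea would look roughly like the following rule fragment; the exact keys and the ignore_device option are assumptions based on the Hardy/Intrepid-era persistent-storage rules, where partition devices import the parent disk's ID_* variables:

# import ID_* (including ID_FS_USAGE) from the parent disk, as the existing rules already do
KERNEL=="*[0-9]", ENV{DEVTYPE}=="partition", IMPORT{parent}="ID_*"
# if the parent disk is a raid member, do not expose the bogus partition node
ENV{DEVTYPE}=="partition", ENV{ID_FS_USAGE}=="raid", OPTIONS+="ignore_device"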

Revision history for this message
Launchpad Janitor (janitor) wrote :

This bug was fixed in the package casper - 1.138

---------------
casper (1.138) intrepid; urgency=low

  [ Tormod Volden ]
  * use full path for vol_id in swap-on-raid detection (LP: #136804)

  [ Martin Pitt ]
  * 33enable_apport_crashes: Change the apport default file, not the
    update-notifier gconf keys, to undo the corresponding change for disabling
    apport right before the release.
  * Add 45disable_guest_account: Purge the gdm-guest-session package from the
    live system, since having guest sessions in a live session does not make
    much sense. (See gdm-guest-login spec)

 -- Martin Pitt <email address hidden> Thu, 31 Jul 2008 14:19:07 +0200

Changed in casper:
status: Fix Committed → Fix Released
Revision history for this message
Tormod Volden (tormodvolden) wrote :

Did anyone try or look at my udev patch in comment 48? Or should this be fixed in a totally different way?

Revision history for this message
Scott James Remnant (Canonical) (canonical-scott) wrote :

It's under discussion upstream, since the patch actually just reverses one we backed out due to other bugs ;)

Revision history for this message
Phillip Susi (psusi) wrote :

Any update on that upstream conversation, Scott?

Changed in udev:
status: New → In Progress
Revision history for this message
Scott James Remnant (Canonical) (canonical-scott) wrote :

We dropped any such patch ages ago; if this is still broken, it's broken upstream?

Changed in udev:
status: In Progress → Fix Released
Revision history for this message
Tormod Volden (tormodvolden) wrote :

I do not see this being fixed on Jaunty Alpha-6, when running from the Desktop CD at least. Is the dmraid library needed at boot to detect dmraid members that should be hidden?

$ sudo vol_id /dev/sda
ID_FS_USAGE=raid
ID_FS_TYPE=isw_raid_member
ID_FS_VERSION=1.1.00
ID_FS_UUID=
ID_FS_UUID_ENC=
ID_FS_LABEL=
ID_FS_LABEL_ENC=

Changed in udev:
status: Fix Released → New
Revision history for this message
Scott James Remnant (Canonical) (canonical-scott) wrote :

Please don't simply reopen a bug without proving that it exists.

All you supplied is the output of running vol_id yourself on the command-line.

Please demonstrate that *udev* is running vol_id on the partitions.

Changed in udev:
status: New → Fix Released
Revision history for this message
Tormod Volden (tormodvolden) wrote :

Well, what I forgot to point out is that /dev/sda1, /dev/sda2, etc. still exist. They should not exist, right? With my vol_id run above I just wanted to show that the parent device is a recognised raid raw device.

You mentioned earlier having dropped a patch, which my patch in comment 48 would just reverse. However, when I look through /lib/udev/rules.d I can't find anything dealing with raid members.

I don't know if the problem really is "that *udev* is running vol_id on the partitions", but the result is that the raw device "partitions" appear.

Revision history for this message
Tormod Volden (tormodvolden) wrote :

Here is the udev log. Since the /dev/sdaX entries contain ID_FS_UUID, I suppose this means udev is running vol_id on these partitions?

Revision history for this message
Tormod Volden (tormodvolden) wrote :

However, today I see no /dev/sdaX (I have updated the live CD). They show up in Ubiquity though; I guess partman does its own scan for partitions. So although the /dev/sdaX entries show up in the udev log, they are not created, and everything seems fine now.

Revision history for this message
Scott James Remnant (Canonical) (canonical-scott) wrote : Re: [Bug 136804] Re: vol_id run for partitions on RAID disks

On Mon, 2009-03-16 at 09:00 +0000, Tormod Volden wrote:

> However, today I see no /dev/sdaX (I have updated the live CD). They
> show up in ubiquity though, I guess partman does its own scan for
> partitions. So although the /dev/sdaX show up in the udev log they are
> not created and everything seems fine now.
>
Glad to hear everything's working now :)

Scott
--
Scott James Remnant
<email address hidden>
