raid degraded after update to Karmic beta

Bug #449876 reported by Jyrki Pulliainen
This bug affects 5 people
Affects: dmraid (Ubuntu)
Status: Invalid
Importance: Medium
Assigned to: Unassigned

Bug Description

After upgrading to Karmic Beta, udev no longer finds /dev/sda1, /dev/sda2, /dev/sdb1 or /dev/sdb2; only the devices /dev/sda and /dev/sdb are visible. This also causes one RAID array (/dev/md0) to disappear and /dev/md1 to become degraded.

However, since the system files are on /dev/md1, the system still boots; that array is merely degraded, while /dev/md0 does not come up at all. /dev/md1 should be assembled from /dev/sda2 and /dev/sdb2.

The problem seems to be with udev, since the kernel sees the missing partitions at boot (according to dmesg):
[ 1.890383] sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
[ 1.895670] sda:
[ 1.903143] sda1 sda2
...
[ 1.896009] sd 1:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
[ 1.896106] sdb:
[ 1.896285] sdc: sdb1 sdb2

Those partitions are also visible with fdisk -l (output attached)
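
A quick way to confirm this split between the kernel's view and udev's (commands are illustrative; device names taken from this report):

cat /proc/partitions   # kernel view: sda1, sda2, sdb1 and sdb2 are listed
ls /dev/sd*            # udev view: only /dev/sda and /dev/sdb exist
cat /proc/mdstat       # md0 missing, md1 running degraded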

Revision history for this message
Jyrki Pulliainen (jyrki-pulliainen) wrote :

Change to correct Project

affects: ubuntu-on-ec2 → udev
Revision history for this message
Jyrki Pulliainen (jyrki-pulliainen) wrote :

Version of udev is 147~-5

Revision history for this message
robegue (r087r70) wrote :

I'm also encountering this bug after upgrading to karmic (2.6.31-13).
I can still boot using an older kernel (2.6.27-14).

/proc/mdstat says the arrays are inactive, and 'mdadm -A -s' in the busybox shell doesn't work. If I stop md0 with 'mdadm -S /dev/md0' and then reassemble it, it comes up with a single drive instead of two.

In summary, the array is marked as degraded even though it is not.
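
For illustration, the sequence described above looks roughly like this from the busybox/initramfs prompt (the member partitions shown are placeholders; check blkid or mdadm -E for the real ones):

cat /proc/mdstat                        # arrays listed as inactive
mdadm -S /dev/md0                       # stop the half-assembled array
mdadm -A -s                             # retry scan-based assembly
mdadm -A /dev/md0 /dev/sda2 /dev/sdb2   # or assemble explicitly from its members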

Revision history for this message
Scott James Remnant (Canonical) (canonical-scott) wrote :

Could you run "apport-collect 449876" and also provide the output of "blkid" Thanks

Changed in udev (Ubuntu):
status: New → Incomplete
importance: Undecided → Medium
affects: udev → null
Revision history for this message
robegue (r087r70) wrote : apport-collect data

Architecture: amd64
CustomUdevRuleFiles: 45-libnjb5.rules 10-vboxdrv.rules 65-libmtp.rules 85-pcmcia.rules 45-libmtp7.rules 025_logitechmouse.rules 40-permissions.rules.dpkg-old 50-virtualbox-ose.rules 50-libpisock9.rules 50-xserver-xorg-input-wacom.rules
DistroRelease: Ubuntu 9.10
Lsusb:
 Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
 Bus 001 Device 004: ID 046d:c016 Logitech, Inc. M-UV69a/HP M-UV96 Optical Wheel Mouse
 Bus 001 Device 003: ID 413c:2005 Dell Computer Corp. RT7D50 Keyboard
 Bus 001 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
MachineType: System manufacturer System Product Name
NonfreeKernelModules: nvidia
Package: udev 147~-5
PackageArchitecture: amd64
ProcCmdLine: root=/dev/md0 ro
ProcEnviron:
 SHELL=/bin/bash
 LC_NUMERIC=C
 PATH=(custom, user)
 LANG=it_IT.UTF-8
 LANGUAGE=
ProcVersionSignature: Ubuntu 2.6.27-14.38-generic
Uname: Linux 2.6.27-14-generic x86_64
UserGroups: adm admin audio avahi avahi-autoipd backup bin cdrom clamav crontab dhcp dialout dictd dip disk fax floppy fuse haldaemon klog lp lpadmin mail messagebus netdev news nvram plugdev powerdev sambashare scanner src ssh sudo sys syslog tape tty users utmp uucp vboxusers video voice www-data
dmi.bios.date: 09/25/2006
dmi.bios.vendor: Phoenix Technologies, LTD
dmi.bios.version: ASUS M2N-E ACPI BIOS Revision 0402
dmi.board.name: M2N-E
dmi.board.vendor: ASUSTeK Computer INC.
dmi.board.version: 1.XX
dmi.chassis.asset.tag: 123456789000
dmi.chassis.type: 3
dmi.chassis.vendor: Chassis Manufacture
dmi.chassis.version: Chassis Version
dmi.modalias: dmi:bvnPhoenixTechnologies,LTD:bvrASUSM2N-EACPIBIOSRevision0402:bd09/25/2006:svnSystemmanufacturer:pnSystemProductName:pvrSystemVersion:rvnASUSTeKComputerINC.:rnM2N-E:rvr1.XX:cvnChassisManufacture:ct3:cvrChassisVersion:
dmi.product.name: System Product Name
dmi.product.version: System Version
dmi.sys.vendor: System manufacturer

Revision history for this message
robegue (r087r70) wrote : BootDmesg.txt
Revision history for this message
robegue (r087r70) wrote : CurrentDmesg.txt
Revision history for this message
robegue (r087r70) wrote : Dependencies.txt
Revision history for this message
robegue (r087r70) wrote : Lspci.txt
Revision history for this message
robegue (r087r70) wrote : ProcCpuinfo.txt
Revision history for this message
robegue (r087r70) wrote : ProcInterrupts.txt
Revision history for this message
robegue (r087r70) wrote : ProcModules.txt
Revision history for this message
robegue (r087r70) wrote : UdevDb.txt
Revision history for this message
robegue (r087r70) wrote : UdevLog.gz
Revision history for this message
robegue (r087r70) wrote : XsessionErrors.txt
Changed in udev (Ubuntu):
status: Incomplete → New
tags: added: apport-collected
Revision history for this message
robegue (r087r70) wrote : Re: udev causes raid to degrade after update to Karmic beta

blkid output:

/dev/sda1: UUID="7ce8485c-8898-4850-5671-009ebb5df50a" TYPE="linux_raid_member"
/dev/sda2: UUID="4f0243d1-0183-5e33-a49e-0bc1f24d2d5a" TYPE="linux_raid_member"
/dev/sda5: UUID="a3d5f583-4764-d3b5-fadd-18e4d3a667fa" TYPE="linux_raid_member"
/dev/sda6: UUID="c478e150-9856-9965-1d58-0bf658a6e076" TYPE="linux_raid_member"
/dev/sdb1: UUID="7ce8485c-8898-4850-5671-009ebb5df50a" TYPE="linux_raid_member"
/dev/sdb2: UUID="4f0243d1-0183-5e33-a49e-0bc1f24d2d5a" TYPE="linux_raid_member"
/dev/sdb5: UUID="a3d5f583-4764-d3b5-fadd-18e4d3a667fa" TYPE="linux_raid_member"
/dev/sdb6: UUID="c478e150-9856-9965-1d58-0bf658a6e076" TYPE="linux_raid_member"
/dev/md0: UUID="3e651827-866a-44b5-922a-94de001742ae" TYPE="ext3"
/dev/md1: UUID="7e5223ba-dcb9-4a99-929f-98ecc9e6c889" TYPE="ext3"
/dev/md2: UUID="5ecad822-9d68-47a5-9033-6cecb7a23da3" TYPE="xfs"

Revision history for this message
robegue (r087r70) wrote :

#cat /etc/fstab

proc /proc proc defaults 0 0
usbfs /proc/bus/usb usbfs defaults,devmode=666 0 0
/dev/md0 / ext3 noatime,errors=remount-ro 0 1
/dev/md1 /home ext3 noatime 0 2
/dev/md2 /scratch xfs noatime 0 2
/dev/mapper/md4 /crypted ext3 noatime,noauto,users 0 0
/dev/cdrom /media/cdrom udf,iso9660 user,noauto 0 0

Revision history for this message
Scott James Remnant (Canonical) (canonical-scott) wrote :

From your collected data:

UDEV [1255421703.174825] add /devices/pci0000:00/0000:00:05.0/host0/target0:0:0/0:0:0:0/block/sda/sda1 (block)
UDEV_LOG=3
ACTION=add
DEVPATH=/devices/pci0000:00/0000:00:05.0/host0/target0:0:0/0:0:0:0/block/sda/sda1
SUBSYSTEM=block
DEVTYPE=partition
SEQNUM=2041
ID_TYPE=disk
ID_BUS=ata
ID_MODEL=MAXTOR_STM3320820AS
ID_MODEL_ENC=MAXTOR\x20STM3320820AS\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20
ID_REVISION=3.AAE
ID_SERIAL=MAXTOR_STM3320820AS_6QF086N1
ID_SERIAL_SHORT=6QF086N1
ID_SCSI_COMPAT=SATA_MAXTOR_STM33208_6QF086N1
ID_PATH=pci-0000:00:05.0-scsi-0:0:0:0
ID_FS_VERSION=0.90.0
ID_FS_TYPE=linux_raid_member
ID_FS_USAGE=raid
ID_FS_UUID=7ce8485c-8898-4850-5671-009ebb5df50a
ID_FS_UUID_ENC=7ce8485c-8898-4850-5671-009ebb5df50a
DKD_MEDIA_AVAILABLE=1
MD_LEVEL=raid0
MD_DEVICES=2
MD_UUID=7ce8485c:88984850:5671009e:bb5df50a
MD_UPDATE_TIME=1213353077
MD_EVENTS=1
DKD_PRESENTATION_NOPOLICY=0
DEVNAME=/dev/sda1
MAJOR=8
MINOR=1
DEVLINKS=/dev/block/8:1 /dev/disk/by-id/ata-MAXTOR_STM3320820AS_6QF086N1-part1 /dev/disk/by-id/scsi-SATA_MAXTOR_STM33208_6QF086N1-part1 /dev/disk/by-path/pci-0000:00:05.0-scsi-0:0:0:0-part1

UDEV [1255421703.178081] add /devices/pci0000:00/0000:00:05.0/host0/target0:0:0/0:0:0:0/block/sda/sda2 (block)
UDEV_LOG=3
ACTION=add
DEVPATH=/devices/pci0000:00/0000:00:05.0/host0/target0:0:0/0:0:0:0/block/sda/sda2
SUBSYSTEM=block
DEVTYPE=partition
SEQNUM=2042
ID_TYPE=disk
ID_BUS=ata
ID_MODEL=MAXTOR_STM3320820AS
ID_MODEL_ENC=MAXTOR\x20STM3320820AS\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20
ID_REVISION=3.AAE
ID_SERIAL=MAXTOR_STM3320820AS_6QF086N1
ID_SERIAL_SHORT=6QF086N1
ID_SCSI_COMPAT=SATA_MAXTOR_STM33208_6QF086N1
ID_PATH=pci-0000:00:05.0-scsi-0:0:0:0
ID_FS_VERSION=0.90.0
ID_FS_TYPE=linux_raid_member
ID_FS_USAGE=raid
ID_FS_UUID=4f0243d1-0183-5e33-a49e-0bc1f24d2d5a
ID_FS_UUID_ENC=4f0243d1-0183-5e33-a49e-0bc1f24d2d5a
DKD_MEDIA_AVAILABLE=1
MD_LEVEL=raid1
MD_DEVICES=2
MD_UUID=4f0243d1:01835e33:a49e0bc1:f24d2d5a
MD_UPDATE_TIME=1177080331
MD_EVENTS=160863
DKD_PRESENTATION_NOPOLICY=0
DEVNAME=/dev/sda2
MAJOR=8
MINOR=2
DEVLINKS=/dev/block/8:2 /dev/disk/by-id/ata-MAXTOR_STM3320820AS_6QF086N1-part2 /dev/disk/by-id/scsi-SATA_MAXTOR_STM33208_6QF086N1-part2 /dev/disk/by-path/pci-0000:00:05.0-scsi-0:0:0:0-part2

UDEV [1255421703.633748] add /devices/pci0000:00/0000:00:05.0/host1/target1:0:0/1:0:0:0/block/sdb/sdb1 (block)
UDEV_LOG=3
ACTION=add
DEVPATH=/devices/pci0000:00/0000:00:05.0/host1/target1:0:0/1:0:0:0/block/sdb/sdb1
SUBSYSTEM=block
DEVTYPE=partition
SEQNUM=2054
ID_TYPE=disk
ID_BUS=ata
ID_MODEL=MAXTOR_STM3320820AS
ID_MODEL_ENC=MAXTOR\x20STM3320820AS\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20
ID_REVISION=3.AAE
ID_SERIAL=MAXTOR_STM3320820AS_6QF086NY
ID_SERIAL_SHORT=6QF086NY
ID_SCSI_COMPAT=SATA_MAXTOR_STM33208_6QF086NY
ID_PATH=pci-0000:00:05.0-scsi-1:0:0:0
ID_FS_VERSION=0.90.0
ID_FS_TYPE=linux_raid_member
ID_FS_USAGE=raid
ID_FS_UUID=7ce8485c-8898-4850-5671-009ebb5df50a
ID_FS_UUID_ENC=7ce8485c-8898-4850-5671-009ebb5df50a
DKD_MEDIA_AVAILABLE=1
MD_LEVEL=raid0
MD_DEVICES=2
MD_UUID=7ce8485c:88984850:5671...


Revision history for this message
Scott James Remnant (Canonical) (canonical-scott) wrote :

Also

UDEV [1255421699.902552] add /devices/virtual/block/md0 (block)
UDEV_LOG=3
ACTION=add
DEVPATH=/devices/virtual/block/md0
SUBSYSTEM=block
DEVTYPE=disk
SEQNUM=2781
MD_LEVEL=raid1
MD_DEVICES=2
MD_METADATA=00.90
MD_UUID=4f0243d1:01835e33:a49e0bc1:f24d2d5a
ID_FS_UUID=3e651827-866a-44b5-922a-94de001742ae
ID_FS_UUID_ENC=3e651827-866a-44b5-922a-94de001742ae
ID_FS_SEC_TYPE=ext2
ID_FS_VERSION=1.0
ID_FS_TYPE=ext3
ID_FS_USAGE=filesystem
DKD_MEDIA_AVAILABLE=1
DKD_PRESENTATION_NOPOLICY=1
DEVNAME=/dev/md0
MAJOR=9
MINOR=0
DEVLINKS=/dev/block/9:0 /dev/disk/by-id/md-uuid-4f0243d1:01835e33:a49e0bc1:f24d2d5a /dev/disk/by-uuid/3e651827-866a-44b5-922a-94de001742ae

UDEV [1255421701.429721] change /devices/virtual/block/md1 (block)
UDEV_LOG=3
ACTION=change
DEVPATH=/devices/virtual/block/md1
SUBSYSTEM=block
DEVTYPE=disk
SEQNUM=2807
MD_LEVEL=raid1
MD_DEVICES=2
MD_METADATA=00.90
MD_UUID=a3d5f583:4764d3b5:fadd18e4:d3a667fa
ID_FS_UUID=7e5223ba-dcb9-4a99-929f-98ecc9e6c889
ID_FS_UUID_ENC=7e5223ba-dcb9-4a99-929f-98ecc9e6c889
ID_FS_SEC_TYPE=ext2
ID_FS_VERSION=1.0
ID_FS_TYPE=ext3
ID_FS_USAGE=filesystem
FSTAB_NAME=/dev/md1
FSTAB_DIR=/home
FSTAB_TYPE=ext3
FSTAB_OPTS=noatime
FSTAB_FREQ=0
FSTAB_PASSNO=2
DKD_MEDIA_AVAILABLE=1
DKD_PRESENTATION_NOPOLICY=1
DEVNAME=/dev/md1
MAJOR=9
MINOR=1
DEVLINKS=/dev/block/9:1 /dev/disk/by-id/md-uuid-a3d5f583:4764d3b5:fadd18e4:d3a667fa /dev/disk/by-uuid/7e5223ba-dcb9-4a99-929f-98ecc9e6c889

UDEV [1255421699.949313] add /devices/virtual/block/md2 (block)
UDEV_LOG=3
ACTION=add
DEVPATH=/devices/virtual/block/md2
SUBSYSTEM=block
DEVTYPE=disk
SEQNUM=2783
MD_LEVEL=raid0
MD_DEVICES=2
MD_METADATA=00.90
MD_UUID=c478e150:98569965:1d580bf6:58a6e076
ID_FS_UUID=5ecad822-9d68-47a5-9033-6cecb7a23da3
ID_FS_UUID_ENC=5ecad822-9d68-47a5-9033-6cecb7a23da3
ID_FS_TYPE=xfs
ID_FS_USAGE=filesystem
FSTAB_NAME=/dev/md2
FSTAB_DIR=/scratch
FSTAB_TYPE=xfs
FSTAB_OPTS=noatime
FSTAB_FREQ=0
FSTAB_PASSNO=2
DKD_MEDIA_AVAILABLE=1
DKD_PRESENTATION_NOPOLICY=1
DEVNAME=/dev/md2
MAJOR=9
MINOR=2
DEVLINKS=/dev/block/9:2 /dev/disk/by-id/md-uuid-c478e150:98569965:1d580bf6:58a6e076 /dev/disk/by-uuid/5ecad822-9d68-47a5-9033-6cecb7a23da3

Revision history for this message
Scott James Remnant (Canonical) (canonical-scott) wrote :

robegue: your collected data does not support your assertion; all your drives are detected and your RAID arrays are assembled and active. If you have a problem, it's with mdadm.

Revision history for this message
robegue (r087r70) wrote :

>robegue: your collected data does not support your assertion, all your drives
>are detected and your RAID arrays are assembled and active. If you
>have a problem, it's with mdadm

Of course my RAID arrays are assembled and active: as stated above, I'm using the 2.6.27 kernel with a working initrd image!
The problem is that I cannot use any newer kernel, because I cannot generate a new working initrd image!

Revision history for this message
Jyrki Pulliainen (jyrki-pulliainen) wrote : apport-collect data

Architecture: i386
CustomUdevRuleFiles: 86-hpmud-hp_laserjet_p1505.rules 45-libmtp7.rules 86-hpmud-hp_laserjet_p1006.rules 86-hpmud-hp_laserjet_p1008.rules 86-hpmud-hp_laserjet_1005_series.rules 86-hpmud-hp_laserjet_1018.rules 86-hpmud-hp_laserjet_1020.rules 45-libnjb5.rules 86-hpmud-hp_laserjet_1000.rules 86-hpmud-hp_laserjet_p1005.rules 86-hpmud-hp_laserjet_p1007.rules
DistroRelease: Ubuntu 9.10
MachineType: System manufacturer P5K PRO
NonfreeKernelModules: nvidia
Package: udev 147~-6
PackageArchitecture: i386
ProcCmdLine: root=/dev/md1 ro quiet splash
ProcEnviron:
 SHELL=/bin/bash
 LANG=en_DK.UTF-8
 LANGUAGE=fi_FI:fi:en_GB:en
ProcVersionSignature: Ubuntu 2.6.31-14.48-generic-pae
Uname: Linux 2.6.31-14-generic-pae i686
UserGroups:

dmi.bios.date: 04/18/2008
dmi.bios.vendor: American Megatrends Inc.
dmi.bios.version: 1002
dmi.board.asset.tag: To Be Filled By O.E.M.
dmi.board.name: P5K PRO
dmi.board.vendor: ASUSTeK Computer INC.
dmi.board.version: Rev 1.xx
dmi.chassis.asset.tag: Asset-1234567890
dmi.chassis.type: 3
dmi.chassis.vendor: Chassis Manufacture
dmi.chassis.version: Chassis Version
dmi.modalias: dmi:bvnAmericanMegatrendsInc.:bvr1002:bd04/18/2008:svnSystemmanufacturer:pnP5KPRO:pvrSystemVersion:rvnASUSTeKComputerINC.:rnP5KPRO:rvrRev1.xx:cvnChassisManufacture:ct3:cvrChassisVersion:
dmi.product.name: P5K PRO
dmi.product.version: System Version
dmi.sys.vendor: System manufacturer

Revision history for this message
Jyrki Pulliainen (jyrki-pulliainen) wrote : BootDmesg.txt
Revision history for this message
Jyrki Pulliainen (jyrki-pulliainen) wrote : CurrentDmesg.txt
Revision history for this message
Jyrki Pulliainen (jyrki-pulliainen) wrote : Dependencies.txt
Revision history for this message
Jyrki Pulliainen (jyrki-pulliainen) wrote : Lspci.txt
Revision history for this message
Jyrki Pulliainen (jyrki-pulliainen) wrote : Lsusb.txt
Revision history for this message
Jyrki Pulliainen (jyrki-pulliainen) wrote : ProcCpuinfo.txt
Revision history for this message
Jyrki Pulliainen (jyrki-pulliainen) wrote : ProcInterrupts.txt
Revision history for this message
Jyrki Pulliainen (jyrki-pulliainen) wrote : ProcModules.txt
Revision history for this message
Jyrki Pulliainen (jyrki-pulliainen) wrote : UdevDb.txt
Revision history for this message
Jyrki Pulliainen (jyrki-pulliainen) wrote : UdevLog.txt
Revision history for this message
Jyrki Pulliainen (jyrki-pulliainen) wrote : Re: udev causes raid to degrade after update to Karmic beta
Revision history for this message
Jyrki Pulliainen (jyrki-pulliainen) wrote :

Just a quick update: after the RC came out, the partitions are still missing.

Revision history for this message
Jyrki Pulliainen (jyrki-pulliainen) wrote :

The final release is out; still no change.

However, could this be related to the Intel fake RAID controller I have onboard? It's disabled in the BIOS (the SATA mode is AHCI, not RAID), but it's still worth noting.

Revision history for this message
Jyrki Pulliainen (jyrki-pulliainen) wrote :

Oh, and the motherboard chipset is Intel P35 (Asus P5K Pro)

Revision history for this message
robegue (r087r70) wrote :

I don't think it's related to the Intel controller, because I'm on an nvidia controller (disabled in the BIOS) and I see the same bug.

Revision history for this message
Jyrki Pulliainen (jyrki-pulliainen) wrote :

Ah, problem solved: the reason is that the partitions are treated as being controlled by the motherboard's fake RAID. Adding nodmraid to the boot options solves the problem.

However, this is still an issue: the motherboard RAID is disabled in the BIOS, so dmraid should not be claiming the disks in that case.
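
For reference, a rough sketch of how to add the option (which file applies depends on whether the machine uses GRUB legacy, typical for upgrades, or GRUB 2, typical for fresh Karmic installs):

# One-off test: at the boot menu, edit the kernel/linux line and append nodmraid.
# GRUB legacy: append nodmraid to the kernel line in /boot/grub/menu.lst, e.g.
#   kernel /boot/vmlinuz-2.6.31-14-generic-pae root=/dev/md1 ro quiet splash nodmraid
# GRUB 2: add it to GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub, then run:
#   sudo update-grub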

Revision history for this message
Scott James Remnant (Canonical) (canonical-scott) wrote :

Moving to dmraid given the "nodmraid" comment.

affects: udev (Ubuntu) → dmraid (Ubuntu)
summary: - udev causes raid to degrade after update to Karmic beta
+ raid degraded after update to Karmic beta
Revision history for this message
Tormod Volden (tormodvolden) wrote :

The fake RAID is on the disks, not on the motherboard. Linux/dmraid cannot tell whether you have disabled fakeraid support in the BIOS, but it does see the fakeraid signatures on the disks. If your BIOS does not have an option to delete the fakeraid configuration from the disks, use dmraid -E.

Revision history for this message
robegue (r087r70) wrote :

Using nodmraid in the boot options works.
However, I'm not sure how to use dmraid -E to solve the issue permanently: is it a safe operation? What is the full command line for /dev/md0?

Revision history for this message
Jyrki Pulliainen (jyrki-pulliainen) wrote :

Use dmraid -r to see if your system has any dmraid-controlled disks. Then, to remove the dmraid signatures, use dmraid -r /dev/sdX -E, where /dev/sdX is a disk displayed by the dmraid -r command. A safe bet is to cross-compare the devices listed by dmraid -r with the devices listed in /proc/mdstat; the ones in /proc/mdstat *probably* should not be controlled by dmraid.

However, please note that removing the signatures from the wrong disks might render your system unbootable, or might even result in data loss.
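
A compact illustration of that cross-check (device names are placeholders; take them from your own output):

cat /proc/mdstat             # members of the mdadm arrays (do not erase these)
sudo dmraid -r               # disks carrying fakeraid signatures
sudo dmraid -r -E /dev/sdX   # erase the fakeraid metadata from one listed disk (destructive, have backups)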

Revision history for this message
NaSH (lenashou) wrote :

It seems that you're right: it's an old signature on the disks. I was using some of these disks to test RAID two years ago.

sudo dmraid -r
/dev/sda: "sil" and "nvidia" formats discovered (using nvidia)!
/dev/sdc: nvidia, "nvidia_cbccehbc", stripe, ok, 488397166 sectors, data@ 0
/dev/sdb: nvidia, "nvidia_cbccehbc", stripe, ok, 488397166 sectors, data@ 0
/dev/sda: nvidia, "nvidia_fhjffiga", stripe, ok, 490234750 sectors, data@ 0

So I guess the signatures are on sda only? Or on sdb and sdc too?

If so, trying dmraid -r /dev/sda -E should solve the problem.

Revision history for this message
robegue (r087r70) wrote :

$ sudo dmraid -r
/dev/sdb: nvidia, "nvidia_cbajegji", mirror, ok, 625142446 sectors, data@ 0
/dev/sda: nvidia, "nvidia_cbajegji", mirror, ok, 625142446 sectors, data@ 0
$

Can I wait for a dmraid patch, or must I run dmraid -E on the disks? I'm afraid of losing data, since this is a production machine...

Revision history for this message
Tormod Volden (tormodvolden) wrote :

If you have stale fakeraid metadata on the disks, you _should_ remove it. Meanwhile you can use the nodmraid boot option to ignore all fakeraid signatures.

Revision history for this message
Danny Wood (danwood76) wrote :

There will not be a patch to dmraid, as this is not really a bug. This has been discussed before.

The issue is that the RAID metadata was not removed when the array was destroyed. Normally you can remove it without issue using dmraid -E, as the metadata usually lives in a region of the disk that you cannot access anyway.

It's also worth going through the BIOS route first to see if you can delete the array there; you may need to enable the onboard RAID to get into the nvidia / intel / whatever control panel to do it. That way dmraid won't delete anything that might be useful.

Revision history for this message
robegue (r087r70) wrote :

Thanks for the replies. It's still not clear to me why this issue does not appear when I boot with the 2.6.27-14 Jaunty kernel. Why do I need to pass the nodmraid option, or even modify the disks' metadata, with the 2.6.31 Karmic kernel, when I didn't need to with previous kernels?

Revision history for this message
Danny Wood (danwood76) wrote :

It's because dmraid now hides the underlying block devices of a RAID set. This was implemented to fix a whole host of issues with block devices and udev, and it makes the whole dmraid system a lot better.

This is why you are seeing problems, but the root cause is that you have RAID metadata on your disks. Just purge the metadata and dmraid will no longer hide your block devices. The data you lose is in the last blocks of the disk and is not used by anything else; the fact that dmraid can detect the metadata there means it can also delete it.

Revision history for this message
Michael Nagel (nailor) wrote :

closing task on NULL project

Changed in null:
status: New → Invalid
Revision history for this message
Eva Drud (eva-drud) wrote :

To me it's still not really clear why the installer (and gparted too, it seems) hides the devices, while I can access them via Nautilus in the live system. I did not try writing, though. This confused me a lot. Is there a reason for this behaviour?

Revision history for this message
Danny Wood (danwood76) wrote :

It hides them so that you cannot accidentally destroy your RAID array when installing with Ubiquity or using the system in general.
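
For reference, a couple of commands that show what dmraid has claimed, which is why the raw partitions disappear from the installer (purely illustrative):

sudo dmraid -s      # active/discovered fakeraid sets
ls /dev/mapper/     # device-mapper nodes that stand in for the hidden raw devices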

Phillip Susi (psusi)
Changed in dmraid (Ubuntu):
status: New → Invalid
Curtis Hovey (sinzui)
no longer affects: null