RAID1 array containing a RAID0 array as a member is started in degraded mode upon reboot

Bug #1217344 reported by Rune K. Svendsen
This bug affects 1 person
Affects: mdadm (Ubuntu)
Status: New
Importance: Undecided
Assigned to: Unassigned

Bug Description

For some reason, a RAID1 array that uses a RAID0 array as one of its members does not assemble completely after a reboot: the RAID0 member is left out, and the mirror starts in degraded mode.

Before rebooting, when everything is running fine, this is what /proc/mdstat looks like:

    rune@rune-desktop:~$ cat /proc/mdstat
    Personalities : [raid0] [linear] [multipath] [raid1] [raid6] [raid5] [raid4] [raid10]
    md0 : active raid0 sdd1[1] sdc1[0]
          1953522688 blocks super 1.2 512k chunks

    md2 : active raid1 md0[2] sdb1[0]
          1953382488 blocks super 1.2 [2/2] [UU]

    unused devices: <none>

After rebooting it looks like this:

    rune@rune-desktop:~$ cat /proc/mdstat
    Personalities : [raid0] [linear] [multipath] [raid1] [raid6] [raid5] [raid4] [raid10]
    md0 : active raid0 sdd1[1] sdc1[0]
          1953522688 blocks super 1.2 512k chunks

    md2 : active (auto-read-only) raid1 sdb1[0]
          1953382488 blocks super 1.2 [2/1] [U_]

    unused devices: <none>

The array can be repaired manually by re-adding the missing member:

    sudo mdadm /dev/md2 --add /dev/md0

But I'd rather not have to do this upon every reboot.
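Until the underlying assembly-ordering problem is fixed, the manual re-add can be automated at boot. A minimal sketch, assuming something like /etc/rc.local (or an equivalent late-boot hook) still runs on this system; the check simply looks for a mirror with a missing member (`[U_]`) in mdstat before re-adding:

```shell
#!/bin/sh
# Boot-time workaround sketch (assumption: run late in boot, e.g. from
# /etc/rc.local). If mdstat shows a mirror with a missing member ([U_]),
# re-add the RAID0 device to the RAID1 array, mirroring the manual fix.
MDSTAT="${MDSTAT:-/proc/mdstat}"   # overridable so the check can be exercised on a sample file
if grep -q '\[U_\]' "$MDSTAT" 2>/dev/null; then
    mdadm /dev/md2 --add /dev/md0
fi
exit 0
```

This is a band-aid rather than a fix: it re-triggers a full resync of md2 on every boot where the member went missing.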

ProblemType: Bug
DistroRelease: Ubuntu 13.04
Package: mdadm 3.2.5-5ubuntu2
ProcVersionSignature: Ubuntu 3.8.0-29.42-generic 3.8.13.5
Uname: Linux 3.8.0-29-generic x86_64
ApportVersion: 2.9.2-0ubuntu8.3
Architecture: amd64
Date: Tue Aug 27 14:30:03 2013
EcryptfsInUse: Yes
InstallationDate: Installed on 2012-06-09 (443 days ago)
InstallationMedia: Ubuntu 12.04 LTS "Precise Pangolin" - Release amd64 (20120425)
MDadmExamine.dev.sda:
 /dev/sda:
    MBR Magic : aa55
 Partition[0] : 41943040 sectors at 1024 (type 83)
 Partition[1] : 110162944 sectors at 41944064 (type 83)
 Partition[2] : 4194304 sectors at 152107008 (type 82)
MDadmExamine.dev.sda1: Error: command ['/sbin/mdadm', '-E', '/dev/sda1'] failed with exit code 1: mdadm: No md superblock detected on /dev/sda1.
MDadmExamine.dev.sda2: Error: command ['/sbin/mdadm', '-E', '/dev/sda2'] failed with exit code 1: mdadm: No md superblock detected on /dev/sda2.
MDadmExamine.dev.sda3: Error: command ['/sbin/mdadm', '-E', '/dev/sda3'] failed with exit code 1: mdadm: No md superblock detected on /dev/sda3.
MDadmExamine.dev.sdb:
 /dev/sdb:
    MBR Magic : aa55
 Partition[0] : 3907027120 sectors at 2048 (type 83)
MDadmExamine.dev.sdc:
 /dev/sdc:
    MBR Magic : aa55
 Partition[0] : 1953523120 sectors at 2048 (type 83)
MDadmExamine.dev.sdd:
 /dev/sdd:
    MBR Magic : aa55
 Partition[0] : 1953523120 sectors at 2048 (type 83)
MDadmExamine.dev.sde:
 /dev/sde:
    MBR Magic : aa55
 Partition[0] : 500118129 sectors at 63 (type 83)
MDadmExamine.dev.sde1: Error: command ['/sbin/mdadm', '-E', '/dev/sde1'] failed with exit code 1: mdadm: No md superblock detected on /dev/sde1.
MachineType: . .
MarkForUpload: True
ProcEnviron:
 LANGUAGE=en_US:en
 TERM=xterm
 PATH=(custom, no user)
 LANG=en_US.UTF-8
 SHELL=/bin/bash
ProcKernelCmdLine: BOOT_IMAGE=/boot/vmlinuz-3.8.0-29-generic root=UUID=53d93e82-bd1a-4b17-b657-d8694c5adb68 ro quiet splash vt.handoff=7
SourcePackage: mdadm
UpgradeStatus: Upgraded to raring on 2013-05-03 (115 days ago)
dmi.bios.date: 05/26/2008
dmi.bios.vendor: Phoenix Technologies, LTD
dmi.bios.version: 6.00 PG
dmi.board.name: IP35 Pro XE(Intel P35-ICH9R)
dmi.board.vendor: http://www.abit.com.tw/
dmi.board.version: 1.0
dmi.chassis.type: 3
dmi.chassis.vendor: System Enclosure Manufacter
dmi.chassis.version: OEM
dmi.modalias: dmi:bvnPhoenixTechnologies,LTD:bvr6.00PG:bd05/26/2008:svn.:pn.:pvrSystemVersion:rvnhttp//www.abit.com.tw/:rnIP35ProXE(IntelP35-ICH9R):rvr1.0:cvnSystemEnclosureManufacter:ct3:cvrOEM:
dmi.product.name: .
dmi.product.version: System Version
dmi.sys.vendor: .

Revision history for this message
Rune K. Svendsen (runeks) wrote :

Here are some relevant lines from syslog:

    Aug 27 14:27:23 rune-desktop kernel: [ 1.943848] md: bind<sdb1>
    Aug 27 14:27:23 rune-desktop kernel: [ 1.958267] md: bind<sdc1>
    Aug 27 14:27:23 rune-desktop kernel: [ 2.036123] md: bind<sdd1>
    Aug 27 14:27:23 rune-desktop kernel: [ 2.037575] md: raid0 personality registered for level 0
    Aug 27 14:27:23 rune-desktop kernel: [ 2.037679] md/raid0:md0: md_size is 3907045376 sectors.
    Aug 27 14:27:23 rune-desktop kernel: [ 2.037682] md: RAID0 configuration for md0 - 1 zone
    Aug 27 14:27:23 rune-desktop kernel: [ 2.037683] md: zone0=[sdc1/sdd1]
    Aug 27 14:27:23 rune-desktop kernel: [ 2.037692] md0: detected capacity change from 0 to 2000407232512
    Aug 27 14:27:23 rune-desktop kernel: [ 2.038966] md0: unknown partition table
    Aug 27 14:27:23 rune-desktop kernel: [ 2.231766] md: linear personality registered for level -1
    Aug 27 14:27:23 rune-desktop kernel: [ 2.233460] md: multipath personality registered for level -4
    Aug 27 14:27:23 rune-desktop kernel: [ 2.236043] md: raid1 personality registered for level 1
    Aug 27 14:27:23 rune-desktop kernel: [ 2.480805] md: raid6 personality registered for level 6
    Aug 27 14:27:23 rune-desktop kernel: [ 2.480807] md: raid5 personality registered for level 5
    Aug 27 14:27:23 rune-desktop kernel: [ 2.480808] md: raid4 personality registered for level 4
    Aug 27 14:27:23 rune-desktop kernel: [ 2.484824] md: raid10 personality registered for level 10
    Aug 27 14:27:23 rune-desktop kernel: [ 2.523690] md/raid1:md2: active with 1 out of 2 mirrors
    Aug 27 14:27:23 rune-desktop kernel: [ 2.523720] md2: detected capacity change from 0 to 2000263667712
    Aug 27 14:27:23 rune-desktop kernel: [ 2.552606] md2: unknown partition table
    Aug 27 14:27:24 rune-desktop mdadm[1706]: DegradedArray event detected on md device /dev/md2
    Aug 27 14:27:24 rune-desktop mdadm[1706]: DeviceDisappeared event detected on md device /dev/md0, component device Wrong-Level
    Aug 27 14:27:25 rune-desktop dbus[717]: [system] Activating service name='org.freedesktop.systemd1' (using servicehelper)
    Aug 27 14:27:25 rune-desktop dbus[717]: [system] Successfully activated service 'org.freedesktop.systemd1'
    Aug 27 14:27:30 rune-desktop udisksd[2754]: Error creating watch for file /sys/devices/virtual/block/md0/md/sync_action: No such file or directory (g-file-error-quark, 4)
    Aug 27 14:27:30 rune-desktop udisksd[2754]: Error creating watch for file /sys/devices/virtual/block/md0/md/degraded: No such file or directory (g-file-error-quark, 4)
    Aug 27 14:33:27 rune-desktop kernel: [ 373.794337] md: bind<md0>
    Aug 27 14:33:27 rune-desktop kernel: [ 373.832043] disk 1, wo:1, o:1, dev:md0
    Aug 27 14:33:27 rune-desktop kernel: [ 373.832103] md: recovery of RAID array md2
    Aug 27 14:33:27 rune-desktop kernel: [ 373.832108] md: minimum _guaranteed_ speed: 1000 KB/sec/disk.
    Aug 27 14:33:27 rune-desktop kernel: [ 373.832110] md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for recovery.
    Aug 27 14:33:27 rune-desktop kernel: [ 373....
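The "DeviceDisappeared ... Wrong-Level" event above may indicate that mdadm's early-boot view of md0 does not match its on-disk RAID level, which can happen when /etc/mdadm/mdadm.conf (and the copy baked into the initramfs) is stale or incomplete. One thing worth checking is that both arrays are declared there; the UUIDs below are placeholders, not the real values:

    # /etc/mdadm/mdadm.conf fragment (UUIDs are placeholders for illustration)
    ARRAY /dev/md0 metadata=1.2 level=raid0 num-devices=2 UUID=xxxxxxxx:xxxxxxxx:xxxxxxxx:xxxxxxxx
    ARRAY /dev/md2 metadata=1.2 level=raid1 num-devices=2 UUID=yyyyyyyy:yyyyyyyy:yyyyyyyy:yyyyyyyy

`sudo mdadm --detail --scan` prints the actual ARRAY lines for the running arrays, and `sudo update-initramfs -u` refreshes the early-boot copy of the file afterwards.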

