raid5 always assembled in degraded mode after boot

Bug #104251 reported by johnny
Affects: mdadm (Ubuntu)
Status: New
Importance: Undecided
Assigned to: Unassigned
Milestone: (none)

Bug Description

Binary package hint: mdadm

I have a raid5 array in which one of the components is a linear-raid device. (The boot/root partitions are not on the raid arrays; I do have LVM on top of the raid5 array, but I don't think that's relevant to the problem.)

After each boot, the raid5 device (md0) is assembled before (and without) the linear md1 device, in degraded mode:

# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4] [linear]
md0 : active raid5 sda1[0] sdb1[1]
      976767744 blocks level 5, 128k chunk, algorithm 2 [3/2] [UU_]

md1 : active linear sdc1[0] hda4[2] hdb1[1]
      496167232 blocks 64k rounding

unused devices: <none>
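The nesting itself looks correct; it can be double-checked from the md superblocks with something like the following (standard mdadm queries, output omitted here):

# mdadm --detail /dev/md0     # member list; the third slot shows as removed while degraded
# mdadm --examine /dev/md1    # reads the raid5 superblock stored on md1, i.e. md1 really is a member of md0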

# grep md /var/log/syslog
[...]
Apr 7 13:27:43 moya kernel: [ 45.281163] md: md1 stopped.
Apr 7 13:27:43 moya kernel: [ 45.506045] md: md0 stopped.
Apr 7 13:27:43 moya kernel: [ 45.738657] md: bind<sdb1>
Apr 7 13:27:43 moya kernel: [ 45.738863] md: bind<sda1>
Apr 7 13:27:43 moya kernel: [ 46.371805] md: raid6 personality registered for level 6
Apr 7 13:27:43 moya kernel: [ 46.371807] md: raid5 personality registered for level 5
Apr 7 13:27:43 moya kernel: [ 46.371809] md: raid4 personality registered for level 4
Apr 7 13:27:43 moya kernel: [ 46.376358] raid5: allocated 3163kB for md0
Apr 7 13:27:43 moya kernel: [ 46.376362] raid5: raid level 5 set md0 active with 2 out of 3 devices, algorithm 2
Apr 7 13:27:43 moya kernel: [ 46.497668] md: bind<hdb1>
Apr 7 13:27:43 moya kernel: [ 46.497859] md: bind<hda4>
Apr 7 13:27:43 moya kernel: [ 46.498035] md: bind<sdc1>
Apr 7 13:27:43 moya kernel: [ 46.575570] md: linear personality registered for level -1
Apr 7 13:27:55 moya mdadm: DegradedArray event detected on md device /dev/md0
Apr 7 13:27:55 moya mdadm: DeviceDisappeared event detected on md device /dev/md1, component device Wrong-Level

(Judging from the last line above, md1 should not have been assembled, right? However, it's up and running OK at the end of the boot process, and all of its component devices seem to have been detected 7 seconds before that log entry.)
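Once a rebuild has completed, the ordering theory could be tested by hand, assuming nothing on the LVM volumes is mounted so the arrays can be taken down:

# mdadm --stop /dev/md0       # md0 has to go first, since md1 is one of its members
# mdadm --stop /dev/md1
# mdadm --assemble /dev/md1   # bring the linear array up first this time
# mdadm --assemble /dev/md0   # md0 should now come up with all 3 members

If md0 comes up clean in that order, the problem is purely the order in which the boot scripts assemble the arrays.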

I then have to manually add md1 back to the md0 array:

# mdadm /dev/md0 --add /dev/md1
mdadm: re-added /dev/md1

# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4] [linear]
md0 : active raid5 md1[3] sda1[0] sdb1[1]
      976767744 blocks level 5, 128k chunk, algorithm 2 [3/2] [UU_]
      [>....................] recovery = 0.0% (57512/488383872) finish=282.5min speed=28756K/sec

md1 : active linear sdc1[0] hda4[2] hdb1[1]
      496167232 blocks 64k rounding

unused devices: <none>

# grep md /var/log/syslog
Apr 7 13:30:11 moya kernel: [ 202.481493] md: bind<md1>
Apr 7 13:30:11 moya kernel: [ 202.497434] disk 2, o:1, dev:md1
Apr 7 13:30:11 moya kernel: [ 202.498680] md: recovery of RAID array md0
Apr 7 13:30:11 moya kernel: [ 202.498686] md: minimum _guaranteed_ speed: 1000 KB/sec/disk.
Apr 7 13:30:11 moya kernel: [ 202.498690] md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for recovery.
Apr 7 13:30:11 moya kernel: [ 202.498696] md: using 128k window, over a total of 488383872 blocks.
Apr 7 13:30:11 moya mdadm: RebuildStarted event detected on md device /dev/md0

Five hours later the array rebuild completes, but the same thing happens on the next boot.
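The full rebuild after every boot could probably be made much cheaper with a write-intent bitmap, so that re-adding md1 only resyncs the regions written while it was missing; this is not a fix for the assembly order, and it assumes this kernel/mdadm combination supports adding an internal bitmap to a running array:

# mdadm --grow /dev/md0 --bitmap=internal     # add a write-intent bitmap once
# mdadm /dev/md0 --re-add /dev/md1            # later re-adds then resync only the dirty regions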

System details:

Ubuntu Feisty, up-to-date as of today (Apr 7).

# uname -a
Linux moya 2.6.20-14-generic #2 SMP Mon Apr 2 20:37:49 UTC 2007 i686 GNU/Linux

mdadm 2.5.6-7ubuntu5

# grep -v '^#' /etc/mdadm/mdadm.conf
DEVICE /dev/md1
DEVICE partitions
CREATE owner=root group=disk mode=0660 auto=yes
HOMEHOST <system>
ARRAY /dev/md1 level=linear num-devices=3 UUID=918d69f3:89dfa17d:3c3617d1:d2f62254
ARRAY /dev/md0 level=raid5 num-devices=3 UUID=465151a3:e6e82ccb:3c3617d1:d2f62254

I added the 'DEVICE /dev/md1' line to the file myself, but it didn't make a difference.
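Since the arrays are assembled from the initramfs at boot, one other thing worth checking is whether the initramfs still carries an old copy of mdadm.conf; something along these lines should rule that out (standard Ubuntu initramfs tooling assumed):

# zcat /boot/initrd.img-$(uname -r) | cpio -it 2>/dev/null | grep mdadm.conf   # is the conf inside the initramfs?
# update-initramfs -u                                                          # regenerate it so the edited conf is used at boot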

Sami J. Laine (sjlain) wrote:

This seems to affect raid1 as well.

I have several partitions (swap, /, /home and some others) on raid1 mirrors, each with 3 disks. I do not use LVM at all. However, on boot every raid1 device is assembled in degraded mode with only 1 out of 3 mirrors active.

After manually re-syncing all the mirrors, they are assembled in degraded mode again on the next reboot.
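For what it's worth, the manual recovery on my side is the same kind of re-add, one missing mirror at a time (device names below are placeholders, not my real ones):

# mdadm /dev/md0 --add /dev/sdb1    # hypothetical component device
# mdadm /dev/md0 --add /dev/sdc1    # hypothetical component device
# cat /proc/mdstat                  # watch the resync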
