software RAID arrays fail to start on boot
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
mdadm (Ubuntu) | Invalid | Undecided | Unassigned |
Bug Description
Some software RAID arrays fail to start on boot. Exactly two of my arrays (but not always the same two!) do not start, on every single boot, and I have done 24 boots since I started taking detailed notes.
I have been running Ubuntu 12.04 with the latest updates. Two days ago I selectively upgraded mdadm to 3.2.5 from -proposed, as suggested in bug #942106; that upgrade helped some other people, but not me. Over the last few months, various kernel and mdadm updates have considerably reduced the symptoms, but there has been no complete cure so far.
Note that the following symptoms once regularly occurred on this system, but have NOT occurred in the past few weeks:
- Having to wait for a degraded array to resync
- Having to manually re-attach a component (usually a spare) that had become detached
- Having to drop to the command line to zero a superblock before reattaching a component
- Having an array containing swap fail to start
- Having to use anything other than Disk Utility to get arrays running properly again
This system has six SATA drives on two controllers. It contains seven RAID arrays, including RAID 1, RAID 10, and RAID 6; all are listed in fstab. Some use 0.90.0 metadata and some use 1.2 metadata. The root filesystem is not on a RAID array (at least not any more; I got tired of that REAL fast) but everything else (including /boot and all swap) is on RAID. One array is used for /boot, two for swap, and the other four are just there for testing purposes.
BOOT_DEGRADED is set. All partitions are GPT. Not using LUKS or LVM. All drives are 2TB, from various manufacturers; I suspect some have 512B physical sectors and some have 4KB sectors. This is an AMD64 system with 8GB RAM.
This system has had about four different versions of Ubuntu on it over the last few years, and has had multiple RAID arrays on it from the beginning. (This is why some of the arrays are still using 0.90.0 metadata, and why there are so many arrays; some arrays are old partitions containing root and home and such from earlier incarnations.) RAID worked fine until the system was upgraded to Oneiric early in 2012 (no, the problem did not start with Precise).
I have carefully tested the system every time an updated kernel or mdadm has appeared, ever since the problem started. The behavior has gradually improved over the last several months. This latest proposed version of mdadm (3.2.5), thankfully, did not cause regressions, but neither did it significantly improve things on this system; I have rebooted five times since then and the behavior is consistent.
When the problem first started, on Oneiric, I had the root file system on RAID. This was unpleasant. I stopped using the system for a while, as I had another one running Maverick, which was reliable.
When I noticed some discussion of possibly related bugs on the Linux RAID list (I've been lurking there for years) I decided to test the system some more. By then Precise was out, so I upgraded. That did not help. Eventually I backed up all data onto another system and did a clean install of Precise on a non-RAID partition, which made the system tolerable. I left /boot on a RAID1 array (on all six drives), but that does not prevent the system from booting even if /boot does not start during Ubuntu startup (I assume because GRUB can find /boot even if Ubuntu later can't).
I started taking detailed notes in May (seven cramped pages so far) and have rebooted 24 times since then. On every boot, exactly two arrays did not start. Which two varied from boot to boot; it could be any of the arrays (though recently the swap arrays have not been affected). There is no apparent correlation with metadata type or RAID level.
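A quick way to record which arrays failed to start on a given boot is to check /proc/mdstat. The sketch below parses a hypothetical snapshot (the device names and block counts are invented for illustration); on a live system you would read the real file instead of the here-doc:

```shell
# Hypothetical /proc/mdstat snapshot; on a live system use:  cat /proc/mdstat
mdstat=$(cat <<'EOF'
Personalities : [raid1] [raid6] [raid10]
md0 : active raid1 sda2[0] sdb2[1]
      524224 blocks [6/6] [UUUUUU]
md3 : inactive sdc5[0](S)
      1953382400 blocks
unused devices: <none>
EOF
)

# Print the name of every md device whose state is not "active".
printf '%s\n' "$mdstat" | awk '/^md/ && $3 != "active" {print $1}'
```

With the sample snapshot above, this prints `md3`.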
ProblemType: Bug
DistroRelease: Ubuntu 12.04
Package: mdadm 3.2.5-1ubuntu0.2
Uname: Linux 3.2.0-29-generic x86_64
ApportVersion: 2.0.1-0ubuntu12
Architecture: amd64
Date: Mon Aug 13 12:10:36 2012
InstallationMedia: Ubuntu 12.04 LTS "Precise Pangolin" - Release amd64 (20120425)
MDadmExamine:
 /dev/sda:
  MBR Magic : aa55
  Partition[0] : 3907029167 sectors at 1 (type ee)
 /dev/sdb:
  MBR Magic : aa55
  Partition[0] : 3907029167 sectors at 1 (type ee)
 /dev/sdc:
  MBR Magic : aa55
  Partition[0] : 3907029167 sectors at 1 (type ee)
 /dev/sdd:
  MBR Magic : aa55
  Partition[0] : 3907029167 sectors at 1 (type ee)
 /dev/sdf:
  MBR Magic : aa55
  Partition[0] : 3907029167 sectors at 1 (type ee)
 /dev/sdg:
  MBR Magic : aa55
  Partition[0] : 3907029167 sectors at 1 (type ee)
MachineType: System manufacturer System Product Name
ProcEnviron:
TERM=xterm
PATH=(custom, no user)
LANG=en_US.UTF-8
SHELL=/bin/bash
ProcKernelCmdLine: BOOT_IMAGE=
SourcePackage: mdadm
UpgradeStatus: No upgrade log present (probably fresh install)
dmi.bios.date: 10/08/2010
dmi.bios.vendor: American Megatrends Inc.
dmi.bios.version: 2701
dmi.board.name: M3A78-EM
dmi.board.vendor: ASUSTeK Computer INC.
dmi.board.version: Rev X.0x
dmi.chassis.type: 3
dmi.chassis.vendor: Chassis Manufacture
dmi.modalias: dmi:bvnAmerican
dmi.product.name: System Product Name
dmi.sys.vendor: System manufacturer
Please ignore this bug report.
I now believe this problem was caused by a configuration error in my RAID setup. Specifically, two different arrays had the same 'name' (I believe this is the 'name' recorded in the array superblocks; IMHO, 'name' has become an over-conflated term in the context of Linux software RAID). Apparently, having duplicate names is not a good idea.
I have no recollection of ever explicitly assigning these names; I think this was done automatically by mdadm, several versions ago (probably over a year ago). Since many bugs in mdadm have been fixed since then, we should probably assume that this issue has been fixed unless somebody reports similar symptoms again.
I have fixed my system by re-creating one of the arrays without that duplicate name. I have now rebooted several times, and the symptoms have not recurred.
If there is a bug here, it is that it was (and perhaps still is) possible to create two arrays with the same name, with no obvious warning given at the time.
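For anyone hitting similar symptoms, a duplicate-name check along these lines may help. The sketch below scans the `name=` fields in `mdadm --detail --scan` output and reports any value that appears more than once; the array names and the here-doc sample are invented for illustration, and on a live system you would pipe the real command output instead:

```shell
# Hypothetical sample of `mdadm --detail --scan` output; on a live system use:
#   scan=$(mdadm --detail --scan)
scan=$(cat <<'EOF'
ARRAY /dev/md0 metadata=1.2 name=myhost:boot UUID=0
ARRAY /dev/md1 metadata=1.2 name=myhost:data UUID=1
ARRAY /dev/md2 metadata=1.2 name=myhost:data UUID=2
EOF
)

# Extract each name= field and print only values that occur more than once.
printf '%s\n' "$scan" | grep -o 'name=[^ ]*' | sort | uniq -d
```

With the sample above, this prints `name=myhost:data`, flagging the two arrays that share a name. An empty result would mean all array names are distinct.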