[->UUIDudev] mdadm software raid breaks on intrepid-jaunty upgrade

Bug #330298 reported by PrivateUser132781
This bug affects 5 people
Affects: mdadm (Ubuntu)
Status: Confirmed
Importance: Medium
Assigned to: Unassigned
Milestone: (none)
Nominated for Jaunty by David Reitz

Bug Description

Upon upgrading from Intrepid to Jaunty, my software RAID5 array did not mount.

It appears the problem may have been caused by /etc/mdadm/mdadm.conf being silently overwritten with a default config file. After running mdadm --examine --scan --config=mdadm.conf >> /etc/mdadm/mdadm.conf in a recovery shell, the array mounts just fine.
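
For readers hitting the same failure, the fix amounts to roughly the following recovery-shell sequence (a sketch; the mount step assumes the array's filesystem is listed in /etc/fstab):

    # append ARRAY lines rebuilt from the on-disk superblocks to the config
    mdadm --examine --scan >> /etc/mdadm/mdadm.conf
    # assemble every array now described in mdadm.conf
    mdadm --assemble --scan
    # mount whatever /etc/fstab expects from the arrays
    mount -a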

Revision history for this message
Yannis Tsop (ogiannhs) wrote :

I have the same problem here. After upgrading to 9.04 yesterday the system does not boot (it hangs after discovering the disks). It does not boot even with the old kernel. I have not tried the above solution yet, but it seemed to me that there was a problem with mdadm.

Revision history for this message
Yannis Tsop (ogiannhs) wrote :

There was nothing I could do about it. The old kernel gave me a busybox shell, but I did not manage to do anything. The new kernel did not even offer this option :( . I will try from a chroot on a live CD to see what I can do. I have a RAID + LVM setup on my PC.

Revision history for this message
Pantelis Koukousoulas (pktoss) wrote :

I don't think this is a duplicate of bug #332270. #332270 is about a problem in udev rules, while this one is clearly an mdadm packaging problem (packages should *not* silently overwrite files in /etc, especially when the consequence is that your RAID arrays disappear).

I was hit by this bug too, and both the OP's assessment and workaround apply in my case as well.

Revision history for this message
Jan Claeys (janc) wrote :

I'm not sure yet if my problem is caused by the same bug, but my /home, which is on an "md" (RAID 1) device, doesn't get mounted automatically anymore since I upgraded to Jaunty.

The strange thing is: at least sometimes I could see the md device assembled when I looked at it, but /home wasn't mounted. (I didn't always look immediately at it though, so this might somehow be time-dependent?)

Also, I use UUIDs in /etc/fstab, so the fact that mdadm renamed devices can't be the reason.
(Maybe the renaming was because a config file got replaced, as some other people in this report suggest?)
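
For readers unfamiliar with UUID-based fstab entries, a generic sketch (the UUID, device name, and filesystem type below are placeholders, not Jan's actual values):

    # find the filesystem UUID on the assembled md device
    blkid /dev/md0
    # matching /etc/fstab line (placeholder UUID)
    UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /home  ext3  defaults  0  2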

Revision history for this message
David Reitz (dreitz) wrote :

I do not think this is a duplicate, even though this bug is marked as such currently. As soon as I added a proper entry in /etc/mdadm/mdadm.conf for my /dev/md2 device and rebooted, everything came back up as expected (after upgrading to jaunty). This is clearly not related to a udev rule problem.
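
A "proper entry" of this kind is an ARRAY line in /etc/mdadm/mdadm.conf; a hypothetical example with placeholder values (the UUID can be read off mdadm --detail):

    # query the array's UUID
    mdadm --detail /dev/md2 | grep -i uuid
    # example ARRAY line for /etc/mdadm/mdadm.conf (placeholder level, device count, and UUID)
    ARRAY /dev/md2 level=raid1 num-devices=2 UUID=01234567:89abcdef:01234567:89abcdef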

Revision history for this message
Slalomsk8er (launchpad-slalomsk8er) wrote :

Same problem for me. I have described my symptoms and solution at https://bugs.launchpad.net/ubuntu/+source/mdadm/+bug/136252/comments/5 as it was configuration related.

Revision history for this message
Bill Smith (bsmith1051) wrote :

Why is this marked as a dupe of bug #314879 when that bug was fixed back in January and people are still experiencing this with 9.04 'Gold'?

Hew (hew)
Changed in mdadm (Ubuntu):
importance: Undecided → Medium
status: New → Confirmed
tags: added: intrepid2jaunty
Revision history for this message
Davias (davias) wrote :

Thank you for providing a solution - I printed the page and started to update from 8.10 to 9.04 on AMD64 with /, /home and swap on md0, md1 & md2 on RAID1, plus a RAID0 on md3 using two spare partitions on my 2 SATA disks.

But... The upgrade just went as smooth as silk! The system booted just fine. For now I just had to reconfigure VMWare server 2.

Not so for md3, the RAID0: the RAID monitor reports it as md_d3 (instead of md3): inactive sdb4[0](S)

Any help? TIA
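
One commonly suggested approach for this md_d3-vs-md3 naming problem, sketched here with guessed device names (sdb4 comes from the monitor output above; the partner partition sdc4 is an assumption, so check /proc/mdstat and adjust):

    # see how the kernel currently names the RAID0 and its members
    cat /proc/mdstat
    # stop the inactive, wrongly named device, then reassemble it under the intended name
    mdadm --stop /dev/md_d3
    mdadm --assemble /dev/md3 /dev/sdb4 /dev/sdc4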

Revision history for this message
Bill Smith (bsmith1051) wrote :

Davias,
Thanks for your report! It's good to know it's not a blatant and consistent problem. For help fixing your md3, however, please post a request on the Ubuntu forums -- probably the Server Platforms group is best, http://ubuntuforums.org/forumdisplay.php?f=339

If you figure out what specifically went wrong, please post it here, as it may help troubleshoot what's going on.

Revision history for this message
ggb-uk (ggb-uber) wrote :

Had a similar problem here (RAID 1, upgrade to 9.04, and a failure to boot).

Tried to boot from one of my older kernels, and realised that the upgrade process had removed most of my historic kernels on the boot drive.

Then I realised I was still booting from kernel 2.6.27-14, but that I now also had 2.6.28-12. I asked GRUB to boot 2.6.28-12 and it booted fine.

Revision history for this message
c23gooey (c23gooey) wrote :

FYI

I had a similar problem upgrading from 8.10 to 9.04.

The boot failure occurred after loading mdadm,

so I logged into the maintenance console and used the command supplied by the OP:

mdadm --examine --scan --config=mdadm.conf >> /etc/mdadm/mdadm.conf

This command added the necessary lines to mdadm.conf, and everything ran fine after that.
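
A hedged follow-up, not part of the comment above: Ubuntu copies mdadm.conf into the initramfs, so after appending the ARRAY lines it may also be worth regenerating it so the early-boot environment sees the same config:

    # confirm the appended ARRAY lines are present
    grep ^ARRAY /etc/mdadm/mdadm.conf
    # rebuild the initramfs so its copy of mdadm.conf matches
    sudo update-initramfs -u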

Revision history for this message
Delta (deruta) wrote :

I still have this problem - appeared after jaunty upgrade.

The machine with 3 disks in RAID5 fails to assemble the array upon startup. I have not found a solution to this problem; the suggested fix (mdadm --examine --scan --config=mdadm.conf >> /etc/mdadm/mdadm.conf) and others do not appear to work in my case.

The only workaround that works is booting the new system with the old 2.6.27-14 kernel. This also drops to busybox after a few seconds, but with this kernel it is possible to start the system by reassembling the arrays with mdadm -As; exit. This is not possible with the kernel that comes with Jaunty (including the -14 iteration).

My other computer with 2 drives and raid 0 and 1 arrays upgraded without problems.

Also, as far as I understand, the mdadm.conf files in the kernel images (initramfs) appear to be correct - both for the new Jaunty kernel and for 2.6.27-14. Yet the system does not start normally.
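
Restated as a sketch, the workaround above amounts to typing the following at the busybox (initramfs) prompt under 2.6.27-14:

    # inside the initramfs shell: -A is --assemble, -s is --scan
    mdadm --assemble --scan
    # resume the normal boot once the arrays are up
    exit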

Revision history for this message
RnSC (webclark) wrote :

What the title describes definitely happened to me.

System boots off of a simple, separate disk.
Two 500GB disks (sdb, sdc) were mirrored with md, then a VG was created, then an LV, with ext3 on top. Simple, and it worked great. /etc/mdadm/mdadm.conf did NOT list the configuration, but rather depended on a scan to figure it all out on every boot. I did not set this up explicitly; I just ran mdadm and the vg*/lv* commands to create it all at install time, and never touched the .conf.
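
For readers unfamiliar with that kind of stack, a hypothetical sketch of how it is typically created (only sdb/sdc come from the comment; the VG and LV names are placeholders):

    # mirror the two disks, then layer LVM and ext3 on top
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
    pvcreate /dev/md0
    vgcreate datavg /dev/md0
    lvcreate -n datalv -l 100%FREE datavg
    mkfs.ext3 /dev/datavg/datalv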

Upgraded 0810 to 0904.
This is where I get fuzzy, did not keep good records.

On boot, fsck failed saying that the volume did not exist.
mdadm at various times has told me that sdb did not exist, had a bad superblock, and was in use by another process. No doubt I caused some of my own problems. At one point in the past, on a re-install of 0810, the disks were not recognized and I recovered by setting one of the mirrors to fail, removing it, and re-adding it. That did not work this time. I continued to fiddle, as well as learn the syntax of mdadm (the man page is confusing, at least to me).

At all times mdadm --examine /dev/sdb (or sdc) told me that both mirrors were fine / clean.

As I fiddled, one disk came on-line. I rebooted, and it was gone. I got it back. At one point I had two. Fool, I rebooted. Lost. After much fooling (wouldn't come back) I got one back. Am backing up to a non-md ext3 on a USB drive! Plan to wipe the disks and reinstall from scratch.

Error messages saying that /dev/sdb does not exist when I can dd blocks from it, or that it is in use by another process when (1) I have not run anything and (2) lsof does not show anything, and statements that the superblock is bad on a mirror that had been running for six months and was healthy when I pushed the "upgrade" button yet dead on reboot at the end of the install process... are not helpful at all! Plus, the behaviour was inconsistent.

It sure looks like something buggy / flaky is happening. I would suspect flaky hardware, except that the system has been stable for 6 or 8 years, right up until I pressed "Upgrade". Two hours later, this.

Should I assume that 0904 is inherently stable and that this was just a botched "upgrade" procedure that failed to cover something that changed, or should I re-install 0810?

Opinions and your rationale would be GREATLY appreciated.

Revision history for this message
RnSC (webclark) wrote :

Continuing with my 8/31/2009 post: I erased the disks and reinstalled 9.04, and still had the problem. According to the mdadm wiki, kernel detection and assembly of arrays is considered deprecated, and it only works with version 0.90 superblocks anyway. I use version 1.2 superblocks so that they are near the start of the disk and I can zero them out in a reasonable time when I want to reuse the disks. 8.10 did not have any problem assembling these. I don't know if that is because it FOUND them or because it looked at mdadm.conf. I did not KNOW about mdadm.conf until a few days ago, so if 8.10 assembled them from mdadm.conf, mdadm must have created the file for me. Since arrays using version 1.2 superblocks *cannot* be auto-assembled by the kernel, my mdadm.conf in 8.10 must have been created for me.
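
A generic way to check which superblock format an array member carries (the device name is illustrative):

    # the "Version" field distinguishes 0.90 from 1.x metadata
    mdadm --examine /dev/sdb | grep -i version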

I did a "mdadm --examine --scan >> /etc/mdadm/mdadm.conf" and the system works fine. (Well, there are other mdadm segfault problems, for which I have just filed a bug report, but that is another story.)

So I conclude that *whatever* is supposed to create mdadm.conf when you create an array with command line mdadm commands worked in 8.10, and is broken in 9.04.

Revision history for this message
ceg (ceg) wrote :

With UUID-based RAID assembly that does not rely on mdadm.conf maintenance, this is not an issue. See bug #158918.
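
For context, udev-based incremental assembly means each detected RAID member is handed straight to mdadm as it appears, instead of relying on a maintained mdadm.conf. A simplified, illustrative udev rule (not the exact Ubuntu rule file) looks roughly like this:

    # illustrative rule: pass any newly detected RAID member to mdadm for incremental assembly
    SUBSYSTEM=="block", ACTION=="add", ENV{ID_FS_TYPE}=="linux_raid_member", RUN+="/sbin/mdadm --incremental $env{DEVNAME}"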

summary: - mdadm software raid breaks on intrepid-jaunty upgrade
+ [->UUIDudev] mdadm software raid breaks on intrepid-jaunty upgrade
Revision history for this message
chtugha (b1721045) wrote :

Thank you so much!! My fresh Ubuntu 10.10 server installation decided to drop my RAIDs after I added another device (/dev/dm-1) and rebooted some time later... no update done whatsoever. You saved my day! I had to run the command as root, by the way!
