Please upgrade to a non-outdated version

Bug #495370 reported by Tuomas Jormola
This bug affects 31 people.

Affects: mdadm (Ubuntu)
  Status: Fix Released, Importance: Undecided, Assigned to: Dimitri John Ledkov
  Nominated for Maverick by ceg
  Nominated for Natty by causticsoda
Affects: mdadm (Ubuntu Lucid)
  Status: Won't Fix, Importance: Undecided, Assigned to: Dimitri John Ledkov

Bug Description

Binary package hint: mdadm

Hi,

The current upstream version of mdadm is 3.1.1. It would be great to have this version in lucid, which is an LTS release. It contains many nice, advanced features added since the version currently in lucid, which is 2.6.7. These include, e.g.:

- conversion between RAID levels
- changing of chunk size
- user space metadata updates
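
For illustration, this is roughly what the first two look like in use (device names and values are examples only, not from any real setup):

    # convert an existing RAID5 array to RAID6; a backup file covers the
    # critical section during the reshape
    mdadm --grow /dev/md0 --level=6 --backup-file=/root/md0-backup

    # change the chunk size of an existing array
    mdadm --grow /dev/md0 --chunk=256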

Thanks.

Tags: lucid upgrade
tags: added: upgrade
Revision history for this message
causticsoda (glenn-thomasfamily) wrote :

I agree it would be excellent to have a newer version of mdadm for lucid. I note that Debian has 3.0.3 in "squeeze": http://packages.debian.org/squeeze/mdadm

description: updated
Revision history for this message
s34gull (s34gull) wrote :

Another vote for adoption of 3.1.x.

tags: added: lucid
Revision history for this message
stephen howell (showell) wrote :

Please upgrade to 3.1.x, the re-shape features are much improved and the defaults improve performance and data safety.

Revision history for this message
Ilya Barygin (randomaction) wrote :

3.1.1 is in Debian unstable.

Revision history for this message
Vikram Dhillon (dhillon-v10) wrote :

Seems like we might have to wait for some time: http://release.debian.org/migration/testing.pl?package=mdadm but when it does go over to Testing I will merge it over :) Thanks.

Changed in mdadm (Ubuntu):
assignee: nobody → Vikram Dhillon (dhillon-v10)
status: New → Confirmed
Revision history for this message
Erik Reuter (misc71) wrote :

Version 3.0.3 is in testing. If you cannot use the 3.1.1 in unstable, can't Ubuntu at least use the 3.0.3 in testing? Ubuntu is the only major Linux distribution shipping such an outdated version of mdadm.

Revision history for this message
Jools Wills (jools) wrote :

Due to the state/age of mdadm and RAID on Ubuntu, I've been seriously considering switching back to Debian. Currently I am running my own package of 3.1.2 based on Ubuntu's package (with the hotplug/udev way of building arrays). I basically grabbed the Ubuntu source, ran uupdate, manually fixed up any rejected patches, and reversed any I thought were no longer needed.

Revision history for this message
Jools Wills (jools) wrote :

(And I changed the path mdadm uses to store its map file when constructing arrays on boot from initramfs.) Although the better fix is to add the missing folder /var/run/mdadm to the mdadm initramfs script, I guess.

Revision history for this message
ceg (ceg) wrote :

Hi Jools,

Wow, you know how to roll packages, and have fixed some of the longstanding bugs already. Could you upload them, or have them built in your Personal Package Archive (PPA)? https://help.launchpad.net/Packaging/PPA
I think maybe the initramfs creation hook (/debian/initramfs/hook) of the mdadm package could add /var/run/mdadm to the initramfs. Did you manage to carry the map file from initramfs into the running system?

Revision history for this message
Jools Wills (jools) wrote :

I wouldn't say I'm good at building packages, but I know how to update existing ones a little. Didn't try getting the map file across from the initramfs to the root fs. Does mdadm require it once the array is already constructed?

Revision history for this message
ceg (ceg) wrote :

All right, maybe you could try whether the PPA facility can build your modified source package into one that is installable with a PPA sources.list entry. I think mdadm --incremental needs the map file in the system to be successful with later hot-plugging of RAID devices. At least what I remember from when I last tried is that hot-plugging worked in initramfs, but not in the running system. I don't know, maybe initramfs already does some copying of /var/run, and it will just work if you enable mdadm to create the map file there and it can also find it there later.
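
For reference, the hot-plug path in question boils down to one call per device, roughly like this (the device name is just an example):

    # run when a RAID member device appears; mdadm consults the map file
    # to find the matching array and re-adds the device to it
    mdadm --incremental /dev/sdc1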

Revision history for this message
Jools Wills (jools) wrote :

I haven't sorted a PPA yet, but I stuck my package up here: http://malus.exotica.org.uk/~buzz/mdadm/

My changes:

3.1.2 wants to put its map file by default at /var/run/map, so:

- init-top script to make a /var/run folder for the map file (this would be better in the initramfs init script, but I only want to maintain one package here and not mess with the important stuff)

- init-bottom script to copy the map file to .initramfs in /dev (temporary, as root is read-only at this point)

- change in the init script that starts the monitor service, to copy the map file back to /var/run/map

- change in the package postinst, to not try to run a non-existent /dev/MAKEDEV

If you try this out, please make sure you can recover your system and
install the old mdadm back. However, it works for me.
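
In script form, the changes above amount to roughly the following (a sketch to illustrate the idea, not the literal package contents):

    # init-top (initramfs): make sure the directory for the map file exists
    mkdir -p /var/run

    # init-bottom (initramfs): root is still read-only at this point, so
    # stash the map file in /dev/.initramfs, which survives into the real root
    mkdir -p /dev/.initramfs/varrun
    cp /var/run/map /dev/.initramfs/varrun/map

    # /etc/init.d/mdadm (real root): restore the map file
    if [ -f /dev/.initramfs/varrun/map ]; then
        mv /dev/.initramfs/varrun/map /var/run/map
    fi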

Revision history for this message
ceg (ceg) wrote :

Jools' packages (http://malus.exotica.org.uk/~buzz/mdadm) can be considered a patch for this bug.

ceg (ceg)
tags: added: patch
Revision history for this message
thometal (thometal) wrote :

It should be updated to at least version 3.1.2, because of an annoying reshape/grow bug in 3.1.1.

Revision history for this message
ceg (ceg) wrote :

Installed the 3.1.2 package on a machine and ...

it was the first time mdadm --incremental actually succeeded in re-adding a drive when it was attached to an ubuntu machine after booting. Thumbs up!

You wrote about /var/run/map, but in the running system I see mdadm created /var/run/mdadm.map, and /var/run/mdadm only contains monitor.pid. Is that intended?

I think instead of recreating the dir on every boot, it could be added to the initramfs staging area with mdadm's /debian/initramfs/hook script (mkdir -p <$var-containing-staging-rootpoint>/var/run/mdadm) and thus be added to the image.
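
Something like this in the hook, presumably (${DESTDIR} being the staging root that the package's initramfs hook already uses elsewhere):

    # stage the map-file directory into the initramfs image
    mkdir -p ${DESTDIR}/var/run/mdadm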

What I haven't checked is whether udev is stopped while the initramfs map file is copied, so that no new devices show up and get written to the old location before the real root fs is set up and in use.

Revision history for this message
Jools Wills (jools) wrote :

The first version copied the map file to /var/run/map. I then changed my mind as I saw ubuntu had a mdadm folder there, and I changed the location to /var/run/mdadm/map. I'm running the 3.1.2-0ubuntu2 version and the map file is working correctly from the mdadm folder.

Revision history for this message
ceg (ceg) wrote :

3.1.2-0ubuntu2 here too, and if you ls /var/run/md* in the running system you don't have the mdadm.map file?

From the comments I am not sure whether 3.1.2 is supposed to already have that unplug support or not; at least for me it does not yet unbind unplugged disks automatically. https://raid.wiki.kernel.org/index.php/Hotplug

Revision history for this message
Jools Wills (jools) wrote :

I have:

jools@aero:~$ ls -l /var/run/mdadm/map
-rw------- 1 root root 54 2010-04-21 22:27 /var/run/mdadm/map

Btw I did try making the initramfs folder from the hook, but it never worked. I wonder if it ignores empty folders, which is why I went for the init-top script instead.

Revision history for this message
ceg (ceg) wrote :

$ mdadm --version
mdadm - v3.1.2 - 10th March 2010

$ ls -l /var/run/md*
-rw------- 1 root root 108 2010-04-23 20:51 /var/run/mdadm.map

/var/run/mdadm:
-rw-r--r-- 1 root root 5 2010-04-23 20:51 monitor.pid

Revision history for this message
Jools Wills (jools) wrote :

I'm slightly confused then, since my scripts specifically copy to /var/run/mdadm/map. mdadm.map is mdadm's third choice, should it not be able to make files in the first location set in the makefile config. However, even if it did make a map file there in the initramfs, my script wouldn't copy it.

my initramfs init-bottom has "cp /var/run/mdadm/map /dev/.initramfs/varrun/map"

and my /etc/init.d/mdadm has

MAPFILE=/dev/.initramfs/varrun/map

if [ -f $MAPFILE ]; then
   mv $MAPFILE /var/run/mdadm/map
fi

So I don't know how you get the mdadm.map file, unless it's some odd thing with mdadm reading my earlier map file and rewriting that one?

Revision history for this message
ceg (ceg) wrote : Re: [Bug 495370] Re: Please upgrade to 3.1.x for lucid

Maybe udev rules fire before /etc/init.d/mdadm is processed.

But it also looks pretty empty under /dev/.initramfs on this 9.10
machine, so nothing is copied.

# ls -R /dev/.initramfs
/dev/.initramfs:
varrun

/dev/.initramfs/varrun:
sendsigs.omit

What's the best way to boot into an initramfs shell to see what's there?

Revision history for this message
ceg (ceg) wrote : Re: Please upgrade to 3.1.x for lucid

Just found the --rebuild-map option in the new version. That may even be cleaner, to just rebuild the map in the main system before udev is restarted.
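
I.e. something as simple as this (a sketch):

    # rebuild the map from the currently running arrays instead of
    # carrying the initramfs copy across the pivot to the real root
    mdadm --rebuild-map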

Revision history for this message
ceg (ceg) wrote :

Upstream actually ships udev hotplug rules by now; maybe those should be used instead of maintaining Ubuntu ones.

With the successor to 3.1.2 mdadm seems to also ship unplug rules.
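
For reference, the upstream incremental-assembly rule is roughly of this shape (paraphrased, not the verbatim upstream file):

    SUBSYSTEM=="block", ACTION=="add", ENV{ID_FS_TYPE}=="linux_raid_member", \
        RUN+="/sbin/mdadm --incremental $env{DEVNAME}"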

Revision history for this message
ceg (ceg) wrote :

Debian's package repository has 3.1.2 and ships the upstream udev rules.
http://git.debian.org/?p=pkg-mdadm/mdadm.git;a=log

Revision history for this message
Philip Muškovac (yofel) wrote :

The attachment isn't a patch. Removing patch flag and tag and unsubscribing reviewers.

tags: removed: patch
Revision history for this message
Max Waterman (davidmaxwaterman) wrote :

I've upgraded to lucid and I get a message complaining about a missing /var/run/mdadm/map when I boot up (blank screen with the message at the top).

$ sudo /sbin/mdadm --rebuild-map /dev/md2
$ ls /var/run/mdadm/map
ls: cannot access /var/run/mdadm/map: No such file or directory
$ ls -l /var/run/mdadm/
total 4
-rw-r--r-- 1 root root 5 2010-05-29 08:29 monitor.pid
$ ls -l /var/run/mdadm.map
-rw------- 1 root root 54 2010-05-29 08:29 /var/run/mdadm.map

Is there something I can do to get rid of this message?

Max.
ii mdadm 3.1.2-0ubuntu2

Revision history for this message
Vikram Dhillon (dhillon-v10) wrote : Re: [Bug 495370] Re: Please upgrade to 3.1.x for lucid

On Sat, May 29, 2010 at 05:48:32AM -0000, Max Waterman wrote:
> I've upgraded to lucid and I get a message complaining about a missing
> /var/run/mdadm/map when I boot up (blank screen with the message at the
> top).
>
> [...]

I apologize for not being able to get this bug done in time, if someone else wants to take over, feel free to do so. Thanks.

 assignee nobody

--
Regards,
Vikram Dhillon

Changed in mdadm (Ubuntu):
assignee: Vikram Dhillon (dhillon-v10) → nobody
Revision history for this message
ceg (ceg) wrote : Re: Please upgrade to 3.1.x for lucid

Something is wrong with the (practically non-existent) maintenance of a basic system component like mdadm in Ubuntu. Not even the updated Debian packages get synced.

summary: - Please upgrade to 3.1.x for lucid
+ Please upgrade to a non-outdated version
Revision history for this message
Simon Eisenmann (longsleep) wrote :

Err... this ticket needs some love.

It's a shame that lucid still has mdadm 2.6.

What can be done to get the most recent version available to lucid users, through a PPA or backport, and into the next Ubuntu release?

Revision history for this message
Rune K. Svendsen (runeks) wrote :

Would it be completely crazy to try to install mdadm 3.0.3-2 from Debian squeeze in Lucid?:
http://packages.debian.org/squeeze/mdadm

Well I just tried it. It did spit out an error:

    rune@runescomp:~/Desktop$ sudo dpkg -i mdadm_3.0.3-2_i386.deb
    Selecting previously deselected package mdadm.
    (Reading database ... 197151 files and directories currently installed.)
    Unpacking mdadm (from mdadm_3.0.3-2_i386.deb) ...
    Setting up mdadm (3.0.3-2) ...
    Generating array device nodes... /var/lib/dpkg/info/mdadm.postinst: 158: /dev/MAKEDEV: not found
    failed.
    Generating mdadm.conf... done.
    update-initramfs: deferring update (trigger activated)
     * Starting MD monitoring service mdadm --monitor [ OK ]
     * Assembling MD arrays... [ OK ]

    Processing triggers for ureadahead ...
    Processing triggers for man-db ...
    Processing triggers for doc-base ...
    Processing 6 added doc-base file(s)...
    Registering documents with scrollkeeper...
    Processing triggers for initramfs-tools ...
    update-initramfs: Generating /boot/initrd.img-2.6.32-22-generic
    W: mdadm: /etc/mdadm/mdadm.conf defines no arrays.
    W: mdadm: no arrays defined in configuration file.

Is this insanely dangerous to do? Or just somewhat dangerous?
I haven't had time to test it yet, but I will when I finish copying some files over to my new HDD.

Revision history for this message
Jools Wills (jools) wrote :

Yes, it's dangerous. Debian most likely has completely different initramfs scripts; you could leave your machine unbootable. If you want a newer version you can try my packages:

http://malus.exotica.org.uk/~buzz/mdadm/

Revision history for this message
Rune K. Svendsen (runeks) wrote : Re: [Bug 495370] Re: Please upgrade to a non-outdated version

I see. Thanks for the heads up.
How would you rate the reliability of your package? Would you recommend
that I just stick with the version in the repos unless I'm feeling
adventurous, or would you say it's relatively safe to use your package?
I guess you'd be the one to know if you've touched some sensitive areas
in the package, in order to adapt it to Lucid...

Max Waterman describes an issue in comment #26, do you reckon that this
is something that is easily fixed?

> yes its dangerous. debian most likely has completely difference
> initramfs scripts. you could leave your machine unbootable. If you want
> a newer version you can try my packages
>
> http://malus.exotica.org.uk/~buzz/mdadm/
>

Revision history for this message
Jools Wills (jools) wrote :

I have not adapted a Debian package to lucid, rather upgraded the Ubuntu package to a newer version and fixed some bugs. The package works fine, for me at least. Not sure about the issue in comment #26; I'd have to have a look. I've recently been considering "upgrading" to Debian on my system though, as Ubuntu is letting these core components stagnate.

Revision history for this message
Rune K. Svendsen (runeks) wrote :

Cool. I'll give it a whirl as well then. I have backups of my data on a
non-RAID partition anyway.

I must admit that I was surprised as well, when I saw that Lucid shipped
with a 20 month old build of this component. Wouldn't this mean that the
Server edition uses this version as well? If that's the case, then it
must mean that very few people actually use Ubuntu for server use, or,
at least, that the ones who do, do not use RAID.

It would be nice to get some attention from the developers, to hear why
this hasn't progressed further.

On the other hand, only 14 people have marked this bug as affecting
them, so I guess it's not entirely inappropriate for Canonical to not
prioritize this more highly than they do.

> I have not adapted a debian package to lucid, rather upgraded the ubuntu
> package to a newer version and fixed some bugs. The package works fine
> for me at least. Not sure about issue 26. I'd have to have a look. I'm
> recently considering to "upgrade" to debian though on my system, as
> ubuntu is letting these core components stagnate.
>

Revision history for this message
Rune K. Svendsen (runeks) wrote :

I just tried to install your package. I received an error, which was this:

cp: cannot stat `/lib/udev/rules.d/65-mdadm.vol_id.rules': No such file or directory

at the very end of the installation process. Not sure whether it has any significance.

Revision history for this message
Jools Wills (jools) wrote :

Shouldn't happen. Might be safest right now to use the standard package until I have time to take a look.

Revision history for this message
Rune K. Svendsen (runeks) wrote :

Woops... Too late :). It's in the process of syncing right now...
But as mentioned earlier, the data I'm going to put on the RAID
partition resides on a second, non-RAID partition. So as far as I can
tell I'm not risking anything.

But I _would_ appreciate you taking a look at it though...

The only "rules"-file that contain "mdadm" is the following file:

        rune@runescomp:/lib/udev$ find / -iname "*mdadm*rules*"
        2>/dev/null
        /lib/udev/rules.d/85-mdadm.rules

> Shouldn't happen. Might be safest right now to use the standard package
> until I have time to take a look.
>

Revision history for this message
Rune K. Svendsen (runeks) wrote :

It seems that the file "65-mdadm.vol_id.rules" in "/lib/udev/rules.d/" was used by mdadm in Karmic (http://packages.ubuntu.com/karmic/i386/mdadm/filelist), and changed in Lucid to "65-mdadm-blkid.rules" with the fixing of Bug #493772.

I'm guessing your package is based on the Karmic version of mdadm?

With your package installed, this file is called "65-vol_id.rules", and is identical to both the file installed with Lucid's version of mdadm (2.6.7.1-1ubuntu15), called "65-mdadm-blkid.rules", and the one in Karmic's version of mdadm ("65-mdadm.vol_id.rules").

It seems the above error comes from the initramfs hook script that your package places in "/usr/share/initramfs-tools/hooks/mdadm", which contains the following code:

    # copy the udev rules
    for rules in 65-mdadm.vol_id.rules 85-mdadm.rules; do
     cp -p /lib/udev/rules.d/$rules ${DESTDIR}/lib/udev/rules.d
    done

When the installation script runs "update-initramfs" (and when update-initramfs is run subsequently, independently of the installation script), the error appears:

    rune@runescomp:~/Desktop/mdadm-new$ sudo update-initramfs -u
    update-initramfs: Generating /boot/initrd.img-2.6.32-23-generic
    cp: cannot stat `/lib/udev/rules.d/65-mdadm.vol_id.rules': No such file or directory

In "/usr/share/initramfs-tools/hooks/mdadm", changing "65-mdadm.vol_id.rules" to "65-vol_id.rules" in the above line from the hook script, the error disappears.

So unless something more is lurking beneath the surface - I don't know enough about the inner workings of either mdadm or Linux in general to assess that - the fix seems fairly simple: change the above line in "/usr/share/initramfs-tools/hooks/mdadm". Or, alternatively, to follow Lucid's naming convention for the file, rename it to "65-mdadm-blkid.rules" and change the hook script accordingly.
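
Concretely, the first variant would make the loop read (using the file name this package actually installs):

    # copy the udev rules
    for rules in 65-vol_id.rules 85-mdadm.rules; do
        cp -p /lib/udev/rules.d/$rules ${DESTDIR}/lib/udev/rules.d
    done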

Revision history for this message
Jools Wills (jools) wrote :

Yes, it is a karmic package. If I get a chance I could upgrade it to lucid. Your fix is probably fine though. What we really need is someone within Ubuntu to maintain this. I will never put Ubuntu on a software RAID machine again :/

Revision history for this message
RnSC (webclark) wrote :

I plan on running 10.04 until a few months after the *next* LTS is released. I don't know how to interpret what I see here. Will this process result in a newer/better stable mdadm being released for 10.04 at some point? If so (or maybe so), how far out is that likely to be? Thanks.

Revision history for this message
Jools Wills (jools) wrote :

I finally got around to doing some upgrades on my fileserver. Thought about switching back to debian, but I thought for now, I'll just get my mdadm package working again properly on lucid.

http://malus.exotica.org.uk/~buzz/mdadm/lucid/

Binaries for x86 are there.
You can build from source with dpkg-source -x mdadm_3.1.4-1.dsc, then cd to the folder and run dpkg-buildpackage.
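
That is, roughly (the name of the unpacked directory is my assumption):

    dpkg-source -x mdadm_3.1.4-1.dsc
    cd mdadm-3.1.4               # assuming the source unpacks here
    dpkg-buildpackage -us -uc    # -us -uc: build without signing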

I have only tested this on a VirtualBox setup, as I've not yet finished my upgrade. I include the changes from lucid mdadm (2.6.7-1ubuntu15). The map file problem is now solved in mdadm itself, as by default it chooses /dev/.mdadm as the location for it, which is available throughout the boot process.

No warranty with this of course, but I hope it is of some use.

Revision history for this message
Jools Wills (jools) wrote :

Just to follow up, I am now running my 3.1.4 package on my fileserver. Would be interested to know if anyone else is using it, whilst we wait for an official mdadm update.

Revision history for this message
Amos Hayes (ahayes-polkaroo) wrote :

Hi Jools. Thanks very much for the packages. I wanted to migrate a RAID 5 to RAID 6 on 10.04 and you saved the day. I too would like to see this picked up for a 10.04 backport or the like. :)

I was thrown off a bit when the device was renamed after installing/restarting, but I sorted it out. My conversion looks like it will take a while though.

My experience looked like:

# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : active raid5 sdf[3] sde[2] sdd[1] sdc[0]
      5860543488 blocks level 5, 128k chunk, algorithm 2 [4/4] [UUUU]

unused devices: <none>

# dpkg -i mdadm_3.1.4-1_i386.deb
(Reading database ... 43484 files and directories currently installed.)
Preparing to replace mdadm 2.6.7.1-1ubuntu15 (using mdadm_3.1.4-1_i386.deb) ...
 * Stopping MD monitoring service mdadm --monitor
   ...done.
Unpacking replacement mdadm ...
Setting up mdadm (3.1.4-1) ...
Generating array device nodes... Removing any system startup links for /etc/init.d/mdadm-raid ...
update-initramfs: deferring update (trigger activated)
update-rc.d: warning: mdadm start runlevel arguments (2 3 4 5) do not match LSB Default-Start values (S)
update-rc.d: warning: mdadm stop runlevel arguments (0 1 6) do not match LSB Default-Stop values (0 6)
 * Starting MD monitoring service mdadm --monitor
   ...done.

Processing triggers for man-db ...
Processing triggers for initramfs-tools ...
update-initramfs: Generating /boot/initrd.img-2.6.32-25-generic-pae

# reboot

# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md127 : inactive sde[2] sdf[3] sdd[1] sdc[0]
      7814057984 blocks

unused devices: <none>

# vgchange -an
# mdadm --stop /dev/md127
# mdadm --assemble /dev/md0 /dev/sdc /dev/sdd /dev/sde /dev/sdf

# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : active raid5 sdc[0] sdf[3] sde[2] sdd[1]
      5860543488 blocks level 5, 128k chunk, algorithm 2 [4/4] [UUUU]

unused devices: <none>

# mdadm --add /dev/md0 /dev/sdb
mdadm: added /dev/sdb

# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : active raid5 sdb[4](S) sdc[0] sdf[3] sde[2] sdd[1]
      5860543488 blocks level 5, 128k chunk, algorithm 2 [4/4] [UUUU]

unused devices: <none>

# mdadm --grow /dev/md0 --level=6 --backup-file=/root/backup-md0
mdadm level of /dev/md0 changed to raid6
# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : active raid6 sdb[4] sdc[0] sdf[3] sde[2] sdd[1]
      5860543488 blocks super 0.91 level 6, 128k chunk, algorithm 18 [5/4] [UUUU_]
      [>....................] reshape = 0.0% (16384/1953514496) finish=3970.5min speed=8192K/sec

unused devices: <none>

Revision history for this message
Jools Wills (jools) wrote :

Glad it's working for you. In regards to the device renaming, it's because it didn't recognise the array as belonging to the machine so it assembled it as md127 to not conflict with any other arrays. Please see

http://www.spinics.net/lists/raid/msg30175.html

You probably want to update your mdadm.conf and update initramfs again. I have

# definitions of existing MD arrays
ARRAY /dev/md/1 metadata=1.2 UUID=875d492f:d0853755:2f97ef10:fa38ee0b name=aero:1

Without the name=aero:1 part (the line was generated with mdadm --detail --scan), I think my array came back as md127 also.
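
I.e. roughly this, after removing or backing up the old ARRAY lines (a sketch):

    # regenerate the ARRAY lines (including name=...) and rebuild the initramfs
    mdadm --detail --scan >> /etc/mdadm/mdadm.conf
    update-initramfs -u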

Revision history for this message
Rune K. Svendsen (runeks) wrote :

Hi Jools
Thanks for updating the package for Lucid. I have installed it without any issues. I attach a log of the output of dpkg.

Revision history for this message
Cristian Calin (cristi-calin) wrote :

Hi Jools,

Do you think it is safe to build and install your package on maverick as well? I'm trying to ditch dmraid in favor of mdadm 3, which supports Intel Matrix metadata, and so far Ubuntu isn't much help with its outdated package.

Thanks,
Cristian

Revision history for this message
Jools Wills (jools) wrote :

Funnily enough I just updated a machine to maverick with it to test, and it worked fine. Btw, I put it on a PPA, https://launchpad.net/~jools/+archive/mdadm, if you are interested (apt-add-repository ppa:jools/mdadm). Note the version is slightly different from the one I had before, to make it more correct with the version naming rules.
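
That is (a sketch):

    sudo apt-add-repository ppa:jools/mdadm
    sudo apt-get update
    sudo apt-get install mdadm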

Revision history for this message
Cristian Calin (cristi-calin) wrote :

Thanks, I'll try it in a test system to see if I can migrate from dmraid to mdadm before I break my main system. I tried the same scenario with debian and it failed to initialize the matrix. After some tinkering I got it to assemble the matrix but when trying to mount the volume I ended up with a nice kernel panic in md.c. Hope the ubuntu kernel survives this one. I'll get back with a report.

Revision history for this message
Alf Gaida (agaida) wrote :

I've used the mdadm packages for i386/amd64 directly from debian/squeeze without problems in 10.04 and 10.10. I think it's time for Canonical to move. But this is not a problem for me anymore: I changed my distribution to debian/squeeze after that.

Revision history for this message
causticsoda (glenn-thomasfamily) wrote :

Is there much chance of an updated version of mdadm for 11.04? I am going to want to migrate my RAID5 to RAID6 soon, and I am thinking of just switching to Debian, as I am not technically competent enough to install a newer version into Ubuntu myself, and I would rather use a supported version.

Revision history for this message
Cristian Calin (cristi-calin) wrote :

I'm sorry to report that I experienced the same kernel panic with the ubuntu kernel, both generic and server flavors. Guess I'm stuck with dmraid for now ... or just move to SuSE (which works with mdadm 3.0.3).

Revision history for this message
RnSC (webclark) wrote :

Jools, I think I'm about to take the plunge, but would like to ask a few questions first. I have been reading all of the mdadm wiki docs, manpages, scripts, rules, etc. until I am just beginning to *think* that I understand the problem somewhat, and am coming back around to the fact that you have done all of this. This is making me more comfortable with your package.

What versions is your PPA good for? Is this still built on the karmic or is it built on the lucid package now? (I want to run 10.04 until next LTS).

It seems like virtually all of the complexity is in init.d scripts and udev rules to have things happen automatically. If you were willing to just do an assemble manually and mount, or do it in a very few lines of script at startup, this could be trivially simple IF (big IF) you are not booting off of it. Do I have the right idea?
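
For instance, something like this (assuming the array is not the root/boot device; device names and mount point are made up):

    mdadm --assemble /dev/md0 /dev/sdb1 /dev/sdc1
    mount /dev/md0 /srv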

Further, it seems like the complexity for booting off of root is really not very complex, the difficulty is knowing where the few tentacles are that need to be twiddled in a fairly trivial way. Again, is this the right idea?

If you have not changed any of the 3.1.4 C source, just tracked down the script and initramfs stuff and fixed it, then I would conclude that there should be very little risk in using your package: that you have done what needs to be done and we can rely on it. Any residual problems would be in the little scripts and references automating things, not in the fundamental md/mdadm stuff, and those could be fixed manually with a few mdadm commands if I knew what I was doing better.

Would you please comment on all of this? I think we, thanks to you, may be on the verge of getting where we want to go or are there already.

Thanks. And even more, thanks for your work!

Revision history for this message
Jools Wills (jools) wrote :

>What versions is your PPA good for?

It is for lucid. It will also work on maverick. Don't use it on Karmic.

>It seems like virtually all of the complexity is in init.d scripts and udev rules to have things happen automatically. If you were willing to just do an assemble manually and mount, or in a very few line script at startup, this could be trivially simple IF (big IF) you are not booting off of it. Do I have the right idea?

the complexity is mostly down to handling hotplugging events correctly during bootup and so on.

>If you have not changed any of the 3.1.4 C source, just tracked down the script and initfs stuff and fixed it,

Originally I took the karmic Ubuntu package and applied the Ubuntu diffs over the 3.1.2 source, fixing up anything that didn't apply cleanly. I then fixed a couple of issues regarding handling of the map file (adding some additional script and some minor .c modifications), and some other things. I then upgraded again to 3.1.4, applied the new patch from Ubuntu lucid, and removed all my map handling stuff, as it was no longer needed due to the new location for storing the map file. So now it's basically the same as the lucid/maverick package, except it's up to date, with a working map file that makes it from the initramfs stage to the root stage, as well as a fix for the MAKEDEV install error. It's only missing one patch they put in very recently for copying mdadm.conf, which was "untested" and which I thought I might want to test before I add it.

I was running my package on lucid, and now I'm running it on maverick. Everything is working fine.

Revision history for this message
raas (andras-horvath-gmail) wrote :

On Mon, Oct 18, 2010 at 4:40 AM, Jools Wills <email address hidden> wrote:
>
> the complexity is mostly down to handling hotplugging events correctly
> during bootup and so on.

Indeed. I have a box with 48 disks (on 6 controllers) in a number of
MD arrays and stock 10.04 can't assemble them during bootup, although
it manages just fine with hardware having only a few drives. :(

Andras

Revision history for this message
RnSC (webclark) wrote :

Jools,

I am trying to sort this out, and another similar thread came to life today: bug #603582. It looks like you have both done the same work. Are you aware of his work? What is the difference? Is this duplicated effort?

It seems like it would be better for our cause (and admittedly for any pre-release users like me) if the knowledge of both of you were pooled to form a single version for people to use in the short term, and to be rolled into a future Ubuntu release in the long term.

Thanks again.

--Ray

Revision history for this message
Michael Vogt (mvo) wrote :

I'm testing the changes now and will upload after that. It would be great if we could start a conversation with Debian about whether they can take some of our patches, as it seems to be a pretty large delta, and e.g. the degraded-boot support is something that they probably want as well.

Revision history for this message
fermulator (fermulator) wrote :

I'm re-opening this defect. It was marked as a duplicate of bug #603582; that bug was resolved, but the fix was not back-ported to 10.04 (lucid!). Still running 10.04.4 LTS.

Revision history for this message
fermulator (fermulator) wrote :

fermulator@fermmy-server:~$ cat /etc/issue
Ubuntu 10.04.4 LTS \n \l
fermulator@fermmy-server:~$ uname -a
Linux fermmy-server 2.6.32-49-generic-pae #111-Ubuntu SMP Thu Jun 20 21:44:04 UTC 2013 i686 GNU/Linux
fermulator@fermmy-server:~$ mdadm --version
mdadm - v2.6.7.1 - 15th October 2008

Revision history for this message
Dimitri John Ledkov (xnox) wrote :

You can request backports at https://help.ubuntu.com/community/UbuntuBackports

Unfortunately such a major package upgrade does not qualify for a Stable Release Update.

Please do not reopen this bug, but instead follow the Backports request procedure as linked above.

Changed in mdadm (Ubuntu Lucid):
status: New → Won't Fix
assignee: nobody → Dmitrijs Ledkovs (xnox)
Changed in mdadm (Ubuntu):
status: Confirmed → Fix Released
assignee: nobody → Dmitrijs Ledkovs (xnox)