Please merge mdadm 3.2.3-2 from Debian unstable (main) (multiple regressions)

Bug #920324 reported by Timo Aaltonen on 2012-01-23
This bug affects 5 people
Affects          Status        Importance  Assigned to
mdadm (Debian)   Fix Released  Unknown
mdadm (Ubuntu)   Fix Released  High        Clint Byrum

Bug Description

The list of fixes in the upgrade of mdadm from 3.2.2 to 3.2.3 shows that several severe regressions were fixed.

mdadm: re-add fails (regression in mdadm-3.2.2-1ubuntu1)
http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=651880

mdadm: segmentation fault when converting raid10 to raid0
http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=645563

mdadm-raid does not assemble mds on multipath devices
http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=641584

This ticket was originally about only one of them:

/etc/cron.daily/mdadm fails if mdadm monitor daemon is already running
http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=641886

Reproducing:

1. Install Ubuntu 12.04 with md raid.

My system has:

$ cat /proc/mdstat
Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
md1 : active raid1 sdb2[1] sda2[0]
      522048 blocks [2/2] [UU]

md0 : active raid1 sdb1[1] sda1[0]
      10490304 blocks [2/2] [UU]

md2 : active raid1 sdc1[0] sdd1[1]
      805305143 blocks super 1.2 [2/2] [UU]

unused devices: <none>

2. Run this mdadm command and observe the unexpected result:

# mdadm --monitor --scan --oneshot
mdadm: Only one autorebuild process allowed in scan mode, aborting

3. Cron will also send mail:

/etc/cron.daily/mdadm:
mdadm: Only one autorebuild process allowed in scan mode, aborting
run-parts: /etc/cron.daily/mdadm exited with return code 1

This happens because installing the mdadm package creates this cron job file:

$ dpkg -S /etc/cron.daily/mdadm
mdadm: /etc/cron.daily/mdadm

4. Stop any running mdadm processes (for testing only; turning off mdadm monitoring permanently is not acceptable.)

# service mdadm stop

5. Run the command again and observe the expected result:
# mdadm --monitor --scan --oneshot
(no output)
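The failure in steps 2 and 3 comes from the cron job unconditionally starting a second monitor in --scan mode while the daemon already holds that role. A guard along these lines would avoid the non-zero exit (an illustrative sketch, not the actual fix shipped in 3.2.3-2; the `monitor_running` helper and the demonstration are made up for this example):

```shell
#!/bin/sh
# Illustrative guard for a cron job that wants to run
# "mdadm --monitor --scan --oneshot": skip the scan when the monitor
# daemon is already running, instead of exiting non-zero.
# (Sketch only; not the actual change from Debian 3.2.3-2.)

# True if the pid file exists and names a live process.
monitor_running() {
    [ -s "$1" ] && kill -0 "$(cat "$1")" 2>/dev/null
}

# Demonstration with a temporary pid file standing in for
# /var/run/mdadm/monitor.pid:
pidfile=$(mktemp)
echo $$ > "$pidfile"            # pretend our own pid is the daemon's
if monitor_running "$pidfile"; then
    echo "monitor already running; skipping --oneshot scan"
else
    echo "no monitor running; safe to run: mdadm --monitor --scan --oneshot"
fi
rm -f "$pidfile"
```

Run as-is, the sketch takes the "monitor already running" branch, since its own pid is always live.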

This issue was fixed in Debian; please import this version from Debian:

Format: 1.8
Date: Wed, 18 Jan 2012 22:33:01 +0400
Source: mdadm
Binary: mdadm mdadm-udeb
Architecture: source i386
Version: 3.2.3-2
Distribution: unstable
Urgency: low
Maintainer: Debian mdadm maintainers <email address hidden>
Changed-By: Michael Tokarev <email address hidden>
Description:
 mdadm - tool to administer Linux MD arrays (software RAID)
 mdadm-udeb - tool to administer Linux MD arrays (software RAID) (udeb)
Closes: 607375 628667 633880 637068 641584 641886 641972 645563 650630 651737 651880 652547 655212
Changes:
 mdadm (3.2.3-2) unstable; urgency=low
 .
   [ Michael Tokarev ]
   * new upstream bugfix/stable version, with lots of fixes all over.
     Closes: #641886, #628667, #645563, #651880, #607375, #633880
   * update Neil's email (Closes: #650630)
   * update mdadd.sh to version 1.52 (Closes: #655212)
   * fixed a typo (RAID6 vs RAID10) in FAQ (Closes: #637068)
   * declare ordering dependency for multipath-tools-boot in
     mdadm-raid init script (Closes: #641584)
     While at it, remove mention of devfsd
   * added Slovak (sk.po) po-debconf translation from Slavko <email address hidden>
     (Closes: #641972)
   * set nice value of the check/resync thread too, together with I/O
     scheduling class, based on patch by Sergey B Kirpichev (Closes: #652547)
   * small changes for debian/checkarray
   * (internal) move files from contrib/* topgit branches into debian directory
   * remove dh_testroot from clean target
   * add myself to uploaders
 .
   [ Peter Eisentraut ]
   * Added support for "status" action to mdadm init script (Closes: #651737)


Changed in mdadm (Debian):
status: Unknown → Fix Released
Launchpad Janitor (janitor) wrote :

Status changed to 'Confirmed' because the bug affects multiple users.

Changed in mdadm (Ubuntu):
status: New → Confirmed
summary: - noise from /etc/cron.daily/mdadm
+ /etc/cron.daily/mdadm fails if mdadm monitor daemon is already running
description: updated
tags: added: testcase
tags: added: precise
description: updated
summary: - /etc/cron.daily/mdadm fails if mdadm monitor daemon is already running
+ Please sync mdam 3.2.3-2 from Debian unstable (main) (was:
+ /etc/cron.daily/mdadm fails if mdadm monitor daemon is already running)

Title was updated based on advice in:
https://wiki.ubuntu.com/UbuntuDevelopment/NewPackages

summary: - Please sync mdam 3.2.3-2 from Debian unstable (main) (was:
+ Please sync mdadm 3.2.3-2 from Debian unstable (main) (was:
/etc/cron.daily/mdadm fails if mdadm monitor daemon is already running)
description: updated
summary: - Please sync mdadm 3.2.3-2 from Debian unstable (main) (was:
- /etc/cron.daily/mdadm fails if mdadm monitor daemon is already running)
+ Please sync mdadm 3.2.3-2 from Debian unstable (main) (multiple
+ regressions)
description: updated
tags: added: regression-update
Timo Aaltonen (tjaalton) on 2012-02-08
summary: - Please sync mdadm 3.2.3-2 from Debian unstable (main) (multiple
+ Please merge mdadm 3.2.3-2 from Debian unstable (main) (multiple
regressions)
Changed in mdadm (Ubuntu):
importance: Undecided → High

I now have a server running 3.2.3-2 from my PPA.

$ lsb_release -ds
Ubuntu precise (development branch)
$ uname -srvi
Linux 3.2.0-14-generic #23-Ubuntu SMP Fri Feb 3 23:17:59 UTC 2012 x86_64

$ ps ax | grep mdadm | grep -v grep
15618 ? Ss 0:00 /sbin/mdadm --monitor --pid-file /var/run/mdadm/monitor.pid --daemonise --scan --syslog

$ sudo mdadm --monitor --scan --oneshot
$ echo $?
0

Successful reboot of a second system; this one has an ext2 /boot on RAID-1 md0 and LVM on RAID-0 md1:

$ cat /proc/mdstat
Personalities : [raid0] [raid1]
md1 : active raid0 sda2[0] sdb2[1]
      1953130496 blocks super 1.2 512k chunks

md0 : active raid1 sda1[0] sdb1[1]
      194548 blocks super 1.2 [2/2] [UU]

[ 3.260937] mdadm: sending ioctl 800c0910 to a partition!
[ 3.260990] mdadm: sending ioctl 800c0910 to a partition!
[ 3.261055] mdadm: sending ioctl 1261 to a partition!
[ 3.261107] mdadm: sending ioctl 1261 to a partition!
[ 3.261720] mdadm: sending ioctl 800c0910 to a partition!
[ 3.261766] mdadm: sending ioctl 800c0910 to a partition!
[ 3.266310] md: md0 stopped.
[ 3.293294] md: bind<sdb1>
[ 3.293532] md: bind<sda1>
[ 3.294449] bio: create slab <bio-1> at 1
[ 3.294569] md/raid1:md0: active with 2 out of 2 mirrors
[ 3.294628] md0: detected capacity change from 0 to 199217152
[ 3.295273] md0: unknown partition table
[ 3.299025] md: md1 stopped.
[ 3.304593] md: bind<sdb2>
[ 3.304810] md: bind<sda2>
[ 3.305961] md/raid0:md1: md_size is 3906260992 sectors.
[ 3.306014] md: RAID0 configuration for md1 - 1 zone
[ 3.306067] md: zone0=[sda2/sdb2]
[ 3.306232] zone-offset= 0KB, device-offset= 0KB, size=1953130496KB
[ 3.306300]
[ 3.306356] md1: detected capacity change from 0 to 2000005627904
[ 3.334425] md1: unknown partition table
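The listings above show both arrays with all members present. For reference, a degraded array shows up in /proc/mdstat with a '_' in the status brackets (e.g. [U_] instead of [UU]); a minimal scripted check for that could look like this (an illustrative helper, not something from this report):

```shell
#!/bin/sh
# Illustrative check: an md array is degraded when its status brackets
# contain '_' (a missing member), e.g. [U_] instead of [UU].
degraded() {
    grep -Eq '\[[U_]*_[U_]*\]' "$1"
}

# Healthy sample modelled on the listing above:
sample=$(mktemp)
cat > "$sample" <<'EOF'
md1 : active raid0 sda2[0] sdb2[1]
      1953130496 blocks super 1.2 512k chunks
md0 : active raid1 sda1[0] sdb1[1]
      194548 blocks super 1.2 [2/2] [UU]
EOF
if degraded "$sample"; then
    echo "degraded array found"
else
    echo "all arrays healthy"
fi
rm -f "$sample"
```

Against the healthy sample this prints "all arrays healthy"; the [2/2] count field never matches because the pattern requires an underscore inside the brackets.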

Changed in mdadm (Ubuntu):
status: Confirmed → In Progress
assignee: nobody → Clint Byrum (clint-fewbar)
Launchpad Janitor (janitor) wrote :

This bug was fixed in the package mdadm - 3.2.3-2ubuntu1

---------------
mdadm (3.2.3-2ubuntu1) precise; urgency=low

  * Merge from Debian testing. (LP: #920324) Remaining changes:
    - Call checks in local-premount to avoid race condition with udev
      and opening a degraded array.
    - d/initramfs/mdadm-functions: Record in /run when boot-degraded
      question has been asked so that it is only asked once
    - pass --test to mdadm to enable result codes for degraded arrays.
    - Build udeb with -O2 on ppc64, working around a link error.
    - debian/control: we need udev and util-linux in the right version. We
      also remove the build dependency from quilt and docbook-to-man as both
      are not used in Ubuntus mdadm.
    - debian/initramfs/hook: kept the Ubuntus version for handling the absence
      of active raid arrays in <initramfs>/etc/mdadm/mdadm.conf
    - debian/initramfs/script.local-top.DEBIAN, debian/mdadm-startall,
      debian/mdadm.raid.DEBIAN: removed. udev does its job now instead.
    - debian/mdadm-startall.sgml, debian/mdadm-startall.8: documentation of
      unused startall script
    - debian/mdadm.config, debian/mdadm.postinst - let udev do the handling
      instead. Resolved merge conflict by keeping Ubuntu's version.
    - debian/mdadm.postinst, debian/mdadm.config, initramfs/init-premount:
      boot-degraded enablement; maintain udev starting of RAID devices;
      init-premount hook script for the initramfs, to provide information at
      boot
    - debian/mkconf.in is the older mkconf. Kept the Ubuntu version.
    - debian/rules: Kept Ubuntus version for installing apport hooks, not
      installing un-used startall script and for adding a udev rule
      corresponding to mdadm.
    - debian/install-rc, check.d/_numbers, check.d/root_on_raid: Ubuntu partman
      installer changes
    - debian/presubj: Dropped this unused bug reporting file. Instead use
      source_mdadm.py act as an apport hook for bug handling.
    - rename debian/mdadm.vol_id.udev to debian/mdadm.mdadm-blkid.udev so that
      the rules file ends up with a more reasonable name
    - d/p/debian-changes-3.1.4-1+8efb9d1ubuntu4: mdadm udev rule
      incrementally adds mdadm member when detected. Starting such an
      array in degraded mode is possible by mdadm -IRs. Using mdadm
      -ARs without stopping the array first does nothing when no
      mdarray-unassociated device is available. Using mdadm -IRs to
      start a previously partially assembled array through incremental
      mode. Keeping the mdadm -ARs for assembling arrays which were for
      some reason not assembled through incremental mode (i.e through
      mdadm's udev rule).
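The incremental assembly described in that last change is driven by mdadm's udev rule; such a rule is along these lines (an illustrative fragment, not the exact rule shipped in the package):

```
# Hand block devices carrying md raid member metadata to mdadm's
# incremental mode as udev discovers them.
SUBSYSTEM=="block", ACTION=="add|change", ENV{ID_FS_TYPE}=="linux_raid_member", \
    RUN+="/sbin/mdadm --incremental $env{DEVNAME}"
```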

mdadm (3.2.3-2) unstable; urgency=low

  [ Michael Tokarev ]
  * new upstream bugfix/stable version, with lots of fixes all over.
    Closes: #641886, #628667, #645563, #651880, #607375, #633880
  * update Neil's email (Closes: #650630)
  * update mdadd.sh to version 1.52 (Closes: #655212)
  * fixed a typo (RAID6 vs RAID10) in FAQ (Closes: #637068)
  * declare ordering dependency for multipath-tools-boot in
    mdadm-raid init script (Closes: #641584)
    While at it, remove mention of de...


Changed in mdadm (Ubuntu):
status: In Progress → Fix Released
John Center (john-center) wrote :

I just installed this package & it appears mdmon is missing. -John

Eh? Speak up boy, no one's listening to this old bug report, sonny.


Clint Byrum (clint-fewbar) wrote :

John, you're right, mdmon is not shipped. I don't think it ever was actually. So I opened this bug report:

https://bugs.launchpad.net/ubuntu/+source/mdadm/+bug/957494

Please track it there.

OMG! Service with a :)

John Center (john-center) wrote :

mdmon is included with the Oneiric mdadm package. I'm testing Precise beta1 via the live CD and saw my array coming up read-only after installing it. It took me a while to figure out what was wrong. (Google is your friend, sometimes... ;-))

Sonny? I haven't been called that since the '60's. :-)
