mdadm doesn't assemble imsm raids during normal boot

Bug #1318351 reported by Martin Stjernholm
This bug affects 17 people
Affects: mdadm (Ubuntu)
Status: Fix Released
Importance: Undecided
Assigned to: Dimitri John Ledkov

Bug Description

I have a non-root Intel "fakeraid" volume which is not getting assembled automatically at startup. I can assemble it just fine with "sudo mdadm --assemble --scan".

While trying to debug this, I noticed that the volume is assembled when I boot in debug (aka recovery) mode. It turns out that nomdmonisw and nomdmonddf are passed to the kernel during normal boot only, due to /etc/default/grub.d/dmraid2mdadm.cfg containing:

GRUB_CMDLINE_LINUX_DEFAULT="$GRUB_CMDLINE_LINUX_DEFAULT nomdmonddf nomdmonisw"

Commenting out that line fixes the problem.
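A minimal sketch of that workaround (the file path is as reported above; note that a later package update may restore the file):

# comment out the override line, then regenerate the GRUB config
sudo sed -i 's/^GRUB_CMDLINE_LINUX_DEFAULT/#&/' /etc/default/grub.d/dmraid2mdadm.cfg
sudo update-grub
sudo reboot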

I've gathered that this is part of an effort to migrate from dmraid to mdadm for fakeraids. I don't understand how it's supposed to work, but in my case dmraid is not installed, and this setting just interferes. (The background is that I recently added a 3 TB RAID1 and was therefore forced to abandon dmraid in favor of mdadm, since the former doesn't handle volumes larger than ~2 TB. So I dropped dmraid and set up mdadm from scratch for this new raid.)

Also, I believe it's a bug that these kernel arguments are different between normal and recovery boot.

My mdadm is 3.2.5-5ubuntu4 in a fresh trusty install.

Revision history for this message
Dimitri John Ledkov (xnox) wrote : Re: [Bug 1318351] [NEW] mdadm doesn't assemble imsm raids during normal boot

On 11 May 2014 15:12, Martin Stjernholm <email address hidden> wrote:
> [quotes the bug description above in full]

Correct. I think I should be able to adjust the packaging so that no
changes are required when dmraid is not installed.
At the moment the automatic conversion to mdadm is not performed,
because /etc/fstab is not automatically updated to change device names
from the dmraid naming scheme to the mdadm naming scheme.
Once that is in place, I'll drop the dmraid2mdadm.cfg file.
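For illustration, the kind of fstab rename the conversion would have to perform (device names here are hypothetical examples, not from this system):

# dmraid naming scheme, as it might appear in /etc/fstab:
/dev/mapper/isw_beefcafe_Volume0p1  /data  ext4  defaults  0 2
# equivalent entry under the mdadm naming scheme:
/dev/md126p1                        /data  ext4  defaults  0 2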

--
Regards,

Dimitri.

Changed in mdadm (Ubuntu):
status: New → Confirmed
assignee: nobody → Dimitri John Ledkov (xnox)
Revision history for this message
Martin Stjernholm (msub) wrote :

That'd be great, but if it is still some time off, then I suggest making this more visible, since as it stands the mdadm package doesn't work that well for non-dmraid fakeraid users.

I believe a decent way would be to add a screen to "dpkg-reconfigure mdadm" explaining this and offering to remove dmraid2mdadm.cfg. That would at least have saved me a lot of time, since that was the first thing I tried in order to make it pick up my raid config.

Revision history for this message
Anders Sjölund (8-anders) wrote :

This bug also shows up when you do a fresh install of 14.04 onto an (Intel) fakeraid mirror array.
The installer finds the imsm signature and asks if you want to install on an mdadm array.
But the installer picks up the "nomdmonddf nomdmonisw" grub config, and the server then boots using dmraid.

What I then need to do is comment out the line in /etc/default/grub.d/dmraid2mdadm.cfg, run dpkg-reconfigure mdadm and reboot.
This time mdadm is configured correctly, and the /dev/md126 device is used instead of /dev/mapper/isw...
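For anyone checking which stack ended up assembling the array after a reboot, a quick sketch (device names vary per system; /dev/md126 matches what I see here):

cat /proc/mdstat          # mdadm path: the array should be listed here
ls /dev/mapper/           # dmraid path: isw_* device nodes show up here instead
sudo dmraid -s            # dmraid's view of the array, if dmraid is installed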

I hope this will be addressed in future releases, and it would not hurt if there is an official note on this.

Revision history for this message
andreas.tr (andreas-tropschug) wrote :

This line reappeared in an update to Trusty. This is over my head; I don't know enough about this stuff to tell whether it's a regression or an actual improvement. I declined the change to my grub config to preserve the status quo, which uses mdadm RAID5. Should I accept the change? Is this "bug" back?

Revision history for this message
WinEunuchs2Unix (ricklee518) wrote :

Comment #3 was super helpful after I ran dmraid -rE and the most recent kernel version wouldn't boot anymore.

Revision history for this message
Mathijs Brands (mjbrands) wrote :

I ran into similar issues, with the added 'bonus' of the RAID5 giving bad performance (running in degraded mode) and no protection against disk failure (again, degraded mode).

When installing 14.04 LTS Server on an HP Z620 workstation (Intel C602 chipset), the installer detects the RAID5 array (3 disks, freshly created in the Intel Matrix firmware), assembles it with mdadm and starts syncing the disks (since this isn't done when creating the array in the firmware and is left up to the operating system).

When the installation has finished and the machine reboots, syncing has not yet completed and would be continued after the reboot by mdadm (if it were used). Instead, because nomdmonddf and nomdmonisw are set in the default GRUB options, dmraid gets used instead of mdadm, and it looks like the syncing does not resume. 'dmraid -s' will show status ok for the array (even though it has not completely synced). If I then shut the system down and unplug a disk, the Intel firmware shows Failed instead of Degraded (which it should do if the disks were synced and parity complete), and the array is no longer bootable.

My conclusion is that the sync was never completed. I have tested a similar scenario using mdadm on CentOS 7, and there the array did go into degraded mode and was still bootable when one disk was removed.

I'll try the suggestion in post #3 and see if my array then properly resyncs and can tolerate losing a single disk (in a 3-disk array).
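For reference, a sketch of watching the resync from the mdadm side (assuming the array appears as /dev/md126, as elsewhere in this thread):

watch cat /proc/mdstat                            # shows resync progress as a percentage
sudo mdadm --detail /dev/md126 | grep -i state    # e.g. "clean" vs "active, resyncing"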

Revision history for this message
Gilles DOFFE (gdoffe) wrote :

I can confirm post #3 on Ubuntu 14.04.1 server, fresh install.

I had to remove /etc/default/grub.d/dmraid2mdadm.cfg and generate /etc/mdadm/mdadm.conf before restarting.
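A sketch of those two steps (paths as used earlier in this thread; check mdadm.conf for duplicate ARRAY lines afterwards):

sudo rm /etc/default/grub.d/dmraid2mdadm.cfg
sudo update-grub                                                  # drops nomdmonddf/nomdmonisw from the cmdline
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf    # append ARRAY lines
sudo update-initramfs -u                                          # so the initramfs picks up the new config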

Revision history for this message
Hever (hev-w) wrote :

Same problem.

Fresh install of Ubuntu 14.04.3 server on a Dell T20 machine (Intel Rapid Storage Technology/RST) with two disks in RAID1.

Had to remove dmraid2mdadm.cfg and run update-grub.

Revision history for this message
tehownt (tehownt) wrote :

Same problem with a fresh 14.10 Desktop install on a custom-built PC (GA-Z77X-UD5H): mdadm.conf looks good, mdadm --assemble --scan works perfectly once booted, but the raid drive is lost upon reboot.

Commented that line out and it now works...

Revision history for this message
Ted (tedm) wrote :

This is not a bug; please check the following discussion:

https://lists.ubuntu.com/archives/ubuntu-devel/2014-February/038095.html

"...This cycle a few things were done in mdadm & dmraid packages to
prepare for transition from dmraid to mdadm for a few formats that
mdadm supports.

At the moment in trusty, mdadm has full support to assemble Intel
Matrix Raid and DDF fakeraid arrays, instead of dmraid. At the moment,
however, the mdadm codepath is disabled via a kernel cmdline option..."

Explanation of how to fix it is in the thread.

Revision history for this message
Dimitri John Ledkov (xnox) wrote : Re: [Bug 1318351] Re: mdadm doesn't assemble imsm raids during normal boot

Hello,

On 19 April 2015 at 16:54, Ted <email address hidden> wrote:
> [quotes the previous comment in full]

However, we ought to start using mdadm alone, preferably early in the
15.10 cycle, so that it is stable by 16.04 LTS.

--
Regards,

Dimitri.

Revision history for this message
Eren (erent) wrote :

Hello,

This bug persists in a fresh Ubuntu 14.04.2 Server install (I confirm #3). I have a FakeRAID controller in one of our Supermicro servers. Installation proceeds normally, but the first boot does not come up correctly. I see that "nomdmonddf nomdmonisw" is appended to the boot parameters. For the first boot, I needed to remove those parameters via the GRUB menu before booting, boot the box, and then comment out the line in this file:

/etc/default/grub.d/dmraid2mdadm.cfg

When those parameters are not included, it runs perfectly. I didn't need to change mdadm.conf or anything else; I just needed to remove those boot parameters.
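For the one-time edit, the standard GRUB menu procedure works (a sketch; these are the default GRUB2 key bindings):

# at the GRUB menu, highlight the boot entry and press 'e'
# on the line starting with "linux", delete "nomdmonddf nomdmonisw"
# press Ctrl+X (or F10) to boot with the edited parameters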

Revision history for this message
Gabriel A. Devenyi (gadevenyi) wrote :

Just ran into this on a fresh server install. I couldn't figure out why Ubuntu asked me about mdadm during install and then proceeded to disable it on first boot!

Revision history for this message
Hans-Richard Grube (neycroir) wrote :

Just happened to me too. The fix from the OP is a bit different in 15.04 (it should be done like this: http://askubuntu.com/a/665704/443271).

Revision history for this message
e.s. kohen (e.s.k.) wrote :

Same issue. Workaround:
---------------------------------------
dmraid installs /etc/default/grub.d/dmraid2mdadm.cfg

Move it to a .bak file, or remove it.
Or comment out: GRUB_CMDLINE_LINUX_DEFAULT="$GRUB_CMDLINE_LINUX_DEFAULT $DMRAID2MDADM_TOAPPEND"

System:
Ubuntu GNOME 15.10 Beta 2
4.2.0-16-generic #19-Ubuntu SMP Thu Oct 8 15:35:06 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux

Changes:
- Removed: dmraid
- Installed: mdadm - v3.3 - 3rd September 2013
- update-initramfs -u
- update-grub
- systemctl enable mdadm

Observations after boot:
- lsmod | grep md --- returns nothing
- cat /proc/mdstat --- no raid partitions listed

Manual assemble works:
mdadm --assemble --scan
mdadm --examine --scan

Revision history for this message
Dimitri John Ledkov (xnox) wrote :

mdadm (3.3-2ubuntu5) xenial; urgency=medium

  * Drop grub override, and thus use mdadm to assemble IMSM and DDF raids.

 -- Dimitri John Ledkov <email address hidden> Fri, 19 Feb 2016 17:27:12 +0000

Changed in mdadm (Ubuntu):
status: Confirmed → Fix Released
Revision history for this message
Jesse Rebel (darkseid4nk) wrote :

I found that commenting out GRUB_CMDLINE_LINUX_DEFAULT="$GRUB_CMDLINE_LINUX_DEFAULT nomdmonddf nomdmonisw" alone didn't work, nor did that plus update-initramfs.

My fix:
comment out: GRUB_CMDLINE_LINUX_DEFAULT="$GRUB_CMDLINE_LINUX_DEFAULT nomdmonddf nomdmonisw"
and then append "nodmraid" to the kernel command line in /etc/default/grub (see the sketch below).

If you comment out those grub options alone, the OS still wants to use dmraid because it detects an imsm signature.

The other option is #15's answer.
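A sketch of the nodmraid step ("quiet splash" is just the Ubuntu default; keep whatever values are already in your file):

# in /etc/default/grub:
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash nodmraid"

# then regenerate the config:
sudo update-grub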
