degraded NON-root raids never --run on boot

Bug #259145 reported by ceg on 2008-08-18
This bug affects 11 people
Affects            Importance  Assigned to
mdadm (Ubuntu)     Undecided   Unassigned
  Declined for Hardy by Scott James Remnant (Canonical)
  Declined for Lucid by Scott James Remnant (Canonical)
mountall (Ubuntu)  Wishlist    Unassigned
  Declined for Hardy by Scott James Remnant (Canonical)
  Declined for Lucid by Scott James Remnant (Canonical)

Bug Description

  Systems with, say, /home on RAID won't finish booting if the array became degraded during downtime.

An init script like /etc/init.d/mdadm-degrade (or "mountall", since it already provides that kind of watchdog functionality) needs to --run the particular necessary arrays (those marked bootwait in fstab) degraded if they don't come up complete after a timeout.

Because the proper mdadm --incremental mode command is not available (Bug #251646), a workaround needs to be used:

mdadm --remove <incomplete-md-device> <arbitrary-member-device-of-incomplete-array>

mdadm --incremental --run <arbitrary-member-device-of-incomplete-array>

(See the section on "mountall" functionality at https://wiki.ubuntu.com/ReliableRaid about determining the md devices that the devices mentioned in fstab depend on.)
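As an illustration, the two-step workaround above could be wrapped in a small helper. This is only a sketch: the device names are placeholders, and the function merely builds the command lines (dry run) without executing anything.

```python
def degraded_run_commands(md_device, member_device):
    """Build the two mdadm invocations of the workaround: remove one
    member from the incomplete array, then re-add it with
    --incremental --run so mdadm starts the array degraded."""
    return [
        ["mdadm", "--remove", md_device, member_device],
        ["mdadm", "--incremental", "--run", member_device],
    ]

if __name__ == "__main__":
    # Illustrative device names only.
    for argv in degraded_run_commands("/dev/md1", "/dev/sdb2"):
        print(" ".join(argv))
```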

---

Large server RAIDs may take minutes to come up, while regular ones are quick; this can be handled nicely:

      "NOTICE: /dev/mdX didn't get up within the first 10 seconds.

      We continue to wait up to a total of xxx seconds complying to the ATA
      spec before attempting to run the array degraded.
      (You can lower this timeout by setting the rootdelay= parameter.)

      <counter> seconds to go.

      Press escape to stop waiting and to enter a rescue shell.
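The two-stage timeout described in that notice could be modeled roughly as below. This is a sketch only: the escape-to-rescue-shell interaction is left out, and is_complete(), the timeout values, and the sleep hook are assumed stand-ins, not an existing interface.

```python
import time

def wait_for_array(is_complete, initial=10, total=120, poll=1,
                   sleep=time.sleep):
    """Poll is_complete() until True or until `total` seconds elapse.

    Prints the notice once after `initial` seconds, then keeps waiting
    up to `total` seconds. Returns True if the array came up complete
    in time, False if the caller should attempt a degraded --run.
    """
    elapsed = 0
    while elapsed < total:
        if is_complete():
            return True
        if elapsed == initial:
            print("NOTICE: array not up within the first %d seconds; "
                  "waiting up to %d seconds total." % (initial, total))
        sleep(poll)
        elapsed += poll
    return False
```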

ceg (ceg) wrote :

This issue has been separated out from Bug #120375 in order to track it separately.
(Don't mark this as a duplicate, like 4 others before.)

tricky1 (tricky1) wrote :

Bug still present in Alpha 4

Changed in mdadm:
status: New → Confirmed
ceg (ceg) on 2009-11-25
summary: - home array not run when degraded on boot
+ non-root raids fail to run degraded on boot
ceg (ceg) wrote :

The Debian init script has been removed, but no upstart job has been created to start/run the necessary regular (non-rootfs) arrays degraded.

ceg (ceg) on 2010-03-07
description: updated
ceg (ceg) on 2010-03-14
summary: - non-root raids fail to run degraded on boot
+ degraded non-root raids are not run on boot
ceg (ceg) on 2010-03-28
description: updated
description: updated
summary: - degraded non-root raids are not run on boot
+ degraded non-root raids don't appear on boot
ceg (ceg) on 2010-03-28
description: updated
ceg (ceg) on 2010-03-29
summary: - degraded non-root raids don't appear on boot
+ degraded NON-root raids never --run on boot
ceg (ceg) on 2010-03-29
description: updated

Sorry about the spam there, hit the wrong button.

The problem with this approach is that we generally don't *know* that a given filesystem is on a degraded RAID, because the RAID is not activated - so we can't see the filesystem UUID inside it.

mountall already provides the ability to drop to a shell, where the user can run mdadm --run.

Does this not suffice?

Changed in mountall (Ubuntu):
status: New → Invalid
status: Invalid → Triaged
importance: Undecided → Wishlist
status: Triaged → Incomplete
ceg (ceg) wrote :

Raid systems introduce redundancy to be able to keep working even if parts of the system fail.

I think the init.d/mdadm scripts (early/late or similar) that Debian uses to assemble and run degraded arrays on boot have been removed because all arrays are now set up via udev. But we don't have any replacement functionality to run degraded non-root arrays on boot.

If we fail and drop to a recovery console on boot, the system isn't really failure tolerant.

Auto-running *only selected* arrays if they are found degraded on boot probably requires a watchlist:

* For each filesystem mentioned in fstab that depends on an array, the watchlist file needs to describe its dependency tree of RAID devices. The file needs to be (auto)recreated during update-initramfs.

    * initramfs should only watch out for and run rootfs dependencies if necessary.

    * Later in boot, mountall watches for and runs the other (bootwait) filesystems mentioned in the watchlist.
          * Is there a way to nicely auto-update the raid dependency trees of non-rootfs in the watch list upon changes?
                * The file could be updated/validated on every shutdown.

For more context look for MD_COMPLETION_WAIT and "How would you decide what devices are needed?" at https://wiki.ubuntu.com/ReliableRaid
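The watchlist derivation could be sketched as a simple graph walk, under the assumption that the running system exposes each device's underlying components (e.g. via /sys/block/*/slaves). The dict inputs and device names below are illustrative, not an existing interface:

```python
def md_watchlist(fstab_mounts, parents):
    """Collect every md device some fstab filesystem transitively
    sits on.

    fstab_mounts: {mountpoint: block device holding the filesystem}
    parents:      {device: list of underlying devices}, as one could
                  gather from the running system.
    """
    watch = set()
    for dev in fstab_mounts.values():
        stack = [dev]
        while stack:
            d = stack.pop()
            if d.startswith("/dev/md"):
                watch.add(d)
            # Descend into whatever this device is stacked on.
            stack.extend(parents.get(d, []))
    return watch
```

Such a walk also covers filesystems on LVM over RAID, since the traversal follows the whole device stack, not just the top layer.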

You said:
  * For each filesystem mentioned in fstab that depends on an array

This is the problem: in many cases fstab only gives us a filesystem UUID or LABEL, so we simply *DO NOT KNOW* that it's going to turn out to be on a RAID array.

ceg (ceg) wrote :

I understand the md device dependencies are not available in fstab, I am not sure though what should prevent going through the filesystems and identifying their dependencies in the running system?

In the suggestion, fstab is merely used as a starting point: a list of filesystems that get set up on boot. Prior to rebooting, a list/tree is derived from it that contains the md devices required to boot and to set up all filesystems mentioned in fstab.

On Mon, 2010-04-19 at 09:39 +0000, ceg wrote:

> I understand the md device dependencies are not available in fstab, I am
> not sure though what should prevent going through the filesystems and
> identifying their dependencies in the running system?
>
Because they might change after a reboot?

We *explicitly* support people doing that.

Scott
--
Scott James Remnant
<email address hidden>

ceg (ceg) wrote :

> Because they might change after a reboot?
> We *explicitly* support people doing that.

Dropping to a rescue shell is the support if a new raid set up with another (rescue) system comes up degraded upon reboot. But I don't see why supporting that should prevent a proper raid setup. One that will --run unchanged arrays that come up degraded on reboot and are needed for a clean boot.

On Tue, 2010-04-20 at 16:22 +0000, ceg wrote:

> > Because they might change after a reboot?
> > We *explicitly* support people doing that.
>
> Dropping to a rescue shell is the support if a new raid set up with
> another (rescue) system comes up degraded upon reboot. But I don't see
> why supporting that should prevent a proper raid setup. One that will
> --run unchanged arrays that come up degraded on reboot and are needed
> for a clean boot.
>
Patches Welcome.

Scott
--
Scott James Remnant
<email address hidden>

Changed in mountall (Ubuntu):
status: Incomplete → Triaged
Steve Langasek (vorlon) wrote :

This is an mdadm issue, not a mountall one.

Changed in mountall (Ubuntu):
status: Triaged → Invalid

We don't know which RAIDs are required for the rootfs, but by deduction: once the rootfs has appeared, we know which RAIDs were not required.

So, for each incomplete non-rootfs RAID, we could do:
mdadm --remove <incomplete-md-device> <arbitrary-member-device-of-incomplete-array>
mdadm --incremental --run <arbitrary-member-device-of-incomplete-array>

But this needs re-testing with recent RAID packages.
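The deduction step itself is plain set arithmetic; a minimal sketch, assuming both device lists are available once the rootfs has mounted (names illustrative):

```python
def degraded_run_candidates(incomplete_arrays, rootfs_arrays):
    """Arrays still incomplete after boot that the rootfs does not
    depend on; each is a candidate for the remove / --incremental
    --run workaround described above."""
    return sorted(set(incomplete_arrays) - set(rootfs_arrays))
```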
