Comment 9 for bug 917520

paul fox (pgf-launchpad) wrote :

i can't tell from the initial description whether there was a truly degraded array on the original reporter's system. perhaps not, in which case the fixes described in #942106 would help.

but since the topic of this bug very precisely describes my current issue, and my issue is definitely not a duplicate, i'm commenting here.

i'm running oneiric.

my root fs is _not_ a RAID disk.

i have a single mirrored RAID pair, which is _not_ mentioned in mdadm.conf.

this pair is _always_ degraded, intentionally. so it's clearly important that my system be able to boot with it in a degraded state.
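
for concreteness, a pair like this is typically created as a mirror with one member deliberately marked "missing" (device names below are only examples, not my actual disks):

  mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 missing

mdadm --detail /dev/md0 then reports the array as "clean, degraded", which is the state mine stays in almost all of the time.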

[rationale, for the curious: the disk stores my BackupPC backup pool, which is extremely difficult (impossible, really) to copy for offsite purposes with standard rsync/cp tools, due to its huge number of hardlinks. it could be copied with dd, but that would mean taking the disk offline for long periods. instead, i do my backups to one half of a degraded RAID. when i want a copy for offsite storage i add the missing disk and let it sync (all of which happens with the pair online); then, the next day when syncing is done, i briefly shut down BackupPC, unmount the RAID pair, manually fail the offsite disk, remove it, and remount the now-degraded pair.]
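
in command terms the offsite cycle looks roughly like this (device name and mount point are illustrative -- /dev/sdc1 for the offsite disk, /var/lib/backuppc for the pool):

  mdadm /dev/md0 --add /dev/sdc1     # attach the offsite disk; resync starts, pool stays online
  cat /proc/mdstat                   # check until the resync has finished (usually the next day)
  service backuppc stop              # brief downtime starts here
  umount /var/lib/backuppc
  mdadm /dev/md0 --fail /dev/sdc1    # fail the offsite disk out of the mirror...
  mdadm /dev/md0 --remove /dev/sdc1  # ...and remove it, leaving the pair degraded again
  mount /var/lib/backuppc
  service backuppc start

the offsite disk then goes into storage, and the array is back to its usual single-member state.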

clearly, setting BOOT_DEGRADED will (i hope) fix my problem -- i only have the one RAID, and i maintain it in a somewhat unusual way.
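
for reference, as far as i can tell the global switch lives in the initramfs config on ubuntu -- something like this (double-check the path on your release):

  # /etc/initramfs-tools/conf.d/mdadm
  BOOT_DEGRADED=true

followed by "update-initramfs -u". the same effect can apparently be had per-boot with bootdegraded=true on the kernel command line.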

but i've been thinking of using RAID for my root disk soon. if i do, i may well not want to set BOOT_DEGRADED. so i'll be stuck.

here's the critical part of the original description: "The complete solution will be to provide BOOT_DEGRADED on a per array basis." and, as a corollary, booting shouldn't be prevented when degradation is detected in an array that isn't even configured in mdadm.conf.
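
to make "per array" concrete: today an mdadm.conf ARRAY line looks something like

  ARRAY /dev/md0 UUID=01234567:89abcdef:01234567:89abcdef

(UUID made up). what i'd want is something along the lines of a per-ARRAY flag -- purely hypothetical syntax, not something mdadm.conf actually supports today:

  ARRAY /dev/md0 UUID=01234567:89abcdef:01234567:89abcdef boot_degraded=yes

so a future root array could stay strict while my backup pair is allowed to come up degraded, and arrays that aren't listed at all wouldn't block the boot in the first place.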