installer's boot-degraded debconf answer not written to installed disk

Bug #462258 reported by Jamie Strandboge on 2009-10-27
This bug affects 3 people
Affects                    Importance   Assigned to     Milestone
Release Notes for Ubuntu   Undecided    Mathias Gug
mdadm (Ubuntu)             High         Colin Watson
mdadm (Ubuntu Karmic)      High         Unassigned

Bug Description

Binary package hint: mdadm

In testing raid1 according to http://testcases.qa.ubuntu.com/Install/ServerRAID1, the system installed fine and both disks show up in /proc/mdstat. I installed using my procedure found in https://bugs.launchpad.net/ubuntu/+source/grub2/+bug/457687/comments/6. I answered 'yes' to booting in degraded mode. I noticed during install that grub-install did reference both /dev/vda and /dev/vdb.

However, if I remove the first disk (vda), the system drops to the initramfs telling me the disk could not be found. If I remove the second disk (after adding the first disk back), I'm told about the 'bootdegraded=true' option.

This ends up being a usability issue as well, because now 'Shift' is needed to get to the hidden boot menu since this was the only operating system installed.

I tried adding bootdegraded=true to the kernel command line when vda was removed, and got the same message in initramfs.
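For anyone retrying this: passing bootdegraded=true means holding Shift at boot to reach the hidden GRUB menu, pressing 'e' on the entry, and appending the parameter to the linux line. The kernel path and root device below are illustrative, not taken from this report:

```
linux /boot/vmlinuz-2.6.31-14-generic root=/dev/md0 ro quiet bootdegraded=true
```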

=================
Karmic release note:

Automatic boot from a degraded RAID array is not configured correctly after installation

Even if "boot from a degraded array" is set to yes during the install, the system will not be properly configured to boot automatically from a degraded array. As a workaround, run dpkg-reconfigure mdadm once the system has rebooted after the installation and select the option to boot from a degraded array again (462258).
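A minimal sketch of verifying that the workaround took effect. The conf file path is the one mdadm uses under initramfs-tools; the helper function is just an illustration, not part of the mdadm package:

```shell
# Helper: extract the BOOT_DEGRADED value from an initramfs-tools conf
# fragment fed on stdin (prints "true" or "false").
boot_degraded_from_conf() {
    sed -n 's/^BOOT_DEGRADED=//p' | tail -n 1
}

# On an affected system, after applying the workaround:
#   sudo dpkg-reconfigure mdadm        # answer "yes" to degraded boot
#   sudo update-initramfs -u           # regenerate the initramfs
#   boot_degraded_from_conf < /etc/initramfs-tools/conf.d/mdadm
#   -> should now print "true"
```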

=================

Jamie Strandboge (jdstrand) wrote :

I did:
$ sudo dpkg-reconfigure mdadm

I was asked whether to boot in degraded mode (the default was 'no' -- I will need to verify whether I answered yes on install). update-initramfs was run and I rebooted without vdb. It booted in degraded mode. I shut down, removed vda, added vdb back, booted, and was presented with the initramfs telling me the disk could not be found.

Jamie Strandboge (jdstrand) wrote :

/var/log/installer/cdebconf/questions.dat has the following:
Name: mdadm/boot_degraded
Template: mdadm/boot_degraded
Value: true
Owners: mdadm-udeb

description: updated
Jamie Strandboge (jdstrand) wrote :

Here is a screenshot of the first boot (i.e., before running dpkg-reconfigure mdadm) without vda.

tags: added: regression-potential
description: updated
Jamie Strandboge (jdstrand) wrote :

Screenshot of the partition before selecting 'Finish'

Jamie Strandboge (jdstrand) wrote :

Screenshot of boot degraded question in the installer.

Jamie Strandboge (jdstrand) wrote :

I should mention that I use raw images and not qcow2.

Jamie Strandboge (jdstrand) wrote :

Ok, I tried this a second time and it is totally reproducible. I will attach tarballs with screenshots clearly demonstrating the problem. Here was the order of operations:
1. install with raid1 (images 462258_[123]*)
2. grab the install logs, shutdown the VM without booting, copy disks to backup
3. remove vda (using 'virsh define 462258_no_vda.xml') and try to boot (image 462258_4*) -- FAIL
4. poweroff VM, remove vdb (using 'virsh define 462258_no_vdb.xml') and try to boot (image 462258_5*) -- works
5. restore disks from backups made in step 2, add both disks back to VM using 'virsh define 462258_both.xml'
6. boot with both disks, and run dpkg-reconfigure mdadm (images 462258_[67]*), then cleanly shutdown -- mdadm MAKEDEV failure
7. remove vda (using 'virsh define 462258_no_vda.xml') and try to boot (image 462258_8*) -- FAIL

I also included the libvirt xml files for the VMs.

Jamie Strandboge (jdstrand) wrote :

Here are the install logs before I booted into the fresh installation.

Jamie Strandboge (jdstrand) wrote :

I do not use preseeding.

Kees Cook (kees) wrote :

I can confirm this as well. /etc/initramfs-tools/conf.d/mdadm does not match the debconf answer from install.
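The mismatch Kees describes can be seen by comparing the debconf database against the initramfs conf fragment. debconf-show is a real Debian/Ubuntu utility; the comparison helper below is only an illustration:

```shell
# Sketch: report whether the debconf answer and the initramfs conf agree.
boot_degraded_matches() {
    # $1 = debconf value ("true"/"false"), $2 = initramfs conf value
    if [ "$1" = "$2" ]; then echo "match"; else echo "MISMATCH"; fi
}

# On a real Karmic install (commands shown for reference):
#   debconf_val=$(sudo debconf-show mdadm | sed -n 's/.*boot_degraded: //p')
#   conf_val=$(sed -n 's/^BOOT_DEGRADED=//p' /etc/initramfs-tools/conf.d/mdadm)
#   boot_degraded_matches "$debconf_val" "$conf_val"   # prints MISMATCH on this bug
```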

Kees Cook (kees) on 2009-10-28
Changed in mdadm (Ubuntu Karmic):
status: New → Confirmed
Kees Cook (kees) wrote :

In libvirt/kvm when booting with only vdb defined, nothing shows up in /proc/partitions, so this seems like a kvm issue. Booting the second drive as vda (without the first drive), it behaves correctly (just like booting the first drive as vda with vdb).

Dustin Kirkland  (kirkland) wrote :

I have reproduced the problem, and concur with Kees' analysis.

:-Dustin

summary: - raid1 won't boot in degraded mode
+ installer's boot-degraded debconf answer not written to installed disk
Changed in mdadm (Ubuntu Karmic):
assignee: nobody → Colin Watson (cjwatson)
importance: Undecided → High
Mathias Gug (mathiaz) on 2009-10-28
Changed in ubuntu-release-notes:
assignee: nobody → Mathias Gug (mathiaz)
status: New → In Progress
Mathias Gug (mathiaz) on 2009-10-28
description: updated
Changed in ubuntu-release-notes:
status: In Progress → Fix Committed
description: updated
Mathias Gug (mathiaz) on 2009-10-29
description: updated
Mathias Gug (mathiaz) on 2009-10-29
description: updated
Colin Watson (cjwatson) on 2009-10-29
Changed in mdadm (Ubuntu Karmic):
assignee: Colin Watson (cjwatson) → nobody
status: Confirmed → Won't Fix
Changed in mdadm (Ubuntu):
status: Confirmed → In Progress
Steve Langasek (vorlon) wrote :

Documented at <https://wiki.ubuntu.com/KarmicKoala/ReleaseNotes#Automatic%20boot%20from%20a%20degraded%20RAID%20array%20not%20configured%20upon%20installation>:

The installer option to support "boot from a degraded array" does not properly configure the installed system. To correct this after installation, run {{{dpkg-reconfigure mdadm}}} and select the option again. (Bug:462258)

Changed in ubuntu-release-notes:
status: Fix Committed → Fix Released
Dustin Kirkland  (kirkland) wrote :

Colin-

What do you think about publishing an mdadm package to karmic-updates that raises this question to debconf critical, so that an administrator has the opportunity (or, rather, is forced) to actually make this decision and ensure that it gets written to disk?

We could touch a flag to ensure that the question is only posed once.

I recognize this is pretty egregious. But it is a rather unfortunate regression, considering the work that went into this feature.

Jonah (jonah) wrote :

Hi guys, I don't know if I have the same bug, but I'm gutted I can't use my system.

Basically I had 9.04 working fine, and when Karmic was released I invested in a second hard drive, thinking I could combine my current 1 TB drive and my new 1 TB drive into a RAID and do a clean install of the new release. I backed up to externals, fitted the new drive, and tried the install.

The first time I got an error that the bootloader couldn't install to /target/. So I tried installing again from the live CD, but first did an aptitude upgrade, which seemed to install a new ubiquity and some other things. Then I did the install again.

It seemed to work, but on reboot I get stuck at initramfs with an error saying the nvidia mapper path doesn't exist. I have tried reinstalling several times but can't get past this.

And if it's the same bug, I can't run the dpkg-reconfigure mdadm as suggested, as this command is not recognised from initramfs... please help.

Changed in mdadm (Ubuntu):
assignee: Colin Watson (cjwatson) → karen pulsifer (froggy1234)
status: In Progress → Incomplete
Changed in mdadm (Ubuntu):
status: Incomplete → In Progress
assignee: karen pulsifer (froggy1234) → Colin Watson (cjwatson)
Changed in mdadm (Ubuntu):
status: In Progress → New
Changed in mdadm (Ubuntu Karmic):
status: Won't Fix → Confirmed
Steve Langasek (vorlon) wrote :

Please stop messing with the bug state.

Changed in mdadm (Ubuntu):
status: New → In Progress
Changed in mdadm (Ubuntu Karmic):
status: Confirmed → Won't Fix
Jonah (jonah) wrote :

What's this Won't Fix status? Does that mean I'm screwed?

Steve Langasek (vorlon) wrote :

Jonah,

The problem you describe is unrelated to this bug report.

Launchpad Janitor (janitor) wrote :

This bug was fixed in the package mdadm - 2.6.7.1-1ubuntu14

---------------
mdadm (2.6.7.1-1ubuntu14) lucid; urgency=low

  * Fix boot_degraded handling during installation (LP: #462258):
    - Source /lib/preseed/preseed.sh in check.d/root_on_raid.
    - Change mdadm/boot_degraded default in templates file to match the
      apparently-intended behaviour (i.e. false), and stop overriding
      debconf preseeding if BOOT_DEGRADED is not already set in the
      initramfs configuration file.
 -- Colin Watson <email address hidden> Sun, 15 Nov 2009 17:53:53 -0600
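The behaviour the changelog describes can be sketched as follows. This is not the actual mdadm config script, just an illustration of the decision logic: an explicit BOOT_DEGRADED in the initramfs configuration still wins, but when it is unset, a preseeded or installer-provided debconf answer is no longer clobbered, and the template default falls back to false:

```shell
# Sketch of the fixed boot_degraded resolution (illustrative only).
decide_boot_degraded() {
    # $1 = BOOT_DEGRADED from /etc/initramfs-tools/conf.d/mdadm ("" if unset)
    # $2 = current debconf value for mdadm/boot_degraded ("" if unset)
    if [ -n "$1" ]; then
        echo "$1"        # explicit initramfs setting wins
    elif [ -n "$2" ]; then
        echo "$2"        # keep the preseeded/installer debconf answer
    else
        echo "false"     # new default in the templates file
    fi
}
```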

Changed in mdadm (Ubuntu):
status: In Progress → Fix Released
Jonah (jonah) wrote :

Well, the install goes fine, but then on reboot it gets stuck at initramfs and I have this error:

ALERT! /dev/mapper/nvidia_dcfadeef2 does not exist. Dropping to a shell!

Is this a current bug, or should I report it separately? Please help. Thanks.

tags: added: iso-testing
Changed in mdadm (Ubuntu):
status: Fix Released → Incomplete
status: Incomplete → New
Changed in ubuntu-release-notes:
status: Fix Released → Invalid
Changed in mdadm (Ubuntu):
status: New → Incomplete
status: Incomplete → New
Steve Langasek (vorlon) wrote :

Don't change the bug status without explanation.

Changed in ubuntu-release-notes:
status: Invalid → Fix Released
Changed in mdadm (Ubuntu):
status: New → Fix Released