Installing Ubuntu on a former md RAID volume makes the system unusable
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
partman-base (Ubuntu) | Fix Released | High | Michael Hudson-Doyle |
Bionic | Fix Released | Undecided | Michael Hudson-Doyle |
Disco | Fix Released | Undecided | Michael Hudson-Doyle |
Bug Description
[impact]
Installing Ubuntu on a disk that was previously an md RAID volume leads to a system that doesn't boot (or perhaps does not boot reliably).
[test case]
Create a disk image that has an md RAID 6, metadata 0.90 device on it using the attached "mkraid6" script (a rough sketch of such a script follows the command below).
$ sudo mkraid6
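The actual script is attached to the bug. As an illustration only (sizes, file names, and loop device numbers here are hypothetical, not the attachment's contents), a script producing member images with stale metadata-0.90 md RAID 6 superblocks might look like this. Version 0.90 stores its superblock near the end of each member device, which is why it can survive a later repartitioning:

  #!/bin/sh
  # Sketch: assemble an md RAID 6 (metadata 0.90) across four backing files,
  # then stop the array so the superblocks are left behind on the images.
  # Run as root (the test case invokes it via sudo).
  set -e
  for i in 1 2 3 4; do
    truncate -s 1G disk$i.img          # backing file for one RAID member
    losetup /dev/loop$i disk$i.img     # expose it as a block device
  done
  mdadm --create /dev/md0 --level=6 --raid-devices=4 --metadata=0.90 \
    /dev/loop1 /dev/loop2 /dev/loop3 /dev/loop4
  mkfs.ext4 /dev/md0                   # put a filesystem on the array
  mdadm --stop /dev/md0                # stop it; superblocks remain on disk
  for i in 1 2 3 4; do losetup -d /dev/loop$i; done

Any one of the resulting member images can then serve as the VM's disk in the next step.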
Install to it in a VM:
$ kvm -m 2048 -cdrom ~/isos/
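The command above is truncated in the report. A plausible complete invocation (the ISO name and disk image name are placeholders, not values from the report) would be:

$ kvm -m 2048 -cdrom ~/isos/eoan-desktop-amd64.iso -drive file=disk1.img,format=raw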
Reboot into the installed system. Check that it boots and that there are no occurrences of linux_raid_member in the output of "sudo wipefs /dev/sda".
SRU member request: test other, regular installation scenarios to sanity-check for regressions (comment #10).
[regression potential]
The patch changes a core part of the partitioner. A bug here could crash the installer, rendering installation impossible. The code is adapted from battle-tested code in wipefs from util-linux and was somewhat tested before being uploaded to eoan. The nature of the code makes regressions beyond crashing the installer, or failing to do what it is supposed to, very unlikely -- it is hard to see how it could result in data loss on a drive not selected to be formatted, for example.
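For context, the patched partitioner wipes known signatures from a disk it is formatting, much as the util-linux tool it borrows from does. A rough manual approximation on a hypothetical target disk (device name illustrative) would be:

$ sudo wipefs --all /dev/sda

This erases the filesystem, RAID, and partition-table signatures that libblkid can detect, including the trailing metadata-0.90 md superblock that repartitioning alone leaves in place.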
[original description]
18.04 is installed using the GUI installer in 'Guided - use entire volume' mode on a disk which was previously used as an md RAID 6 volume. The installer repartitions the disk and installs the system; the system reboots any number of times without issues. Then packages are upgraded to their current states and some new packages are installed, including mdadm, which *might* be the culprit. After that the system won't boot any more, failing into the initramfs prompt with a 'gave up waiting for root filesystem device' message; at this point blkid shows the boot disk as a device with TYPE='linux_raid_member'.
affects: ubuntu-release-upgrader (Ubuntu) → ubiquity (Ubuntu)
tags: added: id-5cd5b7c3c1eeca1c0e6458ce
affects: ubiquity (Ubuntu) → parted (Ubuntu)
Changed in parted (Ubuntu):
  status: New → Triaged
  importance: Undecided → High
  assignee: nobody → Michael Hudson-Doyle (mwhudson)
affects: parted (Ubuntu) → partman-base (Ubuntu)
Changed in partman-base (Ubuntu):
  assignee: Michael Hudson-Doyle (mwhudson) → nobody
Changed in partman-base (Ubuntu):
  status: Fix Released → In Progress
Changed in partman-base (Ubuntu):
  status: In Progress → Fix Released
Zeroing the md superblock in ubiquity - if that's what you are thinking about - will fix this issue in my particular scenario, but what if the disk was partitioned in some other way? I am afraid it is a partial workaround; the proper, albeit more complex, way of handling this issue is to make sure that a properly formatted partition table always takes precedence over leftover superblocks during boot.
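For reference, zeroing the md superblock by hand on a former member device (device name illustrative) would be:

$ sudo mdadm --zero-superblock /dev/sda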