Eoan Ermine Live Server cannot create mirrored root

Bug #1847834 reported by Rob Thomas
This bug affects 1 person
Affects: subiquity
Status: Fix Released
Importance: High
Assigned to: Unassigned

Bug Description

The historical way of installing with hardware redundancy is to create a mirrored root volume using mdraid.

This is no longer possible. There are several issues:

1. Randomly, the 'Done' button will not go white (become active), even when the configuration is correct
2. There is a 'magic' GPT partition created on the first drive you select, which is required for grub
3. If you remove that, to try to make grub install the old way, curtin crashes with an error from grub-install saying that it can only install using blocklists
4. No partition is set active

This makes it impossible to actually install 19.10 in a redundant manner that can survive a hard-drive failure.
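
For context, this is a minimal sketch of the traditional BIOS/mdraid mirrored-root layout being described, assuming two blank disks /dev/sda and /dev/sdb (the device names, sizes and partition codes here are illustrative, not taken from the report):

    # Partition both disks identically: a small BIOS boot partition for GRUB's
    # core image, plus a large Linux RAID member partition (GPT assumed).
    for d in /dev/sda /dev/sdb; do
        sgdisk --zap-all "$d"
        sgdisk -n1:0:+1M -t1:EF02 "$d"   # BIOS boot partition
        sgdisk -n2:0:0   -t2:FD00 "$d"   # Linux RAID member
    done

    # Build the RAID1 array and put the root filesystem on it.
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
    mkfs.ext4 /dev/md0

    # After installing the system onto /dev/md0, write GRUB's boot code to
    # both disks so either one can boot the machine on its own.
    grub-install /dev/sda
    grub-install /dev/sdb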

Tags: install
Revision history for this message
Rob Thomas (xrobau) wrote :

After discovering other issues that made the live installer non-functional (see https://bugs.launchpad.net/ubuntu/+source/livecd-rootfs/+bug/1847835) I gave up and went back to the non-Live ISO, which let me create a mirrored root as expected.

Revision history for this message
Dimitri John Ledkov (xnox) wrote :

Ubuntu Live Server bug reports are tracked in the "subiquity" project; I've added a bug task for it.

Revision history for this message
Dimitri John Ledkov (xnox) wrote :

Are you installing using UEFI boot or BIOS boot?

At the moment, with UEFI it is not possible to install /boot on a RAIDed device, but it is possible to have the / mountpoint on an mdadm RAID.

Revision history for this message
Dimitri John Ledkov (xnox) wrote :

You can control the "magic" bootloader stuff by using the "Make Boot Device" action in the submenu.

Revision history for this message
Rob Thomas (xrobau) wrote :

This was being tested on an HP DL380g8, which does not have UEFI, only BIOS.

There was no way that I could figure out to actually raid the root using the Live Installer. It was trivial and worked as expected using the normal Server installer.

I realise that UEFI actually makes it harder to mirror root, as it (used to?) require manual hacking to get both devices into the UEFI boot order.

This can be trivially verified on virtual hardware: simply create a VM with two hard drives and try to get both of them active, so the VM will boot with one of them missing.
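
As a rough illustration of that reproduction recipe (QEMU is just one option; the ISO and disk names below are assumptions, not from the report):

    # Two small virtual disks, plus the live-server ISO to install from.
    qemu-img create -f qcow2 disk0.qcow2 20G
    qemu-img create -f qcow2 disk1.qcow2 20G
    qemu-system-x86_64 -m 4096 -enable-kvm \
        -cdrom ubuntu-19.10-live-server-amd64.iso \
        -drive file=disk0.qcow2,format=qcow2,if=virtio \
        -drive file=disk1.qcow2,format=qcow2,if=virtio

    # After installing a mirrored root, boot again with only one disk attached;
    # a properly redundant install should still come up.
    qemu-system-x86_64 -m 4096 -enable-kvm \
        -drive file=disk1.qcow2,format=qcow2,if=virtio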

Revision history for this message
Dan Watkins (oddbloke) wrote :

(I'm dropping the curtin task from this for now, while the subiquity folks triage it. If it turns out to need curtin work, please do add it back!)

no longer affects: curtin (Ubuntu)
Revision history for this message
Michael Hudson-Doyle (mwhudson) wrote :

Is this about the setup where you have two disks in raid 1 using metadata 0.90 and then grub-install to the raid device, so that the BIOS can load grub from either disk without knowing anything about raid? Because you're correct that subiquity does not support that setup. We perhaps could, but I've not been considering it a priority because you can't UEFI boot this way, and it's 2019 and surely everything boots UEFI by now. Perhaps this needs to be reassessed. But I'd prefer to do something that works with UEFI (such as supporting multiple ESPs and having grub-efi-$arch updates update all of them, or having grub-pc updates update a number of devices).
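
For clarity, this is roughly what that setup looks like on the command line (partition names are hypothetical): with the old 0.90 metadata format the RAID superblock sits at the end of each member, so the start of the array looks like a plain filesystem to the BIOS and to GRUB.

    # RAID1 with 0.90 metadata, so the members are readable as plain partitions.
    mdadm --create /dev/md0 --metadata=0.90 --level=1 --raid-devices=2 \
        /dev/sda1 /dev/sdb1
    mkfs.ext4 /dev/md0

    # grub-install is pointed at the array itself; the boot code it writes can
    # be loaded from either member, so losing one disk does not stop the boot.
    grub-install /dev/md0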

Revision history for this message
Steve Langasek (vorlon) wrote :

Following up on Michael's comments: the live server DOES support installing a RAID1 mirrored root partition, hence the need for clarification.

Changed in subiquity:
status: New → Incomplete
Revision history for this message
Rob Thomas (xrobau) wrote :

> Is this about the setup where you have two disks in raid 1 using metadata 0.90 and then grub-install to the raid device

No. This is about starting with two blank, empty disks, and ending up at a point where you can remove either of those disks and the machine will continue to boot.

On the standard ISO, it's manual partitioning: create unused partitions on two disks, assign them both to an MD device, set that MD device to be /, and set both partitions active.

At that point, you can remove either device and the machine will continue to boot. This is not possible with the Live installer (please re-read my original post), or - and this is always a possibility - I was unable to find it.
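
A rough command-line equivalent of those debian-installer steps, plus one way to check that the result really is redundant (device names are hypothetical):

    # One RAID-member partition per disk, both marked active/bootable.
    parted /dev/sda set 1 boot on
    parted /dev/sdb set 1 boot on

    # Assemble the two partitions into the array that will hold /.
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1

    # After the install, confirm both members are in the array, then fail one
    # out and reboot to make sure the machine still comes up on the survivor.
    cat /proc/mdstat
    mdadm /dev/md0 --fail /dev/sda1 --remove /dev/sda1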

Revision history for this message
Rob Thomas (xrobau) wrote :

I had a play with this over the weekend, and confirmed that there's no way to make a system that survives the first disk failing with the live installer. (I only tried BIOS mode; UEFI is much harder, so I didn't bother.)

I care a lot about this because I'm a ZFS guy, and I really want 20.04 to be usable as a server install, and 19.10 is meant to be the proving ground for it!

I'm not talking about zfs-root, which obviously requires UEFI; I'm just talking about a standard server that will boot in BIOS mode and probably have an LSI9211 HBA with a bunch of drives hanging off it.

Revision history for this message
Rob Thomas (xrobau) wrote :

Just wondering what the process is here. Something that's meant to be supported isn't working, and it's something pretty common.

If the answer is to not use the live installer, who do I have to chase up to get the documentation updated to say so?

Revision history for this message
Michael Hudson-Doyle (mwhudson) wrote :

Sorry for the radio silence, it's been a busy couple of weeks. The situation is this: 19.10 went out without this feature, and there's nothing we can really do about this now. We are in the planning stages for 20.04 and hopefully soon will have clarity on when we'll implement this.

Changed in subiquity:
status: Incomplete → Triaged
importance: Undecided → High
Revision history for this message
Rob Thomas (xrobau) wrote :

Is there anything I can do to help with this? The live installer is going to be required for zfs-root in 20.04 (I assume?), so that will be most people's go-to method of installing. However, if they can't install a mirrored root, there's just going to be extra confusion.

I'm also slightly concerned that even WITH zfs-root, if you're not setting both root drives active, you're not going to be able to boot with a failed drive ANYWAY (or, that may be a 'solved in UEFI' thing that I'm not aware of)
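
For what it's worth, the UEFI-side "manual hacking" mentioned earlier usually amounts to keeping an ESP on each disk and registering both with the firmware, roughly like this (the disk, partition and label values are made up for illustration):

    # Mirror the installed boot files onto a second ESP on the other disk ...
    mkfs.vfat /dev/sdb1
    mount /dev/sdb1 /mnt
    cp -a /boot/efi/EFI /mnt/
    umount /mnt

    # ... and add a firmware boot entry pointing at it, so the machine can
    # still find a bootloader if the first disk disappears.
    efibootmgr --create --disk /dev/sdb --part 1 \
        --loader '\EFI\ubuntu\shimx64.efi' --label 'ubuntu (second disk)'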

Revision history for this message
Michael Hudson-Doyle (mwhudson) wrote :

I'm not sure there's much you can do to help (unless you're a grub developer upstream?) but this feature is very much on the roadmap for 20.04. I'll update the bug when there's something to test!

Do note though that ZFS support in the server installer is not on the roadmap for 20.04 GA (it might be supported in a 20.04 point release, possibly, maybe).

Revision history for this message
Michael Hudson-Doyle (mwhudson) wrote :

So you can select multiple boot devices with UEFI and BIOS boot now.
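
One quick way to sanity-check a BIOS-mode install done this way is to confirm that grub-pc is tracking both disks and that each one actually carries GRUB boot code (a sketch, not part of the fix itself):

    # grub-pc records which devices it installs to; both disks should be listed.
    debconf-show grub-pc | grep install_devices

    # The first sector of each disk should contain GRUB's boot image.
    dd if=/dev/sda bs=512 count=1 2>/dev/null | strings | grep -i grub
    dd if=/dev/sdb bs=512 count=1 2>/dev/null | strings | grep -i grub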

Changed in subiquity:
status: Triaged → Fix Released
Revision history for this message
Rob Thomas (xrobau) wrote :

You have no idea how happy this makes me! Can you help me out with directions to an ISO with the fix (when it's built) so I can confirm it, please?

Revision history for this message
Steve Langasek (vorlon) wrote :

This was fixed in the Ubuntu 20.04 LTS image: https://releases.ubuntu.com/20.04/ubuntu-20.04-live-server-amd64.iso
