libefi* integration breaks grub-install on MD devices

Bug #1868553 reported by TJ
This bug affects 4 people
Affects             Status      Importance  Assigned to  Milestone
curtin (Ubuntu)     Incomplete  Undecided   Unassigned
grub2 (Ubuntu)      Won't Fix   Undecided   Unassigned
subiquity (Ubuntu)  New         Undecided   Unassigned

Bug Description

Working with a new install of 20.04 on a RAID-1 mirror using mdadm, grub-install fails on EFI installs with:

# grub-install -v /dev/md0
...
Installing for x86_64-efi platform.
grub-install: warning: efivarfs_get_variable: open(/sys/firmware/efi/efivars/blk0-47c7b225-c42a-11d2-8e57-00a0c969723b): No such file or directory
grub-install: warning: efi_get_variable: ops->get_variable failed: No such file or directory.
grub-install: warning: efi_va_generate_file_device_path_from_esp: could not open device for ESP: Bad address.
grub-install: warning: efi_generate_file_device_path_from_esp: could not generate File DP from ESP: Bad address.
grub-install: error: failed to register the EFI boot entry: Bad address.

Note the mythical device "blk0-47c7b225-c42a-11d2-8e57-00a0c969723b" - the UUID is the GUID type for an EFI system partition.

This comes from libefivar, where a blk0-* device is constructed for EDD - see src/efi.c::get_edd_version() and src/efi.c::make_linux_load_option().

It's not clear why this happens from grub, but it doesn't happen (I've not been able to reproduce it) when using efibootmgr directly.

The current workaround (which doesn't stop apt/dpkg failures) is to manually create the EFI boot entries for each device in the array, e.g.:

# for i in 0 1; do efibootmgr -c -d /dev/nvme${i}n1 -p 1 -L "Ubuntu nvme${i}n1" -l \\EFI\\ubuntu\\grubx64.efi; done
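
To check that the entries were actually registered (a quick sanity check, assuming the same two NVMe disks as above), list the boot variables afterwards:

# efibootmgr -v | grep -i ubuntu

Each "Ubuntu nvme0n1" / "Ubuntu nvme1n1" label should show up with its own HD(...) device path pointing at that disk's ESP.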

Revision history for this message
Launchpad Janitor (janitor) wrote :

Status changed to 'Confirmed' because the bug affects multiple users.

Changed in grub2 (Ubuntu):
status: New → Confirmed
Revision history for this message
Steve Langasek (vorlon) wrote :

"for each device in the array"

You cannot put EFI system partitions in a Linux software RAID device. This is unsupported. You will get corruption the first time your firmware writes data to one of the VFAT filesystems (which UEFI firmware is allowed to do): the firmware writes to the underlying partition directly, bypassing md, so the mirror members silently diverge.

As of Ubuntu 20.04, the grub package has support for synchronizing the bootloader content of multiple ESPs, which is the safe way to handle EFI boot across multiple disks in a way that is resilient in the face of disk failure. (And yes, it requires creating multiple boot entries in NVRAM.)
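
A minimal sketch of the supported layout (device names and sizes are illustrative only, not taken from this report): give each disk its own plain ESP outside the md array, keep only the data partitions in the RAID, and let grub's debconf prompt install to every ESP:

# one small ESP per disk, NOT a member of /dev/md0 (illustrative partitioning)
sgdisk -n1:0:+512M -t1:EF00 /dev/nvme0n1
sgdisk -n1:0:+512M -t1:EF00 /dev/nvme1n1

# then tick every ESP in the selection dialog so grub-install runs for each
dpkg-reconfigure grub-efi-amd64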

Revision history for this message
Seth Arnold (seth-arnold) wrote :

Hmm, I guess I'm not understanding. My only system with md root has:

/dev/md126p1 on /boot/efi type vfat (rw,relatime,fmask=0077,dmask=0077,codepage=437,iocharset=iso8859-1,shortname=mixed,errors=remount-ro)

I'm hoping that if either drive dies, this system can still boot. (But I've not been brave enough to try yanking a drive to find out what happens.)

Do we support some way to boot via RAID if a drive dies? This keeps coming up every single release as something we haven't supported correctly, ever, and if we *do* support this, it'd be nice to have it documented somewhere exactly what we support, with which tools, etc. I suggest the LTS server guide, but the release notes would also do.

Thanks

Revision history for this message
Mason Loring Bliss (y-mason) wrote :

So, "dpkg-reconfigure grub-efi-amd64" now has a screen that matches what
we'd get reconfiguring the old grub-pc:

 ┌──────────────────────┤ Configuring grub-efi-amd64 ├───────────────────────┐
 │ The grub-efi package is being upgraded. This menu allows you to select    │
 │ which EFI system partitions you'd like grub-install to be automatically   │
 │ run for, if any.                                                          │
 │                                                                           │
 │ Running grub-install automatically is recommended in most situations, to  │
 │ prevent the installed GRUB core image from getting out of sync with GRUB  │
 │ modules or grub.cfg.                                                      │
 │                                                                           │
 │ GRUB EFI system partitions:                                               │
 │                                                                           │
 │  [*] /dev/sda1 (199 MB; /boot/efi) on 120034 MB INTEL_SSDSC2BW12          │
 │                                                                           │
 │                                                                           │
 │                                    <Ok>                                   │
 │                                                                           │
 └───────────────────────────────────────────────────────────────────────────┘

I'll test with a system with two ESPs later, but this ought to do the right
thing. You'll need one entry for each, just as you would with an old-metadata
MD-RAID1 used as an ESP, but as vorlon notes, this will be a little safer in
the face of UEFI firmware that writes stuff to the drives.

It'd be something like:

efibootmgr -c -d /dev/sda -L ubuntu0 -l '\EFI\UBUNTU\SHIMX64.EFI'
efibootmgr -c -d /dev/sdb -L ubuntu1 -l '\EFI\UBUNTU\SHIMX64.EFI'

This is a win, and I have no further desire for direct MD-RAID 1 support.

Revision history for this message
Mason Loring Bliss (y-mason) wrote :
Revision history for this message
Paride Legovini (paride) wrote :

@Mason: that pastebin doesn't work: 404 That page does not exist

Revision history for this message
Paride Legovini (paride) wrote :

@TJ: could you please share how your efi partition is mounted? For example by pasting the output of:

  grep efi /proc/self/mounts

Also: what happens if you run grub-install without parameters? Thanks!

Revision history for this message
Mason Loring Bliss (y-mason) wrote :

Paride,

It was a limited-duration copy of the text from my previous ticket comment,
so it had clean formatting but was otherwise identical in content.

As an addendum, I tested 20.04 with a two-ESP system and I was able to
specify both ESPs when I ran:

    dpkg-reconfigure grub-efi-amd64

The resulting EFI boot entries were correct.
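
For anyone wanting to double-check the result without re-running the dialog (a sketch, assuming the grub2 packaging records the selection in debconf as grub-pc does):

debconf-show grub-efi-amd64 | grep -i install_devices
efibootmgr -v

The first command shows which ESPs grub-install will be run for on upgrades, and the second shows the NVRAM boot entries that were created.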

Paride Legovini (paride)
Changed in curtin (Ubuntu):
status: New → Incomplete
Revision history for this message
Mason Loring Bliss (y-mason) wrote :

FWIW, my prior comment was confusing. GRUB handles both
efibootmgr entries correctly on its own with this new
functionality.

Revision history for this message
Alexey Kopytko (sanmai) wrote :

Here's a Debian bug report from 2019 for the same issue:

https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=925591

Revision history for this message
Julian Andres Klode (juliank) wrote :

As mentioned before, support for multiple ESPs has been introduced to deal with RAID setups; installing directly to an ESP inside a RAID array is not supported.

Changed in grub2 (Ubuntu):
status: Confirmed → Won't Fix