Error: diskfilter writes are not supported

Bug #1274320 reported by Patrick Houle on 2014-01-29
This bug affects 359 people
Affects / Status / Importance / Assigned to:
  grub: Unknown / Unknown
  grub2 (Debian): Fix Released / Unknown
  grub2 (Fedora): Unknown / Unknown
  grub2 (Ubuntu): High / Unassigned
    Trusty: High / dann frazier
    Vivid: High / dann frazier
  grub2-signed (Ubuntu): Undecided / Unassigned
    Trusty: Undecided / Unassigned
    Vivid: Undecided / Unassigned

Bug Description

[Impact]
RAID and LVM users may encounter a cryptic warning from GRUB at boot, because some variants of RAID and LVM are not supported for writing by GRUB itself. GRUB typically tries to write a tiny file to the boot partition for things like remembering the last selected boot entry.

[Test Case]
On an affected system (typically any setup where the boot device is on RAID or on an LVM device), try to boot. Without the patch the message will appear; with it, it will not.

[Regression Potential]
The potential for regression is minimal. The patch teaches the menu-building scripts that diskfilter writes are unsupported by GRUB, so they automatically avoid enabling recordfail (the offending feature, which saves GRUB's state) when the boot partition is detected to be on a device GRUB cannot write to.

----

Once GRUB chooses what to boot, an error shows up and sits on the screen for approximately 5 seconds:

"Error: diskfilter writes are not supported.
Press any key to continue..."

From what I understand, this error is related to RAID partitions, and I have two of them (md0, md1). Both partitions are used (root and swap). The arrays are assembled with mdadm and are RAID0.

This error message started appearing right after grub2 was updated on 01/27/2014.

System: Kernel: 3.13.0-5-generic x86_64 (64 bit) Desktop: KDE 4.11.5 Distro: Ubuntu 14.04 trusty
Drives: HDD Total Size: 1064.2GB (10.9% used)
        1: id: /dev/sda model: SanDisk_SDSSDRC0 size: 32.0GB
        2: id: /dev/sdb model: SanDisk_SDSSDRC0 size: 32.0GB
        3: id: /dev/sdc model: ST31000528AS size: 1000.2GB
RAID: Device-1: /dev/md1 - active raid: 0 components: online: sdb2 sda3 (swap)
      Device-2: /dev/md0 - active raid: 0 components: online: sdb1 sda1 ( / )
Grub2: grub-efi-amd64 version 2.02~beta2-5

ProblemType: Bug
DistroRelease: Ubuntu 14.04
Package: grub-efi-amd64 2.02~beta2-5
ProcVersionSignature: Ubuntu 3.13.0-5.20-generic 3.13.0
Uname: Linux 3.13.0-5-generic x86_64
NonfreeKernelModules: nvidia
ApportVersion: 2.13.2-0ubuntu2
Architecture: amd64
CurrentDesktop: KDE
Date: Wed Jan 29 17:37:59 2014
SourcePackage: grub2
UpgradeStatus: Upgraded to trusty on 2014-01-23 (6 days ago)

Patrick Houle (buddlespit) wrote :
description: updated
description: updated
Patrick Houle (buddlespit) wrote :

I should also point out that the system will boot normally once any key is pressed or 5 seconds elapses.

description: updated
Launchpad Janitor (janitor) wrote :

Status changed to 'Confirmed' because the bug affects multiple users.

Changed in grub2 (Ubuntu):
status: New → Confirmed
summary: - Error after boot menu
+ Error: diskfilter writes are not supported
Changed in grub2 (Ubuntu):
importance: Undecided → Low
importance: Low → Medium
KrautOS (krautos) wrote :

Got the same issue on an up-to-date "trusty" machine with / on mdadm RAID 1 and swap on mdadm RAID 0. Any clues how to fix it?

Patrick Houle (buddlespit) wrote :

I changed my RAID partitions' <dump> and <pass> fields to '0 0' instead of '0 1' in /etc/fstab.

Singtoh (singtoh) wrote :

Hello all,

Just thought I would throw in as well. I started seeing this after a fresh install of Ubuntu Trusty today, at the first bootup and all boots thereafter. I am not running RAID, but I am using LVM; this is Ubuntu Trusty amd64. As a side note, on today's install I didn't give the system a /boot partition like I have seen in all the LVM tutorials. I just have two disks that I made LVM partitions on, i.e. /root /home /Storage /Storage1 and swap. Runs real nice but I get that nagging error at boot. Hope it gets a fix soon.

Cheers,

Singtoh

Huygens (huygens-25) wrote :

Here is another different kind of setup which triggers the problem:
/boot ext4 on a md RAID10 (4 HDD) partition
/ btrfs RAID10 (4 HDD) partition
swap on a md RAID0 (4HDD) partition

The boot and kernel are on a MD RAID (software RAID), whereas the rest of the system is using Btrfs RAID.

Cybjit (cybjit) wrote :

I also get this message.
/ is on ext4 LVM, and no separate /boot.
Booting works fine after the delay or key press.

Singtoh (singtoh) wrote :

Just to add this tidbit: I re-installed with normal partitioning (no RAID and no LVM), just /root /home and swap, and it boots normally: no 5-second wait and no errors. So I guess it's LVM and/or RAID related. I am just about to re-install again to a new SSD with LVM, and I'll post back with the outcome.

Cheers,

Singtoh

robled (robled) wrote :

This is definitely RAID/LVM related. On my 14.04 system with an ext4 boot partition I don't get the error, but on another system that's fully LVM I do get the error.

Has anyone come up with a grub config workaround to prevent the delay on boot?

Jean-Mi (forum-i) wrote :

You guys should be happy your system still boots. I just got that error (diskfilter writes are not supported) but GRUB exits immediately after, leaving my UEFI with no other choice than booting another distro.
I had to spam the pause key on my keyboard to catch the error message before it disappeared.
On my setup, the boot error occurs with openSUSE installed on LVM2. The other distro is installed with a regular separate /boot (ext4) partition. Both are using GRUB. I could load both by calling their respective grubx64.efi from the ESP partition.
The last thing I remember having done on openSUSE was creating a btrfs partition and tweaking /etc/fstab a little bit.
From the other distro, I can read openSUSE's files and everything looks fine. It's like the boot loader used to work and suddenly failed.
I'd love to remember what else I did since it worked. And I'd love to be able to boot openSUSE again.

Jean-Mi (forum-i) wrote :

I may have found the reason for my particular crash. Now my system boots normally.

According to bug report #1006289 at Red Hat, the bug could come from "insmod diskfilter", but someone deactivated that module and still got the error, and I don't even have that module declared. I did notice, though, that openSUSE loves to handle everything on reboot, like setting the next OS to load.
My /boot/grub/grubenv contains 2 lines. Basically, save_entry=openSUSE and next_entry=LMDE Cinnamon.
I removed those lines and the error disappeared. Maybe those lines instruct GRUB to write something to the boot partition, which it is perfectly unable to do since it cannot write to LVM.
Anyway, it seems that solving this bug requires finding out why GRUB tries to write data.
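For reference, those grubenv entries can be listed and removed without hand-editing the file by using grub-editenv (the variable names here are the ones quoted above; the path is the standard location on this setup):

```
$ sudo grub-editenv /boot/grub/grubenv list
$ sudo grub-editenv /boot/grub/grubenv unset save_entry
$ sudo grub-editenv /boot/grub/grubenv unset next_entry
```

Unlike a text editor, grub-editenv preserves the file's fixed-size padded format.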

hamish (hamish-b) wrote :

Hi, I get the same error with 14.04 beta1 booting into RAID1.

for those running their swap partitions in a RAID, I'm wondering if it would be better to just mount the multiple swap partitions in fstab and give them all equal priority. For soft RAID it would cut out a layer of overhead, and for swap anything that cuts out overhead is a good thing. (e.g., mount options in fstab for all swap partitions: sw,pri=10)

See `man 2 swapon` for details.
       "If two or more areas
       have the same priority, and it is the highest priority available, pages
       are allocated on a round-robin basis between them."
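A minimal /etc/fstab sketch of that arrangement (device names are illustrative, not taken from any report in this thread):

```
# two plain swap partitions with equal priority instead of swap on RAID0;
# the kernel allocates pages round-robin across equal-priority swap areas
/dev/sda3  none  swap  sw,pri=10  0  0
/dev/sdb2  none  swap  sw,pri=10  0  0
```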

Phillip Susi (psusi) wrote :

That would defeat the purpose of RAID1, which is to keep the system up and running when a drive fails. With two separate swaps, if a disk fails you're probably going to have half of user space die and end up with a badly broken server that needs to be rebooted.

Artyom Nosov (artyom.nosov) wrote :

Got the same issue on the daily build of trusty (20140319). /, /home and swap are all on RAID1.

I have this issue on Ubuntu Trusty 14.04 with / on LVM. Deleting /boot/grub/grubenv prevents the error on the next boot, but GRUB recreates the file on every boot, so I have rm /boot/grub/grubenv in my crontab.

stoffel010170 (stoffel-010170) wrote :

Have the same bug on my LVM system, too. My systems without LVM and RAID are not affected.

Thomas (t.c) wrote :

I have the bug too; I use the root filesystem (/) on a software RAID 1.

Thomas (t.c) wrote :

# GRUB Environment Block
################ (the rest of the line is '#' padding; grubenv is always padded out to exactly 1024 bytes)

that's the content of /boot/grub/grubenv - is it right?

VladimirCZ (vlabla) wrote :

I also get this message.
/ and swap volumes are on ext4 LVM, and no separate /boot.
Booting works fine after the delay or key press.

Ubuntu QA Website (ubuntuqa) wrote :

This bug has been reported on the Ubuntu ISO testing tracker.

A list of all reports related to this bug can be found here:
http://iso.qa.ubuntu.com/qatracker/reports/bugs/1274320

tags: added: iso-testing
Moritz Baumann (mo42) wrote :

The problem is the call to the recordfail function in each menuentry. If I comment it out in /boot/grub/grub.cfg, the error message no longer appears. Unfortunately, there seems to be no configuration option in /etc/default/grub that would prevent the scripts in /etc/grub.d from adding that function call.
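For context, each generated menuentry in /boot/grub/grub.cfg starts with that call; the fragment below is a trimmed illustration of the shape, not verbatim output from any system in this thread:

```
menuentry 'Ubuntu' --class ubuntu --class gnu-linux {
        recordfail
        # recordfail sets recordfail=1 and save_env's it to grubenv;
        # that save_env is the write that RAID/LVM setups cannot perform
        ...
}
```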

I'm also affected by this bug, so I can confirm it's still there on a fresh install of 14.04.

I'm using RAID0 for /

Moritz Baumann (mo42) wrote :

As a temporary fix, you can edit /etc/grub.d/10_linux and replace 'quick_boot="1"' with 'quick_boot="0"' in line 25. (Don't forget to run "sudo update-grub" afterwards.)

I can confirm that the workaround mentioned by Moritz (setting quick_boot="0") works for me.
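A sketch of that edit, exercised here on a scratch copy rather than the real /etc/grub.d/10_linux (the path and variable come from the comments above; the fragment below only simulates the relevant line):

```shell
#!/bin/sh
# Work on a scratch copy so no real GRUB script is touched.
tmpfile=$(mktemp)
# Simulated fragment of /etc/grub.d/10_linux (assumption: the real file
# sets quick_boot="1" near the top).
printf 'quick_boot="1"\nquiet_boot="1"\n' > "$tmpfile"
# The workaround: flip quick_boot to 0. recordfail is only emitted into
# grub.cfg when quick_boot=1, so this stops GRUB from writing grubenv.
sed -i 's/^quick_boot="1"/quick_boot="0"/' "$tmpfile"
grep '^quick_boot' "$tmpfile"    # prints: quick_boot="0"
rm -f "$tmpfile"
```

On a real system you would edit the file in place and then run sudo update-grub so the change reaches /boot/grub/grub.cfg.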

Drew Michel (drew-michel) wrote :

I can also confirm this bug is happening with the latest beta version of Trusty with /boot living on an EXT4 LVM partition.

* setting quick_boot="0" in /etc/grub.d/10_linux and running update-grub fixes the issue
* setting GRUB_SAVEDEFAULT="false" in /etc/default/grub and running update-grub does not fix the issue
* removing recordfail from /boot/grub/grub.cfg fixes the issue

3.13.0-23-generic #45-Ubuntu
Distributor ID: Ubuntu
Description: Ubuntu Trusty Tahr (development branch)
Release: 14.04

apt-cache policy grub-pc
grub-pc:
  Installed: 2.02~beta2-9

Gus (gus-lgze) wrote :

Just to confirm, this is in the release version of 14.04. I've got it on a fresh build with raid 1 via mdadm, no swap.

It does not halt booting, just a brief delay.

Quesar (rick-microway) wrote :

I just made a permanent clean fix for this, at least for MD (software RAID). It can easily be modified to fix for LVM too. Edit /etc/grub.d/00_header and change the recordfail section to this:

if [ "$quick_boot" = 1 ]; then
    cat <<EOF
function recordfail {
  set recordfail=1
EOF
    FS="$(grub-probe --target=fs "${grubdir}")"
    GRUBMDDEVICE="$(grub-probe --target=disk "${grubdir}" | grep /dev/md)"
    if [ -n "$GRUBMDDEVICE" ] ; then
        cat <<EOF
  # GRUB lacks write support for $GRUBMDDEVICE, so recordfail support is disabled.
EOF
    else
        case "$FS" in
          btrfs | cpiofs | newc | odc | romfs | squash4 | tarfs | zfs)
            cat <<EOF
  # GRUB lacks write support for $FS, so recordfail support is disabled.
EOF
            ;;
          *)
            cat <<EOF
  if [ -n "\${have_grubenv}" ]; then if [ -z "\${boot_once}" ]; then save_env recordfail; fi; fi
EOF
        esac
    fi
    cat <<EOF
}
EOF
fi
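The detection in the script above hinges on classifying the device path that grub-probe reports. That classification can be sketched on its own, with hard-coded example paths standing in for grub-probe output:

```shell
#!/bin/sh
# Classify a device path the way the 00_header patch above does:
# GRUB has no write support through diskfilter (/dev/md*) or
# device-mapper/LVM (/dev/mapper/*) layers.
grub_can_write() {
    case "$1" in
        /dev/md*)      echo "no (mdraid)" ;;
        /dev/mapper/*) echo "no (lvm)" ;;
        *)             echo "yes" ;;
    esac
}

grub_can_write /dev/md0             # prints: no (mdraid)
grub_can_write /dev/mapper/vg-root  # prints: no (lvm)
grub_can_write /dev/sda1            # prints: yes
```

When the answer is "no", the generated recordfail function simply skips the save_env call, which mirrors what the official fix later did.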

robled (robled) wrote :

The work-around from #24 gets rid of the error for me. I timed my boot process after the change and didn't notice any appreciable difference in boot time with the work-around in place. This testing was performed using a recent laptop with an SSD.

EAB (adair-boder) wrote :

I got this error message too - with a fresh install of 14.04 Server Official Release.
I also have 2 RAID-1 setups.

I have recently installed Xubuntu 14.04 (Official Release) on two computers. On one of them I did not use RAID and allowed automatic disk partitioning; no boot error has been observed. For the second computer I used the Minimal CD, installed two RAID0 devices (one for swap and one for /) and Xubuntu; on this computer the error message appeared every time I booted. The workaround suggested by Moritz Baumann (#24) eliminated the error message.

bolted (k-minnick) wrote :

I followed comment #28 from Quesar (rick-microway) above with lubuntu 14.04 running on a 1U supermicro server. Rebooted multiple times to test, and I am no longer getting this error message. A huge thank you to Quesar for a fix!

Vadim Nevorotin (malamut) wrote :

Fix from #28 extended to support LVM (so, I think, it is universal clean fix of this bug). Change recordfail section in /etc/grub.d/00_header to:

if [ "$quick_boot" = 1 ]; then
    cat <<EOF
function recordfail {
  set recordfail=1
EOF
    GRUBMDDEVICE="$(grub-probe --target=disk "${grubdir}")"
    GRUBLVMDEVICE="$(grub-probe --target=disk "${grubdir}")"
    if echo "$GRUBMDDEVICE" | grep "/dev/md" > /dev/null; then
        cat <<EOF
  # GRUB lacks write support for $GRUBMDDEVICE, so recordfail support is disabled.
EOF
    elif echo "$GRUBLVMDEVICE" | grep "/dev/mapper" > /dev/null; then
        cat <<EOF
  # GRUB lacks write support for $GRUBLVMDEVICE, so recordfail support is disabled.
EOF
    else
        FS="$(grub-probe --target=fs "${grubdir}")"
        case "$FS" in
          btrfs | cpiofs | newc | odc | romfs | squash4 | tarfs | zfs)
            cat <<EOF
  # GRUB lacks write support for $FS, so recordfail support is disabled.
EOF
          ;;
          *)
            cat <<EOF
  if [ -n "\${have_grubenv}" ]; then if [ -z "\${boot_once}" ]; then save_env recordfail; fi; fi
EOF
        esac
    fi
    cat <<EOF
}
EOF
fi

Then run update-grub.

Andrew Hamilton (ahamilton9) wrote :

Just confirming that the above fix (RAID & LVM version) is working for a RAID10, 14.04, x64 fresh install. I don't have LVM set up, though, so I cannot confirm that part.

Tato Salcedo (tatosalcedo) wrote :

I have no RAID; I use LVM and get the same error.

Aaron Hastings (thecosmicfrog) wrote :

Just installed 14.04 and seeing the same error on boot.

I don't have any RAID setup, but I am using LVM ext4 volumes for /, /home and swap. My /boot is on a separate ext4 primary partition in an msdos partition table.

Agustín Ure (aeu79) wrote :

Confirming that the fix in comment #34 solved the problem in a fresh install of 14.04 with LVM.

Uqbar (uqbar) wrote :

I would like to apply the fix from comment#34 as I am using software RAID6 and LVM at the same time.
Unfortunately I am not so good at changing that "recordfail section in /etc/grub.d/00_header".
Would it be possible to attach the complete fixed /etc/grub.d/00_header file here?
Would it be possible to have this as an official "fix released"?

David Twersky (dmtwersky) wrote :

Confirming comment#34 fixed it for me as well.
I'm using LVM on all partitions.

tags: added: patch
Changed in grub2 (Ubuntu):
status: Confirmed → Triaged
Changed in grub:
importance: Undecided → Unknown
status: New → Unknown
importance: Unknown → Undecided
status: Unknown → New
tags: added: utopic
Steve Langasek (vorlon) on 2014-06-17
Changed in grub2 (Ubuntu):
importance: Medium → High
Anders Kaseorg (andersk) on 2014-07-16
Changed in grub:
status: New → Invalid
Changed in grub2 (Debian):
status: Unknown → New
Changed in mdadm (Ubuntu):
assignee: nobody → Dimitri John Ledkov (xnox)
Changed in mdadm (Ubuntu):
status: New → Confirmed
Changed in mdadm (Ubuntu):
importance: Undecided → High
status: Confirmed → Triaged
Changed in grub:
importance: Undecided → Unknown
status: Invalid → Unknown
Changed in grub2 (Ubuntu):
assignee: nobody → Colin Watson (cjwatson)
Changed in mdadm (Ubuntu):
status: Triaged → Invalid
Colin Watson (cjwatson) on 2014-12-18
Changed in grub2 (Ubuntu):
assignee: Colin Watson (cjwatson) → nobody
Steve Langasek (vorlon) on 2015-06-08
Changed in grub2 (Ubuntu):
assignee: nobody → Mathieu Trudel-Lapierre (mathieu-tl)
Changed in grub2 (Ubuntu):
status: Triaged → Incomplete
Loïc Minier (lool) wrote :

keeping this as New so as not to expire it -- potentially no one has this issue without LVM/MD, so Mathieu's question might not get an answer, but we still want to fix this

Changed in grub2 (Ubuntu):
status: Incomplete → New
Launchpad Janitor (janitor) wrote :

Status changed to 'Confirmed' because the bug affects multiple users.

Changed in grub2 (Ubuntu):
status: New → Confirmed
Seb Bonnard (sebmansfeld) wrote :

Hi, this bug also affects me because I'm using LVM.

I want to thank Anders for his patch (see comment #70).

I pasted his patch in a file I called 00_header_754921.patch and then I typed the following commands :

$ sed -i "s/00_header.in/00_header/g" 00_header_754921.patch
$ cd /etc/ && sudo patch -p2 < ~/00_header_754921.patch
$ sudo update-grub

Hope this helps.

Seb.

fermulator (fermulator) wrote :

I just tried applying the patch manually and re-ran grub-install; no go.
I also tried pulling in the patch via Forest (foresto) PPA: https://launchpad.net/~foresto/+archive/ubuntu/ubuntutweaks, also no go.

fermulator@fermmy-basement:/$ sudo grub-install /dev/md0
Installing for i386-pc platform.
grub-install: error: diskfilter writes are not supported.

(full output w/ -v is here: http://pastebin.com/6Ni8GpY3)

fermulator@fermmy-basement:/$ cat /proc/mdstat
Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
md0 : active raid1 sdb1[0] sda1[1]
      156158720 blocks super 1.2 [2/2] [UU]

fermulator@fermmy-basement:/$ uname -a
Linux fermmy-basement 3.16.0-30-generic #40~14.04.1-Ubuntu SMP Thu Jan 15 17:43:14 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux
fermulator@fermmy-basement:/$ cat /etc/issue
Ubuntu 14.04.2 LTS \n \l

fermulator@fermmy-basement:/$ dpkg --list | grep grub2
ii grub2 2.02~beta2-9ubuntu1.2 amd64 GRand Unified Bootloader, version 2 (dummy package)

@fermulator, that's on purpose, we don't have write support on /dev/md (diskfilter) devices. You might want to use /dev/sda1 instead (as per mdstat), the changes will get synced on the other drive.

I'm applying the changes for Ubuntu now, changes for Debian are in Debian git.

Changed in grub2 (Ubuntu):
status: Confirmed → In Progress
Launchpad Janitor (janitor) wrote :

This bug was fixed in the package grub2 - 2.02~beta2-26ubuntu3

---------------
grub2 (2.02~beta2-26ubuntu3) wily; urgency=medium

  * debian/patches/uefi_firmware_setup.patch: take into account that the UEFI
    variable OsIndicationsSupported is a bit field, and as such should be
    compared as hex values in 30_uefi-firmware.in. (LP: #1456911)
  * Update quick boot logic to handle abstractions for which there is no
    write support. (LP: #1274320)

 -- Mathieu Trudel-Lapierre <email address hidden> Mon, 06 Jul 2015 16:32:11 -0400

Changed in grub2 (Ubuntu):
status: In Progress → Fix Released
Phillip Susi (psusi) wrote :

On 7/6/2015 4:43 PM, Mathieu Trudel-Lapierre wrote:
> @fermulator, that's on purpose, we don't have write support on /dev/md
> (diskfilter) devices. You might want to use /dev/sda1 instead (as per
> mdstat), the changes will get synced on the other drive.

No, no, no... you NEVER write directly to a disk that is a component of
a raid array, and if you do, it will NOT be synced to the other drive,
since md has no idea you did such a thing.

Hum, of course, you're right. Things won't get synced.

That said, you *do* need to write directly to each disk of the RAID array to install grub on them given that grub doesn't have support for the overlaying device representation.

Seb Bonnard (sebmansfeld) wrote :

Hi,

Oops !

I forgot to add to my comment #115 :

sudo chmod +x /etc/grub.d/00_header

BEFORE the "update-grub" command.

Sebastien.

Shahar Or (mightyiam) wrote :

Not seeing this on startup feels so good. But it was around for so long I almost miss it.

Who's to blame for the fix?

Thanks a lot.

Rarylson Freitas (rarylson) wrote :

One question:

The solution made by "Mathieu Trudel-Lapierre <email address hidden> " is marked as Fix Released.

However, I can't update my grub package to the released one. The new version is 2.02~beta2-26ubuntu3, and mine is 2.02~beta2-9ubuntu1.3.

I've tried to get it from the trusty-proposed repo, without success (https://wiki.ubuntu.com/Testing/EnableProposed).

What should I do now? Should I only wait for the fix being at the trusty-main repo?

Simon Déziel (sdeziel) wrote :

On 07/17/2015 11:16 AM, Rarylson Freitas wrote:
> One question:
>
> The solution made by "Mathieu Trudel-Lapierre <email address hidden> "
> is marked as Fix Released.
>
> However, I can't update my grub package to the released one. The new
> version is 2.02~beta2-26ubuntu3, and mine is 2.02~beta2-9ubuntu1.3.
>
> I've tried to get it from the trusty-proposed repo, without success
> (https://wiki.ubuntu.com/Testing/EnableProposed).
>
> What should I do now? Should I only wait for the fix being at the
> trusty-main repo?
>

The version 2.02~beta2-26ubuntu3 is for Wily, not Trusty. You'll need to
wait for a Trusty specific version to hit trusty-proposed to be able to
test it.

Indeed. To get this in trusty (or other releases), please see http://wiki.ubuntu.com/StableReleaseUpdates#Procedure to request the update for the release you're interested in. It would help me a lot if someone having the issue could at least update the bug description and nominate for a release, then I can get back to grub later to do the update.

Charis (tao-qqmail) wrote :

Where is the solution?

Changed in grub2 (Ubuntu):
assignee: Mathieu Trudel-Lapierre (mathieu-tl) → nobody
Changed in grub2 (Ubuntu Trusty):
status: New → In Progress
Changed in grub2 (Ubuntu Vivid):
status: New → In Progress
importance: Undecided → High
Changed in grub2 (Ubuntu Trusty):
assignee: nobody → Mathieu Trudel-Lapierre (mathieu-tl)
importance: Undecided → High
Changed in grub2 (Ubuntu Vivid):
assignee: nobody → Mathieu Trudel-Lapierre (mathieu-tl)
Changed in mdadm (Ubuntu):
assignee: Dimitri John Ledkov (xnox) → nobody
fermulator (fermulator) wrote :

Following up on my comment of 2015-07-04: there appear to be a few later comments showing confusion.

What /is/ the correct way to re-install grub to mdadm member drives?
(assuming mdadm has member disks with proper RAID partitions)
{{{
fermulator@fermmy-server:~$ cat /proc/mdstat | grep -A3 md60
md60 : active raid1 sdi2[1] sdj2[0]
      58560384 blocks super 1.2 [2/2] [UU]
}}}

grub-install /dev/sdX|Y
or,
grub-install /dev/sdX#|Y#

Ted Cabeen (ted-cabeen) wrote :

fermulator, if Linux is the only operating system on this computer, you want to install the grub bootloader on the drives, not the partitions, so /dev/sdX, /dev/sdY, etc.
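For the two-member array quoted in the question (sdi2 and sdj2), that means installing to both underlying disks so either one can still boot if the other fails:

```
$ sudo grub-install /dev/sdi
$ sudo grub-install /dev/sdj
```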

Launchpad Janitor (janitor) wrote :

Status changed to 'Confirmed' because the bug affects multiple users.

Changed in mdadm (Ubuntu Trusty):
status: New → Confirmed
Changed in mdadm (Ubuntu Vivid):
status: New → Confirmed
Phillip Susi (psusi) on 2015-11-07
no longer affects: mdadm (Ubuntu)
no longer affects: mdadm (Ubuntu Trusty)
no longer affects: mdadm (Ubuntu Vivid)
Michiel Bruijn (michiel-o) wrote :

This bug is still present and not fixed for me and several other people (for example http://forum.kodi.tv/showthread.php?tid=194447)

I did a clean install of kodibuntu (lubuntu 14.04) and had this error.
I use LVM and installed the OS on a SSD in AHCI mode.
It's annoying, but the system continues after a few seconds.

I would like to have this problem fixed because my monitor resumes slowly after suspend, and I would like to rule out any relation to this bug.

Tom Reynolds (tomreyn) wrote :

mathieu-tl:

Thanks for your work on this issue.

Since you nominated it for trusty and state it's in progress: is there a way to follow this progress?
Are there any test builds you would like tested yet?

In case it's not been sufficiently stated before, this issue does affect 14.04 LTS x86_64.

It would be great to see an SRU, since this slows the boot process and may trick users into thinking their Ubuntu installation is broken when it is not (doing as the message suggests will just reboot the system).

Anyone is welcome to copy and paste this text into the first post if that helps with the SRU.

description: updated
dann frazier (dannf) on 2015-12-16
Changed in grub2 (Ubuntu Vivid):
assignee: Mathieu Trudel-Lapierre (mathieu-tl) → dann frazier (dannf)
Changed in grub2 (Ubuntu Trusty):
assignee: Mathieu Trudel-Lapierre (mathieu-tl) → dann frazier (dannf)

Hello Patrick, or anyone else affected,

Accepted grub2 into trusty-proposed. The package will build now and be available at https://launchpad.net/ubuntu/+source/grub2/2.02~beta2-9ubuntu1.7 in a few hours, and then in the -proposed repository.

Please help us by testing this new package. See https://wiki.ubuntu.com/Testing/EnableProposed for documentation how to enable and use -proposed. Your feedback will aid us getting this update out to other Ubuntu users.

If this package fixes the bug for you, please add a comment to this bug, mentioning the version of the package you tested, and change the tag from verification-needed to verification-done. If it does not fix the bug for you, please add a comment stating that, and change the tag to verification-failed. In either case, details of your testing will help us make a better decision.

Further information regarding the verification process can be found at https://wiki.ubuntu.com/QATeam/PerformingSRUVerification . Thank you in advance!

Changed in grub2 (Ubuntu Trusty):
status: In Progress → Fix Committed
tags: added: verification-needed
Chris J Arges (arges) wrote :

Hello Patrick, or anyone else affected,

Accepted grub2 into vivid-proposed. The package will build now and be available at https://launchpad.net/ubuntu/+source/grub2/2.02~beta2-22ubuntu1.5 in a few hours, and then in the -proposed repository.

Please help us by testing this new package. See https://wiki.ubuntu.com/Testing/EnableProposed for documentation how to enable and use -proposed. Your feedback will aid us getting this update out to other Ubuntu users.

If this package fixes the bug for you, please add a comment to this bug, mentioning the version of the package you tested, and change the tag from verification-needed to verification-done. If it does not fix the bug for you, please add a comment stating that, and change the tag to verification-failed. In either case, details of your testing will help us make a better decision.

Further information regarding the verification process can be found at https://wiki.ubuntu.com/QATeam/PerformingSRUVerification . Thank you in advance!

Changed in grub2 (Ubuntu Vivid):
status: In Progress → Fix Committed
Chris J Arges (arges) wrote :

Hello Patrick, or anyone else affected,

Accepted grub2-signed into trusty-proposed. The package will build now and be available at https://launchpad.net/ubuntu/+source/grub2-signed/1.34.8 in a few hours, and then in the -proposed repository.

Please help us by testing this new package. See https://wiki.ubuntu.com/Testing/EnableProposed for documentation how to enable and use -proposed. Your feedback will aid us getting this update out to other Ubuntu users.

If this package fixes the bug for you, please add a comment to this bug, mentioning the version of the package you tested, and change the tag from verification-needed to verification-done. If it does not fix the bug for you, please add a comment stating that, and change the tag to verification-failed. In either case, details of your testing will help us make a better decision.

Further information regarding the verification process can be found at https://wiki.ubuntu.com/QATeam/PerformingSRUVerification . Thank you in advance!

Changed in grub2-signed (Ubuntu Trusty):
status: New → Fix Committed
Changed in grub2-signed (Ubuntu Vivid):
status: New → Fix Committed
Chris J Arges (arges) wrote :

Hello Patrick, or anyone else affected,

Accepted grub2-signed into vivid-proposed. The package will build now and be available at https://launchpad.net/ubuntu/+source/grub2-signed/1.46.5 in a few hours, and then in the -proposed repository.

Please help us by testing this new package. See https://wiki.ubuntu.com/Testing/EnableProposed for documentation how to enable and use -proposed. Your feedback will aid us getting this update out to other Ubuntu users.

If this package fixes the bug for you, please add a comment to this bug, mentioning the version of the package you tested, and change the tag from verification-needed to verification-done. If it does not fix the bug for you, please add a comment stating that, and change the tag to verification-failed. In either case, details of your testing will help us make a better decision.

Further information regarding the verification process can be found at https://wiki.ubuntu.com/QATeam/PerformingSRUVerification . Thank you in advance!

Simon Déziel (sdeziel) on 2015-12-17
tags: added: verification-done-trusty verification-needed-vivid
removed: verification-needed
Anton Eliasson (eliasson) wrote :

Packages from vivid-proposed fixed the issue for me.

Details:

Start-Date: 2015-12-18 12:14:56
Commandline: apt-get install grub-common/vivid-proposed -t vivid-proposed
Upgrade: grub-efi-amd64-bin:amd64 (2.02~beta2-22ubuntu1.4, 2.02~beta2-22ubuntu1.5), grub-efi-amd64:amd64 (2.02~beta2-22ubuntu1.4, 2.02~beta2-22ubuntu1.5), grub-common:amd64 (2.02~beta2-22ubuntu1.4, 2.02~beta2-22ubuntu1.5), grub2-common:amd64 (2.02~beta2-22ubuntu1.4, 2.02~beta2-22ubuntu1.5), grub-efi-amd64-signed:amd64 (1.46.4+2.02~beta2-22ubuntu1.4, 1.46.5+2.02~beta2-22ubuntu1.5)
End-Date: 2015-12-18 12:15:19

Simon Déziel (sdeziel) on 2015-12-18
tags: added: verification-done-vivid
removed: verification-needed-vivid

After installing 2.02~beta2-9ubuntu1.7 on Trusty (14.04.3 32-bit) I no longer see the message during boot.
(This was perfect timing for me! I only just dealt with the upgrade from Grub legacy today and was disappointed to see an error message, which is now gone)
Thanks

Launchpad Janitor (janitor) wrote :

Status changed to 'Confirmed' because the bug affects multiple users.

Changed in grub2-signed (Ubuntu):
status: New → Confirmed
Id2ndR (id2ndr) wrote :

After installing 2.02~beta2-9ubuntu1.7 on Trusty, I had to set the execute bit on /etc/grub.d/00_header. Now it works normally with my LVM system partition.

So enable proposed repository, and then:
sudo apt-get install grub-efi-amd64/trusty-proposed -t trusty-proposed
sudo chmod +x /etc/grub.d/00_header
sudo update-grub2

Rich Hart (sirwizkid) wrote :

The 1.7 package is working flawlessly on my systems that were affected.
Thanks for fixing this.

fermulator (fermulator) wrote :

Based upon the comments above, and the TEST CASE defined in the main section for this bug, I confirm that verification=done

###
--> PASS
###

I tested on my own system running

{{{
$ mount | grep md60
/dev/md60 on / type ext4 (rw,errors=remount-ro)

$ cat /proc/mdstat | grep -A1 md60
md60 : active raid1 sdd2[0] sdb2[1]
      58560384 blocks super 1.2 [2/2] [UU]

fermulator@fermmy-server:~$ dpkg --list | grep grub
ii grub-common 2.02~beta2-9ubuntu1.7 amd64 GRand Unified Bootloader (common files)
ii grub-gfxpayload-lists 0.6 amd64 GRUB gfxpayload blacklist
ii grub-pc 2.02~beta2-9ubuntu1.7 amd64 GRand Unified Bootloader, version 2 (PC/BIOS version)
ii grub-pc-bin 2.02~beta2-9ubuntu1.7 amd64 GRand Unified Bootloader, version 2 (PC/BIOS binaries)
ii grub2-common 2.02~beta2-9ubuntu1.7 amd64 GRand Unified Bootloader (common files for version 2)
}}}
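
For anyone repeating this test case, the mount/mdstat checks above can be condensed into a small helper. A sketch (it reads /proc/mdstat-format text on stdin so it can be tried against a sample; md60 is just this particular system's device name):

```shell
#!/bin/sh
# Print the RAID level of a given md device from /proc/mdstat-format input.
# Real usage: raid_level_of md60 < /proc/mdstat
raid_level_of() {
    # mdstat status lines look like: "md60 : active raid1 sdd2[0] sdb2[1]"
    # field 1 is the device, field 4 the personality (raid0, raid1, ...).
    awk -v dev="$1" '$1 == dev { print $4 }'
}
```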

Full results:
http://paste.ubuntu.com/14259366/

---

NOTE: I'm not sure what to do about the "grub2-signed" properties for this bug...

information type: Public → Public Security
information type: Public Security → Public
Launchpad Janitor (janitor) wrote :

This bug was fixed in the package grub2 - 2.02~beta2-9ubuntu1.7

---------------
grub2 (2.02~beta2-9ubuntu1.7) trusty; urgency=medium

  * Cherry-picks to better handle TFTP timeouts on some arches: (LP: #1521612)
    - (7b386b7) efidisk: move device path helpers in core for efinet
    - (c52ae40) efinet: skip virtual IP devices when enumerating cards
    - (f348aee) efinet: enable hardware filters when opening interface
  * Update quick boot logic to handle abstractions for which there is no
    write support. (LP: #1274320)

 -- dann frazier <email address hidden> Wed, 16 Dec 2015 14:03:48 -0700
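
The "quick boot logic" change in this entry amounts to a guard in the menu-building scripts: when /boot lives on an abstraction GRUB cannot write through, recordfail is simply not enabled, so GRUB never attempts the write that produced the error. An illustrative sketch of that decision, not the actual patch (the abstraction names follow what grub-probe --target=abstraction reports on RAID/LVM systems):

```shell
#!/bin/sh
# Decide whether the recordfail feature can be used, given the list of
# storage abstractions under /boot. LVM and mdraid go through GRUB's
# diskfilter layer, which has no write support, so recordfail must be
# skipped there to avoid "diskfilter writes are not supported" at boot.
recordfail_supported() {
    for abstraction in "$@"; do
        case "$abstraction" in
            lvm | mdraid*)
                return 1 ;;  # no write support: disable recordfail
        esac
    done
    return 0
}
```

With no abstractions (a plain partition) the function succeeds, so quick boot behaves as before; with lvm or mdraid* it fails, and the generated grub.cfg omits the state-saving writes.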

Changed in grub2 (Ubuntu Trusty):
status: Fix Committed → Fix Released

The verification of the Stable Release Update for grub2 has completed successfully and the package has now been released to -updates. Subsequently, the Ubuntu Stable Release Updates Team is being unsubscribed and will not receive messages about this bug report. In the event that you encounter a regression using the package from -updates please report a new bug using ubuntu-bug and tag the bug report regression-update so we can easily find any regressions.

Launchpad Janitor (janitor) wrote :

This bug was fixed in the package grub2-signed - 1.34.8

---------------
grub2-signed (1.34.8) trusty; urgency=medium

  * Rebuild against grub-efi-amd64 2.02~beta2-9ubuntu1.7 (LP: #1521612,
    LP: #1274320).

 -- dann frazier <email address hidden> Wed, 16 Dec 2015 14:23:00 -0700

Changed in grub2-signed (Ubuntu Trusty):
status: Fix Committed → Fix Released
Launchpad Janitor (janitor) wrote :

This bug was fixed in the package grub2 - 2.02~beta2-22ubuntu1.5

---------------
grub2 (2.02~beta2-22ubuntu1.5) vivid; urgency=medium

  * Merge in changes from 2.02~beta2-22ubuntu1.3:
    - d/p/arm64-set-correct-length-of-device-path-end-entry.patch: Fixes
      booting arm64 kernels on certain UEFI implementations. (LP: #1476882)
    - progress: avoid NULL dereference for net files. (LP: #1459872)
    - arm64/setjmp: Add missing license macro. (LP: #1459871)
    - Cherry-pick patch to add SAS disks to the device list from the ofdisk
      module. (LP: #1517586)
    - Cherry-pick patch to open Simple Network Protocol exclusively.
      (LP: #1508893)
  * Cherry-picks to better handle TFTP timeouts on some arches: (LP: #1521612)
    - (7b386b7) efidisk: move device path helpers in core for efinet
    - (c52ae40) efinet: skip virtual IP devices when enumerating cards
    - (f348aee) efinet: enable hardware filters when opening interface
  * Update quick boot logic to handle abstractions for which there is no
    write support. (LP: #1274320)

 -- dann frazier <email address hidden> Wed, 16 Dec 2015 13:31:15 -0700

Changed in grub2 (Ubuntu Vivid):
status: Fix Committed → Fix Released
Launchpad Janitor (janitor) wrote :

This bug was fixed in the package grub2-signed - 1.46.5

---------------
grub2-signed (1.46.5) vivid; urgency=medium

  * Rebuild against grub2 2.02~beta2-22ubuntu1.5 (LP: #1476882, LP: #1459872,
    LP: #1459871, LP: #1517586, LP: #1508893, LP: #1521612, LP: #1274320).

 -- dann frazier <email address hidden> Wed, 16 Dec 2015 14:18:28 -0700

Changed in grub2-signed (Ubuntu Vivid):
status: Fix Committed → Fix Released
Lior Goikhburg (goikhburg) wrote :

Problems installing the latest 14.04.3

I have tried every solution mentioned in this thread, with no luck.
GRUB would not install...

HP server with 4 SATA disks, RAID 10 (md0) with /boot and / on it, no LVM

installing with:
update-grub - works fine
grub-install /dev/md0 - fails

I went up to grub version 2.02~beta2-32ubuntu1, the latest from xenial, and I am still getting the diskfilter error. Nothing helps.

Any ideas, anyone?

wiley.coyote (tjwiley) wrote :

Did you try simply updating the packages from the Trusty repos? The fix has already been released.

2.02~beta2-9ubuntu1.7

The fix is there & working...at least for me.

Changed in grub2 (Debian):
status: New → Fix Released
armaos (alexandros-k) wrote :

Hi,
So, more or less, I have tried the solutions above, but still without luck.
@Lior Goikhburg (goikhburg): did you manage to solve it?

All ideas are more than welcome.
Thanks

Lior Goikhburg (goikhburg) wrote :

I ended up with the following workaround:

When setting up the server, I configured the following:

0. RAID 10 on /dev/sda, /dev/sdb, /dev/sdc, /dev/sdd
1. /boot, / and the swap partition are on the RAID, but NOT in an LVM volume
2. The rest of the RAID space is an LVM partition

At the end of the install, when you get the error message, install GRUB manually on /dev/sda1 and /dev/sda2 (/dev/sda3 and /dev/sda4 will not let you, because they're striped). Use the console to run:
# update-grub
# grub-install /dev/sda1
# grub-install /dev/sda2
Return to the setup and skip the installation of GRUB (you installed it manually).

Hope that helps.

Paul Tomblin (ptomblin) wrote :

I upgraded to Kubuntu 16.04 and it's still happening. When am I supposed to see this supposed fix?

Changed in grub2-signed (Ubuntu):
status: Confirmed → Fix Released
Displaying first 40 and last 40 comments. View all 152 comments.