Error: diskfilter writes are not supported

Bug #1274320 reported by Patrick Houle
This bug affects 367 people
Affects                  Status        Importance  Assigned to    Milestone
grub                     Unknown       Unknown
grub2 (Debian)           Fix Released  Unknown
grub2 (Fedora)           Confirmed     Undecided
grub2 (Ubuntu)           Fix Released  High        Unassigned
  Trusty                 Fix Released  High        dann frazier
  Vivid                  Fix Released  High        dann frazier
grub2-signed (Ubuntu)    Fix Released  Undecided   Unassigned
  Trusty                 Fix Released  Undecided   Unassigned
  Vivid                  Fix Released  Undecided   Unassigned

Bug Description

[Impact]
RAID and LVM users may run into a cryptic warning from GRUB on boot, because some variants of RAID and LVM are not supported for writing by GRUB itself. GRUB typically tries to write a tiny file to the boot partition for things like remembering the last selected boot entry.

[Test Case]
On an affected system (typically any setup where the boot device is on RAID or on an LVM device), try to boot. Without the patch the message appears; with the patch it does not.
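
As a quick sanity check (not part of the official test case; a minimal sketch using grub-probe, which also appears in workarounds later in this report, with illustrative output):

$ sudo grub-probe --target=disk /boot
/dev/md0
# An md or /dev/mapper device here means GRUB cannot write its environment
# file to the boot partition, so the warning would appear without the patch.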

[Regression Potential]
The potential for regression is minimal. The patch makes the menu-building scripts honor the fact that diskfilter writes are unsupported by GRUB: recordfail (the offending feature, which saves GRUB's state) is automatically left out when the boot partition is detected to be on a device that does not support diskfilter writes.
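
For reference (illustrative, based on the workarounds quoted later in this report), the generated grub.cfg on such a system ends up with a stubbed-out recordfail function along these lines:

function recordfail {
  set recordfail=1
  # GRUB lacks write support for /dev/md0, so recordfail support is disabled.
}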

----

Once GRUB chooses what to boot, an error shows up and sits on the screen for approximately 5 seconds:

"Error: diskfilter writes are not supported.
Press any key to continue..."

From what I understand, this error is related to RAID partitions, and I have two of them (md0, md1). Both partitions are in use (root and swap). The arrays are assembled with mdadm and are RAID0.

This error message started appearing right after grub2 was updated on 01/27/2014.

System: Kernel: 3.13.0-5-generic x86_64 (64 bit) Desktop: KDE 4.11.5 Distro: Ubuntu 14.04 trusty
Drives: HDD Total Size: 1064.2GB (10.9% used)
        1: id: /dev/sda model: SanDisk_SDSSDRC0 size: 32.0GB
        2: id: /dev/sdb model: SanDisk_SDSSDRC0 size: 32.0GB
        3: id: /dev/sdc model: ST31000528AS size: 1000.2GB
RAID: Device-1: /dev/md1 - active raid: 0 components: online: sdb2 sda3 (swap)       Device-2: /dev/md0 - active raid: 0 components: online: sdb1 sda1 ( / )
Grub2: grub-efi-amd64 version 2.02~beta2-5

ProblemType: Bug
DistroRelease: Ubuntu 14.04
Package: grub-efi-amd64 2.02~beta2-5
ProcVersionSignature: Ubuntu 3.13.0-5.20-generic 3.13.0
Uname: Linux 3.13.0-5-generic x86_64
NonfreeKernelModules: nvidia
ApportVersion: 2.13.2-0ubuntu2
Architecture: amd64
CurrentDesktop: KDE
Date: Wed Jan 29 17:37:59 2014
SourcePackage: grub2
UpgradeStatus: Upgraded to trusty on 2014-01-23 (6 days ago)

Revision history for this message
In , Harald (harald-redhat-bugs) wrote :

Created attachment 795965
photo of bootscreen

after upgrading to F19, GRUB2 comes up with "error: diskfilter writes are not supported" and waits a few seconds for a key press; thank god it then boots automatically, so wake-on-LAN is not broken (see also the attachment)

but what is this nonsense?

Personalities : [raid1] [raid10]
md2 : active raid10 sda3[0] sdc3[1] sdb3[3] sdd3[2]
      3875222528 blocks super 1.1 512K chunks 2 near-copies [4/4] [UUUU]
      bitmap: 2/29 pages [8KB], 65536KB chunk

md1 : active raid10 sda2[0] sdc2[1] sdb2[3] sdd2[2]
      30716928 blocks super 1.1 512K chunks 2 near-copies [4/4] [UUUU]
      bitmap: 0/1 pages [0KB], 65536KB chunk

md0 : active raid1 sda1[0] sdc1[1] sdd1[2] sdb1[3]
      511988 blocks super 1.0 [4/4] [UUUU]

unused devices: <none>
_________________________________________________

[root@rh:~]$ cat /boot/grub2/grub.cfg
#
# DO NOT EDIT THIS FILE
#
# It is automatically generated by grub2-mkconfig using templates
# from /etc/grub.d and settings from /etc/default/grub
#

### BEGIN /etc/grub.d/00_header ###
if [ -s $prefix/grubenv ]; then
  load_env
fi
if [ "${next_entry}" ] ; then
   set default="${next_entry}"
   set next_entry=
   save_env next_entry
   set boot_once=true
else
   set default="${saved_entry}"
fi

if [ x"${feature_menuentry_id}" = xy ]; then
  menuentry_id_option="--id"
else
  menuentry_id_option=""
fi

export menuentry_id_option

if [ "${prev_saved_entry}" ]; then
  set saved_entry="${prev_saved_entry}"
  save_env saved_entry
  set prev_saved_entry=
  save_env prev_saved_entry
  set boot_once=true
fi

function savedefault {
  if [ -z "${boot_once}" ]; then
    saved_entry="${chosen}"
    save_env saved_entry
  fi
}

function load_video {
  if [ x$feature_all_video_module = xy ]; then
    insmod all_video
  else
    insmod efi_gop
    insmod efi_uga
    insmod ieee1275_fb
    insmod vbe
    insmod vga
    insmod video_bochs
    insmod video_cirrus
  fi
}

terminal_output console
set timeout=1
### END /etc/grub.d/00_header ###

### BEGIN /etc/grub.d/10_linux ###
menuentry 'Fedora, with Linux 3.10.11-200.fc19.x86_64' --class fedora --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-3.10.11-200.fc19.x86_64-advanced-b935b5db-0051-4f7f-83ac-6a6651fe0988' {
        savedefault
        load_video
        set gfxpayload=keep
        insmod gzio
        insmod part_msdos
        insmod part_msdos
        insmod part_msdos
        insmod part_msdos
        insmod diskfilter
        insmod mdraid1x
        insmod ext2
        set root='mduuid/1d691642baed26df1d1974964fb00ff8'
        if [ x$feature_platform_search_hint = xy ]; then
          search --no-floppy --fs-uuid --set=root --hint='mduuid/1d691642baed26df1d1974964fb00ff8' 1de836e4-e97c-43ee-b65c-400b0c29d3aa
        else
          search --no-floppy --fs-uuid --set=root 1de836e4-e97c-43ee-b65c-400b0c29d3aa
        fi
        linux /vmlinuz-3.10.11-200.fc19.x86_64 root=UUID=b935b5db-0051-4f7f-83ac-6a6651fe0988 ro divider=10 audit=0 rd.plymouth=0 plymouth.enable=0 rd.md.uuid=b7475879:c95d9a47:c5043c02:0c5ae720 rd.md.uuid=1d691642:baed26df:1d197496:4fb00ff8 rd.md.uuid=ea253255:cb915401:f32794ad:ce0fe396 rd.luk...


Revision history for this message
In , Harald (harald-redhat-bugs) wrote :

oh, and removing the line "insmod diskfilter" from "grub.cfg" does not change anything

Revision history for this message
In , Michal (michal-redhat-bugs) wrote :

I'm seeing the same error. I found the message mysterious, so I took a look at the code and discovered the following:
- "diskfilter" is GRUB's implementation detail for working with LVM and MD RAID
  devices.
- Writing to these kinds of devices is not implemented in GRUB.
- The error may have always been there, but
  0085-grub-core-disk-diskfilter.c-grub_diskfilter_write-Ca.patch made it more
  visible.
- The reason GRUB is trying to write to the device could be that it is
  following the "save_env" commands in the config file.

Revision history for this message
In , Harald (harald-redhat-bugs) wrote :

interesting - why does GRUB try to write anything?
it should not have to touch any FS at boot

GRUB2 is such a large step backwards because it is more or less its own operating system with the ugliest configuration one could design, while grub-legacy was a boot manager and nothing else

finally we end in 3 full operating systems

* grub
* dracut
* linux

Revision history for this message
In , Harald (harald-redhat-bugs) wrote :

/etc/default/grub with these options avoids a lot of crap on Fedora-Only machines

GRUB_TIMEOUT=1
GRUB_DISTRIBUTOR="Fedora"
GRUB_SAVEDEFAULT="false"
GRUB_TERMINAL_OUTPUT="console"
GRUB_DISABLE_RECOVERY="true"
GRUB_DISABLE_SUBMENU="true"
GRUB_DISABLE_OS_PROBER="true"

Revision history for this message
In , Michal (michal-redhat-bugs) wrote :

Note that GRUB Legacy had a similar feature: the "savedefault" command.
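
(For comparison, a minimal GRUB Legacy menu.lst using that feature looked roughly like this; the entry details are illustrative:)

default saved
timeout 5

title Fedora
    root (hd0,0)
    kernel /vmlinuz root=/dev/md0 ro
    initrd /initrd.img
    savedefault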

Revision history for this message
In , Harald (harald-redhat-bugs) wrote :

but it did not halt the boot for several seconds with a useless error message and "press any key to continue", it did not mess things up with submenus and the like, nor did it freeze the machine while editing the kernel line, which happens with GRUB2 way too often when you need to edit it

Revision history for this message
In , Michal (michal-redhat-bugs) wrote :

My comment #5 was just to show that the assertion "it should not have to touch any FS at boot" is false and that GRUB Legacy was no different in this regard.
I already commented on the increased visibility of the error, in comment #2.

Revision history for this message
Patrick Houle (buddlespit) wrote :
description: updated
description: updated
Revision history for this message
Patrick Houle (buddlespit) wrote :

I should also point out that the system will boot normally once any key is pressed or 5 seconds elapses.

description: updated
Revision history for this message
Launchpad Janitor (janitor) wrote :

Status changed to 'Confirmed' because the bug affects multiple users.

Changed in grub2 (Ubuntu):
status: New → Confirmed
summary: - Error after boot menu
+ Error: diskfilter writes are not supported
Changed in grub2 (Ubuntu):
importance: Undecided → Low
importance: Low → Medium
Revision history for this message
KrautOS (krautos) wrote :

Got the same issue on an up-to-date "trusty" machine with / on mdadm RAID 1 and swap on mdadm RAID 0. Any clues how to fix it?

Revision history for this message
Patrick Houle (buddlespit) wrote :

I changed my RAID devices' <dump> and <pass> fields to '0 0' instead of '0 1' in /etc/fstab.
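
(For anyone unsure where those fields live, an /etc/fstab line with <dump> and <pass> set to '0 0' looks roughly like this; the device names follow the reporter's setup and are otherwise illustrative:)

# <device>  <mount point>  <type>  <options>          <dump> <pass>
/dev/md0    /              ext4    errors=remount-ro  0      0
/dev/md1    none           swap    sw                 0      0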

Revision history for this message
Singtoh (singtoh) wrote :

Hello all,

Just thought I would throw in as well. I just started seeing this after a fresh install of Ubuntu Trusty today, at the first bootup and all boots thereafter. I am not running RAID but I am using LVM. This is Ubuntu Trusty amd64. Just as a side note, on today's install I didn't give the system a /boot partition like I have seen in all the LVM tutorials. I just have two disks that I made LVM partitions on, i.e. /root /home /Storage /Storage1 and swap. Runs really nicely, but I get that nagging error at boot. Hope it gets a fix soon.

Cheers,

Singtoh

Revision history for this message
Huygens (huygens-25) wrote :

Here is another different kind of setup which triggers the problem:
/boot ext4 on a md RAID10 (4 HDD) partition
/ btrfs RAID10 (4 HDD) partition
swap on a md RAID0 (4HDD) partition

The boot and kernel are on a MD RAID (software RAID), whereas the rest of the system is using Btrfs RAID.

Revision history for this message
Cybjit (cybjit) wrote :

I also get this message.
/ is on ext4 LVM, with no separate /boot.
Booting works fine after the delay or key press.

Revision history for this message
Singtoh (singtoh) wrote :

Just to add this tidbit: I just re-installed with normal partitioning (no RAID and no LVM), just /root /home and swap, and it boots normally; no 5-second wait and no errors. So I guess it is LVM and/or RAID related. I am just about to re-install again to a new SSD and will install to LVM again; I'll post back with the outcome.

Cheers,

Singtoh

Revision history for this message
robled (robled) wrote :

This is definitely RAID/LVM related. On my 14.04 system with an ext4 boot partition I don't get the error, but on another system that's fully LVM I do get the error.

Has anyone come up with a grub config workaround to prevent the delay on boot?

Revision history for this message
Jean-Mi (forum-i) wrote :

You guys should be happy your systems still boot. I just got that error (diskfilter writes are not supported), but grub exits immediately after, leaving my UEFI with no other choice than booting another distro.
I had to spam the pause key on my keyboard to catch the error message before it disappeared.
On my setup, the boot error occurs with openSUSE installed on LVM2. The other distro is installed with a regular separate /boot (ext4) partition. Both are using grub. I could load both by calling their respective grubx64.efi from the ESP partition.
The last thing I remember having done on openSUSE was to create a btrfs partition and tweak /etc/fstab a little bit.
From the other distro, I can read openSUSE's files and everything looks fine. It's like the boot loader used to work and suddenly failed.
I'd love to remember what else I did since it worked. And I'd love to be able to boot openSUSE again.

Revision history for this message
Jean-Mi (forum-i) wrote :

I may have found the reason for my particular crash. Now my system boots normally.

According to bug report #1006289 at Red Hat, the bug could come from "insmod diskfilter", but someone deactivated that module and still got the error. I don't even have this module declared. But I noticed openSUSE loves to handle everything on reboot, like setting the next OS to load.
My /boot/grub/grubenv contained two lines, basically save_entry=openSUSE and next_entry=LMDE Cinnamon.
I removed those lines and the error disappeared. Maybe those lines instruct grub to write something on the boot partition, which it is perfectly unable to do since it cannot write to LVM.
Anyway, it seems that solving this bug requires finding out why grub tries to write data.

Revision history for this message
hamish (hamish-b) wrote :

Hi, I get the same error with 14.04 beta1 booting into RAID1.

For those running their swap partitions in a RAID, I'm wondering if it would be better to just mount the multiple swap partitions in fstab and give them all equal priority. For soft RAID it would cut out a layer of overhead, and for swap anything that cuts out overhead is a good thing (e.g., mount options in fstab for all swap partitions: sw,pri=10; see the sketch after the quote below).

See `man 2 swapon` for details.
       "If two or more areas
       have the same priority, and it is the highest priority available, pages
       are allocated on a round-robin basis between them."
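
(A sketch of what such fstab entries could look like, using the swap component partitions from the original report:)

# Two plain swap partitions with equal priority; the kernel allocates
# pages across them round-robin, with no md layer in between.
/dev/sda3  none  swap  sw,pri=10  0  0
/dev/sdb2  none  swap  sw,pri=10  0  0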

Revision history for this message
Phillip Susi (psusi) wrote :

That would defeat the purpose of RAID1, which is to keep the system up and running when a drive fails. With two separate swaps, if a disk fails you are probably going to have half of user space die and end up with a badly broken server that needs to be rebooted.

Revision history for this message
Artyom Nosov (artyom.nosov) wrote :

Got the same issue on the daily build of Trusty (20140319). /, /home and swap are all RAID1.

Revision history for this message
Denis Telnov (irland) wrote :

I have this issue on Ubuntu Trusty 14.04 with / on LVM. Deleting /boot/grub/grubenv prevents the error on the next boot, but grub recreates the file on every boot, so I have rm /boot/grub/grubenv in my crontab.
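
(Something along these lines in root's crontab; the @reboot schedule is one option among several:)

# remove GRUB's environment block after each boot so nothing is left to rewrite
@reboot rm -f /boot/grub/grubenv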

Revision history for this message
stoffel010170 (stoffel-010170) wrote :

I have the same bug on my LVM system, too. My systems without LVM and RAID are not affected.

Revision history for this message
Thomas (t.c) wrote :

I have the bug too; my root filesystem (/) is on a software RAID 1.

Revision history for this message
Thomas (t.c) wrote :

# GRUB Environment Block
#######################################################################################################################################################################################################################################################################################################################################################################################################################################################################################################################################################################################################################################################################################################################################################################################################################################################################################################################################################################################################################################

That's the content of /boot/grub/grubenv - is it right?

Revision history for this message
VladimirCZ (vlabla) wrote :

I also get this message.
/ and swap volumes are on ext4 LVM, with no separate /boot.
Booting works fine after the delay or key press.

Revision history for this message
Ubuntu QA Website (ubuntuqa) wrote :

This bug has been reported on the Ubuntu ISO testing tracker.

A list of all reports related to this bug can be found here:
http://iso.qa.ubuntu.com/qatracker/reports/bugs/1274320

tags: added: iso-testing
Revision history for this message
Moritz Baumann (mo42) wrote :

The problem is the call to the recordfail function in each menuentry. If I comment it out in /boot/grub/grub.cfg, the error message no longer appears. Unfortunately, there seems to be no configuration option in /etc/default/grub that would prevent the scripts in /etc/grub.d from adding that function call.
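
(Illustrative: the top of a generated Ubuntu menuentry with that call commented out. Note that /boot/grub/grub.cfg is regenerated by update-grub, so a manual edit like this is lost on the next kernel or GRUB update.)

menuentry 'Ubuntu' --class ubuntu --class gnu-linux --class gnu --class os {
#       recordfail
        load_video
        gfxmode $linux_gfx_mode
        ...
}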

Revision history for this message
Fabien Lusseau (fabien-beosfrance) wrote :

I'm also affected by this bug. So I can confirm it's still there on a fresh install of 14.04

I'm using RAID0 for /

Revision history for this message
Moritz Baumann (mo42) wrote :

As a temporary fix, you can edit /etc/grub.d/10_linux and replace 'quick_boot="1"' with 'quick_boot="0"' in line 25. (Don't forget to run "sudo update-grub" afterwards.)
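
(The same edit as a one-liner, assuming the quoting shown above matches your copy of the file:)

sudo sed -i 's/quick_boot="1"/quick_boot="0"/' /etc/grub.d/10_linux
sudo update-grub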

Revision history for this message
Jan Rathmann (kaiserclaudius) wrote : Re: [Bug 1274320] Re: Error: diskfilter writes are not supported

I can confirm that the workaround mentioned by Moritz (setting
quick_boot="0") works for me.

Revision history for this message
In , eileon (eileon-redhat-bugs) wrote :

For Fedora 20 in /etc/default/grub

GRUB_SAVEDEFAULT="false"

makes the difference (after running grub2-mkconfig).
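
(In full, assuming a BIOS install; EFI systems keep grub.cfg under a different path, typically /boot/efi/EFI/fedora/:)

# /etc/default/grub
GRUB_SAVEDEFAULT="false"

# then regenerate the configuration
sudo grub2-mkconfig -o /boot/grub2/grub.cfg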

Revision history for this message
Drew Michel (drew-michel) wrote :

I can also confirm this bug is happening with the latest beta version of Trusty with /boot living on an EXT4 LVM partition.

* setting quick_boot="0" in /etc/grub.d/10_linux and running update-grub fixes the issue
* setting GRUB_SAVEDEFAULT="false" in /etc/default/grub and running update-grub does not fix the issue
* removing recordfail from /boot/grub/grub.cfg fixes the issue

3.13.0-23-generic #45-Ubuntu
Distributor ID: Ubuntu
Description: Ubuntu Trusty Tahr (development branch)
Release: 14.04

apt-cache policy grub-pc
grub-pc:
  Installed: 2.02~beta2-9

Revision history for this message
G (gzader) wrote :

Just to confirm, this is in the release version of 14.04. I've got it on a fresh build with raid 1 via mdadm, no swap.

It does not halt booting, just a brief delay.

Revision history for this message
Quesar (rick-microway) wrote :

I just made a permanent clean fix for this, at least for MD (software RAID). It can easily be modified to fix for LVM too. Edit /etc/grub.d/00_header and change the recordfail section to this:

if [ "$quick_boot" = 1 ]; then
    cat <<EOF
function recordfail {
  set recordfail=1
EOF
    FS="$(grub-probe --target=fs "${grubdir}")"
    GRUBMDDEVICE="$(grub-probe --target=disk "${grubdir}" | grep \/dev\/md)"
    if [ $? -eq 0 ] ; then
        cat <<EOF
  # GRUB lacks write support for $GRUBMDDEVICE, so recordfail support is disabled.
EOF
    else
        case "$FS" in
          btrfs | cpiofs | newc | odc | romfs | squash4 | tarfs | zfs)
            cat <<EOF
  # GRUB lacks write support for $FS, so recordfail support is disabled.
EOF
            ;;
          *)
            cat <<EOF
  if [ -n "\${have_grubenv}" ]; then if [ -z "\${boot_once}" ]; then save_env recordfail; fi; fi
EOF
        esac
    fi
    cat <<EOF
}
EOF
fi

Revision history for this message
robled (robled) wrote :

The work-around from #24 gets rid of the error for me. I timed my boot process after the change and didn't notice any appreciable difference in boot time with the work-around in place. This testing was performed using a recent laptop with an SSD.

Revision history for this message
EAB (adair-boder) wrote :

I got this error message too - with a fresh install of 14.04 Server Official Release.
I also have 2 RAID-1 setups.

Revision history for this message
Francisco Stefano Wechsler (geral-k) wrote :

I have recently installed Xubuntu 14.04 (Official Release) on two computers. On one of them I did not use RAID and allowed automatic disk partitioning; no boot error has been observed. For the second computer I used the Minimal CD, set up two RAID0 devices (one for swap and one for /) and installed Xubuntu; on this computer the error message appeared every time I booted. The workaround suggested by Moritz Baumann (#24) eliminated the error message.

Revision history for this message
bolted (k-minnick) wrote :

I followed comment #28 from Quesar (rick-microway) above with lubuntu 14.04 running on a 1U supermicro server. Rebooted multiple times to test, and I am no longer getting this error message. A huge thank you to Quesar for a fix!

Revision history for this message
Vadim Nevorotin (malamut) wrote :

Fix from #28, extended to support LVM (so, I think, it is a universal clean fix for this bug). Change the recordfail section in /etc/grub.d/00_header to:

if [ "$quick_boot" = 1 ]; then
    cat <<EOF
function recordfail {
  set recordfail=1
EOF
    GRUBMDDEVICE="$(grub-probe --target=disk "${grubdir}")"
    GRUBLVMDEVICE="$(grub-probe --target=disk "${grubdir}")"
    if echo "$GRUBMDDEVICE" | grep "/dev/md" > /dev/null; then
        cat <<EOF
  # GRUB lacks write support for $GRUBMDDEVICE, so recordfail support is disabled.
EOF
    elif echo "$GRUBLVMDEVICE" | grep "/dev/mapper" > /dev/null; then
        cat <<EOF
  # GRUB lacks write support for $GRUBLVMDEVICE, so recordfail support is disabled.
EOF
    else
        FS="$(grub-probe --target=fs "${grubdir}")"
        case "$FS" in
          btrfs | cpiofs | newc | odc | romfs | squash4 | tarfs | zfs)
            cat <<EOF
  # GRUB lacks write support for $FS, so recordfail support is disabled.
EOF
          ;;
          *)
            cat <<EOF
  if [ -n "\${have_grubenv}" ]; then if [ -z "\${boot_once}" ]; then save_env recordfail; fi; fi
EOF
        esac
    fi
    cat <<EOF
}
EOF
fi

Then run update-grub.
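
(After regenerating, the result can be checked in the generated config; on a diskfilter-backed system the function should now look like the stub below, with the actual device path in place of the elided one:)

$ grep -A3 'function recordfail' /boot/grub/grub.cfg
function recordfail {
  set recordfail=1
  # GRUB lacks write support for /dev/mapper/..., so recordfail support is disabled.
}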

Revision history for this message
Andrew Hamilton (ahamilton9) wrote :

Just confirming that the above fix (RAID & LVM version) is working for a RAID10, 14.04, x64 fresh install. I don't have LVM set up though, so I cannot confirm that part.

Revision history for this message
Tato Salcedo (tatosalcedo) wrote :

I have no RAID, I use LVM, and I get the same error.

Revision history for this message
Aaron Hastings (thecosmicfrog) wrote :

Just installed 14.04 and seeing the same error on boot.

I don't have any RAID setup, but I am using LVM ext4 volumes for /, /home and swap. My /boot is on a separate ext4 primary partition in an msdos partition table.

Revision history for this message
Agustín Ure (aeu79) wrote :

Confirming that the fix in comment #34 solved the problem in a fresh install of 14.04 with LVM.

Revision history for this message
Uqbar (uqbar) wrote :

I would like to apply the fix from comment #34, as I am using software RAID6 and LVM at the same time.
Unfortunately I am not so good at changing that "recordfail section in /etc/grub.d/00_header".
Would it be possible to attach the complete fixed /etc/grub.d/00_header file here?
Would it be possible to have this as an official "fix released"?

Revision history for this message
David Twersky (dmtwersky) wrote :

Confirming comment #34 fixed it for me as well.
I'm using LVM on all partitions.

tags: added: patch
Changed in grub2 (Ubuntu):
status: Confirmed → Triaged
Changed in grub:
importance: Undecided → Unknown
status: New → Unknown
importance: Unknown → Undecided
status: Unknown → New
tags: added: utopic
Steve Langasek (vorlon)
Changed in grub2 (Ubuntu):
importance: Medium → High
Anders Kaseorg (andersk)
Changed in grub:
status: New → Invalid
Changed in grub2 (Debian):
status: Unknown → New
Changed in mdadm (Ubuntu):
assignee: nobody → Dimitri John Ledkov (xnox)
Changed in mdadm (Ubuntu):
status: New → Confirmed
Changed in mdadm (Ubuntu):
importance: Undecided → High
status: Confirmed → Triaged
Changed in grub:
importance: Undecided → Unknown
status: Invalid → Unknown
Changed in grub2 (Ubuntu):
assignee: nobody → Colin Watson (cjwatson)
Changed in mdadm (Ubuntu):
status: Triaged → Invalid
Colin Watson (cjwatson)
Changed in grub2 (Ubuntu):
assignee: Colin Watson (cjwatson) → nobody
Revision history for this message
In , Fedora (fedora-redhat-bugs) wrote :

This message is a notice that Fedora 19 is now at end of life. Fedora
has stopped maintaining and issuing updates for Fedora 19. It is
Fedora's policy to close all bug reports from releases that are no
longer maintained. Approximately 4 (four) weeks from now this bug will
be closed as EOL if it remains open with a Fedora 'version' of '19'.

Package Maintainer: If you wish for this bug to remain open because you
plan to fix it in a currently maintained version, simply change the 'version'
to a later Fedora version.

Thank you for reporting this issue and we are sorry that we were not
able to fix it before Fedora 19 reached end of life. If you would still like
to see this bug fixed and are able to reproduce it against a later version
of Fedora, you are encouraged to change the 'version' to a later Fedora
version before this bug is closed, as described in the policy above.

Although we aim to fix as many bugs as possible during every release's
lifetime, sometimes those efforts are overtaken by events. Often a
more recent Fedora release includes newer upstream software that fixes
bugs or makes them obsolete.

Revision history for this message
In , Fedora (fedora-redhat-bugs) wrote :

Fedora 19 changed to end-of-life (EOL) status on 2015-01-06. Fedora 19 is
no longer maintained, which means that it will not receive any further
security or bug fix updates. As a result we are closing this bug.

If you can reproduce this bug against a currently maintained version of
Fedora please feel free to reopen this bug against that version. If you
are unable to reopen this bug, please file a new report against the
current release. If you experience problems, please add a comment to this
bug.

Thank you for reporting this bug and we are sorry it could not be fixed.

Steve Langasek (vorlon)
Changed in grub2 (Ubuntu):
assignee: nobody → Mathieu Trudel-Lapierre (mathieu-tl)
Changed in grub2 (Ubuntu):
status: Triaged → Incomplete
Loïc Minier (lool)
Changed in grub2 (Ubuntu):
status: Incomplete → New
Changed in grub2 (Ubuntu):
status: New → Confirmed
Changed in grub2 (Ubuntu):
status: Confirmed → In Progress
Changed in grub2 (Ubuntu):
status: In Progress → Fix Released
Charis (tao-qqmail)
Changed in grub2 (Ubuntu):
assignee: Mathieu Trudel-Lapierre (mathieu-tl) → nobody
Changed in grub2 (Ubuntu Trusty):
status: New → In Progress
Changed in grub2 (Ubuntu Vivid):
status: New → In Progress
importance: Undecided → High
Changed in grub2 (Ubuntu Trusty):
assignee: nobody → Mathieu Trudel-Lapierre (mathieu-tl)
importance: Undecided → High
Changed in grub2 (Ubuntu Vivid):
assignee: nobody → Mathieu Trudel-Lapierre (mathieu-tl)
Changed in mdadm (Ubuntu):
assignee: Dimitri John Ledkov (xnox) → nobody
Changed in mdadm (Ubuntu Trusty):
status: New → Confirmed
Changed in mdadm (Ubuntu Vivid):
status: New → Confirmed
Phillip Susi (psusi)
no longer affects: mdadm (Ubuntu)
no longer affects: mdadm (Ubuntu Trusty)
no longer affects: mdadm (Ubuntu Vivid)
description: updated
dann frazier (dannf)
Changed in grub2 (Ubuntu Vivid):
assignee: Mathieu Trudel-Lapierre (mathieu-tl) → dann frazier (dannf)
Changed in grub2 (Ubuntu Trusty):
assignee: Mathieu Trudel-Lapierre (mathieu-tl) → dann frazier (dannf)
Chris J Arges (arges)
Changed in grub2 (Ubuntu Trusty):
status: In Progress → Fix Committed
tags: added: verification-needed
Chris J Arges (arges)
Changed in grub2 (Ubuntu Vivid):
status: In Progress → Fix Committed
Chris J Arges (arges)
Changed in grub2-signed (Ubuntu Trusty):
status: New → Fix Committed
Changed in grub2-signed (Ubuntu Vivid):
status: New → Fix Committed
Simon Déziel (sdeziel)
tags: added: verification-done-trusty verification-needed-vivid
removed: verification-needed
Simon Déziel (sdeziel)
tags: added: verification-done-vivid
removed: verification-needed-vivid
Changed in grub2-signed (Ubuntu):
status: New → Confirmed
Revision history for this message
Rich Hart (sirwizkid) wrote :

The 1.7 package is working flawlessly on my systems that were affected.
Thanks for fixing this.

Revision history for this message
fermulator (fermulator) wrote :

Based upon the comments above, and the TEST CASE defined in the main section for this bug, I confirm that verification=done

###
--> PASS
###

I tested on my own system running

{{{
$ mount | grep md60
/dev/md60 on / type ext4 (rw,errors=remount-ro)

$ cat /proc/mdstat | grep -A1 md60
md60 : active raid1 sdd2[0] sdb2[1]
      58560384 blocks super 1.2 [2/2] [UU]

fermulator@fermmy-server:~$ dpkg --list | grep grub
ii grub-common 2.02~beta2-9ubuntu1.7 amd64 GRand Unified Bootloader (common files)
ii grub-gfxpayload-lists 0.6 amd64 GRUB gfxpayload blacklist
ii grub-pc 2.02~beta2-9ubuntu1.7 amd64 GRand Unified Bootloader, version 2 (PC/BIOS version)
ii grub-pc-bin 2.02~beta2-9ubuntu1.7 amd64 GRand Unified Bootloader, version 2 (PC/BIOS binaries)
ii grub2-common 2.02~beta2-9ubuntu1.7 amd64 GRand Unified Bootloader (common files for version 2)
}}}

Full results:
http://paste.ubuntu.com/14259366/

---

NOTE: I'm not sure what to do about the "grub2-signed" properties for this bug...

information type: Public → Public Security
information type: Public Security → Public
Revision history for this message
Launchpad Janitor (janitor) wrote :

This bug was fixed in the package grub2 - 2.02~beta2-9ubuntu1.7

---------------
grub2 (2.02~beta2-9ubuntu1.7) trusty; urgency=medium

  * Cherry-picks to better handle TFTP timeouts on some arches: (LP: #1521612)
    - (7b386b7) efidisk: move device path helpers in core for efinet
    - (c52ae40) efinet: skip virtual IP devices when enumerating cards
    - (f348aee) efinet: enable hardware filters when opening interface
  * Update quick boot logic to handle abstractions for which there is no
    write support. (LP: #1274320)

 -- dann frazier <email address hidden> Wed, 16 Dec 2015 14:03:48 -0700

Changed in grub2 (Ubuntu Trusty):
status: Fix Committed → Fix Released
Revision history for this message
Steve Langasek (vorlon) wrote : Update Released

The verification of the Stable Release Update for grub2 has completed successfully and the package has now been released to -updates. Subsequently, the Ubuntu Stable Release Updates Team is being unsubscribed and will not receive messages about this bug report. In the event that you encounter a regression using the package from -updates please report a new bug using ubuntu-bug and tag the bug report regression-update so we can easily find any regressions.

Revision history for this message
Launchpad Janitor (janitor) wrote :

This bug was fixed in the package grub2-signed - 1.34.8

---------------
grub2-signed (1.34.8) trusty; urgency=medium

  * Rebuild against grub-efi-amd64 2.02~beta2-9ubuntu1.7 (LP: #1521612,
    LP: #1274320).

 -- dann frazier <email address hidden> Wed, 16 Dec 2015 14:23:00 -0700

Changed in grub2-signed (Ubuntu Trusty):
status: Fix Committed → Fix Released
Revision history for this message
Launchpad Janitor (janitor) wrote :

This bug was fixed in the package grub2 - 2.02~beta2-22ubuntu1.5

---------------
grub2 (2.02~beta2-22ubuntu1.5) vivid; urgency=medium

  * Merge in changes from 2.02~beta2-22ubuntu1.3:
    - d/p/arm64-set-correct-length-of-device-path-end-entry.patch: Fixes
      booting arm64 kernels on certain UEFI implementations. (LP: #1476882)
    - progress: avoid NULL dereference for net files. (LP: #1459872)
    - arm64/setjmp: Add missing license macro. (LP: #1459871)
    - Cherry-pick patch to add SAS disks to the device list from the ofdisk
      module. (LP: #1517586)
    - Cherry-pick patch to open Simple Network Protocol exclusively.
      (LP: #1508893)
  * Cherry-picks to better handle TFTP timeouts on some arches: (LP: #1521612)
    - (7b386b7) efidisk: move device path helpers in core for efinet
    - (c52ae40) efinet: skip virtual IP devices when enumerating cards
    - (f348aee) efinet: enable hardware filters when opening interface
  * Update quick boot logic to handle abstractions for which there is no
    write support. (LP: #1274320)

 -- dann frazier <email address hidden> Wed, 16 Dec 2015 13:31:15 -0700

Changed in grub2 (Ubuntu Vivid):
status: Fix Committed → Fix Released
Revision history for this message
Launchpad Janitor (janitor) wrote :

This bug was fixed in the package grub2-signed - 1.46.5

---------------
grub2-signed (1.46.5) vivid; urgency=medium

  * Rebuild against grub2 2.02~beta2-22ubuntu1.5 (LP: #1476882, LP: #1459872,
    LP: 1459871, LP: #1517586, LP:#1508893, LP: #1521612, LP: #1274320).

 -- dann frazier <email address hidden> Wed, 16 Dec 2015 14:18:28 -0700

Changed in grub2-signed (Ubuntu Vivid):
status: Fix Committed → Fix Released
Revision history for this message
Lior Goikhburg (goikhburg) wrote :

Problems with installing the latest 14.04.3

I have tried every solution mentioned in this thread and no luck.
Grub would not install...

HP server with 4 SATA disks, RAID 10 (md0) with /boot and / on it, No LVM

installing with:
update-grub - works fine
grub-install /dev/md0 - fails

Went up to grub version 2.02~beta2-32ubuntu1, the latest from xenial ... still getting the diskfilter error ... nothing helps.

Any ideas, anyone ?

Revision history for this message
wiley.coyote (tjwiley) wrote :

Did you try simply updating the packages from the Trusty repos? The fix has already been released.

2.02~beta2-9ubuntu1.7

The fix is there & working...at least for me.

Changed in grub2 (Debian):
status: New → Fix Released
Revision history for this message
armaos (alexandros-k) wrote :

hi,
so more or less I have tried the solutions above, but still without luck.
@Lior Goikhburg (goikhburg): did you manage to solve it?

all ideas are more than welcome
thnx

Revision history for this message
Lior Goikhburg (goikhburg) wrote :

I ended up with the following workaround:

When setting up the server I configured the following:

0. RAID 10 on /dev/sda /dev/sdb /dev/sdc /dev/sdd
1. /boot, / and swap partitions are on RAID but NOT IN AN LVM VOLUME
2. The rest of the RAID space - an LVM partition

At the end of the install, when you get the error message:
install grub manually on /dev/sda1 and /dev/sda2 (/dev/sda3 and /dev/sda4 will not let you, because they're striped); use the console to run:
# update-grub
# grub-install /dev/sda1
# grub-install /dev/sda2
Return to setup and skip the installation of grub (you installed it manually).

Hope that helps.

Revision history for this message
Paul Tomblin (ptomblin) wrote :

I upgraded to Kubuntu 16.04 and it's still happening. When am I supposed to see this supposed fix?

Changed in grub2-signed (Ubuntu):
status: Confirmed → Fix Released
Revision history for this message
oglop (1oglop1) wrote :

Yeah, Xubuntu 17.04: still present.

I hope it will be fixed in 17.10.

Revision history for this message
Anders Kaseorg (andersk) wrote :

This was fixed in 16.04, but if you had manually modified /etc/grub.d/00_header before the upgrade, the new version will not have been installed. You may have an unmodified version in /etc/grub.d/00_header.dpkg-new. If not, run ‘apt-get download grub-common; dpkg -x grub-common_2.02~beta3-4ubuntu6_amd64.deb grub-common-extracted’ and you’ll have the unmodified version in grub-common-extracted/etc/grub.d/00_header.dpkg-new.
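
(Spelled out, and with the copy-back and regeneration steps added as an assumption about what you would do next, the recovery looks roughly like this:)

apt-get download grub-common
dpkg -x grub-common_2.02~beta3-4ubuntu6_amd64.deb grub-common-extracted
# copy the unmodified script back into place (adjust the source path if it
# carries a .dpkg-new suffix, as noted above), then regenerate grub.cfg
sudo cp grub-common-extracted/etc/grub.d/00_header /etc/grub.d/00_header
sudo update-grub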

Changed in grub2 (Fedora):
importance: Unknown → Undecided
status: Unknown → Won't Fix
Revision history for this message
Tim Ritberg (xpert-reactos) wrote :

Still exists in 18.04.
I downloaded grub-common_2.02-2ubuntu8.2_amd64.deb and it has the same 00_header as the one on my disk.
My grub is located on a RAID1.

Revision history for this message
vadzen (vadzen) wrote :

Still exists in 18.04.1
No RAID. 1 SSD disk ST500LM000-1EJ16 + 500GB WD Black M.2 NVMe PCIe SSD.

Revision history for this message
dann frazier (dannf) wrote : Re: [Bug 1274320] Re: Error: diskfilter writes are not supported

On Fri, Dec 28, 2018 at 7:15 AM vadzen <email address hidden> wrote:
>
> Still exists in 18.04.1
> No RAID. 1 SSD disk ST500LM000-1EJ16 + 500GB WD BlackTM M.2 NVMe PCIe SSD.

To be clear, we did not resolve this by implementing writes via
diskfilter, we just added a warning if we detect diskfilter is in use.
Are you seeing a condition where 1) you do not get warned by
grub-reboot *and* 2) You still see "Error: diskfilter writes are not
supported." after reboot?

Revision history for this message
Brendan Holmes (whiling) wrote :

This is still occurring on Ubuntu 18.04.2. Hardware RAID1 & RAID0.

Tried adding quick_boot=0 to /etc/grub.d/00_header; no difference. If I remove RAID, it works fine.

It also proceeds past the error if I select "Guided - use entire disk" instead of "Guided - use entire disk and set up LVM", but then fails to reboot/boot.

Grub is v2.02-2ubuntu8.12.

Any suggestions?

Revision history for this message
Brendan Holmes (whiling) wrote :

Further to yesterday's comment, the issue is occurring while installing (using debian-installer), at the "Install the GRUB boot loader on a hard disk" stage. The displayed error is: "Unable to install GRUB in /dev/mapper/<machine_name>--vg-root"

I have worked around it by choosing not to use LVM (I am choosing the "Guided - use entire disk" option), which reduces disk manageability. I'm left with the impression that installing Ubuntu on enterprise-grade hardware is a bit alpha.

Revision history for this message
Jan Navrátil (honzanav) wrote :

Also affects Ubuntu 19.04, with this configuration:

Disk /dev/sda: 119.2 GiB, 128035676160 bytes, 250069680 sectors
Disk model: INTEL SSDSCKKW12
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: A99C455B-4F9A-4FE3-A57F-35D26DC99583

Device          Start       End   Sectors   Size Type
/dev/sda1        2048   2050047   2048000  1000M EFI System
/dev/sda2     2050048   2582527    532480   260M EFI System
/dev/sda3     2582528 150573055 147990528  70.6G Microsoft basic data
/dev/sda4   150575104 151691263   1116160   545M Windows recovery environment
/dev/sda5   151693312 250068991  98375680  46.9G Linux filesystem

Disk /dev/sdb: 465.8 GiB, 500107862016 bytes, 976773168 sectors
Disk model: HGST HTS545050A7
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: D5C37A89-480B-498C-BFF5-FFA3F8A1D93C

Device          Start       End   Sectors   Size Type
/dev/sdb1        2048   2050047   2048000  1000M Windows recovery environment
/dev/sdb2     2582528   4630527   2048000  1000M Lenovo boot partition
/dev/sdb3     4630528   4892671    262144   128M Microsoft reserved
/dev/sdb4     4892672 945827839 940935168 448.7G Microsoft basic data
/dev/sdb5   945829888 976773119  30943232  14.8G Windows recovery environment

Disk /dev/mapper/linux-ubuntu: 24 GiB, 25769803776 bytes, 50331648 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/mapper/linux-swap: 4 GiB, 4294967296 bytes, 8388608 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Revision history for this message
MasterCATZ (mastercatz) wrote :

Ugh, still exists.

I might have to do what Lior Goikhburg (goikhburg) did

and recreate it with a standalone boot partition at the start of each drive, striped outside of LVM, or something.

Revision history for this message
MasterCATZ (mastercatz) wrote :

(GRUB) 2.02-2ubuntu8.13

Revision history for this message
David Andruczyk (dandruczyk) wrote :

This affects 20.04 as well where the root device is on mdadm (/dev/md127)

Revision history for this message
MasterCATZ (mastercatz) wrote :

This still happens for me on Ubuntu 20.04.

This time around I even allowed grub to have access to an ext4 partition at the front of the disks.

Instead of giving mdadm the entire device, this time I partitioned the disks and gave mdadm the partitions to build the RAID with, using LVM with btrfs partitions for boot and system.

Changed in grub2 (Ubuntu Bionic):
status: New → Confirmed
Changed in grub2 (Ubuntu Focal):
status: New → Confirmed
Changed in grub2-signed (Ubuntu Bionic):
status: New → Confirmed
Changed in grub2-signed (Ubuntu Focal):
status: New → Confirmed
Revision history for this message
Allen Lee (metricv) wrote :

Confirmed on Ubuntu Focal (Kubuntu 20.04) installed on LVM disk, with GRUB_SAVEDEFAULT=true

This bug triggers whenever a boot option other than the first is being selected in the grub menu. grub won't be able to save that selection.

Revision history for this message
Amedee Van Gasse (amedee) wrote :

Confirmed on Ubuntu Jammy Jellyfish (Ubuntu 22.04 LTS) installed on LVM disk, with GRUB_SAVEDEFAULT=true.

This bug triggers whenever a boot option other than the first is being selected in the grub menu. Grub won't be able to save that selection.

Revision history for this message
Julian Andres Klode (juliank) wrote :

This bug is not about enabling writes through disk filters, but about not writing on those systems as part of the boot logic. Support for writes, and hence GRUB_SAVEDEFAULT, is a separate feature request that I'd suggest you take upstream.

no longer affects: grub2 (Ubuntu Bionic)
no longer affects: grub2 (Ubuntu Focal)
no longer affects: grub2-signed (Ubuntu Bionic)
no longer affects: grub2-signed (Ubuntu Focal)
Revision history for this message
In , bmertens (bmertens-redhat-bugs) wrote :

Reopening, this issue occurs on a new installation of Fedora 39 with /boot on a RAID 1 device.

https://askubuntu.com/questions/468466/diskfilter-writes-are-not-supported-what-triggers-this-error/498281#498281 and https://bugs.launchpad.net/ubuntu/+source/grub2/+bug/1274320 indicate this problem is related to LVM/RAID.

$ df -hP /boot
Filesystem Size Used Avail Use% Mounted on
/dev/md125 2.0G 250M 1.6G 14% /boot

$ cat /proc/mdstat
Personalities : [raid10] [raid1] [raid6] [raid5] [raid4]
md125 : active raid1 sdc2[2] sdb2[1] sdd2[3] sda3[0]
      2094080 blocks super 1.2 [4/4] [UUUU]
      bitmap: 0/1 pages [0KB], 65536KB chunk

There is also a patch proposed at https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=754921

Changed in grub2 (Fedora):
status: Won't Fix → Confirmed
Revision history for this message
In , amoloney (amoloney-redhat-bugs) wrote :

This message is a reminder that Fedora Linux 39 is nearing its end of life.
Fedora will stop maintaining and issuing updates for Fedora Linux 39 on 2024-11-26.
It is Fedora's policy to close all bug reports from releases that are no longer
maintained. At that time this bug will be closed as EOL if it remains open with a
'version' of '39'.

Package Maintainer: If you wish for this bug to remain open because you
plan to fix it in a currently maintained version, change the 'version'
to a later Fedora Linux version. Note that the version field may be hidden.
Click the "Show advanced fields" button if you do not see it.

Thank you for reporting this issue and we are sorry that we were not
able to fix it before Fedora Linux 39 is end of life. If you would still like
to see this bug fixed and are able to reproduce it against a later version
of Fedora Linux, you are encouraged to change the 'version' to a later version
prior to this bug being closed.
