Error: diskfilter writes are not supported

Bug #1274320 reported by Patrick Houle
This bug affects 367 people
Affects                Status        Importance  Assigned to    Milestone
grub                   Unknown       Unknown
grub2 (Debian)         Fix Released  Unknown
grub2 (Fedora)         Confirmed     Undecided
grub2 (Ubuntu)         Fix Released  High        Unassigned
  Trusty               Fix Released  High        dann frazier
  Vivid                Fix Released  High        dann frazier
grub2-signed (Ubuntu)  Fix Released  Undecided   Unassigned
  Trusty               Fix Released  Undecided   Unassigned
  Vivid                Fix Released  Undecided   Unassigned

Bug Description

[Impact]
RAID and LVM users may see a cryptic warning from GRUB at boot, because some variants of RAID and LVM are not supported for writing by GRUB itself. GRUB typically tries to write a tiny file to the boot partition, for example to remember the last selected boot entry.

[Test Case]
On an affected system (typically any setup where the boot device is on RAID or on an LVM device), try to boot. Without the patch the message appears; with it, it does not.
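A quick way to tell whether a machine is likely affected is to look at what GRUB considers the boot disk. The sketch below is not from the report; it just applies the same device-name patterns the workaround scripts later in this thread use (`/dev/md*` for mdraid, `/dev/mapper/*` for LVM) to a device path, which on a real system would come from `grub-probe --target=disk /boot`:

```shell
# Classify a device path the way the workaround scripts in this thread do.
# On a real system, feed in: "$(grub-probe --target=disk /boot)"
classify() {
  case "$1" in
    /dev/md*)      echo "mdraid (diskfilter, writes unsupported)" ;;
    /dev/mapper/*) echo "lvm (diskfilter, writes unsupported)" ;;
    *)             echo "plain disk (grubenv writes OK)" ;;
  esac
}

classify /dev/md0
classify /dev/mapper/vg-root
classify /dev/sda1
```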

[Regression Potential]
The potential for regression is minimal. The patch makes the menu-building scripts acknowledge that diskfilter writes are unsupported by GRUB: they automatically avoid enabling recordfail (the offending feature, which saves GRUB's state) when the boot partition is detected to be on a device that does not support diskfilter writes.

----

Once grub chooses what to boot, an error shows up and sits on the screen for approximately 5 seconds:

"Error: diskfilter writes are not supported.
Press any key to continue..."

From what I understand, this error is related to RAID partitions, and I have two of them (md0, md1). Both partitions are in use (root and swap). The arrays are assembled with mdadm and are RAID0.

This error message started appearing right after grub2 was updated on 01/27/2014.

System: Kernel: 3.13.0-5-generic x86_64 (64 bit) Desktop: KDE 4.11.5 Distro: Ubuntu 14.04 trusty
Drives: HDD Total Size: 1064.2GB (10.9% used)
        1: id: /dev/sda model: SanDisk_SDSSDRC0 size: 32.0GB
        2: id: /dev/sdb model: SanDisk_SDSSDRC0 size: 32.0GB
        3: id: /dev/sdc model: ST31000528AS size: 1000.2GB
RAID: Device-1: /dev/md1 - active raid: 0 components: online: sdb2 sda3 (swap)
      Device-2: /dev/md0 - active raid: 0 components: online: sdb1 sda1 ( / )
Grub2: grub-efi-amd64 version 2.02~beta2-5

ProblemType: Bug
DistroRelease: Ubuntu 14.04
Package: grub-efi-amd64 2.02~beta2-5
ProcVersionSignature: Ubuntu 3.13.0-5.20-generic 3.13.0
Uname: Linux 3.13.0-5-generic x86_64
NonfreeKernelModules: nvidia
ApportVersion: 2.13.2-0ubuntu2
Architecture: amd64
CurrentDesktop: KDE
Date: Wed Jan 29 17:37:59 2014
SourcePackage: grub2
UpgradeStatus: Upgraded to trusty on 2014-01-23 (6 days ago)

Revision history for this message
In , Harald (harald-redhat-bugs) wrote :

Created attachment 795965
photo of bootscreen

after the upgrade to F19, GRUB2 comes up with "error: diskfilter writes are not supported", waits some seconds for a key press and, thank god, then boots automatically, so wake-on-LAN is not broken (see attachment)

but what is this nonsense?

Personalities : [raid1] [raid10]
md2 : active raid10 sda3[0] sdc3[1] sdb3[3] sdd3[2]
      3875222528 blocks super 1.1 512K chunks 2 near-copies [4/4] [UUUU]
      bitmap: 2/29 pages [8KB], 65536KB chunk

md1 : active raid10 sda2[0] sdc2[1] sdb2[3] sdd2[2]
      30716928 blocks super 1.1 512K chunks 2 near-copies [4/4] [UUUU]
      bitmap: 0/1 pages [0KB], 65536KB chunk

md0 : active raid1 sda1[0] sdc1[1] sdd1[2] sdb1[3]
      511988 blocks super 1.0 [4/4] [UUUU]

unused devices: <none>
_________________________________________________

[root@rh:~]$ cat /boot/grub2/grub.cfg
#
# DO NOT EDIT THIS FILE
#
# It is automatically generated by grub2-mkconfig using templates
# from /etc/grub.d and settings from /etc/default/grub
#

### BEGIN /etc/grub.d/00_header ###
if [ -s $prefix/grubenv ]; then
  load_env
fi
if [ "${next_entry}" ] ; then
   set default="${next_entry}"
   set next_entry=
   save_env next_entry
   set boot_once=true
else
   set default="${saved_entry}"
fi

if [ x"${feature_menuentry_id}" = xy ]; then
  menuentry_id_option="--id"
else
  menuentry_id_option=""
fi

export menuentry_id_option

if [ "${prev_saved_entry}" ]; then
  set saved_entry="${prev_saved_entry}"
  save_env saved_entry
  set prev_saved_entry=
  save_env prev_saved_entry
  set boot_once=true
fi

function savedefault {
  if [ -z "${boot_once}" ]; then
    saved_entry="${chosen}"
    save_env saved_entry
  fi
}

function load_video {
  if [ x$feature_all_video_module = xy ]; then
    insmod all_video
  else
    insmod efi_gop
    insmod efi_uga
    insmod ieee1275_fb
    insmod vbe
    insmod vga
    insmod video_bochs
    insmod video_cirrus
  fi
}

terminal_output console
set timeout=1
### END /etc/grub.d/00_header ###

### BEGIN /etc/grub.d/10_linux ###
menuentry 'Fedora, with Linux 3.10.11-200.fc19.x86_64' --class fedora --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-3.10.11-200.fc19.x86_64-advanced-b935b5db-0051-4f7f-83ac-6a6651fe0988' {
        savedefault
        load_video
        set gfxpayload=keep
        insmod gzio
        insmod part_msdos
        insmod part_msdos
        insmod part_msdos
        insmod part_msdos
        insmod diskfilter
        insmod mdraid1x
        insmod ext2
        set root='mduuid/1d691642baed26df1d1974964fb00ff8'
        if [ x$feature_platform_search_hint = xy ]; then
          search --no-floppy --fs-uuid --set=root --hint='mduuid/1d691642baed26df1d1974964fb00ff8' 1de836e4-e97c-43ee-b65c-400b0c29d3aa
        else
          search --no-floppy --fs-uuid --set=root 1de836e4-e97c-43ee-b65c-400b0c29d3aa
        fi
        linux /vmlinuz-3.10.11-200.fc19.x86_64 root=UUID=b935b5db-0051-4f7f-83ac-6a6651fe0988 ro divider=10 audit=0 rd.plymouth=0 plymouth.enable=0 rd.md.uuid=b7475879:c95d9a47:c5043c02:0c5ae720 rd.md.uuid=1d691642:baed26df:1d197496:4fb00ff8 rd.md.uuid=ea253255:cb915401:f32794ad:ce0fe396 rd.luk...


Revision history for this message
In , Harald (harald-redhat-bugs) wrote :

oh, and removing the line "insmod diskfilter" from "grub.cfg" does not change anything

Revision history for this message
In , Michal (michal-redhat-bugs) wrote :

I'm seeing the same error. I found the message mysterious, so I took a look at the code and discovered the following:
- "diskfilter" is GRUB's implementation detail for working with LVM and MD RAID
  devices.
- Writing to these kinds of devices is not implemented in GRUB.
- The error may have always been there, but
  0085-grub-core-disk-diskfilter.c-grub_diskfilter_write-Ca.patch made it more
  visible.
- The reason GRUB is trying to write to the device could be that it is
  following the "save_env" commands in the config file.

Revision history for this message
In , Harald (harald-redhat-bugs) wrote :

interesting - why does GRUB try to write anything?
it should not have to touch any FS at boot

GRUB2 is such a large step backwards because it is more or less its own operating system with the ugliest configuration one could design, while grub-legacy was a boot manager and nothing else

finally we end in 3 full operating systems

* grub
* dracut
* linux

Revision history for this message
In , Harald (harald-redhat-bugs) wrote :

/etc/default/grub with these options avoids a lot of crap on Fedora-Only machines

GRUB_TIMEOUT=1
GRUB_DISTRIBUTOR="Fedora"
GRUB_SAVEDEFAULT="false"
GRUB_TERMINAL_OUTPUT="console"
GRUB_DISABLE_RECOVERY="true"
GRUB_DISABLE_SUBMENU="true"
GRUB_DISABLE_OS_PROBER="true"

Revision history for this message
In , Michal (michal-redhat-bugs) wrote :

Note that GRUB Legacy had a similar feature: the "savedefault" command.

Revision history for this message
In , Harald (harald-redhat-bugs) wrote :

but it did not halt the boot for several seconds with a useless error message and "press any key to continue"; nor did it mess up submenus and whatnot, nor freeze the machine while editing the kernel line, which happens with GRUB2 far too often when you need to edit it

Revision history for this message
In , Michal (michal-redhat-bugs) wrote :

My comment #5 was just to show that the assertion "it has not to touch any FS at boot" is false and that GRUB Legacy was no different in this regard.
I already commented on the increased visibility of the error, in comment #2.

Revision history for this message
Patrick Houle (buddlespit) wrote :
description: updated
description: updated
Revision history for this message
Patrick Houle (buddlespit) wrote :

I should also point out that the system will boot normally once any key is pressed or 5 seconds elapses.

description: updated
Revision history for this message
Launchpad Janitor (janitor) wrote :

Status changed to 'Confirmed' because the bug affects multiple users.

Changed in grub2 (Ubuntu):
status: New → Confirmed
summary: - Error after boot menu
+ Error: diskfilter writes are not supported
Changed in grub2 (Ubuntu):
importance: Undecided → Low
importance: Low → Medium
Revision history for this message
KrautOS (krautos) wrote :

Got the same issue on an up-to-date "trusty" machine with / on mdadm RAID 1 and swap on mdadm RAID 0. Any clues how to fix it?

Revision history for this message
Patrick Houle (buddlespit) wrote :

I changed my RAID entries' <dump> and <pass> fields to '0 0' instead of '0 1' in /etc/fstab.
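For readers unfamiliar with those fields, the change above looks roughly like this in /etc/fstab; device names and mount options here are illustrative, not from the report (note that <dump>/<pass> control dump(8) and fsck ordering, not GRUB):

```
# /etc/fstab fragment: last two fields are <dump> and <pass>, both set to 0
/dev/md0  /     ext4  errors=remount-ro  0  0
/dev/md1  none  swap  sw                 0  0
```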

Revision history for this message
Singtoh (singtoh) wrote :

Hello all,

Just thought I would throw in as well. I started seeing this after a fresh install of Ubuntu Trusty today, on the first boot and every boot thereafter. I am not running RAID, but I am using LVM. This is Ubuntu Trusty amd64. As a side note, on today's install I didn't give the system a /boot partition as in all the LVM tutorials I have seen; I just have two disks with LVM partitions, i.e. /root, /home, /Storage, /Storage1 and swap. It runs very nicely but I get that nagging error at boot. Hope it gets a fix soon.

Cheers,

Singtoh

Revision history for this message
Huygens (huygens-25) wrote :

Here is another different kind of setup which triggers the problem:
/boot ext4 on a md RAID10 (4 HDD) partition
/ btrfs RAID10 (4 HDD) partition
swap on a md RAID0 (4HDD) partition

The boot and kernel are on a MD RAID (software RAID), whereas the rest of the system is using Btrfs RAID.

Revision history for this message
Cybjit (cybjit) wrote :

I also get this message.
/ is on ext4 LVM, and no separate /boot.
Booting works fine after the delay or key press.

Revision history for this message
Singtoh (singtoh) wrote :

Just to add this tidbit: I re-installed with normal partitioning (no RAID and no LVM), just /root, /home and swap, and it boots normally, with no 5-second wait and no errors. So I guess it is LVM and/or RAID related. I am just about to re-install again to a new SSD with LVM, and I'll post back with the outcome.

Cheers,

Singtoh

Revision history for this message
robled (robled) wrote :

This is definitely RAID/LVM related. On my 14.04 system with an ext4 boot partition I don't get the error, but on another system that's fully LVM I do get the error.

Has anyone come up with a grub config workaround to prevent the delay on boot?

Revision history for this message
Jean-Mi (forum-i) wrote :

You guys should be happy your system still boots. I just got that error (diskfilter writes are not supported), but grub exits immediately after, leaving my UEFI with no other choice than booting another distro.
I had to spam the pause key on my keyboard to catch the error message before it disappears.
On my setup, the boot error occurs with openSuse installed on LVM2. The other distro is installed with a regular /boot (ext4) separate partition. Both are using grub. I could load both by calling their respective grubx64.efi from the ESP partition.
The last thing I remember having done on openSuse was to create a btrfs partition and tweaked /etc/fstab a little bit.
From the other distro, I can read openSuse's files and everything looks fine. It's like the boot loader used to work and suddenly failed.
I'd love to remember what else I did since it worked. And I'd love to be able to boot openSuse again.
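The ESP chainloading mentioned above can be done from the GRUB command line. A rough sketch (the device name and path are illustrative; they depend on where the ESP and the loader actually live):

```
# GRUB command line, not a shell: chainload another install's EFI loader
insmod chain
chainloader (hd0,gpt1)/EFI/opensuse/grubx64.efi
boot
```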

Revision history for this message
Jean-Mi (forum-i) wrote :

I may have found the reason for my particular crash. Now my system boots normally.

According to bug report #1006289 at Red Hat, the bug could be related to "insmod diskfilter", but someone deactivated that module and still got the error. I don't even have that module declared. But I noticed openSuse loves to handle everything on reboot, like setting the next OS to load.
My /boot/grub/grubenv contained 2 lines. Basically, saved_entry=openSUSE and next_entry=LMDE Cinnamon.
I removed those lines and the error disappeared. /Maybe/ those lines instruct grub to write something to the boot partition, which it is perfectly unable to do since it cannot write to LVM.
Anyway, it seems that solving this bug requires finding out why grub tries to write data.

Revision history for this message
hamish (hamish-b) wrote :

Hi, I get the same error with 14.04 beta1 booting into RAID1.

for those running their swap partitions in a raid, I'm wondering if it would be better to just mount the multiple swap partitions in fstab and give them all equal priority? For soft-raid it would cut out a layer of overhead, and for swap anything which cuts out overhead is a good thing. (e.g., mount options in fstab for all swap partitions: sw,pri=10)

See `man 2 swapon` for details.
       "If two or more areas
       have the same priority, and it is the highest priority available, pages
       are allocated on a round-robin basis between them."

Revision history for this message
Phillip Susi (psusi) wrote :

That would defeat the purpose of RAID1, which is to keep the system up and running when a drive fails. With two separate swaps, if a disk fails you will probably have half of user space die and end up with a badly broken server that needs to be rebooted.

Revision history for this message
Artyom Nosov (artyom.nosov) wrote :

Got the same issue on the daily build of trusty (20140319). /, /home and swap are all RAID1.

Revision history for this message
Denis Telnov (irland) wrote :

I have this issue on Ubuntu Trusty 14.04 with / on LVM. Deleting /boot/grub/grubenv prevents the error on the next boot, but grub recreates the file on every boot, so I have 'rm /boot/grub/grubenv' in my crontab.
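A crontab entry along these lines would do what Denis describes; the hourly schedule is an illustrative choice, not from the report (and deleting grubenv also discards any saved default entry):

```shell
# Example crontab entry (added via `crontab -e`): remove grubenv every hour
# so the stale save_env state never survives to the next boot.
0 * * * * rm -f /boot/grub/grubenv
```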

Revision history for this message
stoffel010170 (stoffel-010170) wrote :

Have the same bug on my LVM system, too. My systems without LVM and RAID are not affected.

Revision history for this message
Thomas (t.c) wrote :

I have the bug too; my root filesystem (/) is on a software RAID 1.

Revision history for this message
Thomas (t.c) wrote :

# GRUB Environment Block
###################################### [... the remainder of the file is '#' characters, padding it to 1024 bytes ...]

that's the content of /boot/grub/grubenv - is it right?
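For what it's worth, that is what an empty grubenv looks like: a fixed 1024-byte block consisting of a header line padded out with '#' characters (GRUB's tools normally manage it via grub-editenv, not a text editor). A minimal sketch reproducing the format, using a scratch path:

```shell
# Reconstruct an empty GRUB environment block: one header line, then '#'
# padding so the file is exactly 1024 bytes. /tmp path is just for the demo.
header='# GRUB Environment Block'
pad=$(( 1024 - ${#header} - 1 ))            # subtract 1 for the newline
printf '%s\n' "$header" > /tmp/grubenv.demo
printf '%*s' "$pad" '' | tr ' ' '#' >> /tmp/grubenv.demo
wc -c < /tmp/grubenv.demo
```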

Revision history for this message
VladimirCZ (vlabla) wrote :

I also get this message.
/ and swap volumes are on ext4 LVM, and no separate /boot.
Booting works fine after the delay or key press.

Revision history for this message
Ubuntu QA Website (ubuntuqa) wrote :

This bug has been reported on the Ubuntu ISO testing tracker.

A list of all reports related to this bug can be found here:
http://iso.qa.ubuntu.com/qatracker/reports/bugs/1274320

tags: added: iso-testing
Revision history for this message
Moritz Baumann (mo42) wrote :

The problem is the call to the recordfail function in each menuentry. If I comment it out in /boot/grub/grub.cfg, the error message no longer appears. Unfortunately, there seems to be no configuration option in /etc/default/grub which may prevent the scripts in /etc/grub.d from adding that function call.

Revision history for this message
Fabien Lusseau (fabien-beosfrance) wrote :

I'm also affected by this bug. So I can confirm it's still there on a fresh install of 14.04

I'm using RAID0 for /

Revision history for this message
Moritz Baumann (mo42) wrote :

As a temporary fix, you can edit /etc/grub.d/10_linux and replace 'quick_boot="1"' with 'quick_boot="0"' in line 25. (Don't forget to run "sudo update-grub" afterwards.)
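Moritz's edit can be scripted with sed; the sketch below demonstrates the substitution on a scratch copy rather than the live file. On a real system the target would be /etc/grub.d/10_linux (back it up first), followed by `sudo update-grub`:

```shell
# Demonstrate the quick_boot edit on a stand-in file. On a real system:
#   sudo sed -i 's/quick_boot="1"/quick_boot="0"/' /etc/grub.d/10_linux
#   sudo update-grub
printf 'quick_boot="1"\n' > /tmp/10_linux.demo   # stand-in for line 25
sed -i 's/quick_boot="1"/quick_boot="0"/' /tmp/10_linux.demo
cat /tmp/10_linux.demo
```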

Revision history for this message
Jan Rathmann (kaiserclaudius) wrote : Re: [Bug 1274320] Re: Error: diskfilter writes are not supported

I can confirm that the workaround mentioned by Moritz (setting
quick_boot="0") works for me.

Revision history for this message
In , eileon (eileon-redhat-bugs) wrote :

For Fedora 20 in /etc/default/grub

GRUB_SAVEDEFAULT="false"

makes the difference (after grub2-mkconfig)

Revision history for this message
Drew Michel (drew-michel) wrote :

I can also confirm this bug is happening with the latest beta version of Trusty with /boot living on an EXT4 LVM partition.

* setting quick_boot="0" in /etc/grub.d/10_linux and running update-grub fixes the issue
* setting GRUB_SAVEDEFAULT="false" in /etc/default/grub and running update-grub does not fix the issue
* removing recordfail from /boot/grub/grub.cfg fixes the issue

3.13.0-23-generic #45-Ubuntu
Distributor ID: Ubuntu
Description: Ubuntu Trusty Tahr (development branch)
Release: 14.04

apt-cache policy grub-pc
grub-pc:
  Installed: 2.02~beta2-9

Revision history for this message
G (gzader) wrote :

Just to confirm, this is in the release version of 14.04. I've got it on a fresh build with raid 1 via mdadm, no swap.

It does not halt booting, just a brief delay.

Revision history for this message
Quesar (rick-microway) wrote :

I just made a permanent clean fix for this, at least for MD (software RAID). It can easily be modified to fix for LVM too. Edit /etc/grub.d/00_header and change the recordfail section to this:

if [ "$quick_boot" = 1 ]; then
    cat <<EOF
function recordfail {
  set recordfail=1
EOF
    FS="$(grub-probe --target=fs "${grubdir}")"
    GRUBMDDEVICE="$(grub-probe --target=disk "${grubdir}" | grep \/dev\/md)"
    if [ $? -eq 0 ] ; then
        cat <<EOF
  # GRUB lacks write support for $GRUBMDDEVICE, so recordfail support is disabled.
EOF
    else
        case "$FS" in
          btrfs | cpiofs | newc | odc | romfs | squash4 | tarfs | zfs)
            cat <<EOF
  # GRUB lacks write support for $FS, so recordfail support is disabled.
EOF
            ;;
          *)
            cat <<EOF
  if [ -n "\${have_grubenv}" ]; then if [ -z "\${boot_once}" ]; then save_env recordfail; fi; fi
EOF
        esac
    fi
    cat <<EOF
}
EOF
fi

Revision history for this message
robled (robled) wrote :

The work-around from #24 gets rid of the error for me. I timed my boot process after the change and didn't notice any appreciable difference in boot time with the work-around in place. This testing was performed using a recent laptop with an SSD.

Revision history for this message
EAB (adair-boder) wrote :

I got this error message too - with a fresh install of 14.04 Server Official Release.
I also have 2 RAID-1 setups.

Revision history for this message
Francisco Stefano Wechsler (geral-k) wrote :

I have recently installed Xubuntu 14.04 (Official Release) on two computers. On one of them I did not use RAID and allowed automatic disk partitioning; no boot error has been observed. For the second computer I used the Minimal CD, installed two RAID0 devices (one for swap and one for /) and Xubuntu; on this computer the error message appeared every time I booted. The workaround suggested by Moritz Baumann (#24) eliminated the error message.

Revision history for this message
bolted (k-minnick) wrote :

I followed comment #28 from Quesar (rick-microway) above with lubuntu 14.04 running on a 1U supermicro server. Rebooted multiple times to test, and I am no longer getting this error message. A huge thank you to Quesar for a fix!

Revision history for this message
Vadim Nevorotin (malamut) wrote :

Fix from #28, extended to support LVM (so, I think, it is a universal clean fix for this bug). Change the recordfail section in /etc/grub.d/00_header to:

if [ "$quick_boot" = 1 ]; then
    cat <<EOF
function recordfail {
  set recordfail=1
EOF
    GRUBMDDEVICE="$(grub-probe --target=disk "${grubdir}")"
    GRUBLVMDEVICE="$(grub-probe --target=disk "${grubdir}")"
    if echo "$GRUBMDDEVICE" | grep "/dev/md" > /dev/null; then
        cat <<EOF
  # GRUB lacks write support for $GRUBMDDEVICE, so recordfail support is disabled.
EOF
    elif echo "$GRUBLVMDEVICE" | grep "/dev/mapper" > /dev/null; then
        cat <<EOF
  # GRUB lacks write support for $GRUBLVMDEVICE, so recordfail support is disabled.
EOF
    else
        FS="$(grub-probe --target=fs "${grubdir}")"
        case "$FS" in
          btrfs | cpiofs | newc | odc | romfs | squash4 | tarfs | zfs)
            cat <<EOF
  # GRUB lacks write support for $FS, so recordfail support is disabled.
EOF
          ;;
          *)
            cat <<EOF
  if [ -n "\${have_grubenv}" ]; then if [ -z "\${boot_once}" ]; then save_env recordfail; fi; fi
EOF
        esac
    fi
    cat <<EOF
}
EOF
fi

Then run update-grub

Revision history for this message
Andrew Hamilton (ahamilton9) wrote :

Just confirming that the above (RAID & LVM version) fix is working for a RAID10, 14.04, x64, fresh install. I don't have LVM up though, so I cannot confirm that detail.

Revision history for this message
Tato Salcedo (tatosalcedo) wrote :

I have no RAID, I use LVM, and I get the same error

Revision history for this message
Aaron Hastings (thecosmicfrog) wrote :

Just installed 14.04 and seeing the same error on boot.

I don't have any RAID setup, but I am using LVM ext4 volumes for /, /home and swap. My /boot is on a separate ext4 primary partition in an msdos partition table.

Revision history for this message
Agustín Ure (aeu79) wrote :

Confirming that the fix in comment #34 solved the problem in a fresh install of 14.04 with LVM.

Revision history for this message
Uqbar (uqbar) wrote :

I would like to apply the fix from comment#34 as I am using software RAID6 and LVM at the same time.
Unfortunately, I am not so good at changing that "recordfail section in /etc/grub.d/00_header".
Would it be possible to attach here the complete fixed /etc/grub.d/00_header file?
Would it be possible to have this as an official "fix released"?

Revision history for this message
David Twersky (dmtwersky) wrote :

Confirming comment#34 fixed it for me as well.
I'm using LVM on all partitions.

Revision history for this message
Per Engström (per-10823-n) wrote :

I am also affected.
I have 4 disks in RAID0; I used Ubuntu Server 13.10 to set up a command-line RAID0 system with no server extras, rebooted and installed ubuntu-desktop, then rebooted and changed /etc/NetworkManager/NetworkManager.conf to 'managed=true'.

It has been working flawlessly for months but after a dist-upgrade to ubuntu 14.04 I too get this error message at every boot up. System continues after a couple of seconds with no issues as far as I can tell.

Regards,
Per in Sweden

Revision history for this message
Matt Bush (mbbush) wrote :

Using the code in comment #34 fixed my problem involving LVM on ubuntu-gnome 14.04 (clean install).

Revision history for this message
halfgaar (wiebe-halfgaar) wrote :

Because the ubuntu server install did not allow me to create GPT partitions, and configuring software RAID is also very cumbersome, I booted sysrescueCD first and used gdisk+mdadm. Then I installed Ubuntu on the RAID1 partitions. I get this error as well, and the fix in comment #34 fixes it.

I attached the fix as patch, which can be applied with the patch command.

Revision history for this message
Ubuntu Foundations Team Bug Bot (crichton) wrote :

The attachment "Patch to fix error." seems to be a patch. If it isn't, please remove the "patch" flag from the attachment, remove the "patch" tag, and if you are a member of the ~ubuntu-reviewers, unsubscribe the team.

[This is an automated message performed by a Launchpad user owned by ~brian-murray, for any issues please contact him.]

tags: added: patch
Revision history for this message
David Daynard (nardholio) wrote :

#28 worked perfectly, this needs to be upstreamed

Changed in grub2 (Ubuntu):
status: Confirmed → Triaged
Changed in grub:
importance: Undecided → Unknown
status: New → Unknown
importance: Unknown → Undecided
status: Unknown → New
Revision history for this message
Alberto Salvia Novella (es20490446e) wrote :
Revision history for this message
Theor (theor) wrote :

Using the fix provided by #34 and running a sudo update-grub did _not_ solve the problem for me (14.04 with LVM, BIOS/MBR system).
The only way to get it to work so far is by setting quick_boot="0" in /etc/grub.d/10_linux then running update-grub.

Anything I can paste here to assist with that?

Revision history for this message
Jérôme Benoit (jerome-benoit) wrote :
Revision history for this message
-VoltagE- (sami-naatanen) wrote :

The fix in post #34 did the trick for me too.

tags: added: utopic
Revision history for this message
Tim K. (tkubnt) wrote :

If you know you are always using LVM or RAID and the fix in post #34 seems to be too complicated to apply, here's a much easier one:
Edit /etc/grub.d/00_header
At line 118 (at least on my Linux Mint 17 that's the line number) simply put a # in front of the "if [ -n "\${have_grubenv}" ] ...". This will always comment out that line from /boot/grub/grub.cfg
Run update-grub <--- this is important
Check /boot/grub/grub.cfg to make sure the "if ..." is commented out in the recordfail function
Reboot and test

Of course the fix in post #34 or something similar is what Grub should do in general, but you don't really need all that if you are currently using RAID or LVM and have no plans to switch.

Revision history for this message
malheum (maxheise) wrote :

Trusty amd64. Two disks to build a raid1 device. Then an lvm2 on top of that. I am affected too.

Revision history for this message
Graham Warner (groovyg0) wrote :

Sadly post #34 did not work for me - server 12.04 clean install using LVM on small system disk; intending to use ZFS on further 4 disks.
Staggering up the Linux learning curve; used vi to modify 00_header, and having found the 2 existing lines:

function recordfail {
  set recordfail=1

assumed that the first two lines of the fix:

if [ "$quickboot" = 1 ]; then
    cat <<EOF

should precede these 'recordfail' lines and the remainder of the fix follow them.
I then ran update-grub and saw what I take to be an error message about grub-probe (the suggestion to type grub-probe --help didn't help) and the diskfilter writes message came up at the next reboot.
Was that a false assumption? - what else might have happened - was pretty careful to avoid typos?
Thanks.

Revision history for this message
Fabio Marconi (fabiomarconi) wrote :

Same for me too, fix is not working with LVM encrypted

Revision history for this message
Victor Rodriguez (vrc-vlm) wrote :

The patch mentioned in post #34 is not working for me. I'm using an LVM over a RAID1 for / and another RAID1 for swap.

As mentioned here:

https://forum.manjaro.org/index.php?topic=7425.msg112623#msg112623

seems like Grub tries to write something to /boot. If /boot is on a LVM and/or RAID volume Grub is unable to do so and we get that error.

Adding
GRUB_SAVEDEFAULT=false

to
/etc/default/grub

solved the issue for me. I don't care that Grub does not remember my last chosen option.

I don't know whether this is a bug or simply Grub can't write to LVM/RAID volumes and I should have created a small /boot partition outside the LVM and/or RAID. Please, someone clarify this.

Revision history for this message
Hadmut Danisch (hadmut) wrote :

The more important and more severe problem (at least for me) is that in my case the computer sometimes hangs forever when this message appears and does not even respond to key strikes anymore.

So besides the fact that this error message occurs a secondary problem is that the grub procedure to display the message and to wait for user interaction is broken.


Steve Langasek (vorlon)
Changed in grub2 (Ubuntu):
importance: Medium → High
Revision history for this message
Dan Kegel (dank) wrote :

Affected 1 out of 2 of my systems here.

Revision history for this message
jonathan.battle (jonathan-battle) wrote :

Me too, plain ol' 14.04 install; fix #24 has no effect.
All this bug does to me is force another 10-second wait at boot, which can be escaped by hitting return.
If this were my only problem with grub2 I would be deliriously happy.

Revision history for this message
framp (framp) wrote :

Same issue on my system. I installed Mint 17 (Qiana) on LVM. The system starts successfully after the message has been displayed for some seconds, so it's not a major issue for me. If there is need for any additional debug info, just let me know.

Revision history for this message
jonathan.battle (jonathan-battle) wrote :

New 3.13.0-30 kernel fixed it!

Revision history for this message
Anders Kaseorg (andersk) wrote :

Jonathan, I don't know how you concluded that a new kernel fixed this, but this message is displayed by GRUB before the kernel is even loaded, so I find it very unlikely.

Revision history for this message
D Hitz (dhitz) wrote :

#24 made the error go away for me. Boot does seem slower, but I had not timed it previously, so I have no numbers to verify.

Running software RAID
Ubuntu Srver 14.04 withe Kubuntu Desktop
KDELibs 4.13.2
Kernel 3.13.0-30-generic

Revision history for this message
Florent Flament (florentflament) wrote :

I have the same issue.
Using LVM on an Ubuntu 14.04.
Fix #34 has no effect either.

Revision history for this message
hyper_ch (bugs-launchpad-net-roleplayer) wrote :

Just set up a new RAID1 system and got the same error... after a few seconds it disappears and then I get prompted for the LUKS password.

Revision history for this message
Rarylson Freitas (rarylson) wrote :

I added more info about this bug here: http://askubuntu.com/a/498281/197497

To try to help everybody with the solution, I'm copy/pasting my Ask Ubuntu answer here.

----

This bug appears when you create the boot partition (or the root partition, when a separate boot partition doesn't exist) inside an LVM or a RAID partition.

When the system is booting, Grub (using its `diskfilter` module) tries to write some data in `/boot`. However, in this Ubuntu version, something goes wrong and Grub cannot write the desirable data (and the warning appears).

Let's look inside the `/boot/grub/grub.cfg` file (generated from `/etc/grub.d/00_header` by the `update-grub` command):

    if [ -s $prefix/grubenv ]; then
      set have_grubenv=true
      load_env
    fi
    if [ "${next_entry}" ] ; then
       set default="${next_entry}"
       set next_entry=
       save_env next_entry
       set boot_once=true
    [...]

According to this file, on every boot GRUB reads (`load_env`) the GRUB environment file (`/boot/grub/grubenv`) if it exists. Sometimes it also saves (`save_env`) a new environment to this file (when the next boot needs to see a new environment).

Saving `grubenv` can be used to remember the last used GRUB entry (by setting `GRUB_DEFAULT=saved` in the `/etc/default/grub` file and running `update-grub`).

It is also used by the **recordfail** feature (see [Ubuntu Help - Grub 2](https://help.ubuntu.com/community/Grub2), "Last Boot Failed or Boot into Recovery Mode" section).
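As an aside, the saved-entry behaviour mentioned above is enabled with a configuration like the following (a sketch only; these settings come from general GRUB documentation, not from this bug report, and defaults vary between releases):

```shell
# /etc/default/grub (excerpt) -- remember the last-booted menu entry.
# GRUB stores the selection in /boot/grub/grubenv via save_env, which is
# exactly the write path that fails on diskfilter (RAID/LVM) devices.
GRUB_DEFAULT=saved
GRUB_SAVEDEFAULT=true
```

After editing the file, run `sudo update-grub` to regenerate `/boot/grub/grub.cfg`.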

On every boot, GRUB updates the `recordfail` value and saves it. This is probably the moment the warning appears (lines 104 to 124):

    if [ "$quick_boot" = 1 ]; then
        cat <<EOF
    function recordfail {
      set recordfail=1
    EOF
        FS="$(grub-probe --target=fs "${grubdir}")"
        case "$FS" in
          btrfs | cpiofs | newc | odc | romfs | squash4 | tarfs | zfs)
        cat <<EOF
      # GRUB lacks write support for $FS, so recordfail support is disabled.
    EOF
        ;;
          *)
        cat <<EOF
      if [ -n "\${have_grubenv}" ]; then if [ -z "\${boot_once}" ]; then save_env recordfail; fi; fi
    EOF
        esac
        cat <<EOF
    }
    EOF
    fi

Note that GRUB skips the recordfail feature when using the following filesystems: `btrfs | cpiofs | newc | odc | romfs | squash4 | tarfs | zfs`. The LVM and RAID subsystems aren't skipped at any point.

To work with RAID partitions (I don't know what happens with LVM partitions...), GRUB uses the **diskfilter** module (`insmod diskfilter`). You can get the source code of this module by running:

    apt-get source grub2
    vim grub2-2.02~beta2/grub-core/disk/diskfilter.c

However, even when this module is loaded, the bug will appear when GRUB calls the `save_env recordfail` function.

More details about this behaviour can be found at: https://bugzilla.redhat.com/show_bug.cgi?id=1006289

I read the source code and found the point where the warning is issued (line 821). I'm pasting the code here (lines 808 to 823):

    static grub_err_t
    grub_diskfilter_read (grub_disk_t disk, grub_disk_addr_t sector,
    ...


Revision history for this message
Rarylson Freitas (rarylson) wrote :

In my last comment, I wrote:

"However, the best solution would be if Grub implements the `grub_diskfilter_write` function of the `diskfilter` module."

This is because, with the current patch, we lose some GRUB features on RAID/LVM installations: the recordfail feature no longer works.

Revision history for this message
Anders Kaseorg (andersk) wrote :

recordfail is part of a Debian-specific patch. Closing the upstream task. (Leaving the Ubuntu task open, obviously.)

Changed in grub:
status: New → Invalid
Changed in grub2 (Debian):
status: Unknown → New
Revision history for this message
Rarylson Freitas (rarylson) wrote :

Anders Kaseorg (anders-kaseorg), thanks for the links. Really, it's safer not to implement LVM write support than to accept the risk.

What about RAID write support? Are there any considerations about that?

Well, I had a different (but complex) idea about write support: users may want to install /boot on a RAID to have redundancy when a disk fails.

What if GRUB mounted a small external partition only to save the contents of the grubenv? This partition could use the same idea as the BIOS Boot partition marked with a bios_grub flag (this partition would have a flag like save_grubenv). Then, when a disk fails, the only data lost would be the grubenv saved locally on that disk.

Changed in mdadm (Ubuntu):
assignee: nobody → Dimitri John Ledkov (xnox)
Revision history for this message
Anders Kaseorg (andersk) wrote :

Rarylson, yes, the same concerns that apply to LVM also apply to any RAID with redundancy. In any event, fixing that is outside the scope of this bug. Talk to GRUB upstream if you’re interested in new features.

As for the problem at hand, I posted a cleaner patch to http://bugs.debian.org/754921 that uses grub-probe --target=abstraction instead of ad-hoc pattern-matching on device names.

Revision history for this message
Rarylson Freitas (rarylson) wrote :

Anders, I tested your patch (adapted and applied to the `/etc/grub.d/00_header` file), and it worked well.

Thanks for the patch.

If someone doesn't want to wait for the Debian maintainers to confirm, test and merge Anders' patch, I'm updating my GitHub gist with a new patch to be applied to `/etc/grub.d/00_header` (it can be used as a temporary workaround): https://gist.github.com/rarylson/23fb3ab46ded7ca2a818

Revision history for this message
Launchpad Janitor (janitor) wrote :

Status changed to 'Confirmed' because the bug affects multiple users.

Changed in mdadm (Ubuntu):
status: New → Confirmed
Revision history for this message
Florent Flament (florentflament) wrote :

Hi,

Couldn't apply either patch #70 or #71 on my Ubuntu 14.04. I guess the 00_header file may differ slightly between distributions.

$ patch -p0 00_header < recordfail.patch
patching file 00_header
Hunk #1 FAILED at 102.
1 out of 1 hunk FAILED -- saving rejects to file 00_header.rej

In the end I found the following quick workaround:
Comment out line 118 of file /etc/grub.d/00_header
# if [ -n "\${have_grubenv}" ]; then if [ -z "\${boot_once}" ]; then save_env recordfail; fi; fi

Then do `update-grub`

Changed in mdadm (Ubuntu):
importance: Undecided → High
status: Confirmed → Triaged
Changed in grub:
importance: Undecided → Unknown
status: Invalid → Unknown
Revision history for this message
Heiko L (hl1) wrote :

Software RAID. I had no success with #28 + update-grub. Same old error message persists.
(Haven't and won't try other things, because I'm afraid to break my system.)

Revision history for this message
Anders Kaseorg (andersk) wrote :

To those who are looking for a quick workaround on their system and can’t figure out how to apply a patch, just change this line in /etc/grub.d/00_header:

      btrfs | cpiofs | newc | odc | romfs | squash4 | tarfs | zfs)

to

      btrfs | cpiofs | newc | odc | romfs | squash4 | tarfs | zfs | *)

This disables recordfail support unconditionally, but if you’re affected by this bug, it wouldn’t have worked on your system anyway. (The proper fix, again, is http://bugs.debian.org/754921.)

Revision history for this message
Dimitri John Ledkov (xnox) wrote :

Colin, mentioned to me he is going to apply patch in Debian and Ubuntu. Assigning appropriately. Nothing to be done in mdadm package itself.

Changed in grub2 (Ubuntu):
assignee: nobody → Colin Watson (cjwatson)
Changed in mdadm (Ubuntu):
status: Triaged → Invalid
Revision history for this message
toxi (toxi-m) wrote :

Hello.
Problem solved?

Revision history for this message
Andrey Bondarenko (abone) wrote :

No. Patch exists, but as of 2014-08-14, it is not applied in Debian and Ubuntu

Revision history for this message
juffinhalli (juffinhalli) wrote :

Problem not solved as of 2014-09-06.

Revision history for this message
Popolon (popolon) wrote :

Bug still confirmed on 2014-09-22

Revision history for this message
Bernardo (bbernardoleon) wrote :

I don't know if this invalidates RAID1. Is my server still mirroring disks despite this bug?

Revision history for this message
Anders Kaseorg (andersk) wrote :

Bernardo, your RAID should be fine. The only effects of this bug are the error message, delay, and (sometimes) required key press during boot.

Revision history for this message
Thomas (t.c) wrote :

I get the same problem. What I also see is that I get the following messages when running update-grub:

# LANG=C update-grub
Generating grub configuration file ...
/usr/sbin/grub-probe: warning: Couldn't find physical volume `(null)'. Some modules may be missing from core image..
/usr/sbin/grub-probe: warning: Couldn't find physical volume `(null)'. Some modules may be missing from core image..
/usr/sbin/grub-probe: warning: Couldn't find physical volume `(null)'. Some modules may be missing from core image..
/usr/sbin/grub-probe: warning: Couldn't find physical volume `(null)'. Some modules may be missing from core image..
Found linux image: /boot/vmlinuz-3.13.0-37-generic
Found initrd image: /boot/initrd.img-3.13.0-37-generic
/usr/sbin/grub-probe: warning: Couldn't find physical volume `(null)'. Some modules may be missing from core image..
/usr/sbin/grub-probe: warning: Couldn't find physical volume `(null)'. Some modules may be missing from core image..
/usr/sbin/grub-probe: warning: Couldn't find physical volume `(null)'. Some modules may be missing from core image..
Found linux image: /boot/vmlinuz-3.13.0-32-generic
Found initrd image: /boot/initrd.img-3.13.0-32-generic
/usr/sbin/grub-probe: warning: Couldn't find physical volume `(null)'. Some modules may be missing from core image..
/usr/sbin/grub-probe: warning: Couldn't find physical volume `(null)'. Some modules may be missing from core image..
/usr/sbin/grub-probe: warning: Couldn't find physical volume `(null)'. Some modules may be missing from core image..
/usr/sbin/grub-probe: warning: Couldn't find physical volume `(null)'. Some modules may be missing from core image..
Found memtest86+ image: /boot/memtest86+.elf
Found memtest86+ image: /boot/memtest86+.bin
done

and when I try to run the abstraction target I also get this:

# grub-probe --target=abstraction "/boot/grub/"
grub-probe: warning: Couldn't find physical volume `(null)'. Some modules may be missing from core image..
diskfilter
mdraid1x

but not when I run the disk or fs target, so those seem more stable to me?!

# grub-probe --target=disk "/boot/grub/"
/dev/md0

# grub-probe --target=fs "/boot/grub/"
ext2

# lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 14.04.1 LTS
Release: 14.04
Codename: trusty

# dpkg -l|grep grub
ii grub-common 2.02~beta2-9ubuntu1 amd64 GRand Unified Bootloader (common files)
ii grub-gfxpayload-lists 0.6 amd64 GRUB gfxpayload blacklist
ii grub-pc 2.02~beta2-9ubuntu1 amd64 GRand Unified Bootloader, version 2 (PC/BIOS version)
ii grub-pc-bin 2.02~beta2-9ubuntu1 amd64 GRand Unified Bootloader, version 2 (PC/BIOS binaries)
ii grub2-common 2.02~beta2-9ubuntu1 amd64 GRand Unified Bootloader (common files for version 2)

Revision history for this message
Thomas (t.c) wrote :

Oh, and the patch from rarylson (http://askubuntu.com/questions/468466/why-this-occurs-error-diskfilter-writes-are-not-supported) also gives me the warning "Couldn't find physical volume (null)".

Colin Watson (cjwatson)
Changed in grub2 (Ubuntu):
assignee: Colin Watson (cjwatson) → nobody
Revision history for this message
Shahar Or (mightyiam) wrote :

Oh, thank you, @cjwatson, for getting to this. I'm sure the bug list is endless and you're all doing your very best.

Revision history for this message
Adam Niedling (krychek) wrote :

Shahar: Colin has just unassigned himself from this bug...

Revision history for this message
Shahar Or (mightyiam) wrote :

OK here's a photo of me sad about this bug.

Revision history for this message
Søren Høyer Kristensen (sorenhoyerkristensen) wrote :

I just installed Ubuntu Server 14.04 the other day and got the same error.
Configuration was:

2 × 1 TB HDDs
512 MB EFI boot partition on each disk
The rest was set up as a RAID partition (on both disks), before linking them into a RAID 1 array

On top of that RAID 1 array I set up LVM, with an LVM group and logical volumes such as the following:
swap
root ( mounted as / )

I thought a Linux software RAID 1 array + LVM was the way to go, but after reading #67, if I understand it correctly, there are no plans for an official fix in future Ubuntu versions or patches? Due to security issues? Should we just forget about LVM then, if we want to mirror our disks with RAID 1?

Revision history for this message
Niels Böhm (blubberdiblub) wrote :

I think there is a better "solution" (involving disabling recordfail in the affected cases, which is LVM for me).

I believe that the correct thing to use to check for LVM or RAID is grub-probe's "abstraction" target. This returns a list of possible abstractions below the filesystem level (or an empty string if there are none). For instance, on a system here with just LVM (hardware RAID is transparent to the system), it returns this:

    # grub-probe --target=abstraction /boot/
    lvm

On an older system with mdadm RAID1 and boot outside of LVM, it returns this:

    # grub-probe --target=abstraction /boot
    raid mdraid1x

    # grub-probe --target=abstraction /
    raid mdraid1x lvm

Also note that older versions of grub-probe output an additional space at the end of the list if the list is non-empty, so mind that when parsing the output. There's no such superfluous space on trusty, though.

So what I do is check the list for candidates that would fail with "recordfail". In my case, this is just "lvm"; I don't have a system here to test with md RAID in practice, but I guess you would just need to extend the "for i in x y z" list with "raid".

The reason why the case statement inside the for loop looks a bit convoluted is that the culprit could appear in 4 possible places: making up the whole list, at the beginning of the list, at the end of the list, or somewhere in the middle.
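A rough sketch of that check (illustrative only: the variable names are made up, and grub-probe's output is hard-coded here so the parsing logic can be followed without a GRUB installation):

```shell
# Simulated output of: grub-probe --target=abstraction /boot
# (a real system might print e.g. "raid mdraid1x lvm", possibly with a
# trailing space on older grub-probe versions)
abstractions="raid mdraid1x lvm "

skip_recordfail=
for check_abstraction in lvm raid; do
    # Pad the list with spaces so each abstraction is matched as a whole
    # word; "mdraid1x" alone would not accidentally trigger on "raid".
    case " $abstractions " in
        *" $check_abstraction "*) skip_recordfail=1 ;;
    esac
done
echo "skip_recordfail=${skip_recordfail:-0}"
```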

Revision history for this message
schneibva (schneibva) wrote :

I just made a new installation and got the same error:

LUBUNTU 14.04 amd64
Alternate installation with RAID1, 2 disks, no spare

Revision history for this message
Niels Böhm (blubberdiblub) wrote :

@schneibva: Can you please try my patch on your /etc/grub.d/00_header file and manually add the word "raid" to the for-in list?

I.e. replace this line:
    for check_abstraction in lvm ; do
with that:
    for check_abstraction in lvm raid ; do

And afterwards run "update-grub", of course. (If you made any mistakes in applying the patch, update-grub will probably fail with an error and you should restore your old 00_header file.)

Revision history for this message
wiley.coyote (tjwiley) wrote :

@blubberdiblub: The patch didn't work for me. It did, however, work with a little modification.

14.04.1
mdadm
1x RAID1
2 x physical drives

user@RAID-Upgrade-Test:~$ sudo grub-probe --target=abstraction /boot/grub
diskfilter
mdraid1x

The patch above wasn't catching cases of *$check_abstraction*, so I added that in. My patch is attached.

I'm no grub expert, so please check it to make sure I'm not doing anything stupid.

Revision history for this message
Simon Déziel (sdeziel) wrote :

On 01/07/2015 02:49 PM, wiley.coyote wrote:
> The patch above wasn't catching cases of *$check_abstraction*, so I
> added that in. My patch is attached.

Losing the double quotes when assigning ABSTRACTION would avoid
catching any surrounding spaces. Also, it's possible to drop the for
loop and the case by using a bit of grep:

    ABSTRACTION=$(grub-probe --target=abstraction "${grubdir}" | grep -xm1 'raid\|lvm')
    [ -n "$ABSTRACTION" ] && skip_recordfail=$ABSTRACTION

Be warned, the above code was not tested :)

HTH,
Simon

Revision history for this message
Anders Kaseorg (andersk) wrote :

Niels Böhm: If you look at the patch that I submitted to Debian (see comment #70 and https://bugs.debian.org/754921), you’ll see that this is exactly what I did. We’re just waiting for the maintainer to apply the patch.

Revision history for this message
wiley.coyote (tjwiley) wrote :

You know, I was just looking at that. I'm not sure why I didn't see it before. FWIW, the patch in Debian Bug #754921 works for me without modification.

Revision history for this message
Forest (foresto) wrote :

Thanks for that patch, Anders!

In case anyone running Trusty wants a quick fix, I just uploaded a package to my PPA that includes the patch. It's building now.
https://launchpad.net/~foresto/+archive/ubuntu/ubuntutweaks

Revision history for this message
Niels Böhm (blubberdiblub) wrote :

@wiley.coyote: I wrote the patch to err on the conservative side, so I deliberately avoided matching *"$check_abstraction"*: if grub-probe spat out (contrived example) a list of "braid ponytail", it would not accidentally match on *"raid"*.

So it seems that for you, "grub-probe --target=abstraction /boot" spits out a list containing "somethingraidsomething" but not "raid" - probably "mdraid1x" - so the correct fix to my patch would have been adding that word to the for list. I only have a system with boot on LVM, so I couldn't test with software RAID, as I mentioned above.

However, Anders' patch is more elegant and shorter anyway, so we should indeed go for that instead :)

Revision history for this message
In , Fedora (fedora-redhat-bugs) wrote :

This message is a notice that Fedora 19 is now at end of life. Fedora
has stopped maintaining and issuing updates for Fedora 19. It is
Fedora's policy to close all bug reports from releases that are no
longer maintained. Approximately 4 (four) weeks from now this bug will
be closed as EOL if it remains open with a Fedora 'version' of '19'.

Package Maintainer: If you wish for this bug to remain open because you
plan to fix it in a currently maintained version, simply change the 'version'
to a later Fedora version.

Thank you for reporting this issue and we are sorry that we were not
able to fix it before Fedora 19 reached end of life. If you would still like
to see this bug fixed and are able to reproduce it against a later version
of Fedora, you are encouraged to change the 'version' to a later Fedora
version before this bug is closed, as described in the policy above.

Although we aim to fix as many bugs as possible during every release's
lifetime, sometimes those efforts are overtaken by events. Often a
more recent Fedora release includes newer upstream software that fixes
bugs or makes them obsolete.

Revision history for this message
Philippe Clérié (pclerie) wrote :

I've had that problem since my first install of Trusty continuing on to Utopic.

I'd like to add an observation I made while testing different kernels. This error _does not_ occur when you select the kernel to boot.

Ex:

- Boot the computer
- At the Grub menu, stop the countdown and select the Advanced Options....
- Select the kernel
- No error.

FWIW. Maybe it's a hint.

Hope that helps.

Philippe

Revision history for this message
In , Fedora (fedora-redhat-bugs) wrote :

Fedora 19 changed to end-of-life (EOL) status on 2015-01-06. Fedora 19 is
no longer maintained, which means that it will not receive any further
security or bug fix updates. As a result we are closing this bug.

If you can reproduce this bug against a currently maintained version of
Fedora please feel free to reopen this bug against that version. If you
are unable to reopen this bug, please file a new report against the
current release. If you experience problems, please add a comment to this
bug.

Thank you for reporting this bug and we are sorry it could not be fixed.

Revision history for this message
Rubén Parra (rubenxparra) wrote :

This bug is active on 15.04.

Regards,

Rubén.

Revision history for this message
Shahar Or (mightyiam) wrote :

Since `2.02~beta2-22`, my system displays the error message (~"diskfilter writes are not supported") and that remains indefinitely. Boot does not continue. I've managed to boot to a desktop session by selecting "advanced"->"recovery mode".

Revision history for this message
Shahar Or (mightyiam) wrote :

False alarm: it was #1318111

Revision history for this message
luca (llucax) wrote :

Seeing this problem after I switched to an encrypted root/boot partition using LUKS and LVM.

Revision history for this message
Hedley Finger (hedley-finger) wrote :

I have desktop Ubuntu 14.04.2 Trusty reinstalled (to fix multiple problems), and this error also occurred for me. Only LVM installed, no RAID. At the moment I just wait 5 s or press any key (but it still takes 5 s to resolve!).
Toshiba Satellite C660, part no. PSC0LA-01C01H

Revision history for this message
no!chance (ralf-fehlau) wrote :

This error has not been resolved for several months. There are more and more bugs preventing Ubuntu from being used in a production environment or on servers. It's no fun anymore! I'm leaving Ubuntu now!

Revision history for this message
hyper_ch (bugs-launchpad-net-roleplayer) wrote :

Issue still exists on 15.04

Revision history for this message
François Jacques (francois-jacques) wrote :

Indeed.

Revision history for this message
200999900s (200999900s) wrote :

This problem still exists on fresh Ubuntu Server 14.04.2 and 15.04 installations.
HW: HP Microserver
HDD's:
md raid1(/boot ext3 + PV for LVM ) + LVM ( / btrfs and swap )

Revision history for this message
Douglas Fraser (douglas-fraser) wrote :

If you've got a UEFI system then you should have EFI System partition (/boot/efi) that grub can write to (it's just FAT). Noticed that Fedora now makes a symlink from /boot/grub2/grubenv to /boot/efi/EFI/fedora/grubenv. This worked for me on Ubuntu 15.04 so presumable the symlink patch is present:

cd /boot/grub
sudo mv grubenv ../efi/EFI/ubuntu/grubenv
sudo ln -s ../efi/EFI/ubuntu/grubenv

Revision history for this message
Steve Langasek (vorlon) wrote :

I am seeing this problem recently on an Ubuntu 14.04 server system where I don't remember seeing it before. There are no recent changes to the grub version on the system, and there are no recent changes to the filesystem layout.

This is a serious bug, because it appears to block the boot indefinitely with this error.

Changed in grub2 (Ubuntu):
assignee: nobody → Mathieu Trudel-Lapierre (mathieu-tl)
Revision history for this message
Anders Kaseorg (andersk) wrote :

Friendly reminder that I posted a patch at https://bugs.debian.org/754921 nearly a year ago.

Revision history for this message
Mathieu Trudel-Lapierre (cyphermox) wrote :

Anders, thanks. I'm reviewing the patch and I'll apply it to grub in Debian.

Is there anyone here *NOT* using LVM or RAID on a system which is showing this error message?

Changed in grub2 (Ubuntu):
status: Triaged → Incomplete
Revision history for this message
Loïc Minier (lool) wrote :

Keeping this as New so as not to expire it -- potentially no one has this issue without LVM/MD, so Mathieu's question might not get an answer, but we still want to fix this.

Changed in grub2 (Ubuntu):
status: Incomplete → New
Revision history for this message
Launchpad Janitor (janitor) wrote :

Status changed to 'Confirmed' because the bug affects multiple users.

Changed in grub2 (Ubuntu):
status: New → Confirmed
Revision history for this message
Seb Bonnard (sebma) wrote :

Hi, this bug also affects me because I'm using LVM.

I want to thank Anders for his patch (see comment #70).

I pasted his patch into a file I called 00_header_754921.patch and then typed the following commands:

$ sed -i "s/00_header.in/00_header/g" 00_header_754921.patch
$ cd /etc/ && sudo patch -p2 < ~/00_header_754921.patch
$ sudo update-grub

Hope this helps.

Seb.

Revision history for this message
fermulator (fermulator) wrote :

I just tried applying the patch manually and re-ran grub-install; no go.
I also tried pulling in the patch via Forest's (foresto) PPA: https://launchpad.net/~foresto/+archive/ubuntu/ubuntutweaks; also no go.

fermulator@fermmy-basement:/$ sudo grub-install /dev/md0
Installing for i386-pc platform.
grub-install: error: diskfilter writes are not supported.

(full output w/ -v is here: http://pastebin.com/6Ni8GpY3)

fermulator@fermmy-basement:/$ cat /proc/mdstat
Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
md0 : active raid1 sdb1[0] sda1[1]
      156158720 blocks super 1.2 [2/2] [UU]

fermulator@fermmy-basement:/$ uname -a
Linux fermmy-basement 3.16.0-30-generic #40~14.04.1-Ubuntu SMP Thu Jan 15 17:43:14 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux
fermulator@fermmy-basement:/$ cat /etc/issue
Ubuntu 14.04.2 LTS \n \l

fermulator@fermmy-basement:/$ dpkg --list | grep grub2
ii grub2 2.02~beta2-9ubuntu1.2 amd64 GRand Unified Bootloader, version 2 (dummy package)

Revision history for this message
Mathieu Trudel-Lapierre (cyphermox) wrote :

@fermulator, that's on purpose, we don't have write support on /dev/md (diskfilter) devices. You might want to use /dev/sda1 instead (as per mdstat), the changes will get synced on the other drive.

I'm applying the changes for Ubuntu now, changes for Debian are in Debian git.

Changed in grub2 (Ubuntu):
status: Confirmed → In Progress
Revision history for this message
Launchpad Janitor (janitor) wrote :

This bug was fixed in the package grub2 - 2.02~beta2-26ubuntu3

---------------
grub2 (2.02~beta2-26ubuntu3) wily; urgency=medium

  * debian/patches/uefi_firmware_setup.patch: take into account that the UEFI
    variable OsIndicationsSupported is a bit field, and as such should be
    compared as hex values in 30_uefi-firmware.in. (LP: #1456911)
  * Update quick boot logic to handle abstractions for which there is no
    write support. (LP: #1274320)

 -- Mathieu Trudel-Lapierre <email address hidden> Mon, 06 Jul 2015 16:32:11 -0400

Changed in grub2 (Ubuntu):
status: In Progress → Fix Released
Revision history for this message
Phillip Susi (psusi) wrote :

On 7/6/2015 4:43 PM, Mathieu Trudel-Lapierre wrote:
> @fermulator, that's on purpose, we don't have write support on /dev/md
> (diskfilter) devices. You might want to use /dev/sda1 instead (as per
> mdstat), the changes will get synced on the other drive.

No, no, no... you NEVER write directly to a disk that is a component of
a raid array, and if you do, it will NOT be synced to the other drive,
since md has no idea you did such a thing.

Revision history for this message
Mathieu Trudel-Lapierre (cyphermox) wrote :

Hum, of course, you're right. Things won't get synced.

That said, you *do* need to write directly to each disk of the RAID array to install grub on them given that grub doesn't have support for the overlaying device representation.

Revision history for this message
Seb Bonnard (sebma) wrote :

Hi,

Oops !

I forgot to add to my comment #115 :

sudo chmod +x /etc/grub.d/00_header

BEFORE the "update-grub" command.

Sebastien.

Revision history for this message
Shahar Or (mightyiam) wrote :

Not seeing this on startup feels so good. But it was around for so long I almost miss it.

Who's to blame for the fix?

Thanks a lot.

Revision history for this message
Rarylson Freitas (rarylson) wrote :

One question:

The fix made by "Mathieu Trudel-Lapierre <email address hidden>" is marked as Fix Released.

However, I can't update my grub package to the released one. The new version is 2.02~beta2-26ubuntu3, and mine is 2.02~beta2-9ubuntu1.3.

I've tried to get it from the trusty-proposed repo, without success (https://wiki.ubuntu.com/Testing/EnableProposed).

What should I do now? Should I just wait for the fix to reach the trusty main repo?

Revision history for this message
Simon Déziel (sdeziel) wrote :

On 07/17/2015 11:16 AM, Rarylson Freitas wrote:
> One question:
>
> The solution made by "Mathieu Trudel-Lapierre <email address hidden> "
> is marked as Fix Released.
>
> However, I can't update my grub package to the released one. The new
> version is 2.02~beta2-26ubuntu3, and mine is 2.02~beta2-9ubuntu1.3.
>
> I've tried to get it from the trusty-proposed repo, without success
> (https://wiki.ubuntu.com/Testing/EnableProposed).
>
> What should I do now? Should I only wait for the fix being at the
> trusty-main repo?
>

The version 2.02~beta2-26ubuntu3 is for Wily, not Trusty. You'll need to
wait for a Trusty specific version to hit trusty-proposed to be able to
test it.

Revision history for this message
Mathieu Trudel-Lapierre (cyphermox) wrote :

Indeed. To get this in trusty (or other releases), please see http://wiki.ubuntu.com/StableReleaseUpdates#Procedure to request the update for the release you're interested in. It would help me a lot if someone having the issue could at least update the bug description and nominate for a release, then I can get back to grub later to do the update.

Revision history for this message
Charis (tao-qqmail) wrote :

Where is the solution?

Changed in grub2 (Ubuntu):
assignee: Mathieu Trudel-Lapierre (mathieu-tl) → nobody
Changed in grub2 (Ubuntu Trusty):
status: New → In Progress
Changed in grub2 (Ubuntu Vivid):
status: New → In Progress
importance: Undecided → High
Changed in grub2 (Ubuntu Trusty):
assignee: nobody → Mathieu Trudel-Lapierre (mathieu-tl)
importance: Undecided → High
Changed in grub2 (Ubuntu Vivid):
assignee: nobody → Mathieu Trudel-Lapierre (mathieu-tl)
Changed in mdadm (Ubuntu):
assignee: Dimitri John Ledkov (xnox) → nobody
Revision history for this message
fermulator (fermulator) wrote :

So, as per my comment of 2015-07-04, there appear to be a few follow-up comments showing some confusion.

What /is/ the correct way to re-install grub to mdadm member drives?
(assuming mdadm has member disks with proper RAID partitions)
{{{
fermulator@fermmy-server:~$ cat /proc/mdstat | grep -A3 md60
md60 : active raid1 sdi2[1] sdj2[0]
      58560384 blocks super 1.2 [2/2] [UU]
}}}

grub-install /dev/sdX|Y
or,
grub-install /dev/sdX#|Y#

Revision history for this message
Ted Cabeen (ted-cabeen) wrote :

fermulator, if Linux is the only operating system on this computer, you want to install the GRUB bootloader on the drives, not the partitions: /dev/sdX, /dev/sdY, etc.
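For example, a dry-run sketch (the device names /dev/sdi and /dev/sdj are taken from the mdstat output quoted above; the echo only prints the commands, so drop it to actually run them):

```shell
# Install GRUB's boot code on every physical member of the RAID1 array,
# since GRUB cannot write through the /dev/md device itself.
for disk in /dev/sdi /dev/sdj; do
    echo sudo grub-install "$disk"   # dry run: remove "echo" to execute
done
```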

Revision history for this message
Launchpad Janitor (janitor) wrote :

Status changed to 'Confirmed' because the bug affects multiple users.

Changed in mdadm (Ubuntu Trusty):
status: New → Confirmed
Changed in mdadm (Ubuntu Vivid):
status: New → Confirmed
Phillip Susi (psusi)
no longer affects: mdadm (Ubuntu)
no longer affects: mdadm (Ubuntu Trusty)
no longer affects: mdadm (Ubuntu Vivid)
Revision history for this message
Michiel Bruijn (michielbruijn) wrote :

This bug is still present and not fixed for me and several other people (for example http://forum.kodi.tv/showthread.php?tid=194447)

I did a clean install of kodibuntu (lubuntu 14.04) and had this error.
I use LVM and installed the OS on an SSD in AHCI mode.
It's annoying, but the system continues after a few seconds.

I would like to have this problem fixed because I have a slow resume of my monitor after suspend, and I would like to rule out this problem as being related.

Revision history for this message
Tom Reynolds (tomreyn) wrote :

mathieu-tl:

Thanks for your work on this issue.

Since you nominated it for trusty and state it's in progress - is there a way to follow that progress?
Are there any test builds you would like to have tested yet?

In case it's not been sufficiently stated before, this issue does affect 14.04 LTS x86_64.

It would be great to see an SRU, since this slows the boot process and may trick users into thinking their Ubuntu installation is broken when it is not (doing as the message suggests will just reboot your system).

Anyone is welcome to copy and paste this text into the first post if that helps with the SRU.

description: updated
dann frazier (dannf)
Changed in grub2 (Ubuntu Vivid):
assignee: Mathieu Trudel-Lapierre (mathieu-tl) → dann frazier (dannf)
Changed in grub2 (Ubuntu Trusty):
assignee: Mathieu Trudel-Lapierre (mathieu-tl) → dann frazier (dannf)
Revision history for this message
Chris J Arges (arges) wrote : Please test proposed package

Hello Patrick, or anyone else affected,

Accepted grub2 into trusty-proposed. The package will build now and be available at https://launchpad.net/ubuntu/+source/grub2/2.02~beta2-9ubuntu1.7 in a few hours, and then in the -proposed repository.

Please help us by testing this new package. See https://wiki.ubuntu.com/Testing/EnableProposed for documentation on how to enable and use -proposed. Your feedback will aid us in getting this update out to other Ubuntu users.

If this package fixes the bug for you, please add a comment to this bug, mentioning the version of the package you tested, and change the tag from verification-needed to verification-done. If it does not fix the bug for you, please add a comment stating that, and change the tag to verification-failed. In either case, details of your testing will help us make a better decision.

Further information regarding the verification process can be found at https://wiki.ubuntu.com/QATeam/PerformingSRUVerification . Thank you in advance!

Changed in grub2 (Ubuntu Trusty):
status: In Progress → Fix Committed
tags: added: verification-needed
Revision history for this message
Chris J Arges (arges) wrote :

Hello Patrick, or anyone else affected,

Accepted grub2 into vivid-proposed. The package will build now and be available at https://launchpad.net/ubuntu/+source/grub2/2.02~beta2-22ubuntu1.5 in a few hours, and then in the -proposed repository.

Please help us by testing this new package. See https://wiki.ubuntu.com/Testing/EnableProposed for documentation on how to enable and use -proposed. Your feedback will aid us in getting this update out to other Ubuntu users.

If this package fixes the bug for you, please add a comment to this bug, mentioning the version of the package you tested, and change the tag from verification-needed to verification-done. If it does not fix the bug for you, please add a comment stating that, and change the tag to verification-failed. In either case, details of your testing will help us make a better decision.

Further information regarding the verification process can be found at https://wiki.ubuntu.com/QATeam/PerformingSRUVerification . Thank you in advance!

Changed in grub2 (Ubuntu Vivid):
status: In Progress → Fix Committed
Revision history for this message
Chris J Arges (arges) wrote :

Hello Patrick, or anyone else affected,

Accepted grub2-signed into trusty-proposed. The package will build now and be available at https://launchpad.net/ubuntu/+source/grub2-signed/1.34.8 in a few hours, and then in the -proposed repository.

Please help us by testing this new package. See https://wiki.ubuntu.com/Testing/EnableProposed for documentation on how to enable and use -proposed. Your feedback will aid us in getting this update out to other Ubuntu users.

If this package fixes the bug for you, please add a comment to this bug, mentioning the version of the package you tested, and change the tag from verification-needed to verification-done. If it does not fix the bug for you, please add a comment stating that, and change the tag to verification-failed. In either case, details of your testing will help us make a better decision.

Further information regarding the verification process can be found at https://wiki.ubuntu.com/QATeam/PerformingSRUVerification . Thank you in advance!

Changed in grub2-signed (Ubuntu Trusty):
status: New → Fix Committed
Changed in grub2-signed (Ubuntu Vivid):
status: New → Fix Committed
Revision history for this message
Chris J Arges (arges) wrote :

Hello Patrick, or anyone else affected,

Accepted grub2-signed into vivid-proposed. The package will build now and be available at https://launchpad.net/ubuntu/+source/grub2-signed/1.46.5 in a few hours, and then in the -proposed repository.

Please help us by testing this new package. See https://wiki.ubuntu.com/Testing/EnableProposed for documentation on how to enable and use -proposed. Your feedback will aid us in getting this update out to other Ubuntu users.

If this package fixes the bug for you, please add a comment to this bug, mentioning the version of the package you tested, and change the tag from verification-needed to verification-done. If it does not fix the bug for you, please add a comment stating that, and change the tag to verification-failed. In either case, details of your testing will help us make a better decision.

Further information regarding the verification process can be found at https://wiki.ubuntu.com/QATeam/PerformingSRUVerification . Thank you in advance!

Simon Déziel (sdeziel)
tags: added: verification-done-trusty verification-needed-vivid
removed: verification-needed
Revision history for this message
Anton Eliasson (eliasson) wrote :

Packages from vivid-proposed fixed the issue for me.

Details:

Start-Date: 2015-12-18 12:14:56
Commandline: apt-get install grub-common/vivid-proposed -t vivid-proposed
Upgrade: grub-efi-amd64-bin:amd64 (2.02~beta2-22ubuntu1.4, 2.02~beta2-22ubuntu1.5), grub-efi-amd64:amd64 (2.02~beta2-22ubuntu1.4, 2.02~beta2-22ubuntu1.5), grub-common:amd64 (2.02~beta2-22ubuntu1.4, 2.02~beta2-22ubuntu1.5), grub2-common:amd64 (2.02~beta2-22ubuntu1.4, 2.02~beta2-22ubuntu1.5), grub-efi-amd64-signed:amd64 (1.46.4+2.02~beta2-22ubuntu1.4, 1.46.5+2.02~beta2-22ubuntu1.5)
End-Date: 2015-12-18 12:15:19

Simon Déziel (sdeziel)
tags: added: verification-done-vivid
removed: verification-needed-vivid
Revision history for this message
YitzchokL (yitzchok+launchpad) wrote :

After installing 2.02~beta2-9ubuntu1.7 on Trusty (14.04.3 32-bit) I no longer see the message during boot.
(This was perfect timing for me! I only just dealt with the upgrade from Grub legacy today and was disappointed to see an error message, which is now gone)
Thanks

Revision history for this message
Launchpad Janitor (janitor) wrote :

Status changed to 'Confirmed' because the bug affects multiple users.

Changed in grub2-signed (Ubuntu):
status: New → Confirmed
Revision history for this message
Id2ndR (id2ndr) wrote :

After installing 2.02~beta2-9ubuntu1.7 on Trusty, I had to set the execute bit on /etc/grub.d/00_header. Now it works normally with my LVM system partition.

So enable the -proposed repository, and then:
sudo apt-get install grub-efi-amd64/trusty-proposed -t trusty-proposed
sudo chmod +x /etc/grub.d/00_header
sudo update-grub2

Revision history for this message
Rich Hart (sirwizkid) wrote :

The 1.7 package is working flawlessly on my systems that were affected.
Thanks for fixing this.

Revision history for this message
fermulator (fermulator) wrote :

Based upon the comments above, and the TEST CASE defined in the main section for this bug, I confirm that verification=done

###
--> PASS
###

I tested on my own system running

{{{
$ mount | grep md60
/dev/md60 on / type ext4 (rw,errors=remount-ro)

$ cat /proc/mdstat | grep -A1 md60
md60 : active raid1 sdd2[0] sdb2[1]
      58560384 blocks super 1.2 [2/2] [UU]

fermulator@fermmy-server:~$ dpkg --list | grep grub
ii grub-common 2.02~beta2-9ubuntu1.7 amd64 GRand Unified Bootloader (common files)
ii grub-gfxpayload-lists 0.6 amd64 GRUB gfxpayload blacklist
ii grub-pc 2.02~beta2-9ubuntu1.7 amd64 GRand Unified Bootloader, version 2 (PC/BIOS version)
ii grub-pc-bin 2.02~beta2-9ubuntu1.7 amd64 GRand Unified Bootloader, version 2 (PC/BIOS binaries)
ii grub2-common 2.02~beta2-9ubuntu1.7 amd64 GRand Unified Bootloader (common files for version 2)
}}}

Full results:
http://paste.ubuntu.com/14259366/

---

NOTE: I'm not sure what to do about the "grub2-signed" properties for this bug...

information type: Public → Public Security
information type: Public Security → Public
Revision history for this message
Launchpad Janitor (janitor) wrote :

This bug was fixed in the package grub2 - 2.02~beta2-9ubuntu1.7

---------------
grub2 (2.02~beta2-9ubuntu1.7) trusty; urgency=medium

  * Cherry-picks to better handle TFTP timeouts on some arches: (LP: #1521612)
    - (7b386b7) efidisk: move device path helpers in core for efinet
    - (c52ae40) efinet: skip virtual IP devices when enumerating cards
    - (f348aee) efinet: enable hardware filters when opening interface
  * Update quick boot logic to handle abstractions for which there is no
    write support. (LP: #1274320)

 -- dann frazier <email address hidden> Wed, 16 Dec 2015 14:03:48 -0700

Changed in grub2 (Ubuntu Trusty):
status: Fix Committed → Fix Released
Revision history for this message
Steve Langasek (vorlon) wrote : Update Released

The verification of the Stable Release Update for grub2 has completed successfully and the package has now been released to -updates. Subsequently, the Ubuntu Stable Release Updates Team is being unsubscribed and will not receive messages about this bug report. In the event that you encounter a regression using the package from -updates please report a new bug using ubuntu-bug and tag the bug report regression-update so we can easily find any regressions.

Revision history for this message
Launchpad Janitor (janitor) wrote :

This bug was fixed in the package grub2-signed - 1.34.8

---------------
grub2-signed (1.34.8) trusty; urgency=medium

  * Rebuild against grub-efi-amd64 2.02~beta2-9ubuntu1.7 (LP: #1521612,
    LP: #1274320).

 -- dann frazier <email address hidden> Wed, 16 Dec 2015 14:23:00 -0700

Changed in grub2-signed (Ubuntu Trusty):
status: Fix Committed → Fix Released
Revision history for this message
Launchpad Janitor (janitor) wrote :

This bug was fixed in the package grub2 - 2.02~beta2-22ubuntu1.5

---------------
grub2 (2.02~beta2-22ubuntu1.5) vivid; urgency=medium

  * Merge in changes from 2.02~beta2-22ubuntu1.3:
    - d/p/arm64-set-correct-length-of-device-path-end-entry.patch: Fixes
      booting arm64 kernels on certain UEFI implementations. (LP: #1476882)
    - progress: avoid NULL dereference for net files. (LP: #1459872)
    - arm64/setjmp: Add missing license macro. (LP: #1459871)
    - Cherry-pick patch to add SAS disks to the device list from the ofdisk
      module. (LP: #1517586)
    - Cherry-pick patch to open Simple Network Protocol exclusively.
      (LP: #1508893)
  * Cherry-picks to better handle TFTP timeouts on some arches: (LP: #1521612)
    - (7b386b7) efidisk: move device path helpers in core for efinet
    - (c52ae40) efinet: skip virtual IP devices when enumerating cards
    - (f348aee) efinet: enable hardware filters when opening interface
  * Update quick boot logic to handle abstractions for which there is no
    write support. (LP: #1274320)

 -- dann frazier <email address hidden> Wed, 16 Dec 2015 13:31:15 -0700

Changed in grub2 (Ubuntu Vivid):
status: Fix Committed → Fix Released
Revision history for this message
Launchpad Janitor (janitor) wrote :

This bug was fixed in the package grub2-signed - 1.46.5

---------------
grub2-signed (1.46.5) vivid; urgency=medium

  * Rebuild against grub2 2.02~beta2-22ubuntu1.5 (LP: #1476882, LP: #1459872,
    LP: #1459871, LP: #1517586, LP: #1508893, LP: #1521612, LP: #1274320).

 -- dann frazier <email address hidden> Wed, 16 Dec 2015 14:18:28 -0700

Changed in grub2-signed (Ubuntu Vivid):
status: Fix Committed → Fix Released
Revision history for this message
Lior Goikhburg (goikhburg) wrote :

Problems installing the latest 14.04.3.

I have tried every solution mentioned in this thread, with no luck.
GRUB will not install...

HP server with 4 SATA disks, RAID 10 (md0) with /boot and / on it, no LVM.

Installing with:
update-grub - works fine
grub-install /dev/md0 - fails

I went up to grub version 2.02~beta2-32ubuntu1, the latest from xenial, and am still getting the diskfilter error... nothing helps.

Any ideas, anyone?

Revision history for this message
wiley.coyote (tjwiley) wrote :

Did you try simply updating the packages from the trusty repos? The fix has already been released.

2.02~beta2-9ubuntu1.7

The fix is there and working, at least for me.

Changed in grub2 (Debian):
status: New → Fix Released
Revision history for this message
armaos (alexandros-k) wrote :

Hi,
I have tried more or less all of the solutions above, but still without luck.
@Lior Goikhburg (goikhburg): did you manage to solve it?

All ideas are more than welcome.
Thanks

Revision history for this message
Lior Goikhburg (goikhburg) wrote :

I ended up with the following workaround:

When setting up the server, I configured the following:

0. RAID 10 on /dev/sda /dev/sdb /dev/sdc /dev/sdd
1. The /boot, / and swap partitions are on the RAID but NOT in an LVM volume
2. The rest of the RAID space is an LVM partition

At the end of the install, when you get the error message:
Install grub manually on /dev/sda1 and /dev/sda2 (/dev/sda3 and /dev/sda4 will not let you, because they're striped); use the console to run:
# update-grub
# grub-install /dev/sda1
# grub-install /dev/sda2
Return to the setup and skip the installation of grub (you installed it manually).

Hope that helps.

Revision history for this message
Paul Tomblin (ptomblin) wrote :

I upgraded to Kubuntu 16.04 and it's still happening. When am I supposed to see this supposed fix?

Changed in grub2-signed (Ubuntu):
status: Confirmed → Fix Released
Revision history for this message
oglop (1oglop1) wrote :

Yeah, Xubuntu 17.04: still present...

I hope 17.10 will get it fixed.

Revision history for this message
Anders Kaseorg (andersk) wrote :

This was fixed in 16.04, but if you had manually modified /etc/grub.d/00_header before the upgrade, the new version will not have been installed. You may have an unmodified version in /etc/grub.d/00_header.dpkg-new. If not, run ‘apt-get download grub-common; dpkg -x grub-common_2.02~beta3-4ubuntu6_amd64.deb grub-common-extracted’ and you’ll have the unmodified version in grub-common-extracted/etc/grub.d/00_header.dpkg-new.

Changed in grub2 (Fedora):
importance: Unknown → Undecided
status: Unknown → Won't Fix
Revision history for this message
Tim Ritberg (xpert-reactos) wrote :

Still exists in 18.04.
I downloaded grub-common_2.02-2ubuntu8.2_amd64.deb and it contains the same 00_header as the one on my disk.
My grub is located on a RAID 1.

Revision history for this message
vadzen (vadzen) wrote :

Still exists in 18.04.1.
No RAID. One SSD disk (ST500LM000-1EJ16) + a 500GB WD Black M.2 NVMe PCIe SSD.

Revision history for this message
dann frazier (dannf) wrote : Re: [Bug 1274320] Re: Error: diskfilter writes are not supported

On Fri, Dec 28, 2018 at 7:15 AM vadzen <email address hidden> wrote:
>
> Still exists in 18.04.1
> No RAID. 1 SSD disk ST500LM000-1EJ16 + 500GB WD BlackTM M.2 NVMe PCIe SSD.

To be clear, we did not resolve this by implementing writes via
diskfilter, we just added a warning if we detect diskfilter is in use.
Are you seeing a condition where 1) you do not get warned by
grub-reboot *and* 2) You still see "Error: diskfilter writes are not
supported." after reboot?

Revision history for this message
Brendan Holmes (whiling) wrote :

This is still occurring on Ubuntu 18.04.2. Hardware RAID1 & RAID0.

Tried adding quick_boot=0 to /etc/grub.d/00_header, no difference. If I remove RAID, it works fine.

It also proceeds past the error if I select "Guided - use entire disk" instead of "Guided - use entire disk and set up LVM", but then it fails to reboot/boot.

Grub is v2.02-2ubuntu8.12.

Any suggestions?

Revision history for this message
Brendan Holmes (whiling) wrote :

Further to yesterday's comment, the issue is occurring while installing (using the debian-installer), at the "Install the GRUB boot loader on a hard disk" stage. The displayed error is: "Unable to install GRUB in /dev/mapper/<machine_name>--vg-root"

I have worked around it by choosing not to use LVM (I am choosing the "Guided - use entire disk" option), which reduces disk manageability. I'm left with the impression that installing Ubuntu on enterprise-grade hardware is a bit alpha.

Revision history for this message
Jan Navrátil (honzanav) wrote :

Also affects Ubuntu 19.04 with this configuration:

Disk /dev/sda: 119.2 GiB, 128035676160 bytes, 250069680 sectors
Disk model: INTEL SSDSCKKW12
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: A99C455B-4F9A-4FE3-A57F-35D26DC99583

Device Start End Sectors Size Type
/dev/sda1 2048 2050047 2048000 1000M EFI System
/dev/sda2 2050048 2582527 532480 260M EFI System
/dev/sda3 2582528 150573055 147990528 70.6G Microsoft basic data
/dev/sda4 150575104 151691263 1116160 545M Windows recovery environment
/dev/sda5 151693312 250068991 98375680 46.9G Linux filesystem

Disk /dev/sdb: 465.8 GiB, 500107862016 bytes, 976773168 sectors
Disk model: HGST HTS545050A7
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: D5C37A89-480B-498C-BFF5-FFA3F8A1D93C

Device Start End Sectors Size Type
/dev/sdb1 2048 2050047 2048000 1000M Windows recovery environment
/dev/sdb2 2582528 4630527 2048000 1000M Lenovo boot partition
/dev/sdb3 4630528 4892671 262144 128M Microsoft reserved
/dev/sdb4 4892672 945827839 940935168 448.7G Microsoft basic data
/dev/sdb5 945829888 976773119 30943232 14.8G Windows recovery environment

Disk /dev/mapper/linux-ubuntu: 24 GiB, 25769803776 bytes, 50331648 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/mapper/linux-swap: 4 GiB, 4294967296 bytes, 8388608 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Revision history for this message
MasterCATZ (mastercatz) wrote :

Ugh, still exists.

I might have to do what Lior Goikhburg (goikhburg) did

and recreate with a standalone boot partition at the start of each drive, striped outside of LVM or something.

Revision history for this message
MasterCATZ (mastercatz) wrote :

(GRUB) 2.02-2ubuntu8.13

Revision history for this message
David Andruczyk (dandruczyk) wrote :

This affects 20.04 as well where the root device is on mdadm (/dev/md127)

Revision history for this message
MasterCATZ (mastercatz) wrote :

This still happens for me on Ubuntu 20.04.

This time around I even allowed grub to have ext4 partition access at the front of the disks.

Instead of giving mdadm the entire device, this time I partitioned the disks and gave mdadm the partitions to RAID with, using LVM with btrfs partitions for boot and system.

Changed in grub2 (Ubuntu Bionic):
status: New → Confirmed
Changed in grub2 (Ubuntu Focal):
status: New → Confirmed
Changed in grub2-signed (Ubuntu Bionic):
status: New → Confirmed
Changed in grub2-signed (Ubuntu Focal):
status: New → Confirmed
Revision history for this message
Allen Lee (metricv) wrote :

Confirmed on Ubuntu Focal (Kubuntu 20.04) installed on an LVM disk, with GRUB_SAVEDEFAULT=true.

This bug triggers whenever a boot option other than the first is selected in the grub menu; grub won't be able to save that selection.
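
For reference, the /etc/default/grub combination that makes GRUB attempt the failing write is the saved-default setup; a minimal hypothetical excerpt, using the standard option names:

```
# /etc/default/grub (excerpt) -- this combination asks GRUB to write the
# last-booted entry back to grubenv, which is exactly the write that
# diskfilter-backed (RAID/LVM) devices reject:
GRUB_DEFAULT=saved
GRUB_SAVEDEFAULT=true
```

With these set, booting an entry makes GRUB call save_env against grubenv, so the error appears whenever grubenv sits on a device GRUB cannot write through.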

Revision history for this message
Amedee Van Gasse (amedee) wrote :

Confirmed on Ubuntu Jammy Jellyfish (Ubuntu 22.04 LTS) installed on an LVM disk, with GRUB_SAVEDEFAULT=true.

This bug triggers whenever a boot option other than the first is selected in the grub menu. Grub won't be able to save that selection.

Revision history for this message
Julian Andres Klode (juliank) wrote :

This bug is not about enabling writes through disk filters, but about not writing on those systems as part of the boot logic. Support for writes, and hence GRUB_SAVEDEFAULT, is a separate feature request that I'd suggest you take upstream.
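
The detection side of that boot logic can be sketched roughly as follows. This is a minimal illustrative sketch of the idea behind the menu-building-script change, not the exact Ubuntu patch; on a real system the list of layers would come from `grub-probe --target=abstraction /boot`, which is stubbed with a literal value here:

```shell
# Decide whether the generated grub.cfg may persist state (recordfail /
# quick boot) based on the abstraction layers backing /boot.
# Real system: abstractions="$(grub-probe --target=abstraction /boot)"
abstractions="lvm"   # stubbed example value for illustration

quick_boot=1
for abstraction in $abstractions; do
  case "$abstraction" in
    diskfilter|lvm)
      # GRUB cannot write through these layers; saving state at boot
      # would print "error: diskfilter writes are not supported."
      quick_boot=0
      ;;
  esac
done
echo "quick_boot=$quick_boot"   # prints: quick_boot=0
```

When no such layer is reported, quick_boot stays at 1 and the state-saving commands are emitted as before.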

no longer affects: grub2 (Ubuntu Bionic)
no longer affects: grub2 (Ubuntu Focal)
no longer affects: grub2-signed (Ubuntu Bionic)
no longer affects: grub2-signed (Ubuntu Focal)
Revision history for this message
bmertens (bmertens-redhat-bugs) wrote :

Reopening, this issue occurs on a new installation of Fedora 39 with /boot on a RAID 1 device.

https://askubuntu.com/questions/468466/diskfilter-writes-are-not-supported-what-triggers-this-error/498281#498281 and https://bugs.launchpad.net/ubuntu/+source/grub2/+bug/1274320 indicate this problem is related to LVM/RAID.

$ df -hP /boot
Filesystem Size Used Avail Use% Mounted on
/dev/md125 2.0G 250M 1.6G 14% /boot

$ cat /proc/mdstat
Personalities : [raid10] [raid1] [raid6] [raid5] [raid4]
md125 : active raid1 sdc2[2] sdb2[1] sdd2[3] sda3[0]
      2094080 blocks super 1.2 [4/4] [UUUU]
      bitmap: 0/1 pages [0KB], 65536KB chunk

There is also a patch proposed at https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=754921

Changed in grub2 (Fedora):
status: Won't Fix → Confirmed