Error: diskfilter writes are not supported

Bug #1274320 reported by Patrick Houle on 2014-01-29
This bug affects 359 people
Affects Status Importance Assigned to Milestone
  grub: Unknown / Unknown
  grub2 (Debian): Fix Released / Unknown
  grub2 (Fedora): Won't Fix / Undecided
  grub2 (Ubuntu): High / Unassigned
    Trusty: High / dann frazier
    Vivid: High / dann frazier
  grub2-signed (Ubuntu): Undecided / Unassigned
    Trusty: Undecided / Unassigned
    Vivid: Undecided / Unassigned

Bug Description

[Impact]
RAID and LVM users may run into a cryptic warning from GRUB on boot, because some variants of RAID and LVM are not supported for writing by GRUB itself. GRUB typically tries to write a tiny file to the boot partition for things like remembering the last selected boot entry.
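
For illustration, that write typically comes from the recordfail handling that update-grub emits into grub.cfg. A simplified sketch of the stanza (the exact generated text varies by release):

function recordfail {
  set recordfail=1
  # save_env rewrites /boot/grub/grubenv in place; on RAID/LVM ("diskfilter")
  # devices GRUB has no write support, which is where the error comes from.
  if [ -n "${have_grubenv}" ]; then if [ -z "${boot_once}" ]; then save_env recordfail; fi; fi
}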

[Test Case]
On an affected system (typically any setup where the boot device is on RAID or on an LVM device), try to boot. Without the patch, the message will appear; with the patch, it will not.

[Regression Potential]
The potential for regression is minimal. The patch makes the menu-building scripts respect the fact that diskfilter writes are unsupported by GRUB: recordfail (the offending feature, which saves GRUB's state) is automatically disabled when the boot partition is detected to be on a device that GRUB cannot write to.

----

Once GRUB chooses what to boot, an error shows up and sits on the screen for approximately 5 seconds:

"Error: diskfilter writes are not supported.
Press any key to continue..."

From what I understand, this error is related to RAID partitions, and I have two of them (md0, md1). Both are in use (root and swap). The RAID devices are assembled with mdadm and are RAID0.

This error message started appearing right after grub2 was updated on 01/27/2014.

System: Kernel: 3.13.0-5-generic x86_64 (64 bit) Desktop: KDE 4.11.5 Distro: Ubuntu 14.04 trusty
Drives: HDD Total Size: 1064.2GB (10.9% used)
        1: id: /dev/sda model: SanDisk_SDSSDRC0 size: 32.0GB
        2: id: /dev/sdb model: SanDisk_SDSSDRC0 size: 32.0GB
        3: id: /dev/sdc model: ST31000528AS size: 1000.2GB
RAID:   Device-1: /dev/md1 - active raid: 0 components: online: sdb2 sda3 (swap)
        Device-2: /dev/md0 - active raid: 0 components: online: sdb1 sda1 ( / )
Grub2: grub-efi-amd64 version 2.02~beta2-5

ProblemType: Bug
DistroRelease: Ubuntu 14.04
Package: grub-efi-amd64 2.02~beta2-5
ProcVersionSignature: Ubuntu 3.13.0-5.20-generic 3.13.0
Uname: Linux 3.13.0-5-generic x86_64
NonfreeKernelModules: nvidia
ApportVersion: 2.13.2-0ubuntu2
Architecture: amd64
CurrentDesktop: KDE
Date: Wed Jan 29 17:37:59 2014
SourcePackage: grub2
UpgradeStatus: Upgraded to trusty on 2014-01-23 (6 days ago)


Created attachment 795965
photo of bootscreen

after the upgrade to F19, GRUB2 comes up with "error: diskfilter writes are not supported" and waits some seconds for a key press; thank god it then boots automatically so wake-on-LAN is not broken (see also attachment)

but what is this nonsense?

Personalities : [raid1] [raid10]
md2 : active raid10 sda3[0] sdc3[1] sdb3[3] sdd3[2]
      3875222528 blocks super 1.1 512K chunks 2 near-copies [4/4] [UUUU]
      bitmap: 2/29 pages [8KB], 65536KB chunk

md1 : active raid10 sda2[0] sdc2[1] sdb2[3] sdd2[2]
      30716928 blocks super 1.1 512K chunks 2 near-copies [4/4] [UUUU]
      bitmap: 0/1 pages [0KB], 65536KB chunk

md0 : active raid1 sda1[0] sdc1[1] sdd1[2] sdb1[3]
      511988 blocks super 1.0 [4/4] [UUUU]

unused devices: <none>
_________________________________________________

[root@rh:~]$ cat /boot/grub2/grub.cfg
#
# DO NOT EDIT THIS FILE
#
# It is automatically generated by grub2-mkconfig using templates
# from /etc/grub.d and settings from /etc/default/grub
#

### BEGIN /etc/grub.d/00_header ###
if [ -s $prefix/grubenv ]; then
  load_env
fi
if [ "${next_entry}" ] ; then
   set default="${next_entry}"
   set next_entry=
   save_env next_entry
   set boot_once=true
else
   set default="${saved_entry}"
fi

if [ x"${feature_menuentry_id}" = xy ]; then
  menuentry_id_option="--id"
else
  menuentry_id_option=""
fi

export menuentry_id_option

if [ "${prev_saved_entry}" ]; then
  set saved_entry="${prev_saved_entry}"
  save_env saved_entry
  set prev_saved_entry=
  save_env prev_saved_entry
  set boot_once=true
fi

function savedefault {
  if [ -z "${boot_once}" ]; then
    saved_entry="${chosen}"
    save_env saved_entry
  fi
}

function load_video {
  if [ x$feature_all_video_module = xy ]; then
    insmod all_video
  else
    insmod efi_gop
    insmod efi_uga
    insmod ieee1275_fb
    insmod vbe
    insmod vga
    insmod video_bochs
    insmod video_cirrus
  fi
}

terminal_output console
set timeout=1
### END /etc/grub.d/00_header ###

### BEGIN /etc/grub.d/10_linux ###
menuentry 'Fedora, with Linux 3.10.11-200.fc19.x86_64' --class fedora --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-3.10.11-200.fc19.x86_64-advanced-b935b5db-0051-4f7f-83ac-6a6651fe0988' {
        savedefault
        load_video
        set gfxpayload=keep
        insmod gzio
        insmod part_msdos
        insmod part_msdos
        insmod part_msdos
        insmod part_msdos
        insmod diskfilter
        insmod mdraid1x
        insmod ext2
        set root='mduuid/1d691642baed26df1d1974964fb00ff8'
        if [ x$feature_platform_search_hint = xy ]; then
          search --no-floppy --fs-uuid --set=root --hint='mduuid/1d691642baed26df1d1974964fb00ff8' 1de836e4-e97c-43ee-b65c-400b0c29d3aa
        else
          search --no-floppy --fs-uuid --set=root 1de836e4-e97c-43ee-b65c-400b0c29d3aa
        fi
        linux /vmlinuz-3.10.11-200.fc19.x86_64 root=UUID=b935b5db-0051-4f7f-83ac-6a6651fe0988 ro divider=10 audit=0 rd.plymouth=0 plymouth.enable=0 rd.md.uuid=b7475879:c95d9a47:c5043c02:0c5ae720 rd.md.uuid=1d691642:baed26df:1d197496:4fb00ff8 rd.md.uuid=ea253255:cb915401:f32794ad:ce0fe396 rd.luk...


oh, and removing the line "insmod diskfilter" from "grub.cfg" does not change anything

I'm seeing the same error. I found the message mysterious, so I took a look at the code and discovered the following:
- "diskfilter" is GRUB's implementation detail for working with LVM and MD RAID
  devices.
- Writing to these kinds of devices is not implemented in GRUB.
- The error may have always been there, but
  0085-grub-core-disk-diskfilter.c-grub_diskfilter_write-Ca.patch made it more
  visible.
- The reason GRUB is trying to write to the device could be that it is following
  the "save_env" commands in the config file.

interesting - why does GRUB try to write anything?
it has not to touch any FS at boot

GRUB2 is such a large step backwards because it is more or less its own operating system with the ugliest configuration one could design, while grub-legacy was a boot manager and nothing else

in the end we have 3 full operating systems:

* grub
* dracut
* linux

/etc/default/grub with these options avoids a lot of crap on Fedora-Only machines

GRUB_TIMEOUT=1
GRUB_DISTRIBUTOR="Fedora"
GRUB_SAVEDEFAULT="false"
GRUB_TERMINAL_OUTPUT="console"
GRUB_DISABLE_RECOVERY="true"
GRUB_DISABLE_SUBMENU="true"
GRUB_DISABLE_OS_PROBER="true"

Note that GRUB Legacy had a similar feature: the "savedefault" command.

but it did not halt boot for some seconds with a useless error message and "press any key to continue", it did not mess around with submenus and whatnot, nor did it freeze the machine while editing the kernel line, which happens with GRUB2 way too often when you need to edit it

My comment #5 was just to show that the assertion "it has not to touch any FS at boot" is false and that GRUB Legacy was no different in this regard.
I already commented on the increased visibility of the error, in comment #2.

Patrick Houle (buddlespit) wrote :
description: updated
description: updated
Patrick Houle (buddlespit) wrote :

I should also point out that the system will boot normally once any key is pressed or 5 seconds elapses.

description: updated
Launchpad Janitor (janitor) wrote :

Status changed to 'Confirmed' because the bug affects multiple users.

Changed in grub2 (Ubuntu):
status: New → Confirmed
summary: - Error after boot menu
+ Error: diskfilter writes are not supported
Changed in grub2 (Ubuntu):
importance: Undecided → Low
importance: Low → Medium
KrautOS (krautos) wrote :

Got the same issue on an up-to-date "trusty" machine with / on mdadm RAID 1 and swap on mdadm RAID 0. Any clues how to fix it?

Patrick Houle (buddlespit) wrote :

I changed my RAID devices' <dump> and <pass> fields to '0 0' instead of '0 1' in /etc/fstab.
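
For reference, <dump> and <pass> are the last two fields of each /etc/fstab line; a sketch with placeholder device names:

# <file system>  <mount point>  <type>  <options>           <dump>  <pass>
/dev/md0         /              ext4    errors=remount-ro   0       0
/dev/md1         none           swap    sw                  0       0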

Singtoh (singtoh) wrote :

Hello all,

Just thought I would throw in as well. I just started seeing this after a fresh install of Ubuntu Trusty today, at the first bootup and all boots thereafter. I am not running RAID, but I am using LVM. This is Ubuntu Trusty amd64. As a side note, in today's install I didn't give the system a /boot partition like I have seen in all the LVM tutorials; I just have two disks on which I made LVM partitions, i.e. /root, /home, /Storage, /Storage1 and swap. Runs real nice but I get that nagging error at boot. Hope it gets a fix soon.

Cheers,

Singtoh

Huygens (huygens-25) wrote :

Here is another different kind of setup which triggers the problem:
/boot ext4 on a md RAID10 (4 HDD) partition
/ btrfs RAID10 (4 HDD) partition
swap on a md RAID0 (4HDD) partition

The boot and kernel are on an MD RAID (software RAID), whereas the rest of the system is using Btrfs RAID.

Cybjit (cybjit) wrote :

I also get this message.
/ is on ext4 LVM, and no separate /boot.
Booting works fine after the delay or key press.

Singtoh (singtoh) wrote :

Just to add this tidbit: I just re-installed with normal partitioning (no RAID and no LVM), just /root, /home and swap, and it boots normally, no 5-second wait and no errors. So I guess it is LVM and/or RAID related? I am just about to re-install again to a new SSD and will install to LVM again, and I'll post back with the outcome.

Cheers,

Singtoh

robled (robled) wrote :

This is definitely RAID/LVM related. On my 14.04 system with an ext4 boot partition I don't get the error, but on another system that's fully LVM I do get the error.

Has anyone come up with a grub config workaround to prevent the delay on boot?

Jean-Mi (forum-i) wrote :

You guys should be happy your system still boots. I just got that error (diskfilter writes are not supported), but grub exits immediately after, leaving my UEFI with no other choice than booting another distro.
I had to spam the pause key on my keyboard to catch the error message before it disappears.
On my setup, the boot error occurs with openSuse installed on LVM2. The other distro is installed with a regular /boot (ext4) separate partition. Both are using grub. I could load both by calling their respective grubx64.efi from the ESP partition.
The last thing I remember having done on openSuse was creating a btrfs partition and tweaking /etc/fstab a little bit.
From the other distro, I can read openSuse's files and everything looks fine. It's like the boot loader used to work and suddenly failed.
I'd love to remember what else I did since it worked. And I'd love to be able to boot openSuse again.

Jean-Mi (forum-i) wrote :

I may have found the reason for my particular crash. Now my system boots normally.

According to bug report #1006289 at Red Hat, the bug could be related to "insmod diskfilter", but someone removed that module and still got the error, and I don't even have that module declared. But I noticed openSuse loves to handle everything on reboot, like setting the next OS to load.
My /boot/grub/grubenv contains 2 lines, basically save_entry=openSUSE and next_entry=LMDE Cinnamon.
I removed those lines and the error disappeared. /Maybe/ those lines instruct grub to write something on the boot partition, which it is perfectly unable to do since it cannot write to LVM.
Anyway, it seems that solving this bug requires finding out why grub tries to write data.
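
If you want to inspect or clear those grubenv entries without deleting the file, grub-editenv (named grub2-editenv on some distributions) can do it; a sketch, using the same path as above:

grub-editenv /boot/grub/grubenv list               # shows saved_entry, next_entry, etc.
grub-editenv /boot/grub/grubenv unset next_entry   # removes the entries deleted by hand above
grub-editenv /boot/grub/grubenv unset saved_entry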

hamish (hamish-b) wrote :

Hi, I get the same error with 14.04 beta1 booting into RAID1.

for those running their swap partitions in a RAID, I'm wondering if it would be better to just mount the multiple swap partitions in fstab and give them all equal priority. For soft RAID it would cut out a layer of overhead, and for swap anything that cuts out overhead is a good thing (e.g., mount options in fstab for all swap partitions: sw,pri=10 - see the sketch after the quote below).

See `man 2 swapon` for details.
       "If two or more areas
       have the same priority, and it is the highest priority available, pages
       are allocated on a round-robin basis between them."
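
For illustration, such fstab entries might look like this instead of a single swap on RAID0 (the UUIDs are placeholders):

# two plain swap partitions with equal priority; the kernel stripes across them
UUID=1111-placeholder-swap-1  none  swap  sw,pri=10  0  0
UUID=2222-placeholder-swap-2  none  swap  sw,pri=10  0  0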

Phillip Susi (psusi) wrote :

That would defeat the purpose of RAID1, which is to keep the system up and running when a drive fails. With two separate swaps, if a disk fails you will probably have half of user space die and end up with a badly broken server that needs to be rebooted.

Artyom Nosov (artyom.nosov) wrote :

Got the same issue on the daily build of trusty (20140319). /, /home and swap are all RAID1.

I have this issue on Ubuntu Trusty 14.04 with / on LVM. Deleting /boot/grub/grubenv prevents this error on the next boot, but grub re-creates the file on every boot, so I have rm /boot/grub/grubenv in my crontab.
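
For what it's worth, that cron hack can be expressed as a single @reboot entry in root's crontab (a workaround only, not a fix; the path is as given above):

@reboot rm -f /boot/grub/grubenv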

stoffel010170 (stoffel-010170) wrote :

Have the same bug on my LVM system, too. My systems without LVM and RAID are not affected.

Thomas (t.c) wrote :

I have the bug too; my root filesystem (/) is on software RAID 1

Thomas (t.c) wrote :

# GRUB Environment Block
[... a long run of '#' characters follows; grubenv is padded with '#' to a fixed size ...]

that's the content of /boot/grub/grubenv - is it right?

VladimirCZ (vlabla) wrote :

I also get this message.
/ and swap volumes are on ext4 LVM, and no separate /boot.
Booting works fine after the delay or key press.

Ubuntu QA Website (ubuntuqa) wrote :

This bug has been reported on the Ubuntu ISO testing tracker.

A list of all reports related to this bug can be found here:
http://iso.qa.ubuntu.com/qatracker/reports/bugs/1274320

tags: added: iso-testing
Moritz Baumann (mo42) wrote :

The problem is the call to the recordfail function in each menuentry. If I comment it out in /boot/grub/grub.cfg, the error message no longer appears. Unfortunately, there seems to be no configuration option in /etc/default/grub which would prevent the scripts in /etc/grub.d from adding that function call.

I'm also affected by this bug. So I can confirm it's still there on a fresh install of 14.04

I'm using RAID0 for /

Moritz Baumann (mo42) wrote :

As a temporary fix, you can edit /etc/grub.d/10_linux and replace 'quick_boot="1"' with 'quick_boot="0"' in line 25. (Don't forget to run "sudo update-grub" afterwards.)
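
For reference, the same edit can be applied non-interactively (a sketch; check where the assignment sits in your copy of 10_linux first):

grep -n 'quick_boot=' /etc/grub.d/10_linux            # confirm the line to change
sudo sed -i 's/quick_boot="1"/quick_boot="0"/' /etc/grub.d/10_linux
sudo update-grub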

I can confirm that the workaround mentioned by Moritz (setting
'quick_boot="0"') works for me.


For Fedora 20 in /etc/default/grub

GRUB_SAVEDEFAULT="false"

makes the difference (after grub2-mkconfig)
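
For completeness, the regeneration step on a BIOS install of Fedora is the command below; UEFI installs write the config to a different output path:

sudo grub2-mkconfig -o /boot/grub2/grub.cfg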

Drew Michel (drew-michel) wrote :

I can also confirm this bug is happening with the latest beta version of Trusty with /boot living on an EXT4 LVM partition.

* setting quick_boot="0" in /etc/grub.d/10_linux and running update-grub fixes the issue
* setting GRUB_SAVEDEFAULT="false" in /etc/default/grub and running update-grub does not fix the issue
* removing recordfail from /boot/grub/grub.cfg fixes the issue

3.13.0-23-generic #45-Ubuntu
Distributor ID: Ubuntu
Description: Ubuntu Trusty Tahr (development branch)
Release: 14.04

apt-cache policy grub-pc
grub-pc:
  Installed: 2.02~beta2-9

Gus (gus-lgze) wrote :

Just to confirm, this is in the release version of 14.04. I've got it on a fresh build with raid 1 via mdadm, no swap.

It does not halt booting, just a brief delay.

Quesar (rick-microway) wrote :

I just made a permanent clean fix for this, at least for MD (software RAID). It can easily be modified to fix for LVM too. Edit /etc/grub.d/00_header and change the recordfail section to this:

if [ "$quick_boot" = 1 ]; then
    cat <<EOF
function recordfail {
  set recordfail=1
EOF
    FS="$(grub-probe --target=fs "${grubdir}")"
    GRUBMDDEVICE="$(grub-probe --target=disk "${grubdir}" | grep \/dev\/md)"
    if [ $? -eq 0 ] ; then
        cat <<EOF
  # GRUB lacks write support for $GRUBMDDEVICE, so recordfail support is disabled.
EOF
    else
        case "$FS" in
          btrfs | cpiofs | newc | odc | romfs | squash4 | tarfs | zfs)
            cat <<EOF
  # GRUB lacks write support for $FS, so recordfail support is disabled.
EOF
            ;;
          *)
            cat <<EOF
  if [ -n "\${have_grubenv}" ]; then if [ -z "\${boot_once}" ]; then save_env recordfail; fi; fi
EOF
        esac
    fi
    cat <<EOF
}
EOF
fi
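
As with the other /etc/grub.d edits mentioned in this report, the change only lands in grub.cfg once the configuration is regenerated:

sudo update-grub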

robled (robled) wrote :

The work-around from #24 gets rid of the error for me. I timed my boot process after the change and didn't notice any appreciable difference in boot time with the work-around in place. This testing was performed using a recent laptop with an SSD.

EAB (adair-boder) wrote :

I got this error message too - with a fresh install of 14.04 Server Official Release.
I also have 2 RAID-1 setups.

I have recently installed Xubuntu 14.04 (Official Release) on two computers. On one of them I did not use RAID and allowed automatic disk partitioning; no boot error has been observed. For the second computer I used the Minimal CD, installed two RAID0 devices (one for swap and one for /) and Xubuntu; on this computer the error message appeared every time I booted. The workaround suggested by Moritz Baumann (#24) eliminated the error message.

bolted (k-minnick) wrote :

I followed comment #28 from Quesar (rick-microway) above with lubuntu 14.04 running on a 1U supermicro server. Rebooted multiple times to test, and I am no longer getting this error message. A huge thank you to Quesar for a fix!

Vadim Nevorotin (malamut) wrote :

Fix from #28, extended to support LVM (so, I think, it is a universal clean fix for this bug). Change the recordfail section in /etc/grub.d/00_header to:

if [ "$quick_boot" = 1 ]; then
    cat <<EOF
function recordfail {
  set recordfail=1
EOF
    GRUBMDDEVICE="$(grub-probe --target=disk "${grubdir}")"
    GRUBLVMDEVICE="$(grub-probe --target=disk "${grubdir}")"
    if echo "$GRUBMDDEVICE" | grep "/dev/md" > /dev/null; then
        cat <<EOF
  # GRUB lacks write support for $GRUBMDDEVICE, so recordfail support is disabled.
EOF
    elif echo "$GRUBLVMDEVICE" | grep "/dev/mapper" > /dev/null; then
        cat <<EOF
  # GRUB lacks write support for $GRUBLVMDEVICE, so recordfail support is disabled.
EOF
    else
        FS="$(grub-probe --target=fs "${grubdir}")"
        case "$FS" in
          btrfs | cpiofs | newc | odc | romfs | squash4 | tarfs | zfs)
            cat <<EOF
  # GRUB lacks write support for $FS, so recordfail support is disabled.
EOF
          ;;
          *)
            cat <<EOF
  if [ -n "\${have_grubenv}" ]; then if [ -z "\${boot_once}" ]; then save_env recordfail; fi; fi
EOF
        esac
    fi
    cat <<EOF
}
EOF
fi

Then run update-grub.

Andrew Hamilton (ahamilton9) wrote :

Just confirming that the above (RAID & LVM version) fix is working for a RAID10, 14.04, x64, fresh install. I don't have LVM up though, so I cannot confirm that detail.

Tato Salcedo (tatosalcedo) wrote :

I have no RAID; I use LVM and get the same error

Aaron Hastings (thecosmicfrog) wrote :

Just installed 14.04 and seeing the same error on boot.

I don't have any RAID setup, but I am using LVM ext4 volumes for /, /home and swap. My /boot is on a separate ext4 primary partition in an msdos partition table.

Agustín Ure (aeu79) wrote :

Confirming that the fix in comment #34 solved the problem in a fresh install of 14.04 with LVM.

Uqbar (uqbar) wrote :

I would like to apply the fix from comment #34, as I am using software RAID6 and LVM at the same time.
Unfortunately I am not so good at changing that "recordfail section in /etc/grub.d/00_header".
Would it be possible to attach the complete fixed /etc/grub.d/00_header file here?
Would it be possible to have this as an official "fix released"?

David Twersky (dmtwersky) wrote :

Confirming comment #34 fixed it for me as well.
I'm using LVM on all partitions.

tags: added: patch
Changed in grub2 (Ubuntu):
status: Confirmed → Triaged
Changed in grub:
importance: Undecided → Unknown
status: New → Unknown
importance: Unknown → Undecided
status: Unknown → New
tags: added: utopic
Steve Langasek (vorlon) on 2014-06-17
Changed in grub2 (Ubuntu):
importance: Medium → High
Anders Kaseorg (andersk) on 2014-07-16
Changed in grub:
status: New → Invalid
Changed in grub2 (Debian):
status: Unknown → New
Changed in mdadm (Ubuntu):
assignee: nobody → Dimitri John Ledkov (xnox)
Changed in mdadm (Ubuntu):
status: New → Confirmed
Changed in mdadm (Ubuntu):
importance: Undecided → High
status: Confirmed → Triaged
Changed in grub:
importance: Undecided → Unknown
status: Invalid → Unknown
Changed in grub2 (Ubuntu):
assignee: nobody → Colin Watson (cjwatson)
Changed in mdadm (Ubuntu):
status: Triaged → Invalid
Colin Watson (cjwatson) on 2014-12-18
Changed in grub2 (Ubuntu):
assignee: Colin Watson (cjwatson) → nobody

This message is a notice that Fedora 19 is now at end of life. Fedora
has stopped maintaining and issuing updates for Fedora 19. It is
Fedora's policy to close all bug reports from releases that are no
longer maintained. Approximately 4 (four) weeks from now this bug will
be closed as EOL if it remains open with a Fedora 'version' of '19'.

Package Maintainer: If you wish for this bug to remain open because you
plan to fix it in a currently maintained version, simply change the 'version'
to a later Fedora version.

Thank you for reporting this issue and we are sorry that we were not
able to fix it before Fedora 19 reached end of life. If you would still like
to see this bug fixed and are able to reproduce it against a later version
of Fedora, you are encouraged to change the 'version' to a later Fedora
version before this bug is closed, as described in the policy above.

Although we aim to fix as many bugs as possible during every release's
lifetime, sometimes those efforts are overtaken by events. Often a
more recent Fedora release includes newer upstream software that fixes
bugs or makes them obsolete.

Fedora 19 changed to end-of-life (EOL) status on 2015-01-06. Fedora 19 is
no longer maintained, which means that it will not receive any further
security or bug fix updates. As a result we are closing this bug.

If you can reproduce this bug against a currently maintained version of
Fedora please feel free to reopen this bug against that version. If you
are unable to reopen this bug, please file a new report against the
current release. If you experience problems, please add a comment to this
bug.

Thank you for reporting this bug and we are sorry it could not be fixed.

Steve Langasek (vorlon) on 2015-06-08
Changed in grub2 (Ubuntu):
assignee: nobody → Mathieu Trudel-Lapierre (mathieu-tl)
Changed in grub2 (Ubuntu):
status: Triaged → Incomplete
Loïc Minier (lool) on 2015-06-23
Changed in grub2 (Ubuntu):
status: Incomplete → New
Changed in grub2 (Ubuntu):
status: New → Confirmed
Changed in grub2 (Ubuntu):
status: Confirmed → In Progress
Changed in grub2 (Ubuntu):
status: In Progress → Fix Released
Charis (tao-qqmail) wrote :

Where is the solution?

Changed in grub2 (Ubuntu):
assignee: Mathieu Trudel-Lapierre (mathieu-tl) → nobody
Changed in grub2 (Ubuntu Trusty):
status: New → In Progress
Changed in grub2 (Ubuntu Vivid):
status: New → In Progress
importance: Undecided → High
Changed in grub2 (Ubuntu Trusty):
assignee: nobody → Mathieu Trudel-Lapierre (mathieu-tl)
importance: Undecided → High
Changed in grub2 (Ubuntu Vivid):
assignee: nobody → Mathieu Trudel-Lapierre (mathieu-tl)
Changed in mdadm (Ubuntu):
assignee: Dimitri John Ledkov (xnox) → nobody
fermulator (fermulator) wrote :

So, as per my comment "fermulator (fermulator) wrote on 2015-07-04:", there appear to be a few follow-up comments showing confusion.

What /is/ the correct way to re-install grub to mdadm member drives?
(assuming mdadm has member disks with proper RAID partitions)
{{{
fermulator@fermmy-server:~$ cat /proc/mdstat | grep -A3 md60
md60 : active raid1 sdi2[1] sdj2[0]
      58560384 blocks super 1.2 [2/2] [UU]
}}}

grub-install /dev/sdX|Y
or,
grub-install /dev/sdX#|Y#

Ted Cabeen (ted-cabeen) wrote :

fermulator, if Linux is the only operating system on this computer, you want to install the grub bootloader on the drives, not the partitions, so /dev/sdX, /dev/sdY, etc.
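
In other words, a sketch using the placeholder drive letters from the question (substitute the real RAID member drives):

sudo grub-install /dev/sdX
sudo grub-install /dev/sdY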

Launchpad Janitor (janitor) wrote :

Status changed to 'Confirmed' because the bug affects multiple users.

Changed in mdadm (Ubuntu Trusty):
status: New → Confirmed
Changed in mdadm (Ubuntu Vivid):
status: New → Confirmed
Phillip Susi (psusi) on 2015-11-07
no longer affects: mdadm (Ubuntu)
no longer affects: mdadm (Ubuntu Trusty)
no longer affects: mdadm (Ubuntu Vivid)
Michiel Bruijn (michiel-o) wrote :

This bug is still present and not fixed for me and several other people (for example http://forum.kodi.tv/showthread.php?tid=194447)

I did a clean install of kodibuntu (lubuntu 14.04) and had this error.
I use LVM and installed the OS on a SSD in AHCI mode.
It's annoying, but the system continues after a few seconds.

I would like to have this problem fixed because my monitor resumes slowly after suspend, and I would like to rule out this problem as being related.

Tom Reynolds (tomreyn) wrote :

mathieu-tl:

Thanks for your work on this issue.

Since you nominated it for trusty and state it's in progress - is there a way to follow this progress?
Are there any test builds you would like to have tested yet?

In case it's not been sufficiently stated before, this issue does affect 14.04 LTS x86_64.

It would be great to see a SRU, since it slows the boot process and may trick users into thinking their Ubuntu installation is broken when it is not (doing as the message suggests will just reboot your system).

Anyone is welcome to copy + paste this text into the first post if that would help with the SRU.

description: updated
dann frazier (dannf) on 2015-12-16
Changed in grub2 (Ubuntu Vivid):
assignee: Mathieu Trudel-Lapierre (mathieu-tl) → dann frazier (dannf)
Changed in grub2 (Ubuntu Trusty):
assignee: Mathieu Trudel-Lapierre (mathieu-tl) → dann frazier (dannf)

Hello Patrick, or anyone else affected,

Accepted grub2 into trusty-proposed. The package will build now and be available at https://launchpad.net/ubuntu/+source/grub2/2.02~beta2-9ubuntu1.7 in a few hours, and then in the -proposed repository.

Please help us by testing this new package. See https://wiki.ubuntu.com/Testing/EnableProposed for documentation how to enable and use -proposed. Your feedback will aid us getting this update out to other Ubuntu users.

If this package fixes the bug for you, please add a comment to this bug, mentioning the version of the package you tested, and change the tag from verification-needed to verification-done. If it does not fix the bug for you, please add a comment stating that, and change the tag to verification-failed. In either case, details of your testing will help us make a better decision.

Further information regarding the verification process can be found at https://wiki.ubuntu.com/QATeam/PerformingSRUVerification . Thank you in advance!

Changed in grub2 (Ubuntu Trusty):
status: In Progress → Fix Committed
tags: added: verification-needed
Chris J Arges (arges) wrote :

Hello Patrick, or anyone else affected,

Accepted grub2 into vivid-proposed. The package will build now and be available at https://launchpad.net/ubuntu/+source/grub2/2.02~beta2-22ubuntu1.5 in a few hours, and then in the -proposed repository.

Please help us by testing this new package. See https://wiki.ubuntu.com/Testing/EnableProposed for documentation how to enable and use -proposed. Your feedback will aid us getting this update out to other Ubuntu users.

If this package fixes the bug for you, please add a comment to this bug, mentioning the version of the package you tested, and change the tag from verification-needed to verification-done. If it does not fix the bug for you, please add a comment stating that, and change the tag to verification-failed. In either case, details of your testing will help us make a better decision.

Further information regarding the verification process can be found at https://wiki.ubuntu.com/QATeam/PerformingSRUVerification . Thank you in advance!

Changed in grub2 (Ubuntu Vivid):
status: In Progress → Fix Committed
Chris J Arges (arges) wrote :

Hello Patrick, or anyone else affected,

Accepted grub2-signed into trusty-proposed. The package will build now and be available at https://launchpad.net/ubuntu/+source/grub2-signed/1.34.8 in a few hours, and then in the -proposed repository.

Please help us by testing this new package. See https://wiki.ubuntu.com/Testing/EnableProposed for documentation how to enable and use -proposed. Your feedback will aid us getting this update out to other Ubuntu users.

If this package fixes the bug for you, please add a comment to this bug, mentioning the version of the package you tested, and change the tag from verification-needed to verification-done. If it does not fix the bug for you, please add a comment stating that, and change the tag to verification-failed. In either case, details of your testing will help us make a better decision.

Further information regarding the verification process can be found at https://wiki.ubuntu.com/QATeam/PerformingSRUVerification . Thank you in advance!

Changed in grub2-signed (Ubuntu Trusty):
status: New → Fix Committed
Changed in grub2-signed (Ubuntu Vivid):
status: New → Fix Committed
Chris J Arges (arges) wrote :

Hello Patrick, or anyone else affected,

Accepted grub2-signed into vivid-proposed. The package will build now and be available at https://launchpad.net/ubuntu/+source/grub2-signed/1.46.5 in a few hours, and then in the -proposed repository.

Please help us by testing this new package. See https://wiki.ubuntu.com/Testing/EnableProposed for documentation how to enable and use -proposed. Your feedback will aid us getting this update out to other Ubuntu users.

If this package fixes the bug for you, please add a comment to this bug, mentioning the version of the package you tested, and change the tag from verification-needed to verification-done. If it does not fix the bug for you, please add a comment stating that, and change the tag to verification-failed. In either case, details of your testing will help us make a better decision.

Further information regarding the verification process can be found at https://wiki.ubuntu.com/QATeam/PerformingSRUVerification . Thank you in advance!

Simon Déziel (sdeziel) on 2015-12-17
tags: added: verification-done-trusty verification-needed-vivid
removed: verification-needed
Anton Eliasson (eliasson) wrote :

Packages from vivid-proposed fixed the issue for me.

Details:

Start-Date: 2015-12-18 12:14:56
Commandline: apt-get install grub-common/vivid-proposed -t vivid-proposed
Upgrade: grub-efi-amd64-bin:amd64 (2.02~beta2-22ubuntu1.4, 2.02~beta2-22ubuntu1.5), grub-efi-amd64:amd64 (2.02~beta2-22ubuntu1.4, 2.02~beta2-22ubuntu1.5), grub-common:amd64 (2.02~beta2-22ubuntu1.4, 2.02~beta2-22ubuntu1.5), grub2-common:amd64 (2.02~beta2-22ubuntu1.4, 2.02~beta2-22ubuntu1.5), grub-efi-amd64-signed:amd64 (1.46.4+2.02~beta2-22ubuntu1.4, 1.46.5+2.02~beta2-22ubuntu1.5)
End-Date: 2015-12-18 12:15:19

Simon Déziel (sdeziel) on 2015-12-18
tags: added: verification-done-vivid
removed: verification-needed-vivid

After installing 2.02~beta2-9ubuntu1.7 on Trusty (14.04.3 32-bit) I no longer see the message during boot.
(This was perfect timing for me! I only just dealt with the upgrade from Grub legacy today and was disappointed to see an error message, which is now gone)
Thanks

Launchpad Janitor (janitor) wrote :

Status changed to 'Confirmed' because the bug affects multiple users.

Changed in grub2-signed (Ubuntu):
status: New → Confirmed
Id2ndR (id2ndr) wrote :

After installing 2.02~beta2-9ubuntu1.7 on Trusty, I had to set the execute bit on /etc/grub.d/00_header. Now it works normally with my LVM system partition.

So enable the proposed repository, and then:
sudo apt-get install grub-efi-amd64/trusty-proposed -t trusty-proposed
sudo chmod +x /etc/grub.d/00_header
sudo update-grub2

Rich Hart (sirwizkid) wrote :

The 1.7 package is working flawlessly on my systems that were affected.
Thanks for fixing this.

fermulator (fermulator) wrote :

Based upon the comments above, and the TEST CASE defined in the main section for this bug, I confirm that verification=done

###
--> PASS
###

I tested on my own system running

{{{
$ mount | grep md60
/dev/md60 on / type ext4 (rw,errors=remount-ro)

$ cat /proc/mdstat | grep -A1 md60
md60 : active raid1 sdd2[0] sdb2[1]
      58560384 blocks super 1.2 [2/2] [UU]

fermulator@fermmy-server:~$ dpkg --list | grep grub
ii grub-common 2.02~beta2-9ubuntu1.7 amd64 GRand Unified Bootloader (common files)
ii grub-gfxpayload-lists 0.6 amd64 GRUB gfxpayload blacklist
ii grub-pc 2.02~beta2-9ubuntu1.7 amd64 GRand Unified Bootloader, version 2 (PC/BIOS version)
ii grub-pc-bin 2.02~beta2-9ubuntu1.7 amd64 GRand Unified Bootloader, version 2 (PC/BIOS binaries)
ii grub2-common 2.02~beta2-9ubuntu1.7 amd64 GRand Unified Bootloader (common files for version 2)
}}}

Full results:
http://paste.ubuntu.com/14259366/

---

NOTE: I'm not sure what to do about the "grub2-signed" properties for this bug...

information type: Public → Public Security
information type: Public Security → Public
Launchpad Janitor (janitor) wrote :

This bug was fixed in the package grub2 - 2.02~beta2-9ubuntu1.7

---------------
grub2 (2.02~beta2-9ubuntu1.7) trusty; urgency=medium

  * Cherry-picks to better handle TFTP timeouts on some arches: (LP: #1521612)
    - (7b386b7) efidisk: move device path helpers in core for efinet
    - (c52ae40) efinet: skip virtual IP devices when enumerating cards
    - (f348aee) efinet: enable hardware filters when opening interface
  * Update quick boot logic to handle abstractions for which there is no
    write support. (LP: #1274320)

 -- dann frazier <email address hidden> Wed, 16 Dec 2015 14:03:48 -0700

Changed in grub2 (Ubuntu Trusty):
status: Fix Committed → Fix Released

The verification of the Stable Release Update for grub2 has completed successfully and the package has now been released to -updates. Subsequently, the Ubuntu Stable Release Updates Team is being unsubscribed and will not receive messages about this bug report. In the event that you encounter a regression using the package from -updates please report a new bug using ubuntu-bug and tag the bug report regression-update so we can easily find any regressions.

Launchpad Janitor (janitor) wrote :

This bug was fixed in the package grub2-signed - 1.34.8

---------------
grub2-signed (1.34.8) trusty; urgency=medium

  * Rebuild against grub-efi-amd64 2.02~beta2-9ubuntu1.7 (LP: #1521612,
    LP: #1274320).

 -- dann frazier <email address hidden> Wed, 16 Dec 2015 14:23:00 -0700

Changed in grub2-signed (Ubuntu Trusty):
status: Fix Committed → Fix Released
Launchpad Janitor (janitor) wrote :

This bug was fixed in the package grub2 - 2.02~beta2-22ubuntu1.5

---------------
grub2 (2.02~beta2-22ubuntu1.5) vivid; urgency=medium

  * Merge in changes from 2.02~beta2-22ubuntu1.3:
    - d/p/arm64-set-correct-length-of-device-path-end-entry.patch: Fixes
      booting arm64 kernels on certain UEFI implementations. (LP: #1476882)
    - progress: avoid NULL dereference for net files. (LP: #1459872)
    - arm64/setjmp: Add missing license macro. (LP: #1459871)
    - Cherry-pick patch to add SAS disks to the device list from the ofdisk
      module. (LP: #1517586)
    - Cherry-pick patch to open Simple Network Protocol exclusively.
      (LP: #1508893)
  * Cherry-picks to better handle TFTP timeouts on some arches: (LP: #1521612)
    - (7b386b7) efidisk: move device path helpers in core for efinet
    - (c52ae40) efinet: skip virtual IP devices when enumerating cards
    - (f348aee) efinet: enable hardware filters when opening interface
  * Update quick boot logic to handle abstractions for which there is no
    write support. (LP: #1274320)

 -- dann frazier <email address hidden> Wed, 16 Dec 2015 13:31:15 -0700

Changed in grub2 (Ubuntu Vivid):
status: Fix Committed → Fix Released
Launchpad Janitor (janitor) wrote :

This bug was fixed in the package grub2-signed - 1.46.5

---------------
grub2-signed (1.46.5) vivid; urgency=medium

  * Rebuild against grub2 2.02~beta2-22ubuntu1.5 (LP: #1476882, LP: #1459872,
    LP: 1459871, LP: #1517586, LP:#1508893, LP: #1521612, LP: #1274320).

 -- dann frazier <email address hidden> Wed, 16 Dec 2015 14:18:28 -0700

Changed in grub2-signed (Ubuntu Vivid):
status: Fix Committed → Fix Released
Lior Goikhburg (goikhburg) wrote :

Problems installing the latest 14.04.3

I have tried every solution mentioned in this thread, with no luck.
Grub would not install...

HP server with 4 SATA disks, RAID 10 (md0) with /boot and / on it, No LVM

installing with:
update-grub - works fine
grub-install /dev/md0 - fails

Went up to grub version 2.02~beta2-32ubuntu1, the latest from xenial ... still getting the diskfilter error ... nothing helps.

Any ideas, anyone ?

wiley.coyote (tjwiley) wrote :

Did you try simply updating the packages from the Trusty repos? The fix has already been released.

2.02~beta2-9ubuntu1.7

The fix is there & working...at least for me.

Changed in grub2 (Debian):
status: New → Fix Released
armaos (alexandros-k) wrote :

hi,
so more or less I have tried the solutions above, but still without luck.
@Lior Goikhburg (goikhburg): did you manage to solve it?

all ideas are more than welcome
thnx

Lior Goikhburg (goikhburg) wrote :

I ended up with the following workaround:

When setting up the server I configured the following:

0. RAID 10 on /dev/sda /dev/sdb /dev/sdc /dev/sdd
1. /boot, / and swap partitions are on the RAID but NOT IN AN LVM VOLUME
2. Rest of the RAID space - LVM partition

At the end of the install, when you get the error message:
Install grub manually on /dev/sda1 and /dev/sda2 (/dev/sda3 and /dev/sda4 will not let you, because they're striped); use the console to run:
# update-grub
# grub-install /dev/sda1
# grub-install /dev/sda2
Return to the setup and skip the installation of grub (you installed it manually)

Hope that helps.

Paul Tomblin (ptomblin) wrote :

I upgraded to Kubuntu 16.04 and it's still happening. When am I supposed to see this supposed fix?

Changed in grub2-signed (Ubuntu):
status: Confirmed → Fix Released
oglop (1oglop1) wrote :

Yeah, Xubuntu 17.04, still present...

I hope 17.10 will get it fixed

Anders Kaseorg (andersk) wrote :

This was fixed in 16.04, but if you had manually modified /etc/grub.d/00_header before the upgrade, the new version will not have been installed. You may have an unmodified version in /etc/grub.d/00_header.dpkg-new. If not, run ‘apt-get download grub-common; dpkg -x grub-common_2.02~beta3-4ubuntu6_amd64.deb grub-common-extracted’ and you’ll have the unmodified version in grub-common-extracted/etc/grub.d/00_header.

Changed in grub2 (Fedora):
importance: Unknown → Undecided
status: Unknown → Won't Fix
