Error: diskfilter writes are not supported
| Affects | Status | Importance | Assigned to | Milestone |
|---|---|---|---|---|
| grub | Unknown | Unknown | | |
| grub2 (Debian) | Fix Released | Unknown | | |
| grub2 (Fedora) | Unknown | Unknown | | |
| grub2 (Ubuntu) | | High | Unassigned | |
| grub2 (Ubuntu) Trusty | | High | dann frazier | |
| grub2 (Ubuntu) Vivid | | High | dann frazier | |
| grub2-signed (Ubuntu) | | Undecided | Unassigned | |
| grub2-signed (Ubuntu) Trusty | | Undecided | Unassigned | |
| grub2-signed (Ubuntu) Vivid | | Undecided | Unassigned | |
Bug Description
[Impact]
RAID and LVM users may run into a cryptic warning on boot from GRUB, because some variants of RAID and LVM are not supported for writing by GRUB itself. GRUB typically tries to write a tiny file (the GRUB environment block) to the boot partition for things like remembering the last selected boot entry.
[Test Case]
On an affected system (typically any RAID/LVM setup where the boot device is on RAID or on an LVM device), try to boot. Without the patch the message will appear; with the patch it will not.
[Regression Potential]
The potential for regression is minimal: the patch makes the menu-building scripts enforce the fact that diskfilter writes are unsupported by GRUB, so recordfail (the offending feature, which saves GRUB's state) is automatically not enabled when the boot partition is detected to be on a device that does not support diskfilter writes.
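For context, this is roughly what the generated /boot/grub/grub.cfg contains when recordfail is enabled; the save_env call is the write that fails on diskfilter (RAID/LVM) devices. Illustrative excerpt only, not the full file:
{{{
function recordfail {
  set recordfail=1
  # save_env writes to /boot/grub/grubenv, which GRUB cannot do on RAID/LVM
  if [ -n "${have_grubenv}" ]; then if [ -z "${boot_once}" ]; then save_env recordfail; fi; fi
}
menuentry 'Ubuntu' {
  recordfail
  ...
}
}}}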
----
Once GRUB chooses what to boot, an error shows up and sits on the screen for approximately 5 seconds:
"Error: diskfilter writes are not supported.
Press any key to continue..."
From what I understand, this error is related to RAID partitions, and I have two of them (md0, md1). Both partitions are used (root and swap). The RAID is assembled with mdadm and both arrays are RAID0.
This error message started appearing right after grub2 was updated on 01/27/2014.
System: Kernel: 3.13.0-5-generic x86_64 (64 bit) Desktop: KDE 4.11.5 Distro: Ubuntu 14.04 trusty
Drives: HDD Total Size: 1064.2GB (10.9% used)
1: id: /dev/sda model: SanDisk_SDSSDRC0 size: 32.0GB
2: id: /dev/sdb model: SanDisk_SDSSDRC0 size: 32.0GB
3: id: /dev/sdc model: ST31000528AS size: 1000.2GB
RAID: Device-1: /dev/md1 - active raid: 0 components: online: sdb2 sda3 (swap)
Device-2: /dev/md0 - active raid: 0 components: online: sdb1 sda1 ( / )
Grub2: grub-efi-amd64 version 2.02~beta2-5
ProblemType: Bug
DistroRelease: Ubuntu 14.04
Package: grub-efi-amd64 2.02~beta2-5
ProcVersionSign
Uname: Linux 3.13.0-5-generic x86_64
NonfreeKernelMo
ApportVersion: 2.13.2-0ubuntu2
Architecture: amd64
CurrentDesktop: KDE
Date: Wed Jan 29 17:37:59 2014
SourcePackage: grub2
UpgradeStatus: Upgraded to trusty on 2014-01-23 (6 days ago)
| Launchpad Janitor (janitor) wrote : | #3 |
Status changed to 'Confirmed' because the bug affects multiple users.
| Changed in grub2 (Ubuntu): | |
| status: | New → Confirmed |
| summary: | Error after boot menu → Error: diskfilter writes are not supported |
| Changed in grub2 (Ubuntu): | |
| importance: | Undecided → Low |
| importance: | Low → Medium |
| KrautOS (krautos) wrote : | #4 |
Got the same issue on an up-to-date "trusty" machine with / on mdadm RAID 1 and swap on mdadm RAID 0. Any clues how to fix it?
| Patrick Houle (buddlespit) wrote : | #5 |
I changed my RAID entries' <dump> and <pass> fields to '0 0' instead of '0 1' in /etc/fstab.
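For anyone unfamiliar with those fields, a minimal sketch of what such /etc/fstab lines look like (device names here are illustrative, not taken from the reporter's system); the last two columns are <dump> and <pass>:
{{{
# <file system>  <mount point>  <type>  <options>           <dump>  <pass>
/dev/md0         /              ext4    errors=remount-ro   0       0
/dev/md1         none           swap    sw                  0       0
}}}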
| Singtoh (singtoh) wrote : | #6 |
Hello all,
Just thought I would throw in as well. I started seeing this after a fresh install of Ubuntu Trusty today, at the first bootup and all boots thereafter. I am not running RAID, but I am using LVM. This is Ubuntu Trusty amd64. As a side note, on today's install I didn't give the system a /boot partition like I have seen in all the LVM tutorials; I just have two disks that I made LVM partitions on, i.e. /root, /home, /Storage, /Storage1 and swap. It runs really nicely but I get that nagging error at boot. Hope it gets a fix soon.
Cheers,
Singtoh
| Huygens (huygens-25) wrote : | #7 |
Here is another different kind of setup which triggers the problem:
/boot ext4 on a md RAID10 (4 HDD) partition
/ btrfs RAID10 (4 HDD) partition
swap on a md RAID0 (4HDD) partition
The boot and kernel are on an MD RAID (software RAID), whereas the rest of the system is using Btrfs RAID.
| Cybjit (cybjit) wrote : | #8 |
I also get this message.
/ is on ext4 LVM, and no separate /boot.
Booting works fine after the delay or key press.
| Singtoh (singtoh) wrote : | #9 |
Just to add this tidbit: I re-installed with normal partitioning (no RAID and no LVM), just /root, /home and swap, and it boots normally, with no 5-second wait and no errors. So I guess it is LVM and/or RAID related? I am just about to re-install again to a new SSD and will install to LVM again; I'll post back with the outcome.
Cheers,
Singtoh
| robled (robled) wrote : | #10 |
This is definitely RAID/LVM related. On my 14.04 system with an ext4 boot partition I don't get the error, but on another system that's fully LVM I do get the error.
Has anyone come up with a grub config workaround to prevent the delay on boot?
| Jean-Mi (forum-i) wrote : | #11 |
You guys should be happy your system still boots. I just got that error (diskfilter writes are not supported), but grub exits immediately after, leaving my UEFI firmware with no other choice than booting another distro.
I had to spam press the pause key on my keyboard to get the error message before it disappears.
On my setup, the boot error occurs with openSuse installed on LVM2. The other distro is installed with a regular /boot (ext4) separate partition. Both are using grub. I could load both by calling their respective grubx64.efi from the ESP partition.
The last thing I remember having done on openSuse was to create a btrfs partition and tweaked /etc/fstab a little bit.
From the other distro, I can read openSuse's files and everything looks fine. It's like the boot loader used to work and suddenly failed.
I'd love to remember what else I did since it worked. And I'd love to be able to boot openSuse again.
| Jean-Mi (forum-i) wrote : | #12 |
I may have found the reason for my particular crash. Now my system boots normally.
According to bug report #1006289 at Red Hat, the problem could come from insmod diskfilter, but someone deactivated that module and still got the error. I don't even have this module declared. But I noticed openSUSE loves to handle everything on reboot, like setting the next OS to load.
My /boot/grub/grubenv contains 2 lines: basically, save_entry=openSUSE and next_entry=LMDE Cinnamon.
I removed those lines and the error disappeared. /Maybe/ those lines instruct grub to write something to the boot partition, which it is perfectly unable to do since it cannot write to LVM.
Anyway, it seems that solving this bug requires finding out why grub tries to write data.
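Instead of editing /boot/grub/grubenv by hand, those entries can be inspected and removed with grub-editenv (a sketch; the variable names are the ones quoted above):
{{{
sudo grub-editenv /boot/grub/grubenv list          # show current contents
sudo grub-editenv /boot/grub/grubenv unset save_entry
sudo grub-editenv /boot/grub/grubenv unset next_entry
}}}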
| hamish (hamish-b) wrote : | #13 |
Hi, I get the same error with 14.04 beta1 booting into RAID1.
For those running their swap partitions in a RAID, I'm wondering if it would be better to just mount the multiple swap partitions in fstab and give them all equal priority. For soft RAID it would cut out a layer of overhead, and for swap, anything that cuts out overhead is a good thing (e.g., mount options in fstab for all swap partitions: sw,pri=10).
See `man 2 swapon` for details.
"If two or more areas
have the same priority, and it is the highest priority available, pages
are allocated on a round-robin basis between them."
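A sketch of what that would look like in /etc/fstab, using the sw,pri=10 options mentioned above (partition names are placeholders):
{{{
# two plain swap partitions with equal priority; the kernel allocates between them round-robin
/dev/sda3  none  swap  sw,pri=10  0  0
/dev/sdb2  none  swap  sw,pri=10  0  0
}}}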
| Phillip Susi (psusi) wrote : | #14 |
That would defeat the purpose of RAID1, which is to keep the system up and running when a drive fails. With two separate swaps, if the disk fails you're probably going to have half of user space die and end up with a badly broken server that needs to be rebooted.
| Artyom Nosov (artyom.nosov) wrote : | #15 |
Got the same issue on the daily build of trusty (20140319). /, /home and swap are all on RAID1.
| Денис Тельнов (denis-telnov) wrote : | #16 |
I have this issue on Ubuntu Trusty 14.04 with / on LVM. Deleting /boot/grub/grubenv prevents the error on the next boot, but grub creates the file again on every boot, so I have rm /boot/grub/grubenv in my crontab.
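The exact crontab line isn't shown; one way to express that workaround in root's crontab would be the following (a sketch only, and it masks the symptom rather than fixing the underlying bug):
{{{
# crontab -e (as root): delete the file GRUB keeps recreating, once per boot
@reboot rm -f /boot/grub/grubenv
}}}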
| stoffel010170 (stoffel-010170) wrote : | #17 |
Have the same bug on my LVM system, too. My systems without LVM and RAID are not affected.
| Thomas (t.c) wrote : | #18 |
I have the bug too; I use the root filesystem (/) on a software RAID 1.
| Thomas (t.c) wrote : | #19 |
# GRUB Environment Block
#######
That's the content of /boot/grub/grubenv - is it right?
| VladimirCZ (vlabla) wrote : | #20 |
I also get this message.
/ and swap volumes are on ext4 LVM, and no separate /boot.
Booting works fine after the delay or key press.
| Ubuntu QA Website (ubuntuqa) wrote : | #21 |
This bug has been reported on the Ubuntu ISO testing tracker.
A list of all reports related to this bug can be found here:
http://
| tags: | added: iso-testing |
| Moritz Baumann (mo42) wrote : | #22 |
The problem is the call to the recordfail function in each menuentry. If I comment it out in /boot/grub/grub.cfg, the error disappears.
I'm also affected by this bug. So I can confirm it's still there on a fresh install of 14.04
I'm using RAID0 for /
| Moritz Baumann (mo42) wrote : | #24 |
As a temporary fix, you can edit /etc/grub.d/00_header and set quick_boot="0".
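Concretely, that temporary workaround looks like this (assuming the file meant here is /etc/grub.d/00_header, which is where quick_boot is set on Ubuntu):
{{{
# in /etc/grub.d/00_header, change
#   quick_boot="1"
# to
#   quick_boot="0"
# then regenerate the configuration:
sudo update-grub
}}}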
| Jan Rathmann (kaiserclaudius) wrote : Re: [Bug 1274320] Re: Error: diskfilter writes are not supported | #25 |
I can confirm that the workaround mentioned by Moritz (setting quick_boot="0") works for me.
| Drew Michel (drew-michel) wrote : | #26 |
I can also confirm this bug is happening with the latest beta version of Trusty with /boot living on an ext4 LVM partition.
* setting quick_boot="0" in /etc/grub.d/00_header fixes the issue
* setting GRUB_SAVEDEFAULT
* removing recordfail from /boot/grub/grub.cfg fixes the issue
3.13.0-23-generic #45-Ubuntu
Distributor ID: Ubuntu
Description: Ubuntu Trusty Tahr (development branch)
Release: 14.04
apt-cache policy grub-pc
grub-pc:
Installed: 2.02~beta2-9
| Gus (gus-lgze) wrote : | #27 |
Just to confirm, this is in the release version of 14.04. I've got it on a fresh build with raid 1 via mdadm, no swap.
It does not halt booting, just a brief delay.
| Quesar (rick-microway) wrote : | #28 |
I just made a permanent clean fix for this, at least for MD (software RAID). It can easily be modified to fix it for LVM too. Edit /etc/grub.d/00_header and change the recordfail section as follows:
if [ "$quick_boot" = 1 ]; then
cat <<EOF
function recordfail {
set recordfail=1
EOF
FS=
GRUBMDDEVICE=
if [ $? -eq 0 ] ; then
cat <<EOF
# GRUB lacks write support for $GRUBMDDEVICE, so recordfail support is disabled.
EOF
else
case "$FS" in
btrfs | cpiofs | newc | odc | romfs | squash4 | tarfs | zfs)
cat <<EOF
# GRUB lacks write support for $FS, so recordfail support is disabled.
EOF
;;
*)
cat <<EOF
if [ -n "\${have_grubenv}" ]; then if [ -z "\${boot_once}" ]; then save_env recordfail; fi; fi
EOF
esac
fi
cat <<EOF
}
EOF
fi
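The right-hand sides of the FS= and GRUBMDDEVICE= assignments are not visible above. A plausible reconstruction, assuming grub-probe is used to identify the filesystem and the underlying disk of the boot directory (the original comment may have used different commands):
{{{
# filesystem holding /boot (e.g. ext4, btrfs, ...)
FS="$(grub-probe --target=fs /boot 2>/dev/null)"
# underlying disk; grep succeeds (exit 0) only when it is an MD device
GRUBMDDEVICE="$(grub-probe --target=disk /boot 2>/dev/null | grep /dev/md)"
}}}
With that reconstruction, the following "if [ $? -eq 0 ]" test disables recordfail exactly when /boot sits on an MD device.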
| robled (robled) wrote : | #29 |
The work-around from #24 gets rid of the error for me. I timed my boot process after the change and didn't notice any appreciable difference in boot time with the work-around in place. This testing was performed using a recent laptop with an SSD.
| Zeth (adair-boder) wrote : | #30 |
I got this error message too - with a fresh install of 14.04 Server Official Release.
I also have 2 RAID-1 setups.
I have recently installed Xubuntu 14.04 (Official Release) on two computers. On one of them I did not use RAID and allowed automatic disk partitioning; no boot error has been observed. For the second computer I used the Minimal CD, installed two RAID0 devices (one for swap and one for /) and Xubuntu; on this computer the error message appeared every time I booted. The workaround suggested by Moritz Baumann (#24) eliminated the error message.
| bolted (k-minnick) wrote : | #33 |
I followed comment #28 from Quesar (rick-microway) above with lubuntu 14.04 running on a 1U supermicro server. Rebooted multiple times to test, and I am no longer getting this error message. A huge thank you to Quesar for a fix!
| Vadim Nevorotin (malamut) wrote : | #34 |
Fix from #28, extended to support LVM (so, I think, it is a universal clean fix for this bug). Change the recordfail section in /etc/grub.d/00_header to:
if [ "$quick_boot" = 1 ]; then
cat <<EOF
function recordfail {
set recordfail=1
EOF
GRUBMDDEVICE=
GRUBLVMDEVICE=
if echo "$GRUBMDDEVICE" | grep "/dev/md" > /dev/null; then
cat <<EOF
# GRUB lacks write support for $GRUBMDDEVICE, so recordfail support is disabled.
EOF
elif echo "$GRUBLVMDEVICE" | grep "/dev/mapper" > /dev/null; then
cat <<EOF
# GRUB lacks write support for $GRUBLVMDEVICE, so recordfail support is disabled.
EOF
else
case "$FS" in
btrfs | cpiofs | newc | odc | romfs | squash4 | tarfs | zfs)
cat <<EOF
# GRUB lacks write support for $FS, so recordfail support is disabled.
EOF
;;
*)
cat <<EOF
if [ -n "\${have_grubenv}" ]; then if [ -z "\${boot_once}" ]; then save_env recordfail; fi; fi
EOF
esac
fi
cat <<EOF
}
EOF
fi
Then run update-grub.
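After running update-grub, one quick way to check that the change took effect (a sketch; the exact output depends on your configuration):
{{{
sudo update-grub
grep -n recordfail /boot/grub/grub.cfg
# with the workaround in place, the recordfail function should no longer contain
# a "save_env recordfail" line on RAID/LVM systems
}}}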
| Andrew Hamilton (ahamilton9) wrote : | #35 |
Just confirming that the above (RAID & LVM version) fix is working for a RAID10, 14.04, x64 fresh install. I don't have LVM set up though, so I cannot confirm that detail.
| Tato Salcedo (tatosalcedo) wrote : | #36 |
I have no RAID, I use LVM, and I get the same error.
| Aaron Hastings (thecosmicfrog) wrote : | #37 |
Just installed 14.04 and seeing the same error on boot.
I don't have any RAID setup, but I am using LVM ext4 volumes for /, /home and swap. My /boot is on a separate ext4 primary partition in an msdos partition table.
| Agustín Ure (aeu79) wrote : | #38 |
Confirming that the fix in comment #34 solved the problem in a fresh install of 14.04 with LVM.
| Uqbar (uqbar) wrote : | #39 |
I would like to apply the fix from comment#34 as I am using software RAID6 and LVM at the same time.
Unluckily I am not so good at changing that recordfail section in /etc/grub.d/00_header.
Would it be possible to attach here the complete fixed /etc/grub.d/00_header?
Would it be possible to have this as an official "fix released"?
| David Twersky (dmtwersky) wrote : | #40 |
Confirming comment#34 fixed it for me as well.
I'm using LVM on all partitions.
| tags: | added: patch |
| Changed in grub2 (Ubuntu): | |
| status: | Confirmed → Triaged |
| Changed in grub: | |
| importance: | Undecided → Unknown |
| status: | New → Unknown |
| importance: | Unknown → Undecided |
| status: | Unknown → New |
| tags: | added: utopic |
| Changed in grub2 (Ubuntu): | |
| importance: | Medium → High |
| Changed in grub: | |
| status: | New → Invalid |
| Changed in grub2 (Debian): | |
| status: | Unknown → New |
| Changed in mdadm (Ubuntu): | |
| assignee: | nobody → Dimitri John Ledkov (xnox) |
| Changed in mdadm (Ubuntu): | |
| status: | New → Confirmed |
| Changed in mdadm (Ubuntu): | |
| importance: | Undecided → High |
| status: | Confirmed → Triaged |
| Changed in grub: | |
| importance: | Undecided → Unknown |
| status: | Invalid → Unknown |
| Changed in grub2 (Ubuntu): | |
| assignee: | nobody → Colin Watson (cjwatson) |
| Changed in mdadm (Ubuntu): | |
| status: | Triaged → Invalid |
| Changed in grub2 (Ubuntu): | |
| assignee: | Colin Watson (cjwatson) → nobody |
| Changed in grub2 (Ubuntu): | |
| assignee: | nobody → Mathieu Trudel-Lapierre (mathieu-tl) |
Anders, thanks. I'm reviewing the patch and I'll apply it to grub in Debian.
Is there anyone here *NOT* using LVM or RAID on a system which is showing this error message?
| Changed in grub2 (Ubuntu): | |
| status: | Triaged → Incomplete |
| Loïc Minier (lool) wrote : | #113 |
Keeping this as New so as not to expire it -- potentially no one has this issue without LVM/MD, so Mathieu's question might not get an answer, but we still want to fix this.
| Changed in grub2 (Ubuntu): | |
| status: | Incomplete → New |
| Launchpad Janitor (janitor) wrote : | #114 |
Status changed to 'Confirmed' because the bug affects multiple users.
| Changed in grub2 (Ubuntu): | |
| status: | New → Confirmed |
| Seb Bonnard (sebmansfeld) wrote : | #115 |
Hi, this bug also affects me because I'm using LVM.
I want to thank Anders for his patch (see comment #70).
I pasted his patch in a file I called 00_header_
$ sed -i "s/00_header.
$ cd /etc/ && sudo patch -p2 < ~/00_header_
$ sudo update-grub
Hope this helps.
Seb.
| fermulator (fermulator) wrote : | #116 |
I just tried applying the patch manually and re-ran grub-install; no go.
I also tried pulling in the patch via Forest (foresto) PPA: https:/
fermulator@
Installing for i386-pc platform.
grub-install: error: diskfilter writes are not supported.
(full output w/ -v is here: http://
fermulator@
Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
md0 : active raid1 sdb1[0] sda1[1]
156158720 blocks super 1.2 [2/2] [UU]
fermulator@
Linux fermmy-basement 3.16.0-30-generic #40~14.04.1-Ubuntu SMP Thu Jan 15 17:43:14 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux
fermulator@
Ubuntu 14.04.2 LTS \n \l
fermulator@
ii grub2 2.02~beta2-
@fermulator, that's on purpose, we don't have write support on /dev/md (diskfilter) devices. You might want to use /dev/sda1 instead (as per mdstat), the changes will get synced on the other drive.
I'm applying the changes for Ubuntu now, changes for Debian are in Debian git.
| Changed in grub2 (Ubuntu): | |
| status: | Confirmed → In Progress |
| Launchpad Janitor (janitor) wrote : | #118 |
This bug was fixed in the package grub2 - 2.02~beta2-
---------------
grub2 (2.02~beta2-
* debian/
variable OsIndicationsSu
compared as hex values in 30_uefi-
* Update quick boot logic to handle abstractions for which there is no
write support. (LP: #1274320)
-- Mathieu Trudel-Lapierre <email address hidden> Mon, 06 Jul 2015 16:32:11 -0400
| Changed in grub2 (Ubuntu): | |
| status: | In Progress → Fix Released |
| Phillip Susi (psusi) wrote : | #119 |
On 7/6/2015 4:43 PM, Mathieu Trudel-Lapierre wrote:
> @fermulator, that's on purpose, we don't have write support on /dev/md
> (diskfilter) devices. You might want to use /dev/sda1 instead (as per
> mdstat), the changes will get synced on the other drive.
No, no, no... you NEVER write directly to a disk that is a component of
a raid array, and if you do, it will NOT be synced to the other drive,
since md has no idea you did such a thing.
Hum, of course, you're right. Things won't get synced.
That said, you *do* need to write directly to each disk of the RAID array to install grub on them given that grub doesn't have support for the overlaying device representation.
| Seb Bonnard (sebmansfeld) wrote : | #121 |
Hi,
Oops!
I forgot to add to my comment #115:
sudo chmod +x /etc/grub.
BEFORE the "update-grub" command.
Sebastien.
| Shahar Or (mightyiam) wrote : | #122 |
Not seeing this on startup feels so good. But it was around for so long I almost miss it.
Who's to blame for the fix?
Thanks a lot.
| Rarylson Freitas (rarylson) wrote : | #123 |
One question:
The solution made by "Mathieu Trudel-Lapierre <email address hidden> " is marked as Fix Released.
However, I can't update my grub package to the released one. The new version is 2.02~beta2-
I've tried to get it from the trusty-proposed repo, without success (https:/
What should I do now? Should I only wait for the fix being at the trusty-main repo?
| Simon Déziel (sdeziel) wrote : | #124 |
On 07/17/2015 11:16 AM, Rarylson Freitas wrote:
> One question:
>
> The solution made by "Mathieu Trudel-Lapierre <email address hidden> "
> is marked as Fix Released.
>
> However, I can't update my grub package to the released one. The new
> version is 2.02~beta2-
>
> I've tried to get it from the trusty-proposed repo, without success
> (https:/
>
> What should I do now? Should I only wait for the fix being at the
> trusty-main repo?
>
The version 2.02~beta2- mentioned above did not go to Trusty; you will have to
wait for a Trusty-specific version to hit trusty-proposed to be able to
test it.
Indeed. To get this in trusty (or other releases), please see http://
| Charis (tao-qqmail) wrote : | #126 |
Where is the solution?
| Changed in grub2 (Ubuntu): | |
| assignee: | Mathieu Trudel-Lapierre (mathieu-tl) → nobody |
| Changed in grub2 (Ubuntu Trusty): | |
| status: | New → In Progress |
| Changed in grub2 (Ubuntu Vivid): | |
| status: | New → In Progress |
| importance: | Undecided → High |
| Changed in grub2 (Ubuntu Trusty): | |
| assignee: | nobody → Mathieu Trudel-Lapierre (mathieu-tl) |
| importance: | Undecided → High |
| Changed in grub2 (Ubuntu Vivid): | |
| assignee: | nobody → Mathieu Trudel-Lapierre (mathieu-tl) |
| Changed in mdadm (Ubuntu): | |
| assignee: | Dimitri John Ledkov (xnox) → nobody |
| fermulator (fermulator) wrote : | #127 |
So, as per my comment of 2015-07-04 ("fermulator (fermulator) wrote on 2015-07-04:"), there appears to be some confusion in the follow-up comments.
What /is/ the correct way to re-install grub to mdadm member drives?
(assuming mdadm has member disks with proper RAID partitions)
{{{
fermulator@
md60 : active raid1 sdi2[1] sdj2[0]
58560384 blocks super 1.2 [2/2] [UU]
}}}
grub-install /dev/sdX (and/or /dev/sdY),
or
grub-install /dev/sdX# (and/or /dev/sdY#)?
| Ted Cabeen (ted-cabeen) wrote : | #128 |
fermulator, if Linux is the only operating system on this computer, you want to install the grub bootloader on the drives, not the partitions, so /dev/sdX, /dev/sdY, etc.
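In other words, something like the following, with sdX/sdY replaced by the component disks listed in /proc/mdstat (a sketch, not a literal transcript from this bug):
{{{
cat /proc/mdstat                 # identify the member disks of the array
sudo grub-install /dev/sdX       # install to the MBR of the first member drive
sudo grub-install /dev/sdY       # ...and the second, so either disk can boot
sudo update-grub
}}}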
| Launchpad Janitor (janitor) wrote : | #129 |
Status changed to 'Confirmed' because the bug affects multiple users.
| Changed in mdadm (Ubuntu Trusty): | |
| status: | New → Confirmed |
| Changed in mdadm (Ubuntu Vivid): | |
| status: | New → Confirmed |
| no longer affects: | mdadm (Ubuntu) |
| no longer affects: | mdadm (Ubuntu Trusty) |
| no longer affects: | mdadm (Ubuntu Vivid) |
| Michiel Bruijn (michiel-o) wrote : | #131 |
This bug is still present and not fixed for me and several other people (for example http://
I did a clean install of kodibuntu (lubuntu 14.04) and had this error.
I use LVM and installed the OS on an SSD in AHCI mode.
It's annoying, but the system continues after a few seconds.
I would like to have this problem fixed because my monitor resumes slowly after suspend, and I would like to rule out this problem as being related.
| Tom Reynolds (tomreyn) wrote : | #132 |
mathieu-tl:
Thanks for your work on this issue.
Since you nominated it for trusty and state it's in progress - is there a way to follow this progress?
Are there any test builds you would like to be tested, yet?
In case it's not been sufficiently stated before, this issue does affect 14.04 LTS x86_64.
It would be great to see an SRU, since this slows the boot process and may trick users into thinking their Ubuntu installation is broken when it is not (doing as the message suggests will just reboot the system).
Anyone is welcome to copy + paste this text into the first post if that should help with the SRU.
| description: | updated |
| Changed in grub2 (Ubuntu Vivid): | |
| assignee: | Mathieu Trudel-Lapierre (mathieu-tl) → dann frazier (dannf) |
| Changed in grub2 (Ubuntu Trusty): | |
| assignee: | Mathieu Trudel-Lapierre (mathieu-tl) → dann frazier (dannf) |
Hello Patrick, or anyone else affected,
Accepted grub2 into trusty-proposed. The package will build now and be available at https:/
Please help us by testing this new package. See https:/
If this package fixes the bug for you, please add a comment to this bug, mentioning the version of the package you tested, and change the tag from verification-needed to verification-done. If it does not fix the bug for you, please add a comment stating that, and change the tag to verification-failed.
Further information regarding the verification process can be found at https:/
| Changed in grub2 (Ubuntu Trusty): | |
| status: | In Progress → Fix Committed |
| tags: | added: verification-needed |
| Chris J Arges (arges) wrote : | #134 |
Hello Patrick, or anyone else affected,
Accepted grub2 into vivid-proposed. The package will build now and be available at https:/
Please help us by testing this new package. See https:/
If this package fixes the bug for you, please add a comment to this bug, mentioning the version of the package you tested, and change the tag from verification-needed to verification-done. If it does not fix the bug for you, please add a comment stating that, and change the tag to verification-failed.
Further information regarding the verification process can be found at https:/
| Changed in grub2 (Ubuntu Vivid): | |
| status: | In Progress → Fix Committed |
| Chris J Arges (arges) wrote : | #135 |
Hello Patrick, or anyone else affected,
Accepted grub2-signed into trusty-proposed. The package will build now and be available at https:/
Please help us by testing this new package. See https:/
If this package fixes the bug for you, please add a comment to this bug, mentioning the version of the package you tested, and change the tag from verification-needed to verification-done. If it does not fix the bug for you, please add a comment stating that, and change the tag to verification-failed.
Further information regarding the verification process can be found at https:/
| Changed in grub2-signed (Ubuntu Trusty): | |
| status: | New → Fix Committed |
| Changed in grub2-signed (Ubuntu Vivid): | |
| status: | New → Fix Committed |
| Chris J Arges (arges) wrote : | #136 |
Hello Patrick, or anyone else affected,
Accepted grub2-signed into vivid-proposed. The package will build now and be available at https:/
Please help us by testing this new package. See https:/
If this package fixes the bug for you, please add a comment to this bug, mentioning the version of the package you tested, and change the tag from verification-needed to verification-done. If it does not fix the bug for you, please add a comment stating that, and change the tag to verification-failed.
Further information regarding the verification process can be found at https:/
| tags: |
added: verification-done-trusty verification-needed-vivid removed: verification-needed |
| Anton Eliasson (eliasson) wrote : | #137 |
Packages from vivid-proposed fixed the issue for me.
Details:
Start-Date: 2015-12-18 12:14:56
Commandline: apt-get install grub-common/
Upgrade: grub-efi-
End-Date: 2015-12-18 12:15:19
| tags: |
added: verification-done-vivid removed: verification-needed-vivid |
| YitzchokL (yitzchok+launchpad) wrote : | #138 |
After installing 2.02~beta2-
(This was perfect timing for me! I only just dealt with the upgrade from Grub legacy today and was disappointed to see an error message, which is now gone)
Thanks
| Launchpad Janitor (janitor) wrote : | #139 |
Status changed to 'Confirmed' because the bug affects multiple users.
| Changed in grub2-signed (Ubuntu): | |
| status: | New → Confirmed |
| Id2ndR (id2ndr) wrote : | #140 |
After installing 2.02~beta2-
So enable proposed repository, and then:
sudo apt-get install grub-efi-
sudo chmod +x /etc/grub.
sudo update-grub2
| Rich Hart (sirwizkid) wrote : | #141 |
The 1.7 package is working flawlessly on my systems that were affected.
Thanks for fixing this.
| fermulator (fermulator) wrote : | #142 |
Based upon the comments above, and the TEST CASE defined in the main section for this bug, I confirm that verification=done
###
--> PASS
###
I tested on my own system running
{{{
$ mount | grep md60
/dev/md60 on / type ext4 (rw,errors=
$ cat /proc/mdstat | grep -A1 md60
md60 : active raid1 sdd2[0] sdb2[1]
58560384 blocks super 1.2 [2/2] [UU]
fermulator@
ii grub-common 2.02~beta2-
ii grub-gfxpayload
ii grub-pc 2.02~beta2-
ii grub-pc-bin 2.02~beta2-
ii grub2-common 2.02~beta2-
}}}
Full results:
http://
---
NOTE: I'm not sure what to do about the "grub2-signed" properties for this bug...
| information type: | Public → Public Security |
| information type: | Public Security → Public |
| Launchpad Janitor (janitor) wrote : | #143 |
This bug was fixed in the package grub2 - 2.02~beta2-
---------------
grub2 (2.02~beta2-
* Cherry-picks to better handle TFTP timeouts on some arches: (LP: #1521612)
- (7b386b7) efidisk: move device path helpers in core for efinet
- (c52ae40) efinet: skip virtual IP devices when enumerating cards
- (f348aee) efinet: enable hardware filters when opening interface
* Update quick boot logic to handle abstractions for which there is no
write support. (LP: #1274320)
-- dann frazier <email address hidden> Wed, 16 Dec 2015 14:03:48 -0700
| Changed in grub2 (Ubuntu Trusty): | |
| status: | Fix Committed → Fix Released |
The verification of the Stable Release Update for grub2 has completed successfully and the package has now been released to -updates. Subsequently, the Ubuntu Stable Release Updates Team is being unsubscribed and will not receive messages about this bug report. In the event that you encounter a regression using the package from -updates please report a new bug using ubuntu-bug and tag the bug report regression-update so we can easily find any regressions.
| Launchpad Janitor (janitor) wrote : | #145 |
| Changed in grub2-signed (Ubuntu Trusty): | |
| status: | Fix Committed → Fix Released |
| Launchpad Janitor (janitor) wrote : | #146 |
This bug was fixed in the package grub2 - 2.02~beta2-
---------------
grub2 (2.02~beta2-
* Merge in changes from 2.02~beta2-
- d/p/arm64-
booting arm64 kernels on certain UEFI implementations. (LP: #1476882)
- progress: avoid NULL dereference for net files. (LP: #1459872)
- arm64/setjmp: Add missing license macro. (LP: #1459871)
- Cherry-pick patch to add SAS disks to the device list from the ofdisk
module. (LP: #1517586)
- Cherry-pick patch to open Simple Network Protocol exclusively.
(LP: #1508893)
* Cherry-picks to better handle TFTP timeouts on some arches: (LP: #1521612)
- (7b386b7) efidisk: move device path helpers in core for efinet
- (c52ae40) efinet: skip virtual IP devices when enumerating cards
- (f348aee) efinet: enable hardware filters when opening interface
* Update quick boot logic to handle abstractions for which there is no
write support. (LP: #1274320)
-- dann frazier <email address hidden> Wed, 16 Dec 2015 13:31:15 -0700
| Changed in grub2 (Ubuntu Vivid): | |
| status: | Fix Committed → Fix Released |
| Launchpad Janitor (janitor) wrote : | #147 |
This bug was fixed in the package grub2-signed - 1.46.5
---------------
grub2-signed (1.46.5) vivid; urgency=medium
* Rebuild against grub2 2.02~beta2-
LP: 1459871, LP: #1517586, LP:#1508893, LP: #1521612, LP: #1274320).
-- dann frazier <email address hidden> Wed, 16 Dec 2015 14:18:28 -0700
| Changed in grub2-signed (Ubuntu Vivid): | |
| status: | Fix Committed → Fix Released |
| Lior Goikhburg (goikhburg) wrote : | #148 |
Problems installing the latest 14.04.3.
I have tried every solution mentioned in this thread, with no luck.
Grub would not install...
HP server with 4 SATA disks, RAID 10 (md0) with /boot and / on it, No LVM
installing with:
update-grub - works fine
grub-install /dev/md0 - fails
Went up to grub version 2.02~beta2-
Any ideas, anyone ?
| wiley.coyote (tjwiley) wrote : | #149 |
Did you try simply updating the packages from the Trusty repos? The fix has already been released.
2.02~beta2-
The fix is there & working...at least for me.
| Changed in grub2 (Debian): | |
| status: | New → Fix Released |
| armaos (alexandros-k) wrote : | #150 |
hi,
So, more or less, I have tried the solutions above, but still without luck.
@Lior Goikhburg (goikhburg): did you manage to solve it?
all ideas are more than welcome
thnx
| Lior Goikhburg (goikhburg) wrote : | #151 |
I ended up with the following workaround:
When setting up the server i configured the following:
0. RAID 10 on /dev/sda /dev/sdb /dev/sdc /dev/sdd
1. /boot / and swap partition are on RAID but NOT IN LVM VOLUME
2. Rest of the RAID space - LVM partition
At the end of the install, when you get the error message:
Install grub manually on /dev/sda1 and /dev/sda2 (/dev/sda3 and /dev/sda4 will not let you, because they're striped); use the console to run:
# update-grub
# grub-install /dev/sda1
# grub-install /dev/sda2
Return to setup and skip installation of grub (you installed it manually).
Hope that helps.
I should also point out that the system will boot normally once any key is pressed or 5 seconds elapses.