grub boot error: "symbol 'grub_calloc' not found"

Bug #1889509 reported by Jeff Turner
This bug affects 45 people
Affects         Status     Importance  Assigned to  Milestone
grub2 (Debian)  Unknown    Unknown
grub2 (Ubuntu)  Confirmed  High        Unassigned

Bug Description

After updating grub2 (to 2.02~beta2-36ubuntu3.26) and rebooting, my server does not boot:

Booting from Hard Disk 0...
error: symbol `grub_calloc' not found.
Entering rescue mode...
grub rescue> _

I rebooted 3 servers in this way (all running Ubuntu 16.04.6 LTS) and all hung.

A lot of other people are reporting the same problem at:

https://askubuntu.com/questions/1263125/how-to-fix-a-grub-boot-error-symbol-grub-calloc-not-found.

---

The above most likely means that debconf no longer knows the correct drives to install grub onto.

Please boot & mount all the target disks and execute

$ sudo dpkg-reconfigure grub-pc

You will be asked which drives to install grub onto, and then grub will be installed onto them, and more importantly, it will remember where to install grub to, on all future upgrades.
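Before rebooting, it may be worth checking whether the recorded devices still exist. A minimal sketch of such a check (the helper name and the debconf-show parsing are my own assumptions, not part of the official procedure):

```shell
#!/bin/bash
# Flag any device recorded for grub-pc that no longer exists on this
# machine; a stale entry is exactly what dpkg-reconfigure grub-pc fixes.
check_grub_devices() {
    # $1: comma-separated device list, as stored in grub-pc/install_devices
    local dev stale=0
    for dev in $(printf '%s' "$1" | tr ',' ' '); do
        if [ -e "$dev" ]; then
            echo "ok: $dev"
        else
            echo "stale: $dev"
            stale=1
        fi
    done
    return "$stale"
}

# On a real system (as root), something like:
#   check_grub_devices "$(debconf-show grub-pc 2>/dev/null \
#       | sed -n 's/.*install_devices: //p')"
```

If any device comes back "stale", run the dpkg-reconfigure step above before the next reboot.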

Revision history for this message
Jeff Turner (jefft7610) wrote :
Revision history for this message
Launchpad Janitor (janitor) wrote :

Status changed to 'Confirmed' because the bug affects multiple users.

Changed in grub2 (Ubuntu):
status: New → Confirmed
Revision history for this message
Marin Nedea (marin-n) wrote :

Same bug on Ubuntu 20.04 pro in Azure.
Actions before issue happened:

2020-07-30 00:11:20 status installed grub-common:amd64 2.04-1ubuntu26.1
2020-07-30 00:11:20 status installed grub-efi-amd64-bin:amd64 2.04-1ubuntu26.1
2020-07-30 00:11:20 status installed grub2-common:amd64 2.04-1ubuntu26.1
2020-07-30 00:11:20 status installed grub-pc-bin:amd64 2.04-1ubuntu26.1
2020-07-30 00:11:29 status installed grub-pc:amd64 2.04-1ubuntu26.1
2020-07-30 00:11:32 status installed grub-efi-amd64-signed:amd64 1.142.3+2.04-1ubuntu26.1
2020-07-30 00:11:32 status installed man-db:amd64 2.9.1-1
2020-07-30 00:11:33 status installed install-info:amd64 6.7.0.dfsg.2-5
2020-07-30 00:11:33 status installed systemd:amd64 245.4-4ubuntu3.2

Rebooted machine post update.
Hope this helps narrow down the issue.

Revision history for this message
maverick000 (andrzej-borawski) wrote :

Same on Mint Ulyana. After installing the update, boot dropped to a grub shell.
Had to restore a backup and ignore this update.

description: updated
Revision history for this message
Marin Nedea (marin-n) wrote :

Ubuntu 18.04-LTS also affected, same problem.

Revision history for this message
Marin Nedea (marin-n) wrote :

I have reproduced this on Debian 10.

2020-07-30 06:29:50 status installed grub-common:amd64 2.02+dfsg1-20+deb10u1
2020-07-30 06:29:50 status installed grub-efi-amd64-bin:amd64 2.02+dfsg1-20+deb10u1
2020-07-30 06:29:50 status installed grub2-common:amd64 2.02+dfsg1-20+deb10u1
2020-07-30 06:29:50 status installed grub-pc-bin:amd64 2.02+dfsg1-20+deb10u1

The above packages were installed before the issue occurrence.

Revision history for this message
David Andruczyk (dandruczyk) wrote :

I am getting the same problem now when imaging machines with MAAS; none of them boot properly. The machines being imaged are Bionic, using mdadm RAID1 OS mirrors.

All result in:

error: symbol `grub_calloc' not found.
Entering rescue mode...
grub rescue>

This has us dead in the water.

Revision history for this message
David Andruczyk (dandruczyk) wrote :

Note: for our MAAS installs (Bionic image, built in house from an upstream Ubuntu cloudimg customized to our specific needs), the ephemeral image used to image the machines is:
  Thu, 30 Jul 2020 11:09:10  HTTP Request - /images/ubuntu/amd64/ga-20.04/focal/daily/boot-initrd
  Thu, 30 Jul 2020 11:09:10  HTTP Request - /images/ubuntu/amd64/ga-20.04/focal/daily/boot-kernel

Revision history for this message
Jeff Turner (jefft7610) wrote :

Has anyone on an EFI system experienced this bug? I'm only seeing BIOS-based systems affected.

My servers were AWS EC2 instances where the 'grub rescue>' state is only a screenshot. To recover you need to reattach the broken volume to a rescue instance and 'grub-install' to the volume there. Write-up of the AWS process at:

https://www.redradishtech.com/display/~jturner/2020/07/30/symbol+%27grub_calloc%27+not+found+--+how+to+fix+on+AWS

Once recovered, I checked what my debconf settings were for grub. Debconf had grub installing to one partition:

grub-pc grub-pc/install_devices_disks_changed multiselect /dev/disk/by-id/nvme-Amazon_Elastic_Block_Store_vol01678967bebb56f55-part1

However I know from trying manually that this would have failed with:

grub-install: warning: File system `ext2' doesn't support embedding.
grub-install: warning: Embedding is not possible. GRUB can only be installed in this setup by using blocklists. However, blocklists are UNRELIABLE and their use is discouraged..
grub-install: error: will not proceed with blocklists.

I'm guessing I just didn't spot the error during the upgrade, and so was left with an unbootable server.

Another observation: grub-pc is a rarely upgraded package. Prior to today's update for BootHole, my dpkg.log suggests the last update was July 2019. So perhaps we're all discovering problems that were latent in the Debian/Ubuntu grub update process, and the problem is not BootHole-specific?

Revision history for this message
Julian Andres Klode (juliank) wrote :

I'd say you are discovering bugs in your system configuration, not the update process.

Revision history for this message
Maarten Vink (mavink) wrote :

@juliank This happens with the _default_ AWS Ubuntu images, so I highly doubt it's a bug in anyone's configuration (for example: ami-05ed2c1359acd8af6 (Ubuntu 16.04) or ami-0d359437d1756caa8 (Ubuntu 18.04) in AWS region eu-central-1/Frankfurt). Steps to reproduce:

1. Launch new EC2 instances using either of these images
2. sudo apt-get update && sudo DEBIAN_FRONTEND=noninteractive apt-get -yq upgrade
3. Reboot and find yourself stuck at the grub rescue prompt.

Revision history for this message
daveadams (daveadams) wrote :

I have to agree with Maarten. The pattern we ran into it with was using an Ubuntu base AMI on AWS. It boots fine. If you upgrade interactively, you get a prompt that can be answered correctly. If you use `cloud-init` and specify `package_upgrade: true` then the instance will come up correctly on that boot, but when you restart it, it will be unable to boot. Seems like that's an issue in the package. Likewise, unattended-upgrades will upgrade this package into a broken state.

Revision history for this message
Maarten Vink (mavink) wrote :

Hmm, actually the default configuration on these images does appear to contain an error. From the debconf database:

> Name: grub-pc/install_devices
> Template: grub-pc/install_devices
> Value: /dev/sda
> Owners: grub-pc
> Flags: seen

The correct device on newer EC2 instances would be /dev/nvme0n1. Jeff Turner might be onto something here.
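A mismatch like this (debconf pointing at /dev/sda while the instance actually boots from /dev/nvme0n1) can be spotted mechanically. A small string-only sketch, so it can be checked without real hardware (the helper name and the naming-scheme handling are mine, not from the thread):

```shell
#!/bin/bash
# Map a partition device name to its parent disk, so the stored debconf
# value can be compared with the disk the instance really boots from
# (e.g. /dev/sda vs /dev/nvme0n1). Handles the two common kernel naming
# schemes: sdXN / xvdXN and nvmeXnYpZ.
disk_of() {
    case "$1" in
        /dev/nvme*p[0-9]*) echo "${1%p[0-9]*}" ;;  # nvme0n1p1 -> nvme0n1
        /dev/nvme*)        echo "$1" ;;            # already a whole nvme disk
        *[0-9])            echo "${1%%[0-9]*}" ;;  # sda1/xvda1 -> sda/xvda
        *)                 echo "$1" ;;
    esac
}

# On a live system one could compare against the root filesystem source:
#   disk_of "$(findmnt -n -o SOURCE /)"
```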

Revision history for this message
daveadams (daveadams) wrote :

Looks like this issue is related: https://bugs.launchpad.net/cloud-init/+bug/1877491

I've verified now that using cloud-init `package_upgrade: true` using an Ubuntu AMI from later than June 1 avoids the problem described in this ticket.

Jeff Turner (jefft7610)
description: updated
Revision history for this message
C de-Avillez (hggdh2) wrote :

Setting Importance to High -- this is breaking many systems in Azure.

Changed in grub2 (Ubuntu):
importance: Undecided → High
Revision history for this message
Marin Nedea (marin-n) wrote :

For Azure users (the same should work in any cloud, with small changes) that end up here while looking for this bug, the steps to recover are:

Deploy a recovery VM using AzCli or just attach a copy of the affected OS vm disk to a rescue VM.
Once done, connect to the rescue VM and:

$ sudo su -
# mkdir /rescue
# mount /dev/sdc1 /rescue
# for fs in {proc,sys,tmp,dev}; do mount -o bind /$fs /rescue/$fs; done
# cd /rescue
# chroot /rescue
# lsblk <-- this will identify the attached disk, usually /dev/sdc
# grub-install /dev/sdc
# exit
# cd /
# for fs in {proc,sys,tmp,dev}; do umount /rescue/$fs; done
# umount /rescue

Now you should be able to swap back the repaired disk to affected VM.

Revision history for this message
Helmar Schütz (haselnuss87) wrote :

I don't know if this adds anything to the discussion, but my private Lenovo T530, running debian stable, boots into the same error. So not only Azure systems are affected.

Revision history for this message
Kal Sze (ksze) wrote :

cybrNaut on #ubuntu on Freenode IRC claims that the problem is resolved with a new kernel version.

Is that true? What kernel version would I need? Right now I'm running Ubuntu Bionic 18.04 with the HWE kernel 5.4.0-42-generic #46~18.04.1-Ubuntu SMP Fri Jul 10 07:21:24 UTC 2020. That's not new enough, right?

Revision history for this message
Julian Andres Klode (juliank) wrote :

No, certainly not. The issue here is: if you have the wrong device configured to install grub on, the upgrade fails to install to it but does not fail the upgrade process, and the disk's grub loader then does not match the grub on the boot partition.

Or you converted to raid1 manually or copied your disk and did not reconfigure grub.

Running dpkg-reconfigure grub-pc should likely fix it, as it will ask you for the devices to install on again.

This is not a problem on UEFI systems, FWIW, as they do not use a small image in the MBR that loads the rest from /boot, but a single monolithic grub image in the ESP.

Some more work is being done to harden this.
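A quick way to tell which case applies to a given machine is the presence of the efi directory in sysfs. A minimal sketch (the sysfs-root parameter is only there so the check can be exercised anywhere; on a real machine call it with no argument):

```shell
#!/bin/bash
# Report whether the machine booted via UEFI or legacy BIOS; per the
# comment above, only the BIOS/MBR layout is affected by this bug.
boot_mode() {
    local sysroot=${1:-/sys}
    if [ -d "$sysroot/firmware/efi" ]; then
        echo UEFI
    else
        echo BIOS
    fi
}

boot_mode   # on the live system
```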

Revision history for this message
Paige Scheller (pawsch) wrote :
Revision history for this message
Robert C Jennings (rcj) wrote :

@marin-n & @hggdh2,

I see that you have both reported failures on Azure. @marin-n, from your grub versions it looks like you're on focal. I'd like to learn more so I can recreate and get this addressed.

I would like to find out:
- which image release and build you are running:
$ cat /etc/lsb-release
$ cat /etc/cloud/build.info

- the image source (publisher/offer) and the vmSize:
$ curl --silent -H Metadata:true "http://169.254.169.254/metadata/instance/compute/?api-version=2017-08-01" | python3 -c 'import json,sys; data=json.load(sys.stdin);print("\n".join([data["vmSize"], data["publisher"], data["offer"]]));'

- anything special you've done with your config that would be relevant.

Thank you so much.

Revision history for this message
C de-Avillez (hggdh2) wrote :

@rcj: here you have one of them

DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=18.04
DISTRIB_CODENAME=bionic
DISTRIB_DESCRIPTION="Ubuntu 18.04.4 LTS"

installed grub-efi-amd64-signed:amd64 1.93.18+2.02-2ubuntu8.16

VM size: Standard_D2s_v3

Publisher|Offer|SKU: Canonical|UbuntuServer|18.04-LTS|18.04.201906271

Revision history for this message
Ian Channing (ianchanning) wrote :

@marin-n's fix (https://bugs.launchpad.net/ubuntu/+source/grub2/+bug/1889509/comments/16) worked perfectly for us.

I took some slightly more detailed notes of what I did through the portal rather than the CLI:

> For Azure users (the same should work in any cloud, with small changes) that end up here while looking for this bug, the steps to recover are:
>
> Deploy a recovery VM using AzCli or just attach a copy of the affected OS vm disk to a rescue VM.
> Once done, connected to rescue VM and:
>
> $ sudo su -
> # mkdir /rescue
> # mount /dev/sdc1 /rescue
> # for fs in {proc,sys,tmp,dev}; do mount -o bind /$fs /rescue/$fs; done
> # cd /rescue
> # chroot /rescue
> # lsblk <-- this will identify the attached disk, usually /dev/sdc
> # grub-install /dev/sdc
> # exit
> # cd /
> # for fs in {proc,sys,tmp,dev}; do umount /rescue/$fs; done
> # umount /rescue
>
> Now you should be able to swap back the repaired disk to affected VM.

Ok, step by step:

> Deploy a recovery VM

What sort of VM is that? Attempted creating a regular Ubuntu 18.04 LTS VM (same as our broken VM)

All normal, except for connecting to an existing disk. It looks like you can't attach a disk unless you first detach it from another machine.

> attach a copy of the affected OS vm disk to a rescue VM.

To create a copy, you can take a read only snapshot of the disk and then create a new Managed Disk based on the snapshot.

The only disk you need a snapshot of is the *OS* disk, not the data disk.

You can create the recovery VM without a data disk, just the OS disk that automatically gets created.

You can then add the Managed Disk OS snapshot to the recovery VM as a data disk.

Then you can log into the recovery VM and follow the steps above.

All the steps completed without error - we could copy and paste the exact messages

Then log out and stop the VM.

You can then go into the broken VM and under the Disks section of the VM select 'Swap OS Disk'.

Revision history for this message
Pat Viafore (patviafore) wrote :

Thank you @ianchanning for the recovery notes through the portal.

I'll also add in a few links that might help.
For users looking for general recovery steps : https://wiki.ubuntu.com/SecurityTeam/KnowledgeBase/GRUB2SecureBootBypass#Recovery
For users looking how to mitigate this before rebooting: https://wiki.ubuntu.com/SecurityTeam/KnowledgeBase/GRUB2SecureBootBypass#Known_issues

@hggdh2 Thank you for the information. I am working with rcj on reproducing this issue. I was able to reproduce it with the following steps:

Launch an Ubuntu VM in Azure

az vm create --name grub-test --resource-group <resource-group-name> --location southcentralus --image Canonical:UbuntuServer:18.04-LTS:18.04.201906271 --size Standard_D2s_v3 --admin-username ubuntu --ssh-key-value <path-to-ssh-file>

Apt update and install grub:
sudo apt update
sudo DEBIAN_FRONTEND=noninteractive apt install grub2-common

You will see a prompt indicating that grub has failed to install.

Here's a pastebin of relevant disk information that I was able to gather before I installed grub. In #ubuntu-release on FreeNode there was talk of NVME devices exacerbating this bug, but there are no nvme devices on this instance (as checked by ls /dev/nvme*)
https://pastebin.ubuntu.com/p/jD7kgDVtxk/

Revision history for this message
MM (bmn.one) wrote :
Jeff Turner (jefft7610)
description: updated
Revision history for this message
Bernd Porr (berndporr) wrote :

Also affects my desktop Ubuntu 18.04.4 LTS. Took me ages because the original desktop boot disk for bionic just gave a blank monitor but here is a fix:
I downloaded the boot/repair iso (https://sourceforge.net/projects/boot-repair-cd/) and created a bootable USB disk with the startup disk creator on another Linux box.
Then I did a manual grub re-install via the terminal (didn't use the one-click rescue of the ISO):
sudo -s
mount /dev/sdaX /mnt
grub-install --boot-directory=/mnt/boot /dev/sda

Revision history for this message
Kal Sze (ksze) wrote :

How do I tell whether my grub is ok (without rebooting)?

My machine still hasn't rebooted since the grub2 upgrade, and I'm too afraid to do so now.

Revision history for this message
Jeff Turner (jefft7610) wrote :

> How do I tell whether my grub is ok (without rebooting)?

I too would like to know this. I have a linode server with OS on /dev/sda, that grub-install doesn't like:

root@radish-linode:~# grub-install /dev/sda
Installing for i386-pc platform.
grub-install: warning: File system `ext2' doesn't support embedding.
grub-install: warning: Embedding is not possible. GRUB can only be installed in this setup by using blocklists. However, blocklists are UNRELIABLE and their use is discouraged..
grub-install: error: will not proceed with blocklists.
root@localhost:~# file -s /dev/sda
/dev/sda: DOS/MBR boot sector

I don't know whether to force it or not.

Revision history for this message
nosilver4u (shanebishop) wrote :

This took down one of my Digital Ocean droplets running Debian 10 (because I was an idiot and said "yes" to continue when grub reported a failure).
But I was able to fix it similar to Bernd by booting the DO recovery ISO (that runs Ubuntu), mount my root disk at /mnt, chroot, and then run:
grub-install --root-directory=/mnt/ /dev/vda

Revision history for this message
Erben Péter (erben-peter) wrote :

Same problem on Ubuntu 20.04 desktop.

The fix was as described above: boot from USB, mount boot disk, chroot, reinstall grub.

Revision history for this message
Marin Nedea (marin-n) wrote :
Revision history for this message
Maarten Vink (mavink) wrote :

Any recommendations to fix this in a non-interactive way for those with more than a handful of servers?
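One possible non-interactive approach, sketched under the assumption that you can determine the correct target disk for each host first (a wrong value here is exactly what caused the breakage, so verify it per host):

```shell
#!/bin/bash
# Build the debconf preseed line that records where grub-pc should
# install. Feeding it to debconf-set-selections and then running
# dpkg-reconfigure non-interactively avoids the prompt on each host.
make_selection() {
    # $1: target disk, e.g. /dev/nvme0n1 -- must be verified per host
    printf 'grub-pc grub-pc/install_devices multiselect %s\n' "$1"
}

# On each host (as root), something like:
#   make_selection "$TARGET_DISK" | debconf-set-selections
#   DEBIAN_FRONTEND=noninteractive dpkg-reconfigure grub-pc
```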

Revision history for this message
Eugene Z (eugene1111) wrote :

Hello all! Has anybody had this issue on an Ubuntu 18.04 desktop with an encrypted LVM partition? I installed these updates yesterday, but have not rebooted yet. I would be glad for any advice on this issue.

Revision history for this message
Roman Shipovskij (roman-shipovskij) wrote :
Revision history for this message
Eugene Z (eugene1111) wrote :

@roman-shipovskij, I don't have grub-pc installed on my system; when I try to install it I see the following:

sudo apt install grub-pc
Reading package lists… Done
Building dependency tree
Reading state information… Done
The following additional packages will be installed:
  grub-gfxpayload-lists grub-pc-bin
Suggested packages:
  desktop-base
The following packages will be REMOVED:
  grub-efi-amd64
The following NEW packages will be installed:
  grub-gfxpayload-lists grub-pc grub-pc-bin
0 upgraded, 3 newly installed, 1 to remove and 0 not upgraded.
Need to get 1.043 kB of archives.
After this operation, 3.419 kB of additional disk space will be used.
Do you want to continue? [Y/n] n
Abort.

Should I continue the installation?

Eugene Z (eugene1111)
Changed in grub2 (Ubuntu):
assignee: nobody → Eugene Z (eugene1111)
assignee: Eugene Z (eugene1111) → nobody
Revision history for this message
Roman Shipovskij (roman-shipovskij) wrote :

No, your system is not affected

Revision history for this message
Alex N. (a-nox) wrote :

I ran into this grub error during boot after upgrading my Desktop Ubuntu 20.04 to Kernel 5.4.0.42.45 and Grub 2.04-1ubuntu26.1.

Revision history for this message
Peter Daplyn (peter-daplyn-mg-com) wrote :

Running "sudo dpkg-reconfigure grub-pc" before reboot is not working for me - prompts for some settings, but doesn't prompt for drives to install. I'm running Ubunutu 18.04 on AWS EC2 T3 instance. Rebooting after this still results in the issue.

Post-issue recovery as described by Marin works fine (https://bugs.launchpad.net/ubuntu/+source/grub2/+bug/1889509/comments/16).

Could I run the grub-install *before* reboot to prevent getting into the problem?

Revision history for this message
Ian Channing (ianchanning) wrote :

@mavink - you only need the one recovery VM - you can attach and detach all the broken OS disks to that.

Potentially you can script all the steps on the recovery VM, and you can script all the azure steps using the Azure CLI. You can translate all the steps I did via the portal to be via the CLI and run that as a bash script.

Revision history for this message
Maarten Vink (mavink) wrote :

@ianchanning Yes, that's easy enough, but I was actually looking for a way to prevent that for machines that haven't received the grub update yet :)

Revision history for this message
Gramler (maxf) wrote :

Just did

apt update
apt upgrade
autoclean etc.

Rebooted into this.

Have 20.04 low latency running on an HP Pavilion.

Revision history for this message
Eugene Z (eugene1111) wrote :

@maxf So, I think you need to reinstall grub via rescue live cd.

Revision history for this message
Bernd Porr (berndporr) wrote :

After the rescue via live USB stick, one can hold updates of the grub packages until a proper fix is available. I've done it with "dselect" (hotkey "H").

*== Opt admin grub-pc amd64 i386 2.02-2ubunt 2.02-2ubunt
*== Opt admin grub-pc-bin amd64 i386 2.02-2ubunt 2.02-2ubunt
*== Opt admin grub2-common amd64 i386 2.02-2ubunt 2.02-2ubunt

or you can do it on the command line:

apt-mark hold grub2
apt-mark hold grub-pc
apt-mark hold grub-efi-amd64-signed

and any others which are currently installed.

/Bernd

Changed in grub2 (Debian):
importance: Undecided → Unknown
status: New → Unknown
Revision history for this message
Ian Channing (ianchanning) wrote :

We had a problem with our recovery VM switching the disk names so here's a minor modification from [comment #16](https://bugs.launchpad.net/ubuntu/+source/grub2/+bug/1889509/comments/16) to first run `lsblk` to check for the correct disk before running `mount /dev/sdc1`:

$ sudo su -
# lsblk <-- this will identify the attached disk, usually /dev/sdc - there should be no mount points on it
# mkdir /rescue
# mount /dev/sdc1 /rescue
# for fs in {proc,sys,tmp,dev}; do mount -o bind /$fs /rescue/$fs; done
# cd /rescue
# chroot /rescue
# grub-install /dev/sdc
# exit
# cd /
# for fs in {proc,sys,tmp,dev}; do umount /rescue/$fs; done
# umount /rescue
# rmdir /rescue

Also, this was the refined set of steps that we used (always using the same recovery VM):

1. Make a snapshot of 'broken' OS disk (postfix `_snap`)
2. Create a Managed Disk from the snapshot - this must be the same grade as the old OS disk (30 GB Premium SSD) as we are going to fully replace the old OS disk with this one (postfix `_recovery`) - source type snapshot and use the just created snapshot
3. Attach Managed OS Disk to recovery VM (stop/start of recoveryVM not required)
4. Login via SSH, run recovery steps, logout again
5. Detach Managed OS Disk from recovery VM (edit the VM disks and detach the recovery OS Disk)
6. Stop the 'broken' VM (possibly not necessary as the OS Disk swap stops it)
7. In the 'broken' VM Disks click 'Swap OS Disk' and select the recovery OS Disk as the replacement
8. Start the 'recovered' VM
9. Clean up the snapshot - but leave the broken OS disk for now - reminder for a month or so to remove it too

Finally turn off the recovery VM and delete that in a month too

Revision history for this message
Matt Nordhoff (mnordhoff) wrote :

> I too would like to know this. I have a linode server with OS on /dev/sda, that grub-install doesn't like:

On Linode, I believe you should be safe unless the kernel is set to "Direct Disk". It doesn't matter if the GRUB installation works if it's not being used.

<https://www.linode.com/docs/platform/disk-images/disk-images-and-configuration-profiles/#configuration-profiles>

Revision history for this message
Eugene Z (eugene1111) wrote :

Hi all! As I see, after apt update there are new fixed packages for Ubuntu 18.04. I installed them; hopefully this will solve the issue completely.

(Reading database … 320114 files and directories currently installed.)
Preparing to unpack …/grub-efi-amd64_2.02-2ubuntu8.17_amd64.deb …
Unpacking grub-efi-amd64 (2.02-2ubuntu8.17) over (2.02-2ubuntu8.16) …
Preparing to unpack …/grub2-common_2.02-2ubuntu8.17_amd64.deb …
Unpacking grub2-common (2.02-2ubuntu8.17) over (2.02-2ubuntu8.16) …
Preparing to unpack …/grub-efi-amd64-signed_1.93.19+2.02-2ubuntu8.17_amd64.deb …
Unpacking grub-efi-amd64-signed (1.93.19+2.02-2ubuntu8.17) over (1.93.18+2.02-2ubuntu8.16) …
Preparing to unpack …/grub-efi-amd64-bin_2.02-2ubuntu8.17_amd64.deb …
Unpacking grub-efi-amd64-bin (2.02-2ubuntu8.17) over (2.02-2ubuntu8.16) …
Preparing to unpack …/grub-common_2.02-2ubuntu8.17_amd64.deb …
Unpacking grub-common (2.02-2ubuntu8.17) over (2.02-2ubuntu8.16) …
Setting up grub-common (2.02-2ubuntu8.17) …
update-rc.d: warning: start and stop actions are no longer supported; falling back to defaults
Setting up grub-efi-amd64-bin (2.02-2ubuntu8.17) …
Setting up grub2-common (2.02-2ubuntu8.17) …
Setting up grub-efi-amd64 (2.02-2ubuntu8.17) …
Installing for x86_64-efi platform.
Installation finished. No error reported.
Sourcing file `/etc/default/grub'
Generating grub configuration file …
Found linux image: /boot/vmlinuz-4.15.0-112-generic
Found initrd image: /boot/initrd.img-4.15.0-112-generic
Found linux image: /boot/vmlinuz-4.15.0-111-generic
Found initrd image: /boot/initrd.img-4.15.0-111-generic
Found linux image: /boot/vmlinuz-4.15.0-109-generic
Found initrd image: /boot/initrd.img-4.15.0-109-generic
Adding boot menu entry for EFI firmware configuration
done
Setting up grub-efi-amd64-signed (1.93.19+2.02-2ubuntu8.17) …
Installing for x86_64-efi platform.
Installation finished. No error reported.
Processing triggers for man-db (2.8.3-2ubuntu0.1) …
Processing triggers for ureadahead (0.100.0-21) …
Processing triggers for install-info (6.5.0.dfsg.1-2) …
Processing triggers for systemd (237-3ubuntu10.41) …
Processing triggers for shim-signed (1.37~18.04.3+15+1533136590.3beb971-0ubuntu1) …
Secure Boot not enabled on this system.

Revision history for this message
Pablo (pabloa98) wrote :

Ubuntu 16.04 is affected, and none of the fixes and workarounds posted in this bug work.

Revision history for this message
Pablo (pabloa98) wrote :

On a workstation with Ubuntu 18.04 using UEFI, new entries (related to Devuan and Debian) were inserted before the Ubuntu entry. The workstation does not have those OSes, so I am not sure why they showed up. I am assuming they were inserted by the faulty grub2 update.

Anyway, I edited those entries (using the BIOS setup) so the Ubuntu entry is listed first, and that fixed the problem on the workstation.

Now it boots normally. I updated packages and it still boots.

Revision history for this message
Améy Prabhu Gaonkar (ameyvprabhu) wrote :

My Ubuntu 18.04 at work is stuck on GRUB error after reboot. Too bad when I'm miles away from work. WFH nightmare!

Revision history for this message
Rocko (rockorequin) wrote :
Revision history for this message
Steve Langasek (vorlon) wrote :

Yes, it is the same bug. Marking as a duplicate.

Revision history for this message
Edelmar Ziegler (ecz4) wrote :

Just had the same issue on an AWS instance running Ubuntu 18.04.4 (GNU/Linux 5.3.0-1028-aws x86_64).

It bricked that instance when I rebooted it. I ran apt a few days ago, and today noticed the system was asking for a reboot, which I did after business hours.

Now I can only see the screenshot; it says:
error: symbol `grub_calloc' not found.
Entering rescue mode...
grub rescue> _

It seems to be a prompt I cannot interact with.
The instance monitoring graph shows CPU use up to 49% since this happened. Is it actually trying to fix itself?

From what I've read, the only way back for that one would be to launch another instance and manually repair the messed-up config on the bricked one's volume. Is that the case?

Luckily I had an almost exact copy of that instance running already, so I just redirected traffic. But this second instance seems to be in the same state: it is asking for a reboot, and I have no idea if I can do that. Can anyone point me in the direction of what I should do to fix it?

Revision history for this message
Jim (jimbolino) wrote :

I'm unable to fix this on my Ubuntu 16 machine.

# debconf-get-selections | grep -E 'grub-(efi|pc)/install_devices[^_]'
grub-pc grub-pc/install_devices multiselect /dev/xvda1

# debconf-set-selections
grub-pc grub-pc/install_devices multiselect /dev/vda1
grub-pc grub-pc/install_devices_disks_changed multiselect /dev/vda1

# dpkg-reconfigure grub-pc
Generating grub configuration file ...
Found linux image: /boot/vmlinuz-4.4.0-186-generic
Found initrd image: /boot/initrd.img-4.4.0-186-generic
Found linux image: /boot/vmlinuz-4.4.0-185-generic
Found initrd image: /boot/initrd.img-4.4.0-185-generic
Found linux image: /boot/vmlinuz-4.4.0-184-generic
Found initrd image: /boot/initrd.img-4.4.0-184-generic
done

When I reboot I get the default grub> prompt, and can boot by typing the correct commands from:
https://www.linux.com/training-tutorials/how-rescue-non-booting-grub-2-linux/

Revision history for this message
J. R. (jr20) wrote :

Dear all,

I'm affected, too, on my Lenovo laptop running Kubuntu 18.04. I have two questions:

1. How can I boot the system from within the grub rescue prompt? I don't have a live system and can't get my hands on one for several days.

2. I rebooted my computer just now, minutes after doing a full upgrade. From the above conversation one can see that the bug has been known for almost a week. So why is there no emergency patch, like a package that goes back to the last functional version and reinstalls grub?

Thanks for your help!

Peter

Revision history for this message
Jeff Turner (jefft7610) wrote :

> 1. How can I boot the system from within the Grub rescue prompt? I don‘t have a live System and can‘t get hands on one for several days.

You can't. The part of grub that loaded from the MBR (and printed 'grub rescue>') is incompatible with stage 2 of grub at /boot/grub/. You either need to upgrade the MBR part ('grub-install /dev/...') or downgrade /boot/grub/ somehow (perhaps copy it from another system?). Either way you need install media.

> 2. I did reboot my computer right now, and minutes before I did a full upgrade. From the above conversation one can see that the bug is known for almost a week. So why is there no emergency patch like a package that goes back to the last functional version and does a grub reinstall?

The bug (#1889556) has been fixed. Did you 'apt-get update' and 'apt-get upgrade' prior to rebooting? If so, 'apt-get upgrade' would have terminated early with a non-zero exit code, which is as much of a hint as dpkg can give you that grub is hosed and needs fixing ('dpkg-reconfigure grub-pc') prior to reboot.
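That exit-code hint can be made impossible to miss by wrapping the upgrade. A small sketch (the wrapper name is mine; shown with a stand-in command so the guard logic itself is checkable):

```shell
#!/bin/bash
# Run an upgrade command and refuse to let a failure pass silently;
# a non-zero status is the signal that grub may be half-updated and
# the machine must not be rebooted yet.
safe_upgrade() {
    if "$@"; then
        echo "upgrade ok"
    else
        echo "upgrade FAILED -- do not reboot; fix grub first (dpkg-reconfigure grub-pc)"
        return 1
    fi
}

# Real use: safe_upgrade apt-get -y upgrade
```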

Revision history for this message
halfgaar (wiebe-halfgaar) wrote :

I had an Amazon EC2 Ubuntu 18.04 instance fail to boot. Grub was updated to the fixed version (2.02-2ubuntu8.17) on Aug 3. A fix for #1889556 that prevents this in the future may have been released, but the broken state the machine is already in is not recovered.

Amazon AWS is especially tricky, because you don't have a VGA console to see the error in. Even tier 1 support only has virtual console access, which is not there yet.

Revision history for this message
Judah Richardson (jdrch) wrote :

This bug bit me pretty hard: My Ubuntu 20.04 laptop updated via unattended-upgrades. Then it lost power due to last week's derecho. Now the OS won't boot.

Anyway, FWIW, I think OP's # dpkg-reconfigure grub-pc advice applies to MBR installations only. If you have a modern EFI installation I think you need dpkg-reconfigure grub-efi-amd64 instead.

Revision history for this message
Julian Andres Klode (juliank) wrote :

@Judah This bug does not affect EFI systems, as the EFI grubs are self contained single file blobs.

Revision history for this message
Judah Richardson (jdrch) wrote :

@Julian Welp, the laptop (detailed specs: https://github.com/jdrch/Hardware/blob/master/HP%20ProBook%204530s.md) my Ubuntu 20.04 installation is on has always been in UEFI boot mode, so I'd assume the Ubuntu installation itself also uses EFI. As I said, same symptoms as everyone else on here. Maybe it wasn't using EFI this whole time but bottom line is I had the exact same issue but the instructions on this thread didn't work for me as written.

Anyway, for those like me who are having the same problem and use both EFI and LVM, the instructions in the OP don't work.

Here's what to do instead:

These instructions assume your boot disk is sda and your root(?) volume group name is ubuntu-vg. They are based on:

https://askubuntu.com/a/772892/932418

1) Boot into a live Ubuntu USB
2) Open a terminal

# mkdir /mnt/chrootdir/

Note that, contrary to many of the instructions on this page, you'll have to mount your volume group instead of your boot disk or partition. If you try to do the latter you'll get weird errors about the mount point being busy.

# mount /dev/ubuntu-vg/root /mnt/chrootdir/
# for dir in proc dev sys etc bin sbin var usr lib lib64 tmp; do mount --bind /$dir /mnt/chrootdir/$dir; done
# chroot /mnt/chrootdir/

# apt install grub-efi-amd64

You'll see some warning message about grub-pc being removed. Proceed with "Y" because that's what you want to happen

# grub-install /dev/sda # <- This reinstalls grub onto your boot disk
# dpkg-reconfigure grub-efi-amd64

Leave the text fields blank and select Yes for the removable device EFI and NVRAM options if you don't multiboot.

# update-grub2 /dev/sda # <- this writes the grub config created in the previous command to grub on your boot disk

# exit # <- exit chroot
# exit # <- exit root
# exit # <- close terminal

Power off the PC via the Ubuntu power menu, following the onscreen instructions.

Reboot the PC, making sure the BIOS is in UEFI mode. Ubuntu should now boot as expected.
