grub-pc upgrade on Amazon EC2: The GRUB boot loader was previously installed to a disk that is no longer present, or whose unique identifier has changed for some reason.

Bug #751057 reported by Alan
This bug affects 7 people
Affects           Status     Importance  Assigned to  Milestone
Ubuntu on EC2     Invalid    Undecided   Unassigned
grub2 (Ubuntu)    Confirmed  Medium      Unassigned

Bug Description

Binary package hint: grub2

I'm running the Natty 11.04 beta on Amazon EC2 (ami-5c39c435, ebs/ubuntu-images-milestone/ubuntu-natty-11.04-beta1-amd64-server-20110329). The objective is to run apt-get upgrade smoothly, but the upgrade of the grub-pc package is causing problems.

The present version is grub-pc_1.99~rc1-6ubuntu1_amd64. apt attempts to install grub-pc_1.99~rc1-8ubuntu1_amd64 and the corresponding version of grub-common.

1. apt-get update
2. apt-get install grub-pc grub-common
3. Choose to replace /etc/default/grub (I've looked at it; it appears to have no relevant changes.)
4. Error: "The GRUB boot loader was previously installed to a disk that is no longer present, or whose unique identifier has changed for some reason."
5. Regardless of the choices selected from here, the configuration utility seems to get stuck in a loop.

I don't know if this has anything to do with the fact that Amazon EC2 exposes Xen block devices, so the root partition is /dev/xvda1 instead of /dev/sda1.

I understand that grub-legacy-ec2 is installed, which may handle bootloading.

Whatever the cause, apt-get upgrade, which may include new versions of the installed grub-pc, should work smoothly.
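
As a workaround until the packaging handles this cleanly, the debconf answer can be preseeded so the prompt never appears. The following is only a sketch: it assumes the EBS root device is /dev/xvda and that you want to record it as grub-pc's install target (the other valid answer on EC2 is to leave the list empty and continue without installing).

# Inspect what grub-pc currently has recorded.
sudo debconf-show grub-pc | grep install_devices

# Preseed the answer, then upgrade without debconf prompting.
echo "grub-pc grub-pc/install_devices multiselect /dev/xvda" | sudo debconf-set-selections
sudo DEBIAN_FRONTEND=noninteractive apt-get -y install grub-pc grub-common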

Scott Moser (smoser)
Changed in ubuntu-on-ec2:
status: New → Invalid
Scott Moser (smoser)
tags: added: ec2-images uec-images
Dave Walker (davewalker)
tags: added: server-nro
Revision history for this message
Scott Moser (smoser) wrote :

I can verify this bug is present, and it will affect anyone upgrading from an instance originally booted with cloud-init older than 0.6.1-0ubuntu6 *and* a natty kernel on EC2. Bug 752361 fixed the issue in newly booted instances.

I tested the following scenarios:
A.) started with natty image older than 2011-04-06 (as the bug opener did)
    In this scenario, I see a prompt for /etc/default/grub changes. I
    selected to accept the maintainer's version.
    Then, I was prompted because grub was installed to a disk no longer present.
    It showed a list of devices (xvda1). I did not select anything and hit 'continue'.
    Then I said that grub should continue without installing. (This is OK, as grub
    is not managed inside EC2 instances.) That seemed to work fine.

    I also believe that selecting xvda1 in the list would have *also* worked, but I
    have not tried that yet.

    I did not see how to get into an endless loop as the opener described.

B.) started with natty image newer than 2011-04-06
    I was shown the /etc/default/grub modification prompt. That was all.
C.) started with maverick image and do-release-upgrade [-d] to natty.
    In this scenario, the upgrade should go OK. The user would then reboot
    into the natty kernel, and /dev/sda1 would be renamed to /dev/xvda1. The
    next time grub-pc is upgraded, the user may be prompted to address the
    missing /dev/sda1. Selecting /dev/xvda1 or selecting to go on without
    installing should be OK.

In A and B above, you're shown a prompt for merging /etc/default/grub, and should accept the maintainer's version. The differences are either a bug in grub-pc not recognizing older versions of its file, or newer modifications made during the image build process so that the user will see a prompt (with a timeout) on first boot of the images.
In C above, you're shown the prompt; this is bug 759545.

The relevant cloud-init code can be seen at [1]
--
[1] http://bazaar.launchpad.net/%7Ecloud-init-dev/cloud-init/trunk/annotate/head%3A/cloudinit/CloudConfig/cc_grub_dpkg.py
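
For reference, that module takes its settings from cloud-config. The snippet below is only a sketch based on later cloud-init documentation; the exact key names and defaults may differ in the 0.6.x series referenced here.

#cloud-config
grub_dpkg:
  enabled: true
  # Device to record as grub-pc's install target; on EC2 Xen instances the
  # root volume appears as /dev/xvda rather than /dev/sda.
  grub-pc/install_devices: /dev/xvda
  grub-pc/install_devices_empty: false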

Changed in grub2 (Ubuntu):
importance: Undecided → Medium
status: New → Confirmed
Revision history for this message
Aaron Roydhouse (aaron-roydhouse) wrote :

This problem has happened again. When updating 16.04 instances on AWS, the 'grub-pc' upgrade displays the missing-disk error and tries to push the user into installing grub on one or more disks. If you resist the prompts and do not install, no harm seems to occur. It is a really misleading prompt, though.

Revision history for this message
David Morgan (david-sinnott-morgan) wrote :

I've just hit this too: in my case, updating an 18.04 instance on AWS. It was not an issue when I last updated/rebooted about a month ago. During the update it prompted me, via a lurid full-screen display, to install onto a disk and a partition. Not knowing quite what to do, I selected *both*, thinking more is safer. I haven't attempted a reboot yet, so I will now try to reconfigure that to neither. I'm not sure whether this would have actually prevented the instance from booting.
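
To review or change what was recorded, something like the following should work. This is only a sketch: dpkg-reconfigure re-asks the same debconf questions, and grub-install only runs against whatever devices end up selected.

# Show the devices grub-pc currently has recorded.
sudo debconf-show grub-pc | grep install_devices

# Re-ask the install-device questions so the selection can be changed
# (for example, cleared again).
sudo dpkg-reconfigure grub-pc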

Revision history for this message
David Morgan (david-sinnott-morgan) wrote :

UPDATE: in the end I just rebooted from command line (not the AWS console), without removing grub from the disk or the partition, and it worked fine. I'm not sure whether AWS is even using this virtual disk in the boot process.

Output from update-grub is:

Sourcing file `/etc/default/grub'
Sourcing file `/etc/default/grub.d/50-cloudimg-settings.cfg'
Generating grub configuration file ...
Found linux image: /boot/vmlinuz-4.15.0-1056-aws
Found initrd image: /boot/initrd.img-4.15.0-1056-aws
Found linux image: /boot/vmlinuz-4.15.0-1054-aws
Found initrd image: /boot/initrd.img-4.15.0-1054-aws
Found linux image: /boot/vmlinuz-4.15.0-1044-aws
Found initrd image: /boot/initrd.img-4.15.0-1044-aws
done

Revision history for this message
Andrew G. Saushkin (saushkin) wrote :

I've noticed that this bug appears only on certain instance types, such as t3 and r5; t2 instances are not affected.

A fresh installation of Ubuntu Server 18.04 LTS (HVM), SSD Volume Type (64-bit x86) (ami-0d1cd67c26f5fca19) hits this error during `apt-get upgrade` of grub-pc on t3 and r5 instances, while t2 instances are not affected.
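
One likely reason for the difference: t3 and r5 are Nitro-based instances whose root volume appears as an NVMe device (/dev/nvme0n1), while t2 is Xen-based and exposes /dev/xvda, so the device grub-pc recorded at image-build time may simply no longer exist. A quick way to compare the two (a sketch):

# Block devices this instance actually has.
lsblk -o NAME,TYPE,SIZE,MOUNTPOINT

# Device(s) grub-pc thinks it was installed to.
sudo debconf-show grub-pc | grep install_devices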
