Comment 45 for bug 1889509

Ian Channing (ianchanning) wrote:

We had a problem with our recovery VM switching the disk names, so here is a minor modification of [comment #16](https://bugs.launchpad.net/ubuntu/+source/grub2/+bug/1889509/comments/16) that first runs `lsblk` to confirm the correct disk before running `mount /dev/sdc1`:

$ sudo su -
# lsblk <-- this will identify the attached disk, usually /dev/sdc - there should be no mount points on it
# mkdir /rescue
# mount /dev/sdc1 /rescue
# for fs in {proc,sys,tmp,dev}; do mount -o bind /$fs /rescue/$fs; done
# cd /rescue
# chroot /rescue
# grub-install /dev/sdc
# exit
# cd /
# for fs in {proc,sys,tmp,dev}; do umount /rescue/$fs; done
# umount /rescue
# rmdir /rescue
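
Since the whole point is that the disk name can move around, a possible variation is to put the device name in a shell variable so it only needs to be set once. This is just a sketch with assumptions: `/dev/sdc` is a placeholder for whatever `lsblk` actually reported, the root filesystem is assumed to be on partition 1, and the variable is exported so it stays visible inside the chroot:

# export DISK=/dev/sdc <-- replace with the disk reported by lsblk
# mkdir /rescue
# mount ${DISK}1 /rescue <-- assumes the root filesystem is on partition 1
# for fs in {proc,sys,tmp,dev}; do mount -o bind /$fs /rescue/$fs; done
# chroot /rescue
# grub-install "$DISK" <-- the exported variable is inherited by the chroot shell
# exit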

Also, this was the refined set of steps that we used, always with the same recovery VM (a rough Azure CLI sketch of the same flow follows the list):

1. Make a snapshot of the 'broken' OS disk (postfix `_snap`)
2. Create a Managed Disk from the snapshot (postfix `_recovery`), with source type 'Snapshot' pointing at the snapshot just created - it must be the same grade as the old OS disk (30 GB Premium SSD in our case) because it is going to fully replace the old OS disk
3. Attach the Managed OS Disk to the recovery VM (a stop/start of the recovery VM is not required)
4. Log in via SSH, run the recovery steps above, and log out again
5. Detach the Managed OS Disk from the recovery VM (edit the VM's disks and detach the recovery OS Disk)
6. Stop the 'broken' VM (possibly not necessary, as the OS Disk swap stops it)
7. In the 'broken' VM's Disks page, click 'Swap OS Disk' and select the recovery OS Disk as the replacement
8. Start the 'recovered' VM
9. Clean up the snapshot - but leave the broken OS disk for now; set a reminder to remove it too in a month or so

Finally, turn off the recovery VM and delete it in a month too.
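
For reference, roughly the same flow can be driven from the Azure CLI instead of the portal. This is only a sketch under assumptions: `myRG`, `brokenvm`, `recoveryvm` and the disk/snapshot names are placeholders, the size and SKU match our setup above, and the options should be checked against the current `az` documentation before relying on them:

$ az snapshot create -g myRG -n brokenvm-osdisk_snap --source brokenvm-osdisk
$ az disk create -g myRG -n brokenvm-osdisk_recovery --source brokenvm-osdisk_snap --sku Premium_LRS --size-gb 30
$ az vm disk attach -g myRG --vm-name recoveryvm --name brokenvm-osdisk_recovery
$ # ... SSH to the recovery VM, run the grub recovery steps above, then continue ...
$ az vm disk detach -g myRG --vm-name recoveryvm --name brokenvm-osdisk_recovery
$ az vm stop -g myRG -n brokenvm
$ az vm update -g myRG -n brokenvm --os-disk brokenvm-osdisk_recovery <-- the 'Swap OS Disk' step
$ az vm start -g myRG -n brokenvm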