I had done that already. But I redid it one more time anyway to be 100% certain. I then removed /dev/sda and it failed as given above. No change.
But then I decided to test by removing different drives, not just /dev/sda.
If I remove /dev/sdb, the exact errors given are:
error: fd0 read error.
error: unknown LVM metadata header.
error: fd0 read error.
error: no such disk.
The "error: fd0 read error." messages are normally present and not relevant.
At the grub rescue> prompt, "set" gives:
    prefix=(vg-boot)/grub
    root=vg-boot
and "ls (vg-boot)/grub", as expected given the LVM error above, gives
error: no such disk.
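For anyone who lands in this state: grub can sometimes be coaxed into booting by hand from the rescue prompt by pointing it at a surviving copy of /boot. A rough sketch follows; the device name (md/0) is a guess for my setup, use "ls" at the prompt to see what grub can actually still find:

```
grub rescue> ls                        # list the disks/partitions grub can still see
grub rescue> set prefix=(md/0)/grub    # point at a surviving /boot (device name is a guess)
grub rescue> set root=(md/0)
grub rescue> insmod normal             # load the "normal" module from the new prefix
grub rescue> normal                    # hand over to the full grub menu
```

Of course in the /dev/sdb case above even "ls" on the LVM device fails, so this may only help when grub can still assemble the array at all.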
If I remove /dev/sdc, in addition to the normal "fd0" messages, I only get
error: invalid arch independent ELF magic.
If I remove /dev/sdd, it boots perfectly fine, except for the degraded raid boot notices (I have "bootdegraded=yes" on the kernel command line).
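For anyone reproducing this: "bootdegraded=yes" goes on the kernel command line via /etc/default/grub (this is the stock Ubuntu mechanism; adjust for your distro):

```
# /etc/default/grub -- append bootdegraded=yes to the default kernel options
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash bootdegraded=yes"
```

followed by "sudo update-grub" to regenerate grub.cfg. Without it the initramfs drops to a prompt instead of booting the degraded array.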
So it appears that the behaviour is completely dependent on which drive of the raid set has failed.
There was slightly strange behaviour after reconnecting /dev/sdd: grub flashed up the message "Invalid environment block" as it booted, but then it started up fine. I had to run 'mdadm --add /dev/md0 /dev/sdd1' to get the array back into full operation after startup, of course.
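For completeness, the re-add and the resulting resync can be checked like this (needs root; device names are from my setup):

```
mdadm --add /dev/md0 /dev/sdd1    # re-add the disconnected member
cat /proc/mdstat                  # shows rebuild progress for the array
mdadm --detail /dev/md0           # confirm all members are back to active sync
```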
I think I should retest the entire scenario, but remove LVM from the equation. Will post results in a bit.