2019-10-15 10:22:36
Gabriele Tozzi
Description:
I have two LVM volumes (/dev/mapper/raid-btrfs and /dev/mapper/fast-btrfs) in two different volume groups. I have created a btrfs (raid1) filesystem on top of them, and that is my root filesystem.
If I define it by UUID in the root= kernel argument, I just hit bug #1574333. Forcing my root to "/dev/mapper/fast-btrfs" by defining GRUB_DEVICE in /etc/default/grub works around that bug.
The problem now is that the initrd only activates the device given as the root= argument, leaving the other one inactive; consequently the btrfs mount fails to find its second device and the system fails to boot, giving up at the initramfs prompt.
Manually adding a line to also activate the second device at the bottom of /usr/share/initramfs-tools/scripts/local-top/lvm2 and rebuilding the initramfs works around this issue too, but I suppose my modifications will be washed away by the next package upgrade.
Here is the result:
> activate "$ROOT"
> activate "$resume"
> activate "/dev/mapper/raid-btrfs"
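A sturdier variant of the same workaround (a sketch only, not verified on this exact setup) is to put the extra activation in a custom local-top script under /etc/initramfs-tools/scripts/, which initramfs-tools merges into the image and which package upgrades do not overwrite. The script name btrfs-extra-lvm and the VG/LV names (VG "raid", LV "btrfs" behind /dev/mapper/raid-btrfs) are assumptions for illustration:

```shell
#!/bin/sh
# /etc/initramfs-tools/scripts/local-top/btrfs-extra-lvm  (hypothetical name)
# Activate the second LVM device of the btrfs raid1 root; the stock lvm2
# script only activates what root= and resume= resolve to.
PREREQ="lvm2"
prereqs() { echo "$PREREQ"; }
case "$1" in
    prereqs) prereqs; exit 0 ;;
esac

# Adjust VG/LV to your layout; /dev/mapper/raid-btrfs = VG "raid", LV "btrfs".
lvm lvchange -aay --sysinit raid/btrfs
```

After `chmod +x` on the script and `update-initramfs -u`, both btrfs devices should be active before the root mount is attempted.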
Proposed solution:
I understand this is an uncommon setup and that correctly handling multi-device LVM roots is complicated, so please just add a configuration option to manually define/append the list of volume groups to be activated at initrd time.
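To illustrate what I have in mind (the variable name EXTRA_LVM_VGS is purely hypothetical, not an existing initramfs-tools option), something like this in /etc/initramfs-tools/initramfs.conf would be enough:

```shell
# Hypothetical initramfs.conf option: extra volume groups to activate in
# local-top/lvm2, in addition to whatever root= and resume= resolve to.
EXTRA_LVM_VGS="raid"
```

The local-top/lvm2 script could then simply loop over the list after its existing activate calls, e.g. `for vg in $EXTRA_LVM_VGS; do lvm vgchange -aay --sysinit "$vg"; done`.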