Activity log for bug #1848180

Date Who What changed Old value New value Message
2019-10-15 10:21:21 Gabriele Tozzi bug added bug
2019-10-15 10:22:36 Gabriele Tozzi description updated (the old value differed from the new only in the typo "bu UUID" → "by UUID"; only the new value is shown here):

I have two LVM volumes (/dev/mapper/raid-btrfs and /dev/mapper/fast-btrfs) in two different volume groups. I have created a btrfs (raid1) filesystem on top of them, and that is my root filesystem. If I define it by UUID in the root= kernel argument, I just hit bug #1574333. Forcing my root to "/dev/mapper/fast-btrfs" by defining GRUB_DEVICE in /etc/default/grub works around that bug. The problem now is that the initrd only activates the device given as the root= argument, leaving the other inactive; consequently the btrfs mount fails to find its second device and the system fails to boot, giving up at the initramfs prompt.

Manually adding a line to also activate the second device at the bottom of /usr/share/initramfs-tools/scripts/local-top/lvm2 and rebuilding the initramfs works around this issue too, but I suppose my modifications will be washed away by the next package upgrade. Here is the result:

> activate "$ROOT"
> activate "$resume"
> activate "/dev/mapper/raid-btrfs"

Proposed solution: I understand this is an uncommon setup and that correctly handling multi-device LVM roots is complicated; please just add a configuration option to manually define/append the list of volume groups to be activated at initrd time.
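For context, the workaround described above edits the tail of Ubuntu's lvm2 local-top boot script, which already contains the first two `activate` calls. A sketch of the modified tail (the `activate` helper is defined earlier in that script; the added device path is specific to this reporter's setup):

```shell
# Tail of /usr/share/initramfs-tools/scripts/local-top/lvm2 (sketch).
# "activate" is the helper function defined earlier in this script; it
# activates the logical volume backing the given device path via lvm.
activate "$ROOT"
activate "$resume"

# Added workaround line: also activate the second btrfs raid1 member,
# which lives in a different volume group and is not referenced by the
# root= or resume= kernel arguments, so it would otherwise stay inactive.
activate "/dev/mapper/raid-btrfs"
```

This edit is lost whenever the lvm2 package reinstalls the script, which is why the reporter asks for a supported configuration option and why the later attachments move the workaround into /etc/initramfs-tools instead.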
2019-10-20 01:16:21 Launchpad Janitor lvm2 (Ubuntu): status New Confirmed
2020-12-28 18:44:57 Steve Dodd attachment added /etc/initramfs-tools/hooks/btrfs-lvm https://bugs.launchpad.net/ubuntu/+source/lvm2/+bug/1848180/+attachment/5447425/+files/local-top.hook
2020-12-28 18:45:46 Steve Dodd attachment added /etc/initramfs-tools/scripts/local-top/btrfs-lvm https://bugs.launchpad.net/ubuntu/+source/lvm2/+bug/1848180/+attachment/5447426/+files/local-top.script
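The attachment contents are not reproduced in this log, but a custom local-top script kept under /etc/initramfs-tools survives package upgrades, unlike the direct edit above. A hypothetical sketch of what such a script could look like (the PREREQ/prereqs boilerplate is the standard initramfs-tools boot-script convention; the volume group name "raid" is taken from this report and would need adjusting for other setups):

```shell
#!/bin/sh
# Hypothetical sketch of a script like the attached
# /etc/initramfs-tools/scripts/local-top/btrfs-lvm (actual attachment
# contents not shown in this log). It activates the volume group that
# holds the second btrfs device before the root mount is attempted.

PREREQ="lvm2"   # run after the stock lvm2 local-top script

prereqs()
{
    echo "$PREREQ"
}

case "$1" in
    prereqs)
        prereqs
        exit 0
        ;;
esac

. /scripts/functions

# "raid" is the volume group containing /dev/mapper/raid-btrfs in this
# report; activate all of its logical volumes so btrfs can find its
# second raid1 member.
lvm vgchange -ay raid
```

A matching hook script (the first attachment) would be needed so the script is copied into the generated initramfs; after installing both, `update-initramfs -u` rebuilds the image.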