Wily LVM-RAID1 – md: personality for level 1 is not loaded

Bug #1509717 reported by MegaBrutal
This bug affects 5 people
Affects                  Status        Importance  Assigned to  Milestone
linux (Ubuntu)           Invalid       Undecided   Unassigned
linux (Ubuntu Bionic)    Invalid       Undecided   Unassigned
linux (Ubuntu Cosmic)    Invalid       Undecided   Unassigned
linux (Ubuntu Disco)     Invalid       Undecided   Unassigned
linux (Ubuntu Eoan)      Invalid       Undecided   Unassigned
lvm2 (Debian)            Fix Released  Unknown
lvm2 (Ubuntu)            Fix Released  High        Unassigned
lvm2 (Ubuntu Bionic)     Fix Released  Undecided   Unassigned
lvm2 (Ubuntu Cosmic)     Won't Fix     Undecided   Unassigned
lvm2 (Ubuntu Disco)      Won't Fix     Undecided   Unassigned
lvm2 (Ubuntu Eoan)       Fix Released  High        Unassigned

Bug Description

[Impact]
The system does not boot after converting an LVM volume to raid1 without mdadm installed.

[Test case]

1. Install Ubuntu Server onto LVM
2. Add a second disk to the machine
3. Run pvcreate /dev/vdb
4. Run vgextend ubuntu-vg /dev/vdb
5. Run lvconvert -m1 --type raid1 ubuntu-vg/ubuntu-lv

Reboot and check that it still boots.

6. Remove mdadm
7. Upgrade to lvm2 from proposed

Reboot and check that it still boots.

8. Downgrade lvm2 to the version in the release pocket

Reboot and check that it fails to boot.
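
To confirm which state the initramfs is in, its contents can be listed with lsinitramfs from initramfs-tools (a rough check; the path below assumes the default initrd naming for the running kernel):

# lsinitramfs /boot/initrd.img-$(uname -r) | grep -E 'raid1|dm-raid'

With the fixed lvm2 package, raid1.ko should show up in the output; with the release version it should be absent.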

[Regression potential]
Very low. The change only adds the raid1 module to the initramfs, so it may be loaded during boot and raid1 logical volumes may appear earlier.

[Original bug report]
After upgrading to Wily, raid1 LVs don't activate during the initrd phase. Since the root LV is also RAID1-mirrored, the system doesn't boot.

I get the following message each time LVM tries to activate a raid1 LV:
md: personality for level 1 is not loaded!

Everything was fine with Vivid. I had to downgrade to the Vivid kernel (3.19.0-30) to get my system back to a usable state. I hope this is only a temporary workaround and that I will get the new 4.2.0 kernel working with Wily within days.

Revision history for this message
Brad Figg (brad-figg) wrote : Missing required logs.

This bug is missing log files that will aid in diagnosing the problem. From a terminal window please run:

apport-collect 1509717

and then change the status of the bug to 'Confirmed'.

If, due to the nature of the issue you have encountered, you are unable to run this command, please add a comment stating that fact and change the bug status to 'Confirmed'.

This change has been made by an automated script, maintained by the Ubuntu Kernel Team.

Changed in linux (Ubuntu):
status: New → Incomplete
Revision history for this message
MegaBrutal (qbu6to) wrote :

I can't collect logs on non-booting system.

Changed in linux (Ubuntu):
status: Incomplete → Confirmed
MegaBrutal (qbu6to)
tags: added: regression-release wily
Revision history for this message
MegaBrutal (qbu6to) wrote :

Reproducible: I upgraded another Ubuntu installation in a VM and got the same result.
Since the bug prevents booting, I suggest increasing the priority to High.

Revision history for this message
Andy Whitcroft (apw) wrote :

It seems that the module is built and installed into /lib/modules, but is not included in the initramfs. Sounds like an initramfs-tools bug.
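
A quick way to confirm this split (a hedged check; the paths assume the running kernel and the default initrd location):

# find /lib/modules/$(uname -r) -name 'raid1.ko*'
# lsinitramfs /boot/initrd.img-$(uname -r) | grep raid1

The first command should list the module file on disk; on an affected system the second produces no output, because raid1 was never copied into the image.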

Changed in initramfs-tools (Ubuntu):
status: New → In Progress
importance: Undecided → High
assignee: nobody → Andy Whitcroft (apw)
milestone: none → ubuntu-15.11
Changed in lvm2 (Ubuntu):
status: New → Invalid
Changed in linux (Ubuntu):
status: Confirmed → Invalid
Andy Whitcroft (apw)
Changed in lvm2 (Ubuntu):
status: Invalid → In Progress
importance: Undecided → High
assignee: nobody → Andy Whitcroft (apw)
milestone: none → ubuntu-15.11
Andy Whitcroft (apw)
Changed in lvm2 (Ubuntu):
milestone: ubuntu-15.11 → ubuntu-15.12
Changed in initramfs-tools (Ubuntu):
milestone: ubuntu-15.11 → ubuntu-15.12
Revision history for this message
Thomas Johnson (ntmatter) wrote :

As a workaround, you can add the relevant modules to the initramfs. A quick walkthrough is as follows:

- Boot from the install media, and choose "Rescue a broken system."
- Run through all of the basic configuration steps, configuring location, keyboard, networking, timezone, etc.
- When prompted for the root filesystem, select your usual root volume (e.g., /dev/my-vg/root)
- Also mount a separate /boot partition
- Execute a shell in /dev/my-vg/root
- Type "mount" and ensure that the correct / and /boot volumes are actually mounted.
- Add the raid1 and mirror modules to /etc/initramfs-tools/modules, and rebuild the initramfs
# echo raid1 >> /etc/initramfs-tools/modules
# echo dm_mirror >> /etc/initramfs-tools/modules
# update-initramfs -u
- Exit out of the shell, and reboot the system. Don't forget to remove the install media!

Andy Whitcroft (apw)
Changed in lvm2 (Ubuntu):
milestone: ubuntu-15.12 → ubuntu-16.01
Changed in initramfs-tools (Ubuntu):
milestone: ubuntu-15.12 → ubuntu-16.01
Revision history for this message
MegaBrutal (qbu6to) wrote :

Thanks for the workaround! It seems adding raid1 is enough.

Andy Whitcroft (apw)
Changed in lvm2 (Ubuntu):
milestone: ubuntu-16.01 → ubuntu-16.02
Changed in initramfs-tools (Ubuntu):
milestone: ubuntu-16.01 → ubuntu-16.02
Andy Whitcroft (apw)
Changed in lvm2 (Ubuntu):
milestone: ubuntu-16.02 → ubuntu-16.03
Changed in initramfs-tools (Ubuntu):
milestone: ubuntu-16.02 → ubuntu-16.03
Revision history for this message
Chaskiel Grundman (cg2v) wrote :

Also affects 18.04

The "easiest" fix I can see is to add the raid1 and raid10 modules to the manually added modules in /usr/share/initramfs-tools/hooks/lvm2 (dm_raid.ko depends on raid456.ko, but not raid1.ko or raid10.ko)

tags: added: rls-bb-incoming
Steve Langasek (vorlon)
no longer affects: initramfs-tools (Ubuntu)
no longer affects: initramfs-tools (Ubuntu Bionic)
no longer affects: initramfs-tools (Ubuntu Cosmic)
no longer affects: initramfs-tools (Ubuntu Disco)
no longer affects: initramfs-tools (Ubuntu Eoan)
tags: removed: rls-bb-incoming
Changed in lvm2 (Ubuntu Eoan):
assignee: Andy Whitcroft (apw) → nobody
milestone: ubuntu-16.03 → none
tags: added: id-5d0ba914f340e31a8e9f2cf1
Changed in lvm2 (Debian):
status: Unknown → Incomplete
Brad Figg (brad-figg)
tags: added: cscc
Steve Langasek (vorlon)
Changed in linux (Ubuntu Bionic):
status: New → Invalid
Changed in linux (Ubuntu Cosmic):
status: New → Invalid
Changed in linux (Ubuntu Disco):
status: New → Invalid
Changed in lvm2 (Ubuntu Cosmic):
status: New → Won't Fix
Changed in lvm2 (Ubuntu Eoan):
status: In Progress → Triaged
Revision history for this message
Launchpad Janitor (janitor) wrote :

This bug was fixed in the package lvm2 - 2.03.02-2ubuntu7

---------------
lvm2 (2.03.02-2ubuntu7) focal; urgency=medium

  * Include raid1 in the list of modules installed by the initramfs hook,
    as this is not a kernel module dependency of dm-raid but if the user's
    root disk is configured as RAID1 it is definitely required.
    Closes: #841423, LP: #1509717.

 -- Steve Langasek <email address hidden> Fri, 22 Nov 2019 13:59:47 -0800

Changed in lvm2 (Ubuntu):
status: In Progress → Fix Released
Changed in lvm2 (Ubuntu Disco):
status: New → Won't Fix
description: updated
Changed in lvm2 (Ubuntu Eoan):
status: Triaged → In Progress
Changed in lvm2 (Ubuntu Bionic):
status: New → In Progress
Revision history for this message
Łukasz Zemczak (sil2100) wrote : Please test proposed package

Hello MegaBrutal, or anyone else affected,

Accepted lvm2 into eoan-proposed. The package will build now and be available at https://launchpad.net/ubuntu/+source/lvm2/2.03.02-2ubuntu6.1 in a few hours, and then in the -proposed repository.

Please help us by testing this new package. See https://wiki.ubuntu.com/Testing/EnableProposed for documentation on how to enable and use -proposed. Your feedback will aid us getting this update out to other Ubuntu users.

If this package fixes the bug for you, please add a comment to this bug, mentioning the version of the package you tested and change the tag from verification-needed-eoan to verification-done-eoan. If it does not fix the bug for you, please add a comment stating that, and change the tag to verification-failed-eoan. In either case, without details of your testing we will not be able to proceed.

Further information regarding the verification process can be found at https://wiki.ubuntu.com/QATeam/PerformingSRUVerification . Thank you in advance for helping!

N.B. The updated package will be released to -updates after the bug(s) fixed by this package have been verified and the package has been in -proposed for a minimum of 7 days.

Changed in lvm2 (Ubuntu Eoan):
status: In Progress → Fix Committed
tags: added: verification-needed verification-needed-eoan
Changed in lvm2 (Ubuntu Bionic):
status: In Progress → Fix Committed
tags: added: verification-needed-bionic
Revision history for this message
Łukasz Zemczak (sil2100) wrote :

Hello MegaBrutal, or anyone else affected,

Accepted lvm2 into bionic-proposed. The package will build now and be available at https://launchpad.net/ubuntu/+source/lvm2/2.02.176-4.1ubuntu3.18.04.3 in a few hours, and then in the -proposed repository.

Please help us by testing this new package. See https://wiki.ubuntu.com/Testing/EnableProposed for documentation on how to enable and use -proposed. Your feedback will aid us getting this update out to other Ubuntu users.

If this package fixes the bug for you, please add a comment to this bug, mentioning the version of the package you tested and change the tag from verification-needed-bionic to verification-done-bionic. If it does not fix the bug for you, please add a comment stating that, and change the tag to verification-failed-bionic. In either case, without details of your testing we will not be able to proceed.

Further information regarding the verification process can be found at https://wiki.ubuntu.com/QATeam/PerformingSRUVerification . Thank you in advance for helping!

N.B. The updated package will be released to -updates after the bug(s) fixed by this package have been verified and the package has been in -proposed for a minimum of 7 days.

Revision history for this message
Ubuntu SRU Bot (ubuntu-sru-bot) wrote : Autopkgtest regression report (lvm2/2.03.02-2ubuntu6.1)

All autopkgtests for the newly accepted lvm2 (2.03.02-2ubuntu6.1) for eoan have finished running.
The following regressions have been reported in tests triggered by the package:

ganeti/2.16.0-5ubuntu1 (ppc64el)
systemd/242-7ubuntu3.2 (amd64)
resource-agents/1:4.2.0-1ubuntu2 (armhf)

Please visit the excuses page listed below and investigate the failures, proceeding afterwards as per the StableReleaseUpdates policy regarding autopkgtest regressions [1].

https://people.canonical.com/~ubuntu-archive/proposed-migration/eoan/update_excuses.html#lvm2

[1] https://wiki.ubuntu.com/StableReleaseUpdates#Autopkgtest_Regressions

Thank you!

Revision history for this message
Brian Murray (brian-murray) wrote : [lvm2/bionic] verification still needed

The fix for this bug has been awaiting testing feedback in the -proposed repository for bionic for more than 90 days. Please test this fix and update the bug appropriately with the results. In the event that the fix for this bug is still not verified 15 days from now, the package will be removed from the -proposed repository.

tags: added: removal-candidate
Revision history for this message
Julian Andres Klode (juliank) wrote :

Sorry, I'm working on verifying it; I got distracted by other things.

description: updated
description: updated
description: updated
description: updated
description: updated
Revision history for this message
Julian Andres Klode (juliank) wrote :

verified in bionic

I did all the steps (install VM, create PV, extend VG, remove mdadm, upgrade).

It booted after the upgrade to 2.02.176-4.1ubuntu3.18.04.3, and it failed again following a downgrade to 2.02.176-4.1ubuntu3.18.04.2 from the updates pocket.
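
For anyone repeating this, the package switch itself can be done with apt once bionic-proposed is enabled (a sketch; the version strings are the ones quoted above, and the initramfs is normally rebuilt by the package's triggers):

# apt install lvm2=2.02.176-4.1ubuntu3.18.04.3
# apt install --allow-downgrades lvm2=2.02.176-4.1ubuntu3.18.04.2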

tags: added: verification-done-bionic
removed: verification-needed-bionic
tags: added: id-5d0ba914f340e31a8e9f2cf
removed: id-5d0ba914f340e31a8e9f2cf1 removal-candidate
tags: added: id-5d0ba914f340e31a8e9f2cf1
Revision history for this message
Julian Andres Klode (juliank) wrote :

Eoan verified.

Eoan is tougher. I installed it with subiquity and tried to follow the steps, but the conversion to raid1 failed: subiquity apparently created the LV one PE larger than before, so lvconvert needed one free PE in the original PV to be able to add the raid1 metadata, which it did not have.

I worked around this by extending the PV size after the install (after first trying to shrink the existing LV, messing up the install, and reinstalling ...).

With the proposed lvm2 (ubuntu6.1) it boots as expected; downgrading to ubuntu6 from the release pocket breaks it again.

tags: added: verification-done verification-done-eoan
removed: verification-needed verification-needed-eoan
Revision history for this message
Julian Andres Klode (juliank) wrote :

The regressions in bionic seem unrelated to me adding the raid1 module to the list of modules installed to the initramfs.

Revision history for this message
Launchpad Janitor (janitor) wrote :

This bug was fixed in the package lvm2 - 2.02.176-4.1ubuntu3.18.04.3

---------------
lvm2 (2.02.176-4.1ubuntu3.18.04.3) bionic; urgency=medium

  [ Steve Langasek ]
  * Include raid1 in the list of modules installed by the initramfs hook,
    as this is not a kernel module dependency of dm-raid but if the user's
    root disk is configured as RAID1 it is definitely required.
    Closes: #841423, LP: #1509717.

 -- Julian Andres Klode <email address hidden> Thu, 23 Jan 2020 16:45:10 +0100

Changed in lvm2 (Ubuntu Bionic):
status: Fix Committed → Fix Released
Revision history for this message
Brian Murray (brian-murray) wrote : Update Released

The verification of the Stable Release Update for lvm2 has completed successfully and the package is now being released to -updates. Subsequently, the Ubuntu Stable Release Updates Team is being unsubscribed and will not receive messages about this bug report. In the event that you encounter a regression using the package from -updates please report a new bug using ubuntu-bug and tag the bug report regression-update so we can easily find any regressions.

Revision history for this message
Launchpad Janitor (janitor) wrote :

This bug was fixed in the package lvm2 - 2.03.02-2ubuntu6.1

---------------
lvm2 (2.03.02-2ubuntu6.1) eoan; urgency=medium

  [ Steve Langasek ]
  * Include raid1 in the list of modules installed by the initramfs hook,
    as this is not a kernel module dependency of dm-raid but if the user's
    root disk is configured as RAID1 it is definitely required.
    Closes: #841423, LP: #1509717.

 -- Julian Andres Klode <email address hidden> Thu, 23 Jan 2020 16:41:24 +0100

Changed in lvm2 (Ubuntu Eoan):
status: Fix Committed → Fix Released
Revision history for this message
MegaBrutal (qbu6to) wrote :

Hi Steve, Julian, Łukasz, everyone,

Sorry that I didn't test the package on time – my life circumstances have changed significantly since I reported this bug, and nowadays I don't have enough free time to do as much testing as I did back in the day. I've been using the workaround ever since; returning to this now, after more than 4 years, really feels nostalgic.

I still had my test VM from 2016, upgraded it all the way to Eoan, and confirmed the problem was present there as well. Then I installed the new lvm2 package that just landed in -updates, and indeed it fixed the issue. I know this finding is not so useful now that Julian has already tested it, but I wanted to see for myself.

Since this problem has been present since Wily and in every Ubuntu release onwards, are you going to port the fix to all currently supported releases? While it's fixed for Bionic and Eoan, Xenial, Focal, and the current development version (Groovy) are still affected. Now that I've caught up on this again, do you want me to verify the problem in all the remaining releases?

Anyway, thanks everyone who contributed to fixing this! :)

Revision history for this message
Julian Andres Klode (juliank) wrote :

That's certainly incorrect; lvm2 was fixed in focal (and hence groovy). That only leaves xenial, but updating xenial close to its EOL for something as unusual as this seems unnecessary. The supported configuration for raid1 is mdadm, after all, not removing mdadm and the metapackages that depend on it and using LVM's raid support.

Revision history for this message
MegaBrutal (qbu6to) wrote :

Sorry, I didn't actually test Focal and Groovy.

Xenial will be supported for about a year, so I think it's worth fixing there as well; however, what is worth fixing is relative, as most users have probably left Xenial behind already or applied a workaround if they were ever affected by this issue. On the other hand, it can be a real pain if you don't know about this, happen to convert your root LV to raid1, and then your system doesn't boot.

"The supported configuration for raid1 is mdadm after all, not removing mdadm and the meta packages that depend on it and using lvm's raid support."

I don't know where LVM over DM-RAID is defined as the "supported configuration" as opposed to RAID1 over LVM. I have always thought of them as equal alternatives for slightly different use cases. My rule of thumb: when you can afford identically sized disks and want to mirror everything, go with LVM over DM-RAID. If you can't guarantee identically sized disks and need a more dynamic solution, then RAID1 over LVM (i.e. using LVM's raid1 support) is more suitable. I recommend the former in production server environments and the latter in smaller SOHO setups, home servers, and personal computers. If one of my disks goes bye-bye, I wouldn't want to reinstall my system or lose my documents, so I have my / and /home LVs in raid1. However, I couldn't care less about my Steam library, because I can just re-download the games anytime, so I don't have it mirrored; I'd rather not have it use twice the precious disk space.

Revision history for this message
Julian Andres Klode (juliank) wrote :

There are two things:

- On servers, ubuntu-server depends on mdadm, so removing mdadm also removes the ubuntu-server package, which means such a setup is not really supported

- In general, the installers don't offer installing a configuration like that.

I don't know why we did not have a task for xenial.

description: updated
Changed in lvm2 (Debian):
status: Incomplete → Fix Released