Wily LVM-RAID1 – md: personality for level 1 is not loaded

Bug #1509717 reported by MegaBrutal on 2015-10-24
This bug affects 5 people
Affects                Importance  Assigned to
linux (Ubuntu)         Undecided   Unassigned
linux (Ubuntu Bionic)  Undecided   Unassigned
linux (Ubuntu Cosmic)  Undecided   Unassigned
linux (Ubuntu Disco)   Undecided   Unassigned
linux (Ubuntu Eoan)    Undecided   Unassigned
lvm2 (Debian)          Unknown     (status: Incomplete)
lvm2 (Ubuntu)          High        Unassigned
lvm2 (Ubuntu Bionic)   Undecided   Unassigned
lvm2 (Ubuntu Cosmic)   Undecided   Unassigned
lvm2 (Ubuntu Disco)    Undecided   Unassigned
lvm2 (Ubuntu Eoan)     High        Unassigned

Bug Description

[Impact]
The system does not boot after converting an LVM logical volume to RAID1 without mdadm installed.

[Test case]

1. Install Ubuntu Server with subiquity in a VM.
2. Add a second disk to it.
3. Run pvcreate /dev/vdb
4. Run vgextend ubuntu-vg /dev/vdb
5. Run lvconvert -m1 --type raid1 ubuntu-vg/lv
6. Reboot and check that the system still boots.
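Before step 6, you can check whether the raid1 module actually made it into the initramfs; lsinitramfs ships with initramfs-tools, and the path below assumes the initrd of the running kernel:

# lsinitramfs /boot/initrd.img-$(uname -r) | grep raid1

No output means raid1.ko is missing, and the reboot is expected to fail with the error quoted in the original report below.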

[Regression potential]
Very little: we only add the raid1 module to the initramfs, so it may be loaded during boot, and RAID1 logical volumes may appear earlier.

[Original bug report]
After upgrading to Wily, raid1 LVs don't activate during the initrd phase. Since the root LV is also RAID1-mirrored, the system doesn't boot.

I get the following message each time LVM tries to activate a raid1 LV:
md: personality for level 1 is not loaded!

Everything was fine with Vivid. I had to downgrade to the Vivid kernel (3.19.0-30) to get my system back to a usable state. I hope this is only a temporary workaround and that I'll get the new 4.2.0 kernel working with Wily within days.
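For reference, the loaded md personalities are listed on the first line of /proc/mdstat, and the missing one can be loaded by hand, assuming raid1.ko is available on disk (which is exactly what is not the case inside the broken initramfs):

# cat /proc/mdstat
# modprobe raid1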

This bug is missing log files that will aid in diagnosing the problem. From a terminal window please run:

apport-collect 1509717

and then change the status of the bug to 'Confirmed'.

If, due to the nature of the issue you have encountered, you are unable to run this command, please add a comment stating that fact and change the bug status to 'Confirmed'.

This change has been made by an automated script, maintained by the Ubuntu Kernel Team.

Changed in linux (Ubuntu):
status: New → Incomplete
MegaBrutal (qbu6to) wrote :

I can't collect logs on a non-booting system.

Changed in linux (Ubuntu):
status: Incomplete → Confirmed
MegaBrutal (qbu6to) on 2015-10-25
tags: added: regression-release wily
MegaBrutal (qbu6to) wrote :

Reproducible: I upgraded another Ubuntu installation in a VM and got the same result.
Since the bug prevents booting, I suggest increasing the priority to High.

Andy Whitcroft (apw) wrote :

It seems that the module is built and installed into /lib/modules, but is not included in the initramfs. Sounds like an initramfs-tools bug.
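Both halves of that observation can be checked directly: the find below shows the module present on disk, while grepping the lsinitramfs output for the running kernel (as in the test case above) comes back empty.

# find /lib/modules/$(uname -r) -name 'raid1.ko*'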

Changed in initramfs-tools (Ubuntu):
status: New → In Progress
importance: Undecided → High
assignee: nobody → Andy Whitcroft (apw)
milestone: none → ubuntu-15.11
Changed in lvm2 (Ubuntu):
status: New → Invalid
Changed in linux (Ubuntu):
status: Confirmed → Invalid
Andy Whitcroft (apw) on 2015-10-26
Changed in lvm2 (Ubuntu):
status: Invalid → In Progress
importance: Undecided → High
assignee: nobody → Andy Whitcroft (apw)
milestone: none → ubuntu-15.11
Andy Whitcroft (apw) on 2015-12-07
Changed in lvm2 (Ubuntu):
milestone: ubuntu-15.11 → ubuntu-15.12
Changed in initramfs-tools (Ubuntu):
milestone: ubuntu-15.11 → ubuntu-15.12
Thomas Johnson (ntmatter) wrote :

As a workaround, you can add the relevant modules to the initramfs. A quick walkthrough is as follows:

- Boot from the install media, and choose "Rescue a broken system."
- Run through all of the basic configuration steps, configuring location, keyboard, networking, timezone, etc.
- When prompted for the root filesystem, select your usual root volume (e.g., /dev/my-vg/root)
- Also mount a separate /boot partition
- Execute a shell in /dev/my-vg/root
- Type "mount" and ensure that the correct / and /boot volumes are actually mounted.
- Add the raid1 and mirror modules to /etc/initramfs-tools/modules, and rebuild the initramfs:
# echo raid1 >> /etc/initramfs-tools/modules
# echo dm_mirror >> /etc/initramfs-tools/modules
# update-initramfs -u
- Exit the shell and reboot the system. Don't forget to remove the install media!

Andy Whitcroft (apw) on 2016-01-19
Changed in lvm2 (Ubuntu):
milestone: ubuntu-15.12 → ubuntu-16.01
Changed in initramfs-tools (Ubuntu):
milestone: ubuntu-15.12 → ubuntu-16.01
MegaBrutal (qbu6to) wrote :

Thanks for the workaround! It seems adding raid1 is enough.

Andy Whitcroft (apw) on 2016-02-01
Changed in lvm2 (Ubuntu):
milestone: ubuntu-16.01 → ubuntu-16.02
Changed in initramfs-tools (Ubuntu):
milestone: ubuntu-16.01 → ubuntu-16.02
Andy Whitcroft (apw) on 2016-03-10
Changed in lvm2 (Ubuntu):
milestone: ubuntu-16.02 → ubuntu-16.03
Changed in initramfs-tools (Ubuntu):
milestone: ubuntu-16.02 → ubuntu-16.03
Chaskiel Grundman (cg2v) wrote :

Also affects 18.04

The "easiest" fix I can see is to add the raid1 and raid10 modules to the manually added modules in /usr/share/initramfs-tools/hooks/lvm2 (dm_raid.ko depends on raid456.ko, but not raid1.ko or raid10.ko)

tags: added: rls-bb-incoming
Steve Langasek (vorlon) on 2019-06-20
no longer affects: initramfs-tools (Ubuntu)
no longer affects: initramfs-tools (Ubuntu Bionic)
no longer affects: initramfs-tools (Ubuntu Cosmic)
no longer affects: initramfs-tools (Ubuntu Disco)
no longer affects: initramfs-tools (Ubuntu Eoan)
tags: removed: rls-bb-incoming
Changed in lvm2 (Ubuntu Eoan):
assignee: Andy Whitcroft (apw) → nobody
milestone: ubuntu-16.03 → none
tags: added: id-5d0ba914f340e31a8e9f2cf1
Changed in lvm2 (Debian):
status: Unknown → Incomplete
Brad Figg (brad-figg) on 2019-07-24
tags: added: cscc
Steve Langasek (vorlon) on 2019-11-22
Changed in linux (Ubuntu Bionic):
status: New → Invalid
Changed in linux (Ubuntu Cosmic):
status: New → Invalid
Changed in linux (Ubuntu Disco):
status: New → Invalid
Changed in lvm2 (Ubuntu Cosmic):
status: New → Won't Fix
Changed in lvm2 (Ubuntu Eoan):
status: In Progress → Triaged
Launchpad Janitor (janitor) wrote :

This bug was fixed in the package lvm2 - 2.03.02-2ubuntu7

---------------
lvm2 (2.03.02-2ubuntu7) focal; urgency=medium

  * Include raid1 in the list of modules installed by the initramfs hook,
    as this is not a kernel module dependency of dm-raid but if the user's
    root disk is configured as RAID1 it is definitely required.
    Closes: #841423, LP: #1509717.

 -- Steve Langasek <email address hidden> Fri, 22 Nov 2019 13:59:47 -0800

Changed in lvm2 (Ubuntu):
status: In Progress → Fix Released
Changed in lvm2 (Ubuntu Disco):
status: New → Won't Fix
description: updated
Changed in lvm2 (Ubuntu Eoan):
status: Triaged → In Progress
Changed in lvm2 (Ubuntu Bionic):
status: New → In Progress

Hello MegaBrutal, or anyone else affected,

Accepted lvm2 into eoan-proposed. The package will build now and be available at https://launchpad.net/ubuntu/+source/lvm2/2.03.02-2ubuntu6.1 in a few hours, and then in the -proposed repository.

Please help us by testing this new package. See https://wiki.ubuntu.com/Testing/EnableProposed for documentation on how to enable and use -proposed. Your feedback will aid us in getting this update out to other Ubuntu users.

If this package fixes the bug for you, please add a comment to this bug, mentioning the version of the package you tested and change the tag from verification-needed-eoan to verification-done-eoan. If it does not fix the bug for you, please add a comment stating that, and change the tag to verification-failed-eoan. In either case, without details of your testing we will not be able to proceed.

Further information regarding the verification process can be found at https://wiki.ubuntu.com/QATeam/PerformingSRUVerification . Thank you in advance for helping!

N.B. The updated package will be released to -updates after the bug(s) fixed by this package have been verified and the package has been in -proposed for a minimum of 7 days.
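One minimal way to pull just this package from -proposed, sketched with a throwaway sources entry (the EnableProposed page above describes the recommended pinning setup; the file name here is arbitrary):

# echo "deb http://archive.ubuntu.com/ubuntu eoan-proposed main universe" > /etc/apt/sources.list.d/eoan-proposed.list
# apt-get update
# apt-get install lvm2/eoan-proposed

Installing the new lvm2 should regenerate the initramfs via the initramfs-tools trigger, after which the lsinitramfs check from the test case above can serve as verification.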

Changed in lvm2 (Ubuntu Eoan):
status: In Progress → Fix Committed
tags: added: verification-needed verification-needed-eoan
Changed in lvm2 (Ubuntu Bionic):
status: In Progress → Fix Committed
tags: added: verification-needed-bionic
Łukasz Zemczak (sil2100) wrote :

Hello MegaBrutal, or anyone else affected,

Accepted lvm2 into bionic-proposed. The package will build now and be available at https://launchpad.net/ubuntu/+source/lvm2/2.02.176-4.1ubuntu3.18.04.3 in a few hours, and then in the -proposed repository.

Please help us by testing this new package. See https://wiki.ubuntu.com/Testing/EnableProposed for documentation on how to enable and use -proposed. Your feedback will aid us in getting this update out to other Ubuntu users.

If this package fixes the bug for you, please add a comment to this bug, mentioning the version of the package you tested and change the tag from verification-needed-bionic to verification-done-bionic. If it does not fix the bug for you, please add a comment stating that, and change the tag to verification-failed-bionic. In either case, without details of your testing we will not be able to proceed.

Further information regarding the verification process can be found at https://wiki.ubuntu.com/QATeam/PerformingSRUVerification . Thank you in advance for helping!

N.B. The updated package will be released to -updates after the bug(s) fixed by this package have been verified and the package has been in -proposed for a minimum of 7 days.

All autopkgtests for the newly accepted lvm2 (2.03.02-2ubuntu6.1) for eoan have finished running.
The following regressions have been reported in tests triggered by the package:

ganeti/2.16.0-5ubuntu1 (ppc64el)
systemd/242-7ubuntu3.2 (amd64)
resource-agents/1:4.2.0-1ubuntu2 (armhf)

Please visit the excuses page listed below and investigate the failures, proceeding afterwards as per the StableReleaseUpdates policy regarding autopkgtest regressions [1].

https://people.canonical.com/~ubuntu-archive/proposed-migration/eoan/update_excuses.html#lvm2

[1] https://wiki.ubuntu.com/StableReleaseUpdates#Autopkgtest_Regressions

Thank you!
