Wily LVM-RAID1 – md: personality for level 1 is not loaded

Bug #1509717 reported by MegaBrutal on 2015-10-24
This bug affects 5 people
Affects:
    linux (Ubuntu), status tracked in Eoan
        Bionic: Undecided, Unassigned
        Cosmic: Undecided, Unassigned
        Disco:  Undecided, Unassigned
        Eoan:   Undecided, Unassigned
    lvm2 (Debian): Unknown / Unknown
    lvm2 (Ubuntu), status tracked in Eoan
        Bionic: Undecided, Unassigned
        Cosmic: Undecided, Unassigned
        Disco:  Undecided, Unassigned
        Eoan:   High, Unassigned

Bug Description

After upgrading to Wily, raid1 LVs don't activate during the initrd phase. Since the root LV is also RAID1-mirrored, the system doesn't boot.

I get the following message each time LVM tries to activate a raid1 LV:
md: personality for level 1 is not loaded!

Everything was fine with Vivid. I had to downgrade to the Vivid kernel (3.19.0-30) to get my system back to a usable state. I hope this is only a temporary workaround and that I'll get the new 4.2.0 kernel working with Wily within days.
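
(For reference, the LVs that use the raid1 segment type can be listed from a working environment, e.g. under the Vivid kernel, with lvs; "my-vg" below is only a placeholder volume group name.)

$ sudo lvs -o lv_name,vg_name,segtype my-vg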

This bug is missing log files that will aid in diagnosing the problem. From a terminal window please run:

apport-collect 1509717

and then change the status of the bug to 'Confirmed'.

If, due to the nature of the issue you have encountered, you are unable to run this command, please add a comment stating that fact and change the bug status to 'Confirmed'.

This change has been made by an automated script, maintained by the Ubuntu Kernel Team.

Changed in linux (Ubuntu):
status: New → Incomplete
MegaBrutal (qbu6to) wrote :

I can't collect logs on a non-booting system.

Changed in linux (Ubuntu):
status: Incomplete → Confirmed
MegaBrutal (qbu6to) on 2015-10-25
tags: added: regression-release wily
MegaBrutal (qbu6to) wrote :

Reproducible: I upgraded another Ubuntu installation in a VM and got the same result.
Since the bug prevents booting, I suggest increasing the priority to High.

Andy Whitcroft (apw) wrote :

It seems that the module is built and installed into /lib/modules, but is not included in the initramfs. Sounds like an initramfs-tools bug.
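
(A quick way to confirm that on an affected install, assuming the running Wily 4.2 kernel: the first command should print the raid1 module file shipped with the kernel package, while the second produces no output because raid1 is missing from the generated initramfs.)

$ find /lib/modules/$(uname -r)/kernel/drivers/md -name 'raid1.ko'
$ lsinitramfs /boot/initrd.img-$(uname -r) | grep raid1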

Changed in initramfs-tools (Ubuntu):
status: New → In Progress
importance: Undecided → High
assignee: nobody → Andy Whitcroft (apw)
milestone: none → ubuntu-15.11
Changed in lvm2 (Ubuntu):
status: New → Invalid
Changed in linux (Ubuntu):
status: Confirmed → Invalid
Andy Whitcroft (apw) on 2015-10-26
Changed in lvm2 (Ubuntu):
status: Invalid → In Progress
importance: Undecided → High
assignee: nobody → Andy Whitcroft (apw)
milestone: none → ubuntu-15.11
Andy Whitcroft (apw) on 2015-12-07
Changed in lvm2 (Ubuntu):
milestone: ubuntu-15.11 → ubuntu-15.12
Changed in initramfs-tools (Ubuntu):
milestone: ubuntu-15.11 → ubuntu-15.12
Thomas Johnson (ntmatter) wrote :

As a workaround, you can add the relevant modules to the initramfs. A quick walkthrough is as follows:

- Boot from the install media, and choose "Rescue a broken system."
- Run through all of the basic configuration steps, configuring location, keyboard, networking, timezone, etc.
- When prompted for the root filesystem, select your usual root volume (e.g., /dev/my-vg/root)
- Also mount your separate /boot partition, if you have one
- Execute a shell in /dev/my-vg/root
- Type "mount" and ensure that the correct / and /boot volumes are actually mounted.
- Add the raid1 and dm_mirror modules to /etc/initramfs-tools/modules, and rebuild the initramfs (a quick verification step is sketched after this list):
# echo raid1 >> /etc/initramfs-tools/modules
# echo dm_mirror >> /etc/initramfs-tools/modules
# update-initramfs -u
- Exit out of the shell, and reboot the system. Don't forget to remove the install media!
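
(Before exiting the shell and rebooting, it may be worth confirming that the rebuilt initramfs really contains the module; replace <version> with the installed kernel's version, since uname -r inside the rescue environment reports the rescue kernel.)

# lsinitramfs /boot/initrd.img-<version> | grep raid1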

Andy Whitcroft (apw) on 2016-01-19
Changed in lvm2 (Ubuntu):
milestone: ubuntu-15.12 → ubuntu-16.01
Changed in initramfs-tools (Ubuntu):
milestone: ubuntu-15.12 → ubuntu-16.01
MegaBrutal (qbu6to) wrote :

Thanks for the workaround! It seems adding raid1 is enough.

Andy Whitcroft (apw) on 2016-02-01
Changed in lvm2 (Ubuntu):
milestone: ubuntu-16.01 → ubuntu-16.02
Changed in initramfs-tools (Ubuntu):
milestone: ubuntu-16.01 → ubuntu-16.02
Andy Whitcroft (apw) on 2016-03-10
Changed in lvm2 (Ubuntu):
milestone: ubuntu-16.02 → ubuntu-16.03
Changed in initramfs-tools (Ubuntu):
milestone: ubuntu-16.02 → ubuntu-16.03
Chaskiel Grundman (cg2v) wrote :

Also affects 18.04

The "easiest" fix I can see is to add the raid1 and raid10 modules to the manually added modules in /usr/share/initramfs-tools/hooks/lvm2 (dm_raid.ko depends on raid456.ko, but not on raid1.ko or raid10.ko).
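
A rough sketch of what those additions could look like inside the hook (the surrounding contents of /usr/share/initramfs-tools/hooks/lvm2 differ between releases and are omitted here):

. /usr/share/initramfs-tools/hook-functions

# dm_raid pulls in raid456 through module dependencies, but raid1 and
# raid10 have to be requested explicitly for LVM RAID1/RAID10 volumes
manual_add_modules raid1
manual_add_modules raid10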

tags: added: rls-bb-incoming
no longer affects: initramfs-tools (Ubuntu)
no longer affects: initramfs-tools (Ubuntu Bionic)
no longer affects: initramfs-tools (Ubuntu Cosmic)
no longer affects: initramfs-tools (Ubuntu Disco)
no longer affects: initramfs-tools (Ubuntu Eoan)
tags: removed: rls-bb-incoming
Changed in lvm2 (Ubuntu Eoan):
assignee: Andy Whitcroft (apw) → nobody
milestone: ubuntu-16.03 → none