30 seconds boot delay when root fs is on lvm

Bug #1807499 reported by Eduard Hasenleithner on 2018-12-08
lvm2 (Ubuntu)

Bug Description

On my system the root filesystem is located on a logical volume, and I see a constant boot delay of 30 seconds. My current conclusion is that the delay exists because lvm2 performs its initramfs-tools local-top initialization incorrectly.

According to initramfs-tools(8):
  local-top OR nfs-top
         After these scripts have been executed, the root device node is
         expected to be present (local) or the network interface is
         expected to be usable (NFS).

But all /usr/share/initramfs-tools/scripts/local-top/lvm2 does is try to activate the requested root volume by scanning the block devices available at that moment. What happens on my system is that local-top/lvm2 is executed before the PV block device (a SATA SSD) shows up. initramfs-tools then calls the wait-for-root executable, which waits for the root device node to be announced via udev.

This wait has a (minimum) timeout of 30 seconds configured, and the timeout is always exceeded, since nothing ever causes the root volume device to be created. Only later (I guess in local-block) is the root volume device created, the PV being available by then.
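In effect the boot degenerates into a plain wait-with-timeout on the device node. A simplified, hypothetical model of that wait (the real wait-for-root listens for udev events rather than polling; the 30 s default matches the timeout described above):

```shell
# Simplified stand-in for wait-for-root: wait for a device node to
# appear, giving up after a timeout (default 30 s, as in this report).
# The real binary subscribes to udev events instead of polling.
wait_for_device() {
	dev="$1"
	timeout="${2:-30}"
	elapsed=0
	while [ ! -e "$dev" ]; do
		[ "$elapsed" -ge "$timeout" ] && return 1
		sleep 1
		elapsed=$((elapsed + 1))
	done
	return 0
}
```

Since local-top/lvm2 has already run and nothing else activates the LV, this wait can only ever expire.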

ProblemType: Bug
DistroRelease: Ubuntu 18.04
Package: lvm2 2.02.176-4.1ubuntu3
ProcVersionSignature: Ubuntu 4.15.0-42.45-generic 4.15.18
Uname: Linux 4.15.0-42-generic x86_64
ApportVersion: 2.20.9-0ubuntu7.5
Architecture: amd64
CurrentDesktop: ubuntu:GNOME
Date: Sat Dec 8 13:52:06 2018
 PATH=(custom, no user)
SourcePackage: lvm2
UpgradeStatus: No upgrade log present (probably fresh install)

At the moment I can see the following solutions:

Turn the lvchange_activate function in /usr/share/initramfs-tools/scripts/local-top/lvm2 into a while loop that executes the "lvchange" command repeatedly until the root LV appears.
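A minimal sketch of that first option, keeping the original lvchange invocation; the retry cap and 0.1 s delay are illustrative values, not from the report:

```shell
# Hypothetical retry variant of lvchange_activate for
# /usr/share/initramfs-tools/scripts/local-top/lvm2: re-run lvchange
# until it succeeds (i.e. the PV has appeared), with a retry cap so a
# genuinely missing volume cannot hang the boot forever.
lvchange_activate() {
	_tries=0
	until lvm lvchange -aay -y --sysinit --ignoreskippedcluster "$@"; do
		_tries=$((_tries + 1))
		[ "$_tries" -ge 300 ] && return 1   # give up after ~30 s
		sleep 0.1
	done
}
```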

Utilize udevd to create the LV devices while wait-for-root is waiting for the root LV to appear. A hack for doing so is to replace line 96 of /lib/udev/rules.d/69-lvm-metad.rules,
  ACTION!="remove", ENV{LVM_PV_GONE}=="1", RUN+="/bin/systemd-run /sbin/lvm pvscan --cache $major:$minor --activate ay", GOTO="lvm_end"
with
  ACTION!="remove", RUN+="/sbin/lvm pvscan --cache $major:$minor --activate ay"
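One caveat worth noting with either approach: the initramfs contains its own copies of the udev rules and the local-top script, so an edit on the installed system only takes effect after the initramfs is regenerated. A sketch of the usual Ubuntu steps (update-initramfs and lsinitramfs are both part of initramfs-tools):

```shell
# Rebuild the initramfs for the running kernel so the edited udev rule
# (or local-top script) is actually copied into it.
sudo update-initramfs -u

# Optionally verify the rule file is present in the rebuilt image.
lsinitramfs /boot/initrd.img-$(uname -r) | grep 69-lvm-metad
```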

gongxp (453059375-c) wrote:

According to your post, I have modified line 96 of /lib/udev/rules.d/69-lvm-metad.rules, but it did not help:
  # ACTION!="remove", ENV{LVM_PV_GONE}=="1", RUN+="/usr/bin/systemd-run /sbin/lvm pvscan --cache $major:$minor", GOTO="lvm_end"
  ACTION!="remove", RUN+="/sbin/lvm pvscan --cache $major:$minor --activate ay"

Could you please tell me how to modify /usr/share/initramfs-tools/scripts/local-top/lvm2 ?


For reference, /usr/share/initramfs-tools/scripts/local-top/lvm2 currently reads:

#!/bin/sh

PREREQ="mdadm mdrun multipath"

prereqs()
{
  echo "$PREREQ"
}

case $1 in
# get pre-requisites
prereqs)
  prereqs
  exit 0
  ;;
esac

if [ ! -e /sbin/lvm ]; then
  exit 0
fi

lvchange_activate() {
  lvm lvchange -aay -y --sysinit --ignoreskippedcluster "$@"
}

activate() {
  local dev="$1"

  # Make sure that we have a non-empty argument
  if [ -z "$dev" ]; then
    return 1
  fi

  case "$dev" in
  # Take care of lilo boot arg, risky activating of all vg
  fe[0-9]*)
    lvchange_activate
    ;;
  # FIXME: check major
  /dev/root)
    lvchange_activate
    ;;
  /dev/mapper/*)
    eval $(dmsetup splitname --nameprefixes --noheadings --rows "${dev#/dev/mapper/}")
    if [ "$DM_VG_NAME" ] && [ "$DM_LV_NAME" ]; then
      lvchange_activate "$DM_VG_NAME/$DM_LV_NAME"
    fi
    ;;
  /dev/*)
    # Could be /dev/VG/LV; use lvs to check
    if lvm lvs -- "$dev" >/dev/null 2>&1; then
      lvchange_activate "$dev"
    fi
    ;;
  esac
}

activate "$ROOT"
activate "$resume"

exit 0
