Cannot boot server using mdadm RAID1 after creating LVM snapshot

Bug #952063 reported by Roger Hunwicks on 2012-03-11
This bug affects 3 people
Affects: lvm2 (Ubuntu)

Bug Description

I'm running a test server on Precise to act as a KVM host. The virtual machines use LVM logical volumes as backing stores. I created several virtual machines by taking an LVM snapshot of a base image. After I rebooted the server, it no longer boots.
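For reference, the snapshots were created with ordinary lvcreate commands along these lines (the sizes and volume names here are only examples, not the exact ones on this server):

  # base image LV, then one copy-on-write snapshot per guest
  lvcreate -L 20G -n base vgVMs
  lvcreate -s -L 5G -n vm1 /dev/vgVMs/base
  lvcreate -s -L 5G -n vm2 /dev/vgVMs/base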

After switching grub to use the console, I get messages similar to the ones reported here, i.e. "Snapshot cow pairing for exception table handover failed".

The server has two 2 TB SATA drives, each partitioned into a smallish sdX1 and a large sdX5. sda1/sdb1 make up the mdadm RAID 1 array md0, and sda5/sdb5 make up md1. Both md devices are clean. md0 contains vgSystem, with logical volumes for root, boot and swap; md1 contains vgVMs, with logical volumes for all the virtual machines.
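To make the layout concrete, the storage stack was built roughly like this (an approximate reconstruction, not the exact commands that were run):

  mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
  mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda5 /dev/sdb5
  pvcreate /dev/md0 /dev/md1
  vgcreate vgSystem /dev/md0   # LVs for root, boot and swap
  vgcreate vgVMs /dev/md1      # LVs backing the virtual machines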

I can boot the server off a USB stick, assemble the md arrays and enable the volume groups, and everything seems intact.
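From the live system the recovery steps are roughly:

  mdadm --assemble --scan         # brings up md0 and md1
  vgchange -ay                    # activates vgSystem and vgVMs
  mount /dev/vgSystem/root /mnt   # LV name as described above; mounts cleanly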

ProblemType: Bug
DistroRelease: Ubuntu 12.04
Package: grub-pc 1.99-17ubuntu1
ProcVersionSignature: Ubuntu 3.2.0-18.28-generic 3.2.9
Uname: Linux 3.2.0-18-generic x86_64
ApportVersion: 1.94.1-0ubuntu2
Architecture: amd64
Date: Sun Mar 11 11:16:44 2012
InstallationMedia: Ubuntu-Server 12.04 LTS "Precise Pangolin" - Alpha amd64 (20120130)
ProcEnviron:
 PATH=(custom, no user)
SourcePackage: grub2
UpgradeStatus: No upgrade log present (probably fresh install)

description: updated
Phillip Susi (psusi) wrote:

This appears to be a bug in lvm, not grub.

affects: grub2 (Ubuntu) → lvm2 (Ubuntu)

When it boots, it pauses for a long time trying to mount the root volume, and then the screen shows:

udevd[119]: 'watershed sh -c '/sbin/lvm vgscan; /sbin/lvm vgchange -ay'' [264] terminated by signal 9 (Killed)
device-mapper: table: 252:8: snapshot: Snapshot cow pairing for exception table handover failed
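From the initramfs shell or the live system, the device-mapper state behind that 252:8 error can be inspected with something like:

  dmsetup ls        # maps dm device names to major:minor numbers
  dmsetup table     # shows the snapshot / snapshot-origin targets
  dmsetup status    # per-target status, including COW usage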

The logical volumes all seem to work correctly after booting into a live CD, i.e. I can assemble the RAID arrays, enable the volume groups and then mount the logical volumes - at least the ones that contain actual filesystems; I haven't done anything with the ones that contain VMs.

I can install ubuntu-libvirt-host in the USB live CD and then run the virtual machines, so all the LVM volume groups seem to be intact.

I don't seem to have any broken volume groups, but does the fact that the boot process runs vgchange -ay mean that the boot will fail if any of the LVM volume groups cannot be activated, even if that volume group is not needed to start the system?
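If that is the case, one possible workaround (untested here) might be to limit boot-time activation to the system volume group, e.g. with a volume_list entry in the activation section of /etc/lvm/lvm.conf, then rebuilding the initramfs:

  # in /etc/lvm/lvm.conf, activation section (example value):
  #     volume_list = [ "vgSystem" ]
  update-initramfs -u   # so the setting takes effect in early boot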

Launchpad Janitor (janitor) wrote:

Status changed to 'Confirmed' because the bug affects multiple users.

Changed in lvm2 (Ubuntu):
status: New → Confirmed