pvcreate causing problems for dmraid
Affects: lvm2 (Ubuntu) | Status: Expired | Importance: Undecided | Assigned to: Unassigned
Bug Description
Binary package hint: dmraid
I'm encountering a problem where LVM2's pvcreate appears to corrupt the superblock of a RAID1 pair, which results in the initramfs not being able to mount it.
Hardware:
Intel D865GVHZ
(2) WD 200GiB PATA (sda/sdb)
(2) WD 400GiB SATA (sdc/sdd)
Attempted configuration:
sdc1+sdd1>md0>ext3 (/boot)
sdc2+sdd2>
sdc3+sdd3>md2>lvm (vg0)
sda+sdb>md3>lvm (vg0)
vg0>vg0-
vg0>vg0-
vg0>vg0-
vg0>vg0-
vg0>vg0-
vg0>vg0-lv5>ext3 (/local/
vg0>vg0-lv6>ext3 (/local/
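For reference, the layout above corresponds roughly to the command sequence below. This is only a hedged sketch: the commands are printed rather than executed (they are destructive and assume the device and VG names from this report), and it covers only the arrays and PVs that are spelled out above, not the truncated logical volumes.

```shell
#!/bin/sh
# Sketch only: prints the commands instead of running them, since they are
# destructive and the device/VG names are taken from this bug report.
plan() {
    echo "mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdc1 /dev/sdd1"
    echo "mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sdc3 /dev/sdd3"
    echo "mdadm --create /dev/md3 --level=1 --raid-devices=2 /dev/sda /dev/sdb"
    echo "pvcreate /dev/md2 /dev/md3"
    echo "vgcreate vg0 /dev/md2 /dev/md3"
}
plan
```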
In this setup only RAID1 arrays are used, and all crypt volumes except vg0-lv0_crypt use key files for passphrases. The Gutsy (and Hardy Alpha 2) i386 Alternate installer made a complete mess of this setup (bug #180269), so a ridiculous number of manual workarounds were needed to even attempt it. One issue was that Grub ended up on sda, which caused all kinds of spurious problems and warnings about invalid superblocks, bd_claim failures, etc., so I wiped sda and sdb and recreated the array manually.

I verified that md3 is accessible from the LiveCD and from the installed system. The sda and sdb superblocks look normal as far as I can tell. But once I run pvcreate on the array, fdisk reports it as messed up and a dd shows the superblock as zeroed, completely different from sdc/sdd. Yet mdadm seems to think it's normal, and lvm doesn't complain either. If I then add md3 to vg0, the system will not boot: md3 apparently doesn't get assembled in the initramfs, so vg0, vg0-lv0, vg0-lv0_crypt, and / are missing. I can start it manually from initramfs.

I attempted to work around the superblock issue by creating two partitions on sd[ab] and making only sd[ab]2 a raid, but it didn't solve the problem. I wasn't sure if the zeroed superblock was normal, so I asked about it but didn't get any answers:
https:/
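For anyone trying to reproduce the "zeroed superblock" observation: with metadata format 0.90 (the mdadm default at the time, as far as I know), the superblock lives in the last 64 KiB block at a 64 KiB-aligned offset on the device, so its location can be computed and inspected directly. The offset formula below is my understanding of the 0.90 on-disk layout, not something quoted from the mdadm documentation, so verify it against mdadm --examine before trusting it.

```shell
#!/bin/sh
# Compute the md metadata-0.90 superblock offset (in bytes) for a device of
# the given size in bytes: the last 64 KiB block at a 64 KiB-aligned offset.
# (Assumption based on the 0.90 layout; cross-check with mdadm --examine.)
sb_offset() {
    echo $(( ($1 / 65536) * 65536 - 65536 ))
}

# Usage (not executed here; requires root and the actual device):
#   off=$(sb_offset "$(blockdev --getsize64 /dev/sda)")
#   dd if=/dev/sda bs=1 skip="$off" count=256 2>/dev/null | hexdump -C
# An all-zero dump here is what this report describes after pvcreate.
```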
Changed in dmraid:
status: Invalid → New

Changed in lvm2 (Ubuntu):
status: New → Incomplete
After some more attempts, I finally figured out how to get the md3 raid set up the same way partman sets up md[0-2] on the Alternate installer CD. There have been some recurring issues that I forgot to mention and that may be a factor. Whenever I repartitioned sda and set the partition type to FD, the moment I exited cfdisk the dmraid system would assemble md3, forcing me to run mdadm -S /dev/md3 before I could repartition sdb. In addition, /dev/sda1 wouldn't show up while /dev/sdb1 did, yet fdisk -l showed that it existed on sda. mdadm didn't see it, so I'm not sure whether fdisk was reading it out of /proc/partitions or just assuming it existed. Without it I couldn't create md3 on sd[ab]1, only on sd[ab].

Thinking that was part of the problem, I tried various ways of getting sda1 to show up, including partprobe, various dd zeroings, and repartitioning. I noticed while doing this that cfdisk doesn't set the serial number in the superblock, so I used mklabel in parted to set it on both drives first. Finally I repartitioned sdb as ntfs, zeroed the first 0.5 MiB or so of sda, removed the md3 entry from mdadm.conf, and rebooted. I then repartitioned sda as type FD, and sda1 showed up. I created md3 on sda1 without a second device and rebooted again to make sure it was persistent. I then repartitioned sdb, added sdb1 as a hot spare, and let it resync.

After some more reboots and tests, I ran pvcreate on md3 and checked the superblocks - they remained intact this time. I rebooted and double-checked everything, then added md3 to vg0 and rebooted. It failed, and I ended up in the initramfs busybox. I removed md3 from vg0 and was able to boot again. So there definitely is a problem with booting from a vg with multiple pv devices, at least if they are md devices.

My workaround was to create a new volume group, vg1, containing only md3. I lose some lvm flexibility this way, but at least it usually boots. By "usually" I mean that an additional error now pops up every few boots:
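The separate-VG workaround described above amounts to something like the following. Again only a printed sketch: the vg1 and md3 names come from this report, while the logical-volume name is hypothetical.

```shell
#!/bin/sh
# Sketch of the separate-VG workaround; commands are printed, not run.
plan_vg1() {
    echo "pvcreate /dev/md3"
    echo "vgcreate vg1 /dev/md3"
    echo "lvcreate -n lv_local -l 100%FREE vg1"  # hypothetical LV name
}
plan_vg1
```

Keeping md3 out of the root VG means the initramfs only has to bring up vg0's single PV (md2) to mount /, which is why this sidesteps the multi-PV boot failure.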
device-mapper: table: 254:0: linear: dm-linear: Device lookup failed
device-mapper: ioctl: error adding target to table
Command failed: Not a block device
cryptsetup: cryptsetup failed, bad password or options?
I never get a LUKS passphrase prompt. I can't unlock the volume from initramfs either. All I can do is reboot and try again.
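For completeness, the manual recovery attempt from the busybox shell looks roughly like the steps below. This is a hedged sketch of typical commands, not a transcript of what was actually run, and as noted above the cryptsetup step does not succeed on the failing boots; the commands are printed rather than executed.

```shell
#!/bin/sh
# Printed sketch of a manual recovery attempt from the initramfs shell;
# device and mapping names are taken from the report, the exact steps
# are an assumption.
recovery_plan() {
    echo "mdadm --assemble /dev/md3 /dev/sda /dev/sdb"
    echo "lvm vgchange -ay"
    echo "cryptsetup luksOpen /dev/mapper/vg0-lv0 vg0-lv0_crypt"
}
recovery_plan
```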
I also noticed a suspicious error on the boot attempt immediately after removing md3 from vg0:
Starting kernel event manager... [ OK ]
* Loading hardware drivers...
error receiving uevent message: No buffer space available
I found some references to an old kernel bug but this message only showed up once so it may be unrelated.