Are you sure this is fixed for Lucid?
I'm running the kernel: 2.6.32-24.40~pre201008060902 from https://launchpad.net/~kernel-ppa/+archive/pre-proposed (as discussed in bug 605551). Looking at its changelog, it appears to contain the fix for this.
I'm still seeing the hang when creating an lvm snapshot volume.
basic info on my system config:
disks sda and sdb, each 2TB SATA. each has 4 partitions. (bios_grub, raid for swap, raid for root, raid for lvm)
md0 -> RAID1 on sda2, sdb2 -- used for swap
md1 -> RAID1 on sda3, sdb3 -- used for root
md2 -> RAID1 on sda4, sdb4 -- used for lvm VG 'blue'
to repro (reliable on my config, 5/5 so far), paste into a root shell:
lvcreate -n TEST -L 10G blue
mkfs.ext4 /dev/mapper/blue-TEST
mount /dev/mapper/blue-TEST /mnt
sleep 3
touch /mnt/foo
lvcreate -s -n TEST2 -L 10G /dev/blue/TEST &
After a couple of minutes, I get the hang messages in dmesg from a flush task and from lvcreate.
see attached dmesg.