LVM2 - flock failed: Interrupted system call

Bug #658144 reported by ion-ral
This bug affects 1 person
Affects:     lvm2 (Ubuntu)
Status:      New
Importance:  Undecided
Assigned to: Unassigned
Nominated for Lucid by ion-ral

Bug Description

Binary package hint: lvm2

While removing a snapshot, possibly when the snapshot has filled up, the system can no longer handle any LV, VG or PV command, and the error it issues is this:

 flock failed: Interrupted system call

It seems this problem has already been reported in previous versions (Hardy), but I did not find any solution...
Is there any remedy in the commands used to create the snapshots, or does it have to be fixed in the code?

I found this problem in lucid (10.04.1)

ion-ral (daniele-nuzzo) wrote:

Is it possible that no one can say anything about this? Can no one take charge of this problem? I am not joking; I have already run into it more than once, and now on other servers as well...
Sorry for the outburst, but I think this tool is useless if no one answers...

ion-ral (daniele-nuzzo) wrote:

OK, I was told that there were too few details, so let me explain better...

My hardware configuration is two nodes (IBM x3650 M2), each with dual Xeon CPUs (8 cores), 16 GB of RAM, and an LSI SAS RAID controller in RAID10.

The software configuration consists of two Ubuntu Server 10.04.1 amd64 installations, partitioned as follows:

/boot  ext4  100 MB
/      ext4  10000 MB
swap         16000 MB

The rest of the unpartitioned space is used by DRBD version 8.3.7.

The DRBD device is used as a PV in the LVM configuration (lvm2 2.02.54-1ubuntu4).

root@node1:~# pvs
  PV         VG   Fmt  Attr PSize   PFree
  /dev/drbd0 vme  lvm2 a-   532.18g 66.18g

root@node1:~# vgs
  VG   #PV #LV #SN Attr   VSize   VFree
  vme    1   5   0 wz--n- 532.18g 66.18g

root@node1:~# lvs
  LV   VG   Attr   LSize   Origin Snap%  Move Log Copy%  Convert
  vm1  vme  -wi-a-  21.00g
  vm2  vme  -wi-a-  51.00g
  vm3  vme  -wi-a- 131.00g
  vm4  vme  -wi-ao 101.00g
  vm5  vme  -wi-ao 162.00g
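
For reference, this PV/VG/LV layout could have been set up with commands along these lines (a sketch only, assuming the names and sizes shown in the output above; these are not the exact commands that were run):

pvcreate /dev/drbd0
vgcreate vme /dev/drbd0
lvcreate -n vm1 -L 21G vme
lvcreate -n vm2 -L 51G vme
lvcreate -n vm3 -L 131G vme
lvcreate -n vm4 -L 101G vme
lvcreate -n vm5 -L 162G vme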

Each LV is the hard disk of a VM in the KVM configuration (0.12.3-0ubuntu9.2+noroms).

Every night, a script that I wrote makes an LVM snapshot, then copies the blocks with dd_rescue (1.14-1), and then removes the snapshot. I write the commands in sequence:

lvcreate -n vm5S -L 33G -s /dev/vme/vm5

dd_rescue -q -e 5 -l /tmp/dd_rescue.log -a -w /dev/vme/vm5S /tmp/vmebackup/vm5/vm5.raw

(Of course, /tmp/vmebackup is a directory where a USB disk is mounted.)
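
Putting the three steps together, the nightly job presumably looks roughly like the sketch below. The loop, the variable names, the snapshot size per LV and the lvremove invocation are illustrative assumptions; only the lvcreate and dd_rescue lines above are quoted from the actual script.

#!/bin/sh
# Hypothetical sketch of the nightly backup sequence described above.
# VG name, LV list, snapshot size and backup path are assumptions.
VG=vme
SNAP_SIZE=33G
BACKUP_DIR=/tmp/vmebackup

for LV in vm1 vm2 vm3 vm4 vm5; do
    # 1. create a snapshot of the logical volume
    lvcreate -n "${LV}S" -L "$SNAP_SIZE" -s "/dev/$VG/$LV" || exit 1

    # 2. copy the snapshot block device to the backup disk
    dd_rescue -q -e 5 -l /tmp/dd_rescue.log -a -w \
        "/dev/$VG/${LV}S" "$BACKUP_DIR/$LV/$LV.raw"

    # 3. remove the snapshot (the step that occasionally hangs)
    lvremove -f "/dev/$VG/${LV}S"
done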

Usually this procedure completes without any problems, but every now and then the removal of the last snapshot fails, or rather, lvremove remains suspended, and from that point on any other LVM command also hangs with no way out...

For example, if I then run lvs it too hangs, and when I press Ctrl-C it gives me the error in question:
/var/lock/lvm/V_vme: flock failed: Interrupted system call

Well, that's all...
If you need more details, let me know.
Thanks

Alasdair G. Kergon (agk2) wrote:

"flock failed" happens if you interrupt (e.g. by pressing Ctrl-C) an lvm command that is waiting to obtain a lock.

Commands hanging can be caused by races, sometimes involving asynchronous commands issued by udev.
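
To see the same mechanism outside LVM, here is a rough two-terminal illustration using the flock(1) utility. The lock file name is arbitrary, and this is not the LVM code path itself, which takes its lock on /var/lock/lvm/V_<vgname>:

# terminal 1: take an exclusive lock and hold it for a while
flock /tmp/demo.lock -c 'sleep 120'

# terminal 2: blocks waiting for the same lock; pressing Ctrl-C here
# aborts the wait, just as Ctrl-C aborts an lvm command that is
# blocked in flock() on its VG lock file
flock /tmp/demo.lock -c 'echo got the lock'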

Alasdair G. Kergon (agk2) wrote:

So when it hangs, you need to see what else is running/hanging on the machine, whether some other process holds the file locks etc.
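
For example, checks along these lines could be run while the commands are hung (a sketch only; the PID of the hung lvremove is a placeholder):

# which lvm/udev-related processes are running, and in what state
ps axo pid,stat,wchan:32,cmd | grep -E 'lvm|lvremove|lvs|udev|dmsetup' | grep -v grep

# which process, if any, has the VG lock file open
lsof /var/lock/lvm/V_vme

# what a hung lvm command is actually blocked on
strace -p <pid-of-hung-lvremove>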
