rcS.d/S30checkfs.sh runs before LVM VGs are available
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
sysvinit (Ubuntu) | Invalid | Undecided | Unassigned |
Bug Description
Binary package hint: initscripts
Dell P6950 server running Hardy (i386 server edition), with a SCSI-attached 14 TB disk subsystem. The RAID system is split into 2 TB partitions, joined by LVM2 into one volume group.
At boot time, checkfs.sh runs but does not find any of the LVM disks. If the filesystems are marked to be fsck'd, checkfs.sh fails and the system drops to single-user mode.
If they are marked not to be fsck'd, they are not mounted at boot time. They are available after booting, but not mounted.
Adding a "sleep 60" to checkfs.sh at line #43 works around the problem. I did not bother finding the shortest sleep time.
I also tried replacing the sleep with "udevadm settle", but that does not solve the problem.
How do I force checkfs.sh to wait until all LVM volume groups are available?
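Rather than a fixed "sleep 60", one possible workaround is to poll until the volume group metadata becomes visible, bounded by a timeout. This is only a sketch, not the stock initscripts code; the helper name, the timeout value, and the use of `vgs` as the probe are all assumptions:

```shell
#!/bin/sh
# wait_for: poll a probe command until it succeeds or a timeout expires.
#   $1 = probe command (run each second; success ends the wait)
#   $2 = maximum seconds to wait (default 60)
# Returns 0 if the probe succeeded, 1 on timeout.
wait_for() {
    probe=$1
    timeout=${2:-60}
    while [ "$timeout" -gt 0 ]; do
        # Run the probe quietly; any successful exit means we are done.
        if $probe >/dev/null 2>&1; then
            return 0
        fi
        sleep 1
        timeout=$((timeout - 1))
    done
    return 1
}

# Hypothetical use near the top of checkfs.sh: wait up to 60 s for
# the VG from this report ("VGRaidD") to appear before fsck runs.
# wait_for "vgs --noheadings VGRaidD" 60 || echo "VGRaidD not available"
```

Whether this helps depends on the underlying cause: if the SCSI subsystem simply takes time to enumerate, polling works; if LVM activation (`vgchange -ay`) has not run yet at that point in the boot sequence, the probe would need to trigger activation as well.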
I added a "vgdisplay" call to checkfs.sh. Before the sleep it reports "No volume groups found"; after the sleep, "VG Name VGRaidD" with "VG Size 12.73 TB" is found:
--- Volume group ---
VG Name               VGRaidD
System ID
Format                lvm2
Metadata Areas        7
Metadata Sequence No  6
VG Access             read/write
VG Status             resizable
MAX LV                0
Cur LV                5
Open LV               0
Max PV                0
Cur PV                7
Act PV                7
VG Size               12.73 TB
PE Size               256.00 MB
Total PE              52150
Alloc PE / Size       24384 / 5.95 TB
Free PE / Size        27766 / 6.78 TB
VG UUID               v2e4gl-KkjI-sBOL-eZLc-mzYi-q38t-P2iv4h