system fails to boot with too many /etc/fstab entries

Bug #504258 reported by Cruncher
This bug affects 2 people
Affects: linux (Ubuntu)
Status: Won't Fix
Importance: Medium
Assigned to: Unassigned

Bug Description

Since my upgrade to Karmic, my system fails to boot about 80% of the time. One of several things will happen:
- many of the boot processes terminate (status 1, 4, or 5) in the console log, and the system hangs at some point during boot (the last output being either about sysfs or about Timidity)
- the system boots to a console login, but trying to log in crashes the shell with SEGV each time
- the system boots to a console login where I can actually log in, but manually starting gdm ("sudo service gdm start") crashes gdm with SEGV
- the system boots to X, where I can log in, but one or several partitions are not mounted, so I need to run "sudo mount -a" manually
- the system boots normally

If the system fails to boot, it is not possible to use Ctrl+Alt+Del to reboot - the key combination is mostly ignored (short disk activity, no relevant console output). The only way to cleanly reboot after each failed attempt, until the system finally boots properly, is the Magic SysRq sequence (Alt+SysRq+R, E, I, S, U, B).

About my Xubuntu/xfce4 system:
The only thing that is probably unusual is that /usr, /home, and /var/lib sit on a different partition, due to a small root partition. My disk setup has become a bit strange over the years. I have 3 physical disks with several partitions each.
/dev/sda3 - /
/dev/sda2 - swap
/dev/sdb6 - /mnt/hdf6; /usr, /home, and /var/lib live here, all symlinked from the root fs (this has never caused any problems at all)
/dev/sda4, /dev/sdb5, /dev/sdc1 - non-essential vfat filesystems

From what I can see in the console messages, device and/or partition detection seems to fail at random during the boot phase (upstart/mountall). Depending on which partition(s) are not recognized (or if, for example, /dev/sdb6 is recognized/mounted too late, so that /usr is unavailable), the system fails to boot at different points. But this is just a guess, since I am not familiar with how upstart works.

If the system does boot and fails to mount some of the vfat partitions, the ones which are not mounted appear to be random - sometimes sdb5, sometimes sdc1, sometimes both. However, the one vfat that is *always* mounted correctly is sda4, sitting on the boot disk and the same disk as the root-fs.

I only now removed the "quiet" and "splash" options from grub, so hopefully I'll get some error logs I can attach next time.

Unfortunately, this very annoying problem, together with several other work-impeding regressions, makes Karmic the worst Ubuntu experience ever for me :-(

I found bug #434395, which describes very similar symptoms; however, the reporter no longer appears to have this problem and has set his report to "Invalid".

ProblemType: Bug
.etc.asound.conf:
 pcm.!default front:Live
 ctl.!default front:Live
Architecture: i386
CRDA: Error: [Errno 2] No such file or directory
Card0.Amixer.info:
 Card hw:0 'CMI8738'/'C-Media CMI8738 (model 55) at 0xb800, irq 21'
   Mixer name : 'CMedia PCI'
   Components : ''
   Controls : 41
   Simple ctrls : 22
Card1.Amixer.info:
 Card hw:1 'Live'/'SB PCI512 [CT4790] (rev.7, serial:0x80231102) at 0xb400, irq 23'
   Mixer name : 'TriTech TR28602'
   Components : 'AC97a:54524123'
   Controls : 216
   Simple ctrls : 38
Date: Thu Jan 7 14:45:43 2010
DistroRelease: Ubuntu 9.10
HibernationDevice: RESUME=UUID=5076dfbc-cbb2-4d4d-a580-c54561478fb7
IwConfig:
 lo no wireless extensions.

 eth2 no wireless extensions.
Lsusb:
 Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
 Bus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
 Bus 003 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
 Bus 002 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
MachineType: System Manufacturer System Name
NonfreeKernelModules: nvidia
Package: linux-image-2.6.31-17-generic 2.6.31-17.54
ProcCmdLine: root=UUID=88c5a1d5-4940-4c63-8d7f-fec9a32cc6b8 ro quiet splash
ProcEnviron:
 LC_COLLATE=C
 PATH=(custom, user)
 LANG=en_US.UTF-8
 SHELL=/bin/tcsh
ProcVersionSignature: Ubuntu 2.6.31-17.54-generic
RelatedPackageVersions:
 linux-backports-modules-2.6.31-17-generic N/A
 linux-firmware 1.25
RfKill:

SourcePackage: linux
Uname: Linux 2.6.31-17-generic i686
WpaSupplicantLog:

XsessionErrors: (polkit-gnome-authentication-agent-1:3836): GLib-CRITICAL **: g_once_init_leave: assertion `initialization_value != 0' failed
dmi.bios.date: 02/21/2003
dmi.bios.vendor: Award Software, Inc.
dmi.bios.version: ASUS P4B533 ACPI BIOS Revision 1014
dmi.board.name: P4B533
dmi.board.vendor: ASUSTeK Computer INC.
dmi.board.version: REV 1.xx
dmi.chassis.asset.tag: Asset-1234567890
dmi.chassis.type: 7
dmi.chassis.vendor: Chassis Manufacture
dmi.chassis.version: Chassis Version
dmi.modalias: dmi:bvnAwardSoftware,Inc.:bvrASUSP4B533ACPIBIOSRevision1014:bd02/21/2003:svnSystemManufacturer:pnSystemName:pvrSystemVersion:rvnASUSTeKComputerINC.:rnP4B533:rvrREV1.xx:cvnChassisManufacture:ct7:cvrChassisVersion:
dmi.product.name: System Name
dmi.product.version: System Version
dmi.sys.vendor: System Manufacturer

Revision history for this message
Cruncher (ubuntu-wkresse) wrote :

At the moment I am in "case 4" as described above (mounting of non-essential fs failed). On the console I get messages like:

mount: special device /dev/disk/by-uuid/42B6-1CF0 does not exist
mountall: mount /mnt/hdc1 [2022] terminated with status 32
mountall: Filesystem could not be mounted: /mnt/hdc1

/dev/disk/by-uuid does not exist:
$ ls -d /dev/disk/by-*
/dev/disk/by-id /dev/disk/by-path

I get these messages only for partitions that failed to mount, even though *all* partitions in fstab are specified by UUID.
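A quick way to see whether udev actually created the by-uuid links mountall is waiting on is to compare the UUIDs fstab asks for against what exists under /dev/disk/by-uuid. The sketch below uses sample fstab lines (the UUIDs quoted earlier in this report), since the real check only makes sense on the affected machine against /etc/fstab:

```shell
# Sketch: extract the UUIDs that mountall will wait for.
# Sample lines stand in for /etc/fstab here; on the affected machine,
# compare the output against 'ls /dev/disk/by-uuid'.
fstab='UUID=42B6-1CF0 /mnt/hdc1 vfat defaults 0 0
UUID=88c5a1d5-4940-4c63-8d7f-fec9a32cc6b8 / ext3 errors=remount-ro 0 1'

# Strip the "UUID=" prefix from the first field of each entry.
wanted=$(printf '%s\n' "$fstab" | awk '$1 ~ /^UUID=/ { sub(/^UUID=/, "", $1); print $1 }')
printf '%s\n' "$wanted"
```

Any UUID printed here that is missing from /dev/disk/by-uuid is a device the kernel/udev never enumerated in time, which matches the "special device ... does not exist" error above.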

description: updated
Revision history for this message
Cruncher (ubuntu-wkresse) wrote :

Even in "case 5" (booted normally), I still get errors on the console:

init: ureadahead main process (453) terminated with status 5
fsck from util-linux-ng 2.16
fsck from util-linux-ng 2.16
/dev/sda3: clean, 23690/393600 files, 302406/787185 blocks
/dev/sdb6: clean, 644898/14024704 files, 27684481/28025384 blocks
init: ureadahead-other main process (675) terminated with status 4
init: ureadahead-other main process (713) terminated with status 4

Changed in linux (Ubuntu):
importance: Undecided → Medium
status: New → Triaged
Andy Whitcroft (apw)
tags: added: karmic
Revision history for this message
Cruncher (ubuntu-wkresse) wrote :

As some form of workaround, I removed all non-essential mounts (namely all vfat mounts) from /etc/fstab and mount them manually after I log in. The system appears to boot reliably now, although it still prints the ureadahead errors posted above.
So it probably depends on the number of mounts: with too many parallel mount requests, the chance that the vital mounts are processed in time (i.e., before the parallel boot process starts accessing things that are not yet available) gets smaller. But that's just a gut feeling.
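A lighter-weight form of this workaround (a sketch, assuming the /mnt/hdc1 vfat mount point quoted earlier; not from the original report) is to keep the entries in /etc/fstab but mark the non-essential mounts noauto, so mountall skips them at boot while a later manual "sudo mount /mnt/hdc1" still picks up the options from fstab:

```
# Sketch: noauto keeps the entry out of the boot-time mount pass.
# <file system>   <mount point>  <type>  <options>  <dump>  <pass>
UUID=42B6-1CF0    /mnt/hdc1      vfat    noauto     0       0
```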

Is this a known phenomenon? Any tips/ways to change my configuration in a way that will fix this? Does somebody know the cause of this problem?

Thanks for any insight.

Revision history for this message
GenericAnimeBoy (souletech) wrote :

My netbook running Karmic/Netbook Remix occasionally hangs on boot with that same message. When it happens, the White-Ubuntu-Logo-on-Black screen stays up longer than normal, then a black console window with three lines of text appears before gdm ever shows up:

init: ureadahead main process (xxxx) terminated with status 5
fsck from util-linux-ng 2.16
/dev/sda1: 220981/899360 files, 1028672/3596544 blocks (check after next mount)

The keyboard is not responsive in this state; the only way to get it to do anything is a hard reset via the power button. FYI, sda1 is a Supertalent SSD. I can post whatever logs you need to look at if this is the same bug. Thanks!

Revision history for this message
andrew.dunn (andrew-g-dunn-deactivatedaccount) wrote :

I have this same issue, but my system has many mdadm arrays. Is this error caused by mdadm not starting the arrays in time for them to be mounted?

summary: - system fails to boot most of the time since Karmic upgrade
+ system fails to boot with too many /etc/fstab entries
Revision history for this message
andrew.dunn (andrew-g-dunn-deactivatedaccount) wrote :

On a new install I experience this issue again.

Revision history for this message
andrew.dunn (andrew-g-dunn-deactivatedaccount) wrote :

# /etc/fstab: static file system information.
#
# Use 'blkid -o value -s UUID' to print the universally unique identifier
# for a device; this may be used with UUID= as a more robust way to name
# devices that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point> <type> <options> <dump> <pass>
proc /proc proc nodev,noexec,nosuid 0 0

# / was on /dev/sda1 during installation
UUID=2289f46c-1ff1-4791-bce0-9ec8c8c8b238 / ext4 errors=remount-ro 0 1

# /home was on /dev/sda3 during installation
UUID=b9661773-169c-440b-94b0-80ebb7df1c1a /home ext4 defaults 0 2

# swap was on /dev/sda2 during installation
UUID=5d38e2b6-60ee-44ea-a804-c17fbb5ceba4 none swap sw 0 0

# md0 is comprised of 9 WD1001FALS
UUID=e739b348-a2c2-492c-8b32-30780f8da536 /mnt/BLACKARRAY ext4 defaults 0 0

# md1 is comprised of 2 WDC3200
UUID=3d9b7922-b1e2-4c3b-8548-f1dea07e8520 /mnt/ZERO ext4 defaults 0 0

# md2 is comprised of 8 HDS2
UUID=1dc2701b-66c4-4217-af94-24a7a558a7a8 /mnt/HDSARRAY ext4 defaults 0 0

Revision history for this message
Brad Figg (brad-figg) wrote : Unsupported series, setting status to "Won't Fix".

This bug was filed against a series that is no longer supported and so is being marked as Won't Fix. If this issue still exists in a supported series, please file a new bug.

This change has been made by an automated script, maintained by the Ubuntu Kernel Team.

Changed in linux (Ubuntu):
status: Triaged → Won't Fix