NFS will not mount. init: statd pre-start process terminated with status 1

Bug #504776 reported by K-B
This bug affects 5 people
Affects: upstart (Ubuntu)
Status: New
Importance: Undecided
Assigned to: Unassigned
Milestone: (none)

Bug Description

Binary package hint: nfs-utils

I expected the exported NFS share (from my server 'shire') to mount under my home directory - I'm 'jamb' on this desktop machine.
The mount fails with this client stanza in /etc/fstab:

shire.gigahome.net:/home/jamb/nfs /home/jamb/nfs nfs rw,hard,intr,nfsvers=3 0 0

During the startup process, this is echoed to the screen:
init: statd pre-start process (846) terminated with status 1
mountall: Event failed
mount.nfs: rpc.statd is not running but is required for remote locking.
Either use '-o nolock' to keep locks local, or start statd.
mountall: mount /home/jamb/nfs [819] terminated with status 32
mountall: Filesystem could not be mounted: /home/jamb/nfs
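
For reference, the one-off equivalent of the suggested workaround would be something like (same share, untested here):

sudo mount -o rw,hard,intr,nolock,nfsvers=3 shire.gigahome.net:/home/jamb/nfs /home/jamb/nfs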

If I change /etc/fstab and add the 'nolock' option like so:
shire.gigahome.net:/home/jamb/nfs /home/jamb/nfs nfs rw,hard,intr,nolock,nfsvers=3 0 0
The mount is successful.

Some further observations, again without the 'nolock' option: the error line from statd is not always seen; more often the output starts with the 'mountall: Event failed' line, with the other lines following.
Sometimes the mount is successful. I have seen this a few times, just after the first reboot following a kernel upgrade, or after a few consecutive restarts. A cold boot seems more likely to trigger the failure at the next startup.

Note: It is possible to mount this in a shell, like so:
sudo mount shire.gigahome.net:/home/jamb/nfs /home/jamb/nfs
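
Presumably rpc.statd is registered with the portmapper by that point; this should be checkable with standard tools (rpc.statd registers as the 'status' RPC service):

rpcinfo -p localhost | grep status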

The original fstab mount stanza works as expected on Jaunty (9.04), Intrepid (8.10), Hardy (8.04), Gutsy (7.10) and Debian Lenny (5.0.3).

On the Karmic client: 2.6.31-16-generic #53-Ubuntu SMP Tue Dec 8 04:02:15 UTC 2009 x86_64 GNU/Linux
I use ext3 and the traditional Grub (not Grub 2).
On the Gutsy server: 2.6.22-15-generic #1 SMP Tue Oct 21 23:47:12 GMT 2008 i686 GNU/Linux

The NFS server is on the local LAN (Ubuntu Gutsy 7.10), serving NFSv3.
Here is the NFS server's /etc/exports:
/home/jamb 192.168.0.0/24(rw,no_subtree_check,no_root_squash,sync)
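
For anyone reproducing this, the server side can be sanity-checked with standard tools (nothing unusual assumed):

sudo exportfs -ra                  # on the server: re-read /etc/exports
showmount -e shire.gigahome.net    # on the client: list the server's exports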

Alvin (alvind) wrote:

Reliable mounting of NFS shares at boot is hard in Karmic, and error messages always appear (bug #504224), but I haven't seen the rpc.statd errors (bug #431248) since the RC. Are you using the latest updates? Also, try with just 'defaults' as the options field.

Otherwise, see if this bug isn't a duplicate of either bug #431248 or bug #470776.

K-B (debinix) wrote:

In my case the error message is consistent (unlike bug #504224), and there are also no similarities with the escapes to the recovery shell etc. In bug #470776, mountall forgot a previous network-ready state; that is fixed for mountall in Lucid, but unfortunately the fix cannot be applied or tested on Karmic. In bug #431248 everything hangs during boot, but some similarities exist. One comment there was that the problem is not related to mountall, but to portmap or nfs-common.

I changed my fstab stanza to 'shire.gigahome.net:/home/bekr/nfs /home/bekr/nfs nfs defaults 0 0'

I consistently see results similar to those above:
mount.nfs: rpc.statd is not running... etc.
It was also suggested (bug #413248) that network-manager may be involved. Since I use static addresses on my LAN (many servers), I removed network-manager, but the error messages are still the same. Comment #8 in that report (bug #413248) is very similar to what I see; patches for the nfs-utils and portmap packages are discussed there.

I also edited /etc/default/nfs-common and enabled:
# Do you want to start the statd daemon?
NEED_STATD=yes

This did not change the overall behavior. Interestingly though, I can still mount the share manually once the boot process has finished.
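
For what it's worth, the statd upstart job can also be cycled without a reboot (assuming Karmic's upstart command-line tools):

sudo stop statd
sudo start statd
status statd

Since the failure only shows up during boot, this mainly confirms that the job definition itself is intact.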

Here are the installed versions in Karmic:
dpkg -l mountall nfs-utils nfs-common portmap
ii mountall 1.0 filesystem mounting tool
ii nfs-common 1:1.2.0-2ubuntu8 NFS support files common to client and server
ii portmap 6.0-10ubuntu2 RPC port mapper
No packages found matching nfs-utils.

apt-cache show nfs-utils
W: Unable to locate package nfs-utils
E: No packages found

I'm puzzled about this nfs-utils package though.

System is using latest updates.
Linux odin 2.6.31-17-generic #54-Ubuntu SMP Thu Dec 10 17:01:44 UTC 2009 x86_64 GNU/Linux

K-B (debinix) wrote:

Typo, should say bug 431248 not 413248. Sorry.

K-B (debinix) wrote:

Since the message that started all this was about 'statd pre-start process (846) terminated with status 1', I finally found bug #484209, which is likely the cause of my observed NFS mount problem. I will try to confirm later.
The error is caused by statd's pre-start script attempting to start portmap when it is already started. This race condition is fixed in nfs-utils 1:1.2.0-2ubuntu9.
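
In shell terms, the race would look roughly like this (a sketch, assuming upstart runs pre-start scripts under 'sh -e' and provides the start/status tools):

start portmap                      # exits non-zero if the job is already running,
                                   # which kills the pre-start script with status 1
start portmap || true              # fixed variant: tolerate the already-running case
status portmap | grep -q start/running   # then fail only if portmap really is not up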

My system has:
apt-cache showsrc nfs-utils
Package: nfs-utils
Binary: nfs-kernel-server, nfs-common
Version: 1:1.2.0-2ubuntu8
Priority: optional
Section: net
Maintainer: Ubuntu Developers <email address hidden>

I think we can mark this as a duplicate of bug #484209, which is fixed in the nfs-utils source package version 1:1.2.0-2ubuntu9.

K-B (debinix) wrote:

I tried to confirm the 'race-free fix' in nfs-utils 1:1.2.0-2ubuntu9 (which is not released for Karmic) - see bug #484209. The fixed pre-start fragment is:

        start portmap || true
        status portmap | grep -q start/running

I added one logger debug line after the first of those lines in /etc/init/statd.conf (see below):
=============================================================================
# statd - NSM status monitor

start on (started portmap or mount TYPE=nfs)
stop on stopping portmap

expect fork
respawn

env DEFAULTFILE=/etc/default/nfs-common

pre-start script
 if [ -f "$DEFAULTFILE" ]; then
     . "$DEFAULTFILE"
 fi

 [ "x$NEED_STATD" != xno ] || { stop; exit 0; }

 start portmap || true
 status portmap | logger -p local0.debug -t NFS
 status portmap | grep -q start/running
 exec sm-notify
end script

script
 if [ -f "$DEFAULTFILE" ]; then
     . "$DEFAULTFILE"
 fi

 if [ "x$NEED_STATD" != xno ]; then
  exec rpc.statd -L $STATDOPTS
 fi
end script
=======================================================================
Then I restarted the system.
In /var/log/debug:
Jan 17 13:46:08 odin NFS: portmap start/running, process 836
--> no screen error message, but NO working NFS mount either.
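
The tagged lines can be pulled from the log afterwards, e.g.:

grep 'NFS:' /var/log/debug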

Then I tried to get the system back to its original state, i.e. re-installed network-manager and re-enabled avahi-daemon in /etc/default/avahi-daemon (AVAHI_DAEMON_DETECT_LOCAL=1). These package install actions require a restart.

In /var/log/debug:
Jan 17 13:58:14 odin NFS: portmap start/spawned, process 816
Jan 17 13:58:15 odin NFS: portmap start/running, process 845
--> Original set of error messages on screen, but WORKING NFS mount.

Another system restart directly after the above.
In /var/log/debug:
Jan 17 14:02:13 odin NFS: portmap start/running, process 846
--> no screen error messages, but NO working NFS mount either.

We are definitely close, but I cannot prove that this fix solves my problem. The behavior is consistent: the bug is in some way related to occasions when a reboot is required. Directly following such an occasion, the first restart mounts OK, but none of the following restarts do. The error messages are not consistent, as reported before (bug #504224).
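
A quick post-boot check could make this per-reboot testing less manual (a sketch, standard tools only):

status statd
status portmap
mount -t nfs    # lists mounted NFS filesystems; empty output means the fstab mount did not happen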
