Comment 56 for bug 213444

C.Cier (c-cier) wrote :

The problem is still there, at least in hardy, which is an LTS release. I use NFSv4, and only one TCP port (2049) is open between server and client due to a firewall.

There is a really simple workaround for this:
Specify port=2049 in the mount options on the client. With this option set, mount.nfs4 no longer contacts rpcbind, so UDP is no longer required. Yes, I know that 2049 _is_ already the default port, but the fact that the option is explicitly set changes the behaviour of mount.nfs4.
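For example, something like this (hostname, export path and mount point are just placeholders):

    mount -t nfs4 -o port=2049 server:/export /mnt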

From the manpage:
[...]
Valid options for the nfs4 file system type
port=n
The numeric value of the server's NFS service port. If the server's NFS service is not available on the specified port, the mount request fails. If this mount option is not specified, the NFS client uses the standard NFS port number of 2049 without first checking the server's rpcbind service. This allows an NFS version 4 client to contact an NFS version 4 server through a firewall that may block rpcbind requests.
[...]

The important part is the _last_ sentence: the NFS client is supposed to try only TCP port 2049, without asking rpcbind first, if the "port=" option is _NOT_ given.

But this is simply not the case. It definitely asks rpcbind, which fails because UDP is blocked, so the whole mount request times out and then produces the "internal error".
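This is easy to check by watching for portmapper traffic on the client while the mount runs (rpcbind listens on port 111):

    tcpdump -n udp port 111

If packets to UDP port 111 show up even without "port=" set, rpcbind is being contacted despite what the man page says.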

I managed to get around this by specifying "port=2049" in the mount options. Now it seems to work: no more rpcbind errors in the dmesg output, so I assume rpcbind is no longer asked.
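For a permanent mount, the corresponding /etc/fstab line would look roughly like this (again, hostname and paths are placeholders):

    server:/export  /mnt  nfs4  port=2049  0  0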

So, at the very least, there is a discrepancy between the man page and the actual behaviour of NFSv4 mounts when only a single TCP port is available.