NFSv4: Are portmap_enable, rpc_lockd_enable, rpc_statd_enable needed?

According to the Red Hat Enterprise Linux docs

http://www.centos.org/docs/5/html/Deployment_Guide-en-US/ch-nfs.html

portmap, rpc.lockd, and rpc.statd are not needed on RHEL5 when NFSv4 is used, because their functionality is built into the protocol.

The FreeBSD Handbook doesn't mention this.

Question

Does that mean that I can safely remove

Code:
portmap_enable="YES"
rpc_lockd_enable="YES"
rpc_statd_enable="YES"

from /etc/rc.conf on my FreeBSD 9 server when I only use NFSv4?
 
Locking and 'stat' functionality is built into the NFSv4 protocol. If you don't need to fall back to NFSv3, you don't need to start rpc.lockd and rpc.statd. As for the port mapper (rpcbind), RCng will auto-start it as a dependency of nfs_server.
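For reference, a minimal NFSv4-only server setup could look roughly like this in /etc/rc.conf (just a sketch; verify the knob names against rc.conf(5) for your release):

Code:
nfs_server_enable="YES"      # nfsd; the rc script pulls in rpcbind as a dependency
nfsv4_server_enable="YES"    # enable the NFSv4 side of the server
nfsuserd_enable="YES"        # nfsuserd maps NFSv4 owner/group strings to uids/gids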
 
I've been struggling with periodic NFS stalls of almost a minute when accessing files or listing directory contents, and I have no idea how to track it down. Our setup: one NFSv4 server exporting a single directory, and many clients, all working over a Tailscale VPN. Of note are the constant errors logged on the server:

Code:
nfsrv_cache_session: no session IPaddr=100.64.59.78, check NFS clients for unique /etc/hostid's
nfsrv_cache_session: no session IPaddr=100.64.167.144, check NFS clients for unique /etc/hostid's
nfsrv_cache_session: no session IPaddr=100.64.88.247, check NFS clients for unique /etc/hostid's
nfsrv_cache_session: no session IPaddr=100.64.82.231, check NFS clients for unique /etc/hostid's
nfsrv_cache_session: no session IPaddr=100.64.179.61, check NFS clients for unique /etc/hostid's
nfsrv_cache_session: no session IPaddr=100.64.142.248, check NFS clients for unique /etc/hostid's

(all hostids are unique on each machine)

and on every client:

Code:
Initiate recovery. If server has not rebooted, check NFS clients for unique /etc/hostid's

This may be related to the use of Tailscale addresses (100.*) while the actual communication takes place between normal public IPv4 addresses over Tailscale's UDP port 41641. The connections are direct according to tailscale status; no relays are in use.
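For the record, I verified the hostid uniqueness roughly like this (the client hostnames below are placeholders):

Code:
# run from one host; client1..client3 stand in for the real machines
for h in client1 client2 client3; do ssh "$h" cat /etc/hostid; done | sort | uniq -d
# prints nothing when every hostid is unique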

At first I thought it might be due to the attribute-cache timeouts on the clients being too low, so I added these ac* options to the mount call:

Code:
mount -t nfs -o nfsv4,nosuid,acregmin=300,acregmax=86400,acdirmin=300,acdirmax=86400 nfsserver.local:/foo/ /mnt
but it didn't help.
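The equivalent /etc/fstab entry, for reference (same server path as in the mount command above):

Code:
# /etc/fstab line matching the mount command above
nfsserver.local:/foo/  /mnt  nfs  nfsv4,nosuid,acregmin=300,acregmax=86400,acdirmin=300,acdirmax=86400  0  0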

Then I did some googling and set these on the server:

Code:
vfs.nfs.iodmax=40             # raised from the default of 20
net.inet.tcp.sack.enable=0
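
For completeness, the same settings can be persisted across reboots via /etc/sysctl.conf:

Code:
# /etc/sysctl.conf
vfs.nfs.iodmax=40
net.inet.tcp.sack.enable=0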

All in vain. I hope someone here with more NFS experience has some tips. I should also note that running

Code:
showmount -a

on the server produces no output after hanging for a while.
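That may be a red herring, though: showmount speaks the old MOUNT protocol, which NFSv4 clients don't use, so empty output could simply be expected here. The extended server-side counters from nfsstat might be a better probe:

Code:
# extended statistics for the new (NFSv4-capable) NFS server
nfsstat -e -s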
 