Stale NFS file handle - had to reboot server

I had two clients with an NFS filesystem mounted manually (not automatically) from a third system, the server. I did various things using all three systems. I'm pretty sure that at some point or another each of the clients had a vi session open when the ssh connection got dropped, sending a HUP to the shells and to vi.

After rebooting the clients (but not the server), when I tried to mount the filesystem again (which happened to be at the root of a locally mounted device on the server), I got "Stale NFS file handle". I could go to the mount point and it appeared entirely normal; "ls -d" showed nothing unusual. I tried mounting at a different directory/mount point, but got the same behavior. On the server I ran "service nfsd restart", which completed without complaint but had no effect. Finally I rebooted the server, and THEN I was able to mount the NFS filesystem on the other systems again.
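For reference, the only server-side restart I tried was nfsd itself. A fuller restart of the NFS stack (assuming the stock FreeBSD rc.d scripts; I did not try the mountd or rpcbind lines) would have looked something like:
Code:
# restart the server-side NFS stack; mountd is what answers the
# actual MOUNT requests, so restarting nfsd alone may not be enough
service nfsd restart
service mountd restart
service rpcbind restart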

Did I miss something, or is it possible that this is a bug of some kind? Shouldn't it be possible to clear this without rebooting?
 
Which version of FreeBSD are the server and clients running?

Which version(s) of NFS are the server and clients (as a mount option) using? NFSv3 or NFSv4?
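On the clients, nfsstat(1) can show what each mount actually negotiated; something like this (a sketch, assuming a reasonably recent FreeBSD nfsstat):
Code:
# show the options (including nfsv3/nfsv4) in effect for each NFS mount
nfsstat -m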

How is the exports(5) file (or ZFS "sharenfs", see /etc/zfs/exports) configured?
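For example, a typical exports(5) line looks something like this (hypothetical addresses, just to show the shape of an entry):
Code:
# export one filesystem to one network, mapping root through
/SysBuilder -maproot=root -network 192.168.1.0 -mask 255.255.255.0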

I'm pretty sure that at some point or another each of the clients had a "vi" session open when the ssh got disconnected, sending a HUP to shells and vi.
I do not understand how ssh(1) comes into the picture, when the clients simply have a remote NFS shared file system mounted locally.

(Side note: a terminal multiplexer, like sysutils/tmux, can keep the session running in the background instead of letting it be terminated.)
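For example (standard tmux usage):
Code:
# start a named session; detach with Ctrl-b d, and vi keeps running
# even if the ssh connection drops
tmux new -s work
# after reconnecting:
tmux attach -t work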

In case of locked files, see the mount_nfs(8) options intr and nolockd, used in combination.
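A hypothetical fstab entry on a client (server name and paths are placeholders):
Code:
# intr lets signals interrupt a hung NFS request; nolockd handles
# fcntl locks locally instead of via the server's rpc.lockd
server:/export  /mnt  nfs  rw,intr,nolockd  0  0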
 
Good questions. I should have provided more detail.

NFSv3 only.
Server: 14.1-RELEASE-p3, with "nfsd: master (nfsd)" and "nfsd: server (nfsd)" running
Client 1: 14.3-RELEASE-p8. From mount "remo:/SysBuilder on /SysBuilder (nfs)"
Client 2: 13.1-RELEASE. From mount "remo:/SysBuilder on /SysBuilder (nfs)"

exports:
Code:
/SysBuilder -maproot=root
I was connected via an ssh client on a Chromebook to each client, with a vi session open on a file on the NFS-mounted filesystem. When the Chromebook went to sleep, those connections got dropped, and I think that sent a HUP to the vi sessions. In any case, they were dropped from the client side. I don't KNOW that this caused the problem, but not much else was going on.
 
what does your rc.conf look like on both systems? also, basic NFS troubleshooting involves showmount -e against the server from the clients, showmount -a on the server, and rpcinfo on both ends. it's been a while since we've done NFS troubleshooting, but those and packet captures were our starting points. (also, we call it "networked failure system" internally, because of our experience doing NFS troubleshooting)
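e.g. (the server name here is just a placeholder):
Code:
# from a client: list the server's exports and registered RPC services
showmount -e server
rpcinfo -p server
# on the server: which client mounts does mountd currently know about?
showmount -a
and on the rc.conf side, a typical NFSv3 server has knobs along these lines:
Code:
nfs_server_enable="YES"
mountd_enable="YES"
rpcbind_enable="YES"
rpc_lockd_enable="YES"
rpc_statd_enable="YES"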
 