NFS file locking questions

I want to use lockd on my NFS server, but first I'm trying to make sure it will do exactly what I think it will do. I haven't been able to find a good, concise writeup on it and the manpage isn't too helpful, so I thought I'd ask the forum.

The setup:
*NFS server and two NFS clients with their shares mounted at all times (SERVER, CLIENT0, CLIENT1)
*One service on the server which has to be able to read any file on the share at any given time
*The same service also needs to be able to create new files in the directory at any given time.

So, what I'm expecting to happen with lockd is that I could have any file on the mount open on CLIENT0, and SERVER and CLIENT1 would not be able to edit that open file. Both, however, could still interact with the file read-only while CLIENT0 has it open. Meaning, the service on SERVER can read whatever it needs to read, and CLIENT1 can simultaneously just view the contents.

So to summarize, my basic understanding is that the files all just sit there on the server, and whoever opens a file first locks it for editing, but anyone can still read it (though without seeing any changes the opener is making). Is this correct?
 
Locks need to be applied; just opening a file doesn't lock anything. And as far as I understand it, it depends on the type of lock: there are shared and exclusive locks. See flopen(3) and flock(2). Basically, what rpc.lockd(8) does is make file locking work consistently across NFS shares.
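
To make the difference concrete, here's a rough sketch using the flock(1) utility (part of util-linux, so on your Linux clients); the path and the commands are just placeholders:

# exclusive (write) lock: no other flock() lock can be granted on the file while it's held
flock -x /mnt/share/data.csv -c 'edit-the-file'

# shared (read) lock: any number of readers can hold one at the same time,
# but it keeps a would-be exclusive locker waiting until it's released
flock -s /mnt/share/data.csv -c 'read-the-file'

And note these are advisory locks: a process that simply opens the file without asking for a lock isn't affected by any of this.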
 
Okay, so that was not at all what I was picturing.

The initial problem that I'm trying to solve here is that on the client (Linux), when I try to mount a FreeBSD NFS share, it tells me something like "server does not support file locking" (I'm at work and don't have the exact message). So it won't mount unless I specify the nolock option in my mount command. I'd just assumed that it was referring to a locking feature meant to prevent corruption.
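
Roughly what the workaround looks like on the Linux client (server name and paths are made up):

mount -t nfs -o nolock server:/export/share /mnt/share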

For shared directories that need to be accessed by services on the server, I just mount them as ro, which is good enough for my needs because the client doesn't need to be able to write to those directories anyway. But I would like to figure out what I need to do so that I don't have to mount with nolock in the first place, simply because I don't like using options that I don't really need.
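
If I understand correctly, getting rid of nolock for an NFSv3 mount would mean running the NLM/NSM daemons on the FreeBSD server, something roughly like this in /etc/rc.conf (untested on my side):

nfs_server_enable="YES"
rpcbind_enable="YES"
rpc_lockd_enable="YES"
rpc_statd_enable="YES"

Is that all there is to it, or am I missing something?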
 
Thread successfully hijacked)

We have many VPS hosts, constantly created and destroyed, that use an NFS-shared file as a lock (via the lockf or flock commands). Sometimes some of the locks are never cleared and the file remains locked forever, even if the offending host has been destroyed in the meantime, which makes other processing stall. It's very unfortunate that base FreeBSD NFS/rpc.statd/rpc.lockd doesn't handle these on its own. There's the clear_locks(8) command that can be used to clear locks held by a host, but it isn't obvious which host holds the lock or whether it even still exists. It seems statd/lockd don't have any options to control health checks or the lifetime of hanging locks.

Can switching to NFSv4 (nfsv4_server_enable="YES" in /etc/rc.conf) help here, or does the problem have nothing to do with the NFS version (but rather with lockd/statd)? Or is there any other NFS implementation from ports that doesn't have this problem? Or at least, is there a way to list held locks? Running FreeBSD 13.2 here. Thank you for your attention.
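
For reference, the locking pattern looks roughly like this (paths, hostnames and the job command are made up):

# on a VPS host: take an exclusive lock on a shared file before doing work,
# giving up if it can't be acquired within 60 seconds
lockf -t 60 /nfs/shared/job.lock run-the-job

# on the NFS server: what we currently do by hand when a lock goes stale,
# once we've figured out which (possibly long-gone) client was holding it
clear_locks dead-vps-01.example.net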
 
Can switching to NFSv4 (nfsv4_server_enable="YES" in /etc/rc.conf)
From mount_nfs(8):
nolockd
Do not forward fcntl(2) locks over the wire via the NLM protocol for NFSv3 mounts or via the NFSv4 protocol for NFSv4 mounts. All locks will be local and not seen by the server and likewise not seen by other NFS clients for NFSv3 or NFSv4 mounts. This removes the need to run the rpcbind(8) service and the rpc.statd(8) and rpc.lockd(8) servers on the client for NFSv3 mounts. Note that this option will only be honored when performing the initial mount, it will be silently ignored if used while updating the mount options. Also, note that NFSv4 mounts do not use these daemons. The NFSv4 protocol handles locks, unless this option is specified.
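
So if you want the locks handled by the NFSv4 protocol itself, the server side would look roughly like this (the export path is just an example, adjust to your setup):

/etc/rc.conf:
nfs_server_enable="YES"
nfsv4_server_enable="YES"
# possibly also nfsuserd_enable="YES" for user/group name mapping

/etc/exports:
/export -alldirs
V4: /export -sec=sys

and the Linux clients would mount with something like

mount -t nfs -o vers=4 server:/ /mnt/share

(with NFSv4 the path in the mount command is relative to the V4: root, so server:/ here maps to /export).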


Nice)
 