NFS performance

So we know the problem is in NFS and not in general networking.

Try NFSv3 just to verify the speed.
I was using NFS v3 until last year, when I switched to v4, and I remember that the transfer rates were a bit higher: about 113 MB/s with a Linux NFS server and about 80 MB/s with a FreeBSD NFS server.
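
A quick way to measure such sequential transfer rates, assuming the export is mounted at /srv/nfs as in the outputs below (the file name and size are just placeholders):

dd if=/dev/zero of=/srv/nfs/testfile bs=1M count=4096 status=progress

(status=progress can be dropped on older dd versions; remember to delete the test file afterwards.)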

The CPU load on the client hovers around 5-18%, with a few peaks around 85%.
Can you show the output of `nfsstat -m`?
nfsstat -m in Linux:
/srv/nfs from server:/nfs
Flags: rw,noatime,nodiratime,vers=4.2,rsize=1048576,wsize=1048576,namlen=255,soft,fatal_neterrors=none,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=192.168.0.3,local_lock=none,addr=192.168.0.2

nfsstat -m in FreeBSD:
server:/nfs on /srv/nfs
nfsv4,minorversion=2,tcp,resvport,nconnect=1,hard,cto,sec=sys,acdirmin=3,acdirmax=60,acregmin=5,acregmax=60,nametimeo=60,negnametimeo=60,rsize=65536,wsize=65536,readdirsize=65536,readahead=1,wcommitsize=16777216,timeout=120,retrans=2147483647
 
wsize is different, but my understanding is that it doesn't matter for TCP, and hence for NFSv4.

But try setting wsize to what Linux uses, just in case.
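
On the FreeBSD client that would be something like the following (assuming the same server:/nfs export and /srv/nfs mount point as in the outputs above):

mount -t nfs -o nfsv4,minorversion=2,rsize=1048576,wsize=1048576 server:/nfs /srv/nfs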
 
Added -o rsize=1048576,wsize=1048576 to the mount options on the BSD client. No effect: rsize and wsize remain at their (seemingly) default values of 65536.

Then I tried the approach from post #4 here, again with no effect.

man mount_nfs seems to imply that rsize and wsize are only important for UDP mounts.

Actually, the whole forum thread above describes a situation similar, if not identical, to mine.
 
Try with vfs.maxbcachebuf=131072 in /boot/loader.conf.
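
That is, add the line below to /boot/loader.conf; it is a boot-time tunable, so a reboot is needed, after which the value can be checked with sysctl vfs.maxbcachebuf:

vfs.maxbcachebuf=131072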

server:/exports/test on /mnt/nfs
nfsv4,minorversion=2,tcp,resvport,nconnect=1,hard,cto,sec=sys,acdirmin=3,acdirmax=60,acregmin=5,acregmax=60,nametimeo=60,negnametimeo=60,rsize=131072,wsize=131072,readdirsize=131072,readahead=1,wcommitsize=8388608,timeout=120,retrans=2147483647
 
With vfs.maxbcachebuf=131072, rsize/wsize are set to 131072 and the transfer rate of the BSD client increases to 60 MB/s.

I tried to go higher on vfs.maxbcachebuf.
The maximum accepted value is 524288, which is half of the Linux rsize/wsize.
Setting it any higher still leaves rsize/wsize at 524288.
With that I get 82 MB/s.
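
For reference, that corresponds to the following line in /boot/loader.conf (again applied only at boot):

vfs.maxbcachebuf=524288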

Then I set sync=disabled on the server's ZFS pool, per the suggestion in post #30.
This raises the transfer rate further, to 93 MB/s.
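
For reference, the ZFS commands involved look like this (tank/nfs is just a placeholder for the dataset actually backing the export):

zfs set sync=disabled tank/nfs   # acknowledge writes before they reach stable storage
zfs get sync tank/nfs            # verify the current setting
zfs set sync=standard tank/nfs   # revert to the default behaviour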

Anyway, NFS with sync ensures that data is written to stable storage (disk) on the server before the write is acknowledged to the client, while async allows the server to acknowledge the write before it is actually on disk, potentially improving performance but increasing the risk of data loss in case of a server failure.

As data safety is more important to me than performance, I'll always keep sync in the server's /etc/exports on Linux.
FreeBSD does not even allow setting sync/async in /etc/exports, so that's that.
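
On a Linux server this is just the sync flag in the export options, e.g. (the export path and the client address are taken from the mounts above; the remaining options are typical placeholders):

/nfs 192.168.0.3(rw,sync,no_subtree_check)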

On the other hand, ZFS sync/async is explained very nicely here.
The last section of that article tells us everything we need to know to make a decision to suit our taste.
I'll always choose to go with 82 MB/s instead of 93!

In conclusion:

1. rsize/wsize on the client largely affects the transfer rate. Although the performance of the BSD client cannot be made as high as that of the Linux client, the situation is not too bad either, at about 78%. The default of 65536 for rsize/wsize in FreeBSD gives very poor performance.
2. If a compromise on data safety can be made, an extra boost is available by playing with the server's ZFS sync property. I'm not sure what would change if the server's underlying storage were something other than ZFS, as I'm not able to test that.
 