NFS Woes...

Hi, I'm a BSD Newbie.

I've recently set up my first ZFS array in the lab.

5x1.5TB RAIDZ2 with 4x160GB 15,000 RPM SAS drives as cache. Local writes peak at up to 250 MB/s. The problem I'm having is that every client machine caps at 20-30 MB/s when uploading to the server. I've tried both an Intel and a Realtek NIC with similar results. The pool tops out at ~80 MB/s average write when multiple clients upload at once.

Any tricks or suggestions?

Code:
# zpool status
  pool: storage
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        storage     ONLINE       0     0     0
          raidz2-0  ONLINE       0     0     0
            ada1    ONLINE       0     0     0
            ada2    ONLINE       0     0     0
            ada3    ONLINE       0     0     0
            ada4    ONLINE       0     0     0
            ada5    ONLINE       0     0     0
        cache
          da0       ONLINE       0     0     0
          da1       ONLINE       0     0     0
          da2       ONLINE       0     0     0
          da3       ONLINE       0     0     0

errors: No known data errors

rc.conf:
Code:
ifconfig_em0="DHCP"
ifconfig_fxp0="DHCP"
sshd_enable="YES"
ntpd_enable="YES"
powerd_enable="YES"
dumpdev="AUTO"
zfs_enable="YES"
rpcbind_enable="YES"
nfs_server_enable="YES"
nfs_server_flags="-u -t"
mountd_flags="-rn"
mountd_enable="YES"
 
mautobu said:
Hi, I'm a BSD Newbie.
Welcome!

I've recently set up my first ZFS array in the lab.

5x1.5TB RAIDZ2 with 4x160GB 15,000 RPM SAS drives as cache. Local writes peak at up to 250 MB/s. The problem I'm having is that every client machine caps at 20-30 MB/s when uploading to the server. I've tried both an Intel and a Realtek NIC with similar results. The pool tops out at ~80 MB/s average write when multiple clients upload at once.
I think you're running into the clients' writes being converted to synchronous writes on the pool, which reduces performance.

There's some recent discussion here, although that particular situation is complicated by being run inside a VM.
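If you want to confirm that before buying anything, a quick (and strictly temporary) test is to flip the sync property on the pool from your zpool status output and re-run an upload; just remember to put it back, for the reasons discussed further down:
Code:
# zfs get sync storage
# zfs set sync=disabled storage
  (re-run a client upload and compare throughput)
# zfs set sync=standard storage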
 
Terry_Kennedy said:
I think you're running into the clients' writes being converted to synchronous writes on the pool, which reduces performance.
I did some testing which shows interesting results.

A copy from one of my servers to another (13TB of data in a 32TB pool) via NFS with # cp -Rp /localdir/ /remotedir/ went from 400 Mbit/sec to 800 Mbit/sec after issuing # zfs set sync=disabled tank
on the destination system. Note that the pools in question are built from individual drive units on a 3Ware controller which has battery backup. Disabling sync on a pool without non-volatile caching can lead to loss of data.

I might get some 10Gbit/sec cards and experiment to see how fast my servers can actually copy data over the network.
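If your log device doesn't have an activity light, watching per-vdev I/O while a copy runs shows the same thing (pool name as in my example above):
Code:
# zpool iostat -v tank 5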
 
Rad, thanks very much for your insight. I don't think I'll do that, though. This project was born from a RAID 5 array that dropped two drives due to a power failure.
 
From zfs(8):
Code:
disabled  Disables synchronous requests. File system transactions
          are only committed to stable storage periodically. This
          option will give the highest performance. However, it is
          very dangerous as ZFS would be ignoring the synchronous
          transaction demands of applications such as databases or
          NFS. Administrators should only use this option when the
          risks are understood.
 
Hi,

I'm also in the middle of setting up an NFS server with ZFS, and have posted some initial bonded Ethernet benchmarks here. I'm now working on NFS-layer benchmarks.

Like you, I don't want to turn off synchronous writes.

This is theory at this point (I'm hoping the SSD will arrive next week), but have you considered terminating the NFS writes in an SSD-based ZIL (a separate log device)? It only needs to be small, at most about 50% of main memory, but it wants to be very low latency and non-volatile. I'll be using what's left of the SSD for cache (L2ARC).
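Roughly what I have in mind once it arrives is something like this; the device name, partition sizes, and labels below are just placeholders for my hardware:
Code:
# gpart create -s gpt ada6
# gpart add -t freebsd-zfs -s 16G -l slog0 ada6
# gpart add -t freebsd-zfs -l l2arc0 ada6
# zpool add tank log gpt/slog0
# zpool add tank cache gpt/l2arc0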

Cheers,
 
gpw928 said:
This is theory at this point (I'm hoping the SSD will arrive next week), but have you considered terminating the NFS writes in an SSD-based ZIL (a separate log device)? It only needs to be small, at most about 50% of main memory, but it wants to be very low latency and non-volatile.
I have a 256GB battery-backed PCIe SSD as a ZIL, and that's what was producing my 400Mbit/sec numbers. Turning off ZFS sync doubled the throughput to 800Mbit/sec and stopped hitting the ZIL at all (it has an activity light, so I can see what's going on).
 
Hi Terry,

Last weekend I was benchmarking an NFS/ZFS server on FreeBSD 9.1 with bonnie++ running on a Linux client.

The ZFS server has 5 x 3 TB WD Red disks in a RAIDZ1 configuration, mostly on SATA 2 (one on SATA 3).

From the NFS client I was seeing 99437 KB/sec block write and 85836 KB/sec block read, using jumbo frames and 8K/8K NFS read/write sizes over TCP on a 2x1 Gbit (round-robin) bonded Ethernet link.
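For reference, the client-side setup was along these lines (the bond interface, hostname, and export path are placeholders for my lab names):
Code:
# ip link set bond0 mtu 9000
# mount -t nfs -o rsize=8192,wsize=8192,proto=tcp zfsserver:/tank /mnt/tank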

I have not turned off synchronous write.

I just got my SSD today, so there's quite a bit more testing to go.

Cheers,
 
I'm not 100% sure how to do it, but I believe when sharing ZFS over NFS it is recommended to turn access time tracking off on the filesystem. Otherwise the system will be attempting to update the last accessed time for every read and every write to a file.
 
throAU said:
I'm not 100% sure how to do it, but I believe when sharing ZFS over NFS it is recommended to turn access time tracking off on the filesystem. Otherwise the system will be attempting to update the last accessed time for every read and every write to a file.

Hi,

Yes, I should have mentioned that I have done that:
Code:
zfs set atime=off tank
I have never found a good use for atime, and I don't know of any significant downside to turning it off.

Turning off sync is a whole different ball game. It's a crapshoot for reliability. You might win. You might lose. But the only time you will find out is when you *really* need to win.
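Either way, it's worth double-checking what is actually in effect on the dataset:
Code:
# zfs get atime,sync tank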

Cheers,
 