Bad performance of NFS with ZFS

Hi, I have a FreeBSD 8.2-RELEASE server which shares and backs up my data over the network. For this I have five 2TB WD Caviar Green [EARS] HDDs combined into a RAIDZ2 ZFS pool.

Over Samba, with Windows 7 as the client, I get up to ~120 MB/s, ~90 MB/s on average, and a minimum of ~75 MB/s when copying big files like .iso images. Good results, I find, but over NFS I get at most ~48 MB/s, ~30 MB/s on average, and a minimum of ~25 MB/s.

I already know from my Linux experience with NFS/Samba that NFS usually performs noticeably better (on my previous Gentoo server Samba was somewhat slower and NFS managed about 80 MB/s on a single disk), but 30 MB/s is way too little.

Are there any tuning options that would improve this?

Regards, bsus
 
Hi,

Join the club. It is well known that NFS performance is poor on ZFS. This is because NFS keeps requesting cache flushes (synchronous writes) all the time, which makes write IOPS critical for performance. You are using slow disks in a slow configuration, RAIDZ.

The best thing you can do with the kit you have is to recreate the pool using mirrors, which will give higher IOPS; with six disks (three mirrors) write performance would be roughly three times higher. The other alternative is to add a fast disk as a dedicated ZIL (log) device.
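Just to illustrate what I mean (the pool name and device names below are only placeholders, adjust them to your hardware), the mirror layout and the separate log device would look roughly like this:

Code:
# pool made of three 2-way mirrors instead of RAIDZ (placeholder devices)
zpool create tank mirror ada0 ada1 mirror ada2 ada3 mirror ada4 ada5
# or keep the existing pool and add a fast SSD as a dedicated ZIL/log device
zpool add tank log ada6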

You may be able to tweak performance without doing any of the above, but I wouldn't expect a massive improvement. On a NAS server I have set the following network tuning that you could test with:

Code:
kern.ipc.maxsockbuf=2097152
net.inet.tcp.sendbuf_max=16777216
net.inet.tcp.recvbuf_max=16777216
net.inet.tcp.sendspace=262144
net.inet.tcp.recvspace=262144
net.inet.tcp.mssdflt=1452
net.inet.udp.recvspace=65535
net.inet.udp.maxdgram=65535
net.local.stream.recvspace=65535
net.local.stream.sendspace=65535

Thanks, Andy.
 
OK, I have set the sysctl variables you posted (by the way, do they get reset after a reboot?). I will now test how performance behaves over NFS. Setting up a new pool is too much work; it would be the third time this week and I am slowly getting tired of it.
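(I assume that to keep them across reboots I would have to put them into /etc/sysctl.conf, roughly like this; correct me if I'm wrong.)

Code:
# /etc/sysctl.conf -- read at boot, same values as posted above
kern.ipc.maxsockbuf=2097152
net.inet.tcp.sendbuf_max=16777216
net.inet.tcp.recvbuf_max=16777216
# ...and so on for the remaining values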

I only want performance close to the Samba numbers anyway; more is not possible with my network infrastructure, plain Gigabit Ethernet, which caps out around 125 MB/s in theory (10 Gbit/s is still way too expensive).

The reason I used RAIDZ2 (RAID6) is the head-parking problem of the WD Caviar Greens, which severely shortens a disk's life under Unix, so I would rather have two disks' worth of redundancy.

Quite intelligent marketing by WD:
The target audience for the 2TB Caviar Green is mostly people who want to run a small/cheap home server, and most people who run a server use Unix. Because the alternative (Caviar Black) costs twice as much, nearly everyone picks the Green, and thanks to the "IntelliPark" feature they have to buy a new one every 1.5 years, so at the end of the day WD earns more money.

Creative and well thought out, I have to admit.

Regards
 
Ok, thanks for the advice.

With the sysctl tuning, performance over NFS has now increased to 58 MB/s. Is there anything else I could do, especially NFS-side tuning?
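For example, would more nfsd threads on the server or bigger read/write sizes on the client help? I am only guessing here, something along these lines (assuming a Linux NFS client; the share and mount point names are just examples):

Code:
# FreeBSD server, /etc/rc.conf: run more nfsd threads than the default
nfs_server_flags="-u -t -n 8"
# Linux client: mount with larger rsize/wsize over TCP (example paths)
mount -t nfs -o rsize=65536,wsize=65536,tcp server:/storage /mnt/storage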
 
Strange, I have now copied a 1005 MB ISO from the ZFS pool to /tmp on the FreeBSD server itself. I measured the time between pressing Enter and getting the prompt back: twelve seconds, which is ~83 MB/s.
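(Maybe timing cp is too rough; I could also measure the raw read speed from the pool with dd, something like this, with the file path being just an example:)

Code:
# read the ISO from the pool and discard it; dd prints the transfer rate at the end
dd if=/storage/test.iso of=/dev/null bs=1m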

Isn't that way too low for a purely local copy, and how can Samba then reach up to 120 MB/s over the network? Is Samba lying to me?

Regards.
 
bsus said:
and thanks to the "IntelliPark" feature they have to buy a new one every 1.5 years, so at the end of the day WD earns more money.
Regards

:) Yep, they are cunning beasts at WD. But if you use ataidle
Code:
ataidle -P 0 /dev/ad4
even a WD Green can become usable for non-critical applications. For example, I have a backup server where four of these disks run in RAID 5, and after 2 years of 24/7 service they are still working fine.

Anyway, it would be nice to compare iostat results for NFS and SMB. I don't think Samba can really be that much faster than NFS, so check the numbers on the server while a copy is running, using:
Code:
zpool iostat 1
 
Hi,
Good to hear that there's an option to tame them ;)

Code:
 ls /dev
acpi       ad6        devstat    mem        ttyv0      ttyvb      urandom
ad10       ad8        fd         nfslock    ttyv1      ttyvc      usb
ad12       ata        fido       null       ttyv2      ttyvd      usbctl
ad14       audit      geom.ctl   pci        ttyv3      ttyve      xpt0
ad4        bpf        io         ptmx       ttyv4      ttyvf      zero
ad4s1      bpf0       kbd0       pts        ttyv5      ufsid      zfs
ad4s1a     console    kbdmux0    random     ttyv6      ugen0.1
ad4s1b     consolectl klog       stderr     ttyv7      ugen0.2
ad4s1d     ctty       kmem       stdin      ttyv8      ugen0.3
ad4s1e     da0        log        stdout     ttyv9      ugen1.1
ad4s1f     devctl     mdctl      sysmouse   ttyva      ugen1.2
freebsd ataidle # ataidle -P 0 /dev/ad4
ataidle: the device does not support advanced power management
freebsd ataidle # ataidle -P 0 /dev/ad6
ataidle: the device does not support advanced power management

I tried this now with ataidle.
Do I have to do something additional, or did you mean atacontrol instead of ataidle?

I will keep working on the speed issue after this. Thanks for the great help :)
 