I'm not sure how you are mounting your NFS share, but I've been looking for ways to get decent NFS sync write performance for a while. I have a Linux box serving some VMware images that I would really, really like to replace with FreeBSD+ZFS. Not only am I much more comfortable with FreeBSD, but I would also get better snapshots, zfs send, zfs scrub, etc.
From what I understand, the slow NFS write performance comes down to the fact that every sync NFS request has to be flushed to stable storage before the client can proceed with the next one. The (relatively) slow access times of mechanical disks mean you can't actually process many sync NFS requests per second, which drags the throughput down.
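As a back-of-envelope sanity check, that explanation fits the numbers: if each sync write is around 32KB and a mechanical disk can honor around 150 flushes per second (both figures are rough assumptions on my part, not measurements), the ceiling works out to only a few MB/s:

```shell
# Rough ceiling for sync NFS write throughput: request size x flushes/sec.
# 32KB per request and 150 flushes/sec are assumed ballpark figures.
awk 'BEGIN { printf "%.1f MB/s\n", 32 * 150 / 1024 }'   # -> 4.7 MB/s
```

Which is in the same ballpark as the ~5MB/s I actually measured.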
NFS to a small test pool only gave me ~5MB/s, which matches what's seen in this thread:
http://lists.freebsd.org/pipermail/freebsd-fs/2009-September/006884.html
Obviously the local performance was way above this.
I was able to increase this to ~35MB/s by adding a single 60GB Vertex 2 SSD.
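For anyone wanting to try the same thing, this is a sketch of attaching an SSD as a dedicated log (SLOG) device so the ZIL lands on flash instead of the spinning disks. The pool name "tank" and device "ada1" are placeholders; substitute whatever your setup actually uses:

```shell
# Attach an SSD as a dedicated ZIL (SLOG) device.
# "tank" and "ada1" are placeholder names for the pool and the SSD.
zpool add tank log ada1
zpool status tank   # the SSD should now show up under a "logs" section
```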
In order to replace my Linux box I ideally need to be seeing 80MB/s+, so I'd be really interested if anyone can pull it off without spending a fortune on PCIe SSDs. So far, I'm not aware of anyone who has managed more than about 60MB/s with standard hardware. Hacks like mounting async (I don't think that's even possible in VMware) or disabling the ZIL, which could jeopardize data integrity, aren't really an option.
Interestingly, iXsystems make some FreeBSD 8.2 NAS boxes with Fusion-io cards that can apparently max out 10Gb Ethernet, although I assume that's not with NFS. I'd be interested to see what their NFS performance is like, though (not that I could afford them).