Extremely slow NFS reads

FreeBSD 10 box with RAID-Z (LSI controller via PCI-E passthrough) running inside ESXi. I patched the kernel to avoid the cache flush issue, and that's about all that was done to the machine. I have set up iSCSI storage passed back to ESXi and get around 300-500 MB/s reads and writes from a virtual Win7 machine that lives on this storage, running the ATTO benchmark.

The problem is that the iSCSI datastore doesn't come up automatically (unless you rescan all HBAs), so I need an NFS datastore as well. And this is where I ran into problems. With the patch applied I get around 300 MB/s writes from the virtual Win7 machine, and read performance is great as long as the transfer sizes stay below 64K. Once the ATTO benchmark starts using 64K transfers, read performance drops below 1 MB/s and the datastore in general becomes very slow. What should I check first?

I had exactly the same problem while experimenting with Nexenta and FreeNAS, but running OpenFiler gave me perfect read/write speeds.
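In case it helps, here is what I was planning to check first on the FreeBSD side (these are just guesses on my part, I don't know yet which of them, if any, is relevant): the NFS server statistics and whether packets are being dropped or retransmitted on the vmxnet3 interfaces while ATTO runs.
Code:
# NFS server statistics (look for unusual read counters)
nfsstat -s
# per-interface errors and drops on the vmx NICs
netstat -i
# TCP retransmits while the 64K read test is running
netstat -s -p tcp | grep -i retrans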
/etc/rc.conf
Code:
hostname="xxx"
ifconfig_vmx0="inet 192.168.1.10 netmask 255.255.255.0"
#Interface with jumbo frames for dedicated iSCSI
ifconfig_vmx1="inet 192.168.2.15 netmask 255.255.255.0 mtu 9000"

defaultrouter="192.168.1.4"
ip6addrctl_enable="NO"
ipv6_network_interfaces="none"
ipv6_activate_all_interfaces="NO"
ipv6_gateway_enable="NO"

tmpmfs="AUTO"           # Set to YES to always create an mfs /tmp, NO to never
tmpsize="512m"           # Size of mfs /tmp if created
tmpmfs_flags="-m 0 -o async,noatime -S"

zfs_enable="YES"

sshd_enable="YES"
ctld_enable="YES"
# Set dumpdev to "AUTO" to enable crash dumps, "NO" to disable
dumpdev="AUTO"

rpcbind_enable="YES"
nfs_server_enable="YES"
mountd_enable="YES"
rpc_statd_enable="YES"
rpc_lockd_enable="YES"
nfs_server_flags="-u -t -n 4"
mountd_flags="-r"

smartd_enable="YES"
samba_enable="YES"
dnsmasq_enable="YES"
transmission_enable="YES"
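
One thing I noticed while pasting this: nfs_server_flags="-u -t -n 4" starts only four nfsd threads. I have no idea whether that is related to the slow reads, but raising it is a cheap experiment, e.g.:
Code:
# purely an experiment - start more nfsd threads
nfs_server_flags="-u -t -n 32"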

/etc/exports
Code:
/pool/esxi -maproot=root -network 192.168.2 -mask 255.255.255.0
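
Since the export is restricted to the jumbo-frame network (192.168.2.0/24 on vmx1), I also still need to verify that 9000-byte frames really make it end to end (vSwitch, vmkernel port and the guest). Something like this, with the don't-fragment bit set; the ESXi vmkernel address below is a placeholder for whatever it really is on 192.168.2.0/24:
Code:
# from FreeBSD: 8972 = 9000 - 20 (IP header) - 8 (ICMP header)
ping -D -s 8972 <esxi-vmk-ip>
# from the ESXi shell towards the FreeBSD box
vmkping -d -s 8972 192.168.2.15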
 
It will probably not change the performance, but definitely consider using tmpfs(5) for /tmp. It has less overhead than tmpmfs and only uses the memory it needs instead of allocating a fixed-size chunk of RAM.
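For example, something like this in /etc/fstab (the size option is optional and just an example):
Code:
tmpfs    /tmp    tmpfs    rw,mode=01777,size=512m    0    0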
 
wblock@ said:
It will probably not change the performance, but definitely consider using tmpfs(5) for /tmp. It has less overhead than tmpmfs and only uses the memory it needs instead of allocating a fixed-size chunk of RAM.

Thank you for the suggestion! I will look into it.

For now I have migrated to a virtualised OpenFiler to serve the NFS datastore, but I would really like to understand why NFS read performance on FreeBSD is so slow.
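One thing I still want to try when I get back to the FreeBSD box is disabling TSO and LRO on the vmxnet3 interfaces; I have seen reports of large NFS read replies behaving badly with TSO, though I don't know yet whether that applies here. In rc.conf that would look something like:
Code:
ifconfig_vmx0="inet 192.168.1.10 netmask 255.255.255.0 -tso -lro"
ifconfig_vmx1="inet 192.168.2.15 netmask 255.255.255.0 mtu 9000 -tso -lro"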
 