Hello,
I'm having terrible performance with NFS and iSCSI on one of my FreeBSD servers. Both NFS and iSCSI drop below 5 MB/sec from one server to another, while SCP runs at ~60 MB/sec and CIFS goes at line speed (~100 MB/sec).
The server with the issue:
Code:
# uname -a
FreeBSD gin 9.1-RELEASE-p5 FreeBSD 9.1-RELEASE-p5 #0 r254003: Wed Aug 7 03:09:03 UTC 2013 ferry@gin:/usr/obj/usr/src/sys/KA-NERU amd64
A few tests I did:
iSCSI: 17 sec for 100 MB = ~5.8 MB/sec (tested from a VMware ESXi server)
Code:
# date; dd if=/dev/zero of=test.dd bs=1M count=100; date
Fri Aug 9 19:48:43 UTC 2013
100+0 records in
100+0 records out
Fri Aug 9 19:49:00 UTC 2013
NFS: 29 seconds for 100 MB = ~3.6 MB/sec (from another FreeBSD server)
Code:
# dd if=/dev/zero of=test.dd bs=1M count=100
100+0 records in
100+0 records out
104857600 bytes transferred in 29.008874 secs (3614673 bytes/sec)
SCP: 38 seconds for 2 GB = ~52.6 MB/sec (from the same FreeBSD server as the NFS test)
Code:
# scp /mnt/zpool/tmp/test3.dd user@10.0.0.4:/mnt/zpool/tmp/
Password:
test3.dd 100% 2000MB 52.6MB/s 00:38
CIFS: On a windows machine using Samba I get ~98 MB/sec
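To double-check my arithmetic on the figures above, here is the same calculation done directly from the numbers that dd and scp printed (plain arithmetic, nothing measured):

```python
# Sanity check on the throughput figures quoted above, in decimal MB/sec.

iscsi = 100 / 17                      # 100 MB in 17 s (date stamps around dd) ~= 5.9 MB/s
nfs = 104857600 / 29.008874 / 1e6     # dd's own byte count / elapsed time    ~= 3.6 MB/s
scp = 2000 / 38                       # 2000 MB in 38 s from scp's line       ~= 52.6 MB/s

print(f"iSCSI ~{iscsi:.1f} MB/s, NFS ~{nfs:.1f} MB/s, SCP ~{scp:.1f} MB/s")
```

So NFS and iSCSI really are an order of magnitude slower than SCP over the same wire.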
I've already replaced the switch and the network cables, but the performance stays just as low.
My iSCSI settings (copied from a ZFSguru box I also have; that one runs without issues):
Code:
[Global]
Comment "Global configuration"
NodeBase "lan.network.gin"
PidFile /var/run/istgt.pid
AuthFile /usr/local/etc/istgt/auth.conf
MediaDirectory /var/istgt
LogFacility "local7"
DiscoveryAuthMethod Auto
Timeout 30
NopInInterval 20
MaxSessions 16
MaxConnections 4
MaxR2T 32
MaxOutstandingR2T 16
DefaultTime2Wait 2
DefaultTime2Retain 60
FirstBurstLength 262144
MaxBurstLength 1048576
MaxRecvDataSegmentLength 262144
InitialR2T Yes
ImmediateData Yes
DataPDUInOrder Yes
DataSequenceInOrder Yes
ErrorRecoveryLevel 0
[UnitControl]
Comment "Internal Logical Unit Controller"
AuthMethod CHAP Mutual
AuthGroup AuthGroup10000
Portal UC1 127.0.0.1:3261
Netmask 127.0.0.1
[PortalGroup1]
Comment "PortalGroup1"
Portal DA1 10.0.0.4:3260
[InitiatorGroup1]
Comment "Gin"
InitiatorName "ALL"
Netmask 10.0.0.0/24
[LogicalUnit1]
TargetName ginesx
Mapping PortalGroup1 InitiatorGroup1
AuthGroup AuthGroup1
UnitType Disk
QueueDepth 64
LUN0 Storage /dev/zvol/zpool/esx1 200GB
My NFS-related settings in rc.conf:
Code:
nfs_reserved_port_only="YES"
nfs_server_enable="YES"
nfsv4_server_enable="YES"
nfsuserd_enable="YES"
rpcbind_enable="YES"
nfs_flags="-u -t -n 4" # serve udp, serve tcp, start 4 instances
mountd_flags="-l -p 1026"
mountd_enable="YES"
rpc_lockd_enable="YES"
rpc_lockd_flags="-p 1027"
rpc_statd_enable="YES"
rpc_statd_flags="-p 1028"
And my /etc/exports:
Code:
/mnt/gin/vm -alldirs -maproot=root 10.0.0.5
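For completeness, after editing /etc/exports I make mountd re-read it; as far as I know the FreeBSD rc script for mountd supports reload (it sends mountd a SIGHUP), and showmount can confirm the result:

```shell
# Make mountd re-read /etc/exports (reload sends SIGHUP)
service mountd reload

# Confirm what is actually being exported
showmount -e localhost
```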