Slow network performance

I am experiencing slow network performance in FreeBSD 10.3 r297264 running as a VM on ESXi 6.0.

Just to rule hardware out, I set up a Windows Server VM on the same ESXi host and assigned it the same hardware resources as the FreeBSD VM.

If I copy a 2GB test file over the network (SMB) from another physical machine on the same network to the FreeBSD VM, I get speeds of 20MB/s - 35MB/s, with several drops to 0MB/s lasting 13 to 20 seconds during the transfer.

Copying the same 2GB file to the Windows Server VM averages 115MB/s with no drops.

The slow network performance in FreeBSD can be observed with both VMXNET3 and E1000 NICs.

Here is a graph of the network utilization from the computer hosting the 2GB file.

The first portion of the graph is the copy to FreeBSD; the second portion is the copy to the Windows server.

What configuration changes could be made to FreeBSD to optimize network performance?

compare_2gb.jpg


Thanks
 
Try turning off TSO on the interfaces: ifconfig vmx0 -tso.
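If that helps and you want the setting to survive a reboot, something like this should do it (just a sketch: it assumes the interface really is vmx0 and that it is configured via DHCP, so adjust the name and addressing to your setup):

Code:
# disable TSO on the running system
ifconfig vmx0 -tso

# make it persistent: append -tso to the interface line in /etc/rc.conf
ifconfig_vmx0="DHCP -tso"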
 
I think you need to address the packet drop issue before optimizing the interface performance.
You might want to do the following experiments:
- check the file transfer speed from the Windows VM to the FreeBSD VM on the same ESXi host
- try a file transfer between two FreeBSD VMs and see what transfer rate you get
- try FTP or SFTP to see whether the issue is specific to SMB (see the sketch below)
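For example, a timed copy over SSH gives a quick non-SMB data point (a sketch; the file name, user and host are placeholders, and it assumes an SSH client on the sending machine):

Code:
# push the same 2GB test file to the FreeBSD VM over SSH and time it
time scp test2gb.bin user@freebsd-vm:/tmp/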
 
Thank you for your responses.

Disabling TSO (-tso) made no difference in the file transfer.

I tested the transfer speed with iperf between the FreeBSD VM and a physical machine, in both directions.
The test results are below.

FreeBSD as a client
ip3_cl.jpg


FreeBSD as a server

frbs_srv.jpg


So it seems that the problem I am facing is affecting the SMB protocol, as iperf is showing almost wire-speed transfers.
 
Windows TCP settings

Code:
netsh interface tcp show global
Querying active state...

TCP Global Parameters
----------------------------------------------
Receive-Side Scaling State          : enabled
Chimney Offload State               : automatic
NetDMA State                        : enabled
Direct Cache Acess (DCA)            : disabled
Receive Window Auto-Tuning Level    : normal
Add-On Congestion Control Provider  : none
ECN Capability                      : disabled
RFC 1323 Timestamps                 : disabled
** The above autotuninglevel setting is the result of Windows Scaling heuristics
overriding any local/policy configuration on at least one profile.

netsh interface tcp show heuristics
TCP Window Scaling heuristics Parameters
----------------------------------------------
Window Scaling heuristics           : enabled
Qualifying Destination Threshold    : 3
Profile type unknown                : normal
Profile type public                 : restricted
Profile type private                : restricted
Profile type domain                 : normal

How could I obtain TCP parameters on the FreeBSD computer?
 
How could I obtain TCP parameters on the FreeBSD computer?
To answer you directly, I suggest you take a look at tuning(7).
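For a quick look from the command line, something like this works (a sketch; the OIDs listed are just the ones most often compared against the Windows settings you posted):

Code:
# dump everything TCP related
sysctl net.inet.tcp

# or just the buffer / window-scaling knobs
sysctl kern.ipc.maxsockbuf net.inet.tcp.rfc1323 \
    net.inet.tcp.sendbuf_max net.inet.tcp.recvbuf_max \
    net.inet.tcp.sendbuf_auto net.inet.tcp.recvbuf_auto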

But before tuning TCP/IP, consider that your iperf test shows your TCP is already very well tuned, since you are getting almost wire speed.
Notice that you ran iperf's TCP test, not the UDP one, because you did not use the -u switch.
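If you want to repeat the test over UDP, it would look roughly like this (a sketch, assuming iperf version 2 on both ends; the address and bandwidth are placeholders):

Code:
# on the FreeBSD VM: run a UDP server
iperf -s -u

# on the other machine: push ~900 Mbit/s of UDP for 30 seconds and report loss
iperf -c <freebsd-vm-ip> -u -b 900M -t 30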

I suggest you look at tuning SMB (Samba, I guess) on FreeBSD, but hopefully someone more knowledgeable can help you with that.
 
I note that nobody has mentioned sysctl(8). Do you have anything network related in /etc/sysctl.conf (or any network related sysctl stuff elsewhere)? If you do have anything configured, please share all non-default sysctl settings with us. I'm asking this because some of the network tuning advice out there on the big bad web can be outdated and actually harmful to performance.
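If you're not sure what is set, the following will show anything configured at boot plus the current values of the usual suspects (a sketch; harmless to run as-is):

Code:
# anything explicitly set at boot time
cat /etc/sysctl.conf
cat /boot/loader.conf

# current values of the most commonly "tuned" network knobs
sysctl net.inet.tcp.sendspace net.inet.tcp.recvspace kern.ipc.maxsockbuf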

Something looks slightly odd with those graphs above, relative to the reported issue. The graph to the Windows server shows performance somewhere in the range of 100–150Mbit/s. The graph to the FreeBSD server shows performance consistently at 200Mbit/s other than when it stops for a short while. I am left wondering if the disk subsystem can actually deliver 200Mbit/s (approx 20MByte/s); if it can't then the pause is easily explained by the FreeBSD host having buffered as much as it is willing to buffer, and waiting for more to be written to disk before allowing the client to resume sending.

For the two graphs, am I correct that the top one is Windows->Windows, and the bottom one is Windows->FreeBSD? (not the Windows Task Manager, the two graphs further down the thread)

Be careful of your bits and bytes here, as I think what I may be seeing is a Windows server handling 115Mbit/s (approx 11.5MByte/s); and a FreeBSD server handling 200Mbit/s (approx 20MByte/s) other than when its buffers fill up due to lack of disk bandwidth.

N.B. hard drives with spinning platters and moving heads (i.e. not SSD) do not actually deliver anything like the data rate performance stated by their manufacturers, other than in very specific circumstances. General purpose workloads often achieve only around 10% of the manufacturer's data rate, due to the time lost to head movement. 10–20MByte/s is not so unusual for achieved performance of a single SATA-150 / SATA-300 disk (or mirrored pair).

Apologies if I've misinterpreted it.
 
To answer you directly, I suggest you take a look at tuning(7).

But before tuning TCP/IP, consider that your iperf test shows your TCP is already very well tuned, since you are getting almost wire speed.
Notice that you ran iperf's TCP test, not the UDP one, because you did not use the -u switch.

I suggest you look at tuning SMB (Samba, I guess) on FreeBSD, but hopefully someone more knowledgeable can help you with that.

SMB uses TCP, so I thought it would make sense to test TCP, but UDP shows the same results.

udp.jpg
 
I think you need to address the packet drop issue before optimizing the interface performance.
You might want to do the following experiments:
- check the file transfer speed from the Windows VM to the FreeBSD VM on the same ESXi host
- try a file transfer between two FreeBSD VMs and see what transfer rate you get
- try FTP or SFTP to see whether the issue is specific to SMB

There is no packet drop issue.
The problem seems to affect other protocols as well. An FTP transfer graph (Windows -> FreeBSD) is shown below.

ftp.jpg
 
That's a virtual drive, right, not a physical drive?

The seek times other than sequential don't mean terribly much on virtual drives unless they occupy the majority of an underlying physical drive (or are the only thing active on the physical drive). 0.1ms sequential seeks is not particularly fast; I get 0.07ms sequential seeks out of a Hitachi Deskstar SATA from 2008.

What is the underlying physical disk system?
How busy is it with activity from other virtual hosts?

The same goes for the client system sending the file. What sort of disk subsystem does it have, and how busy is it with other activity at the time?

Apologies if any of these seem like daft questions, but the iperf3 results cause me to question whether the real bottleneck is disk (on either end), rather than network.
 
Murph,

Yes, FreeBSD is a VM with a 40GB virtual drive. The underlying physical disk system is a RAID-5 of 3 SSDs connected to an HP Smart Array P440 controller.

The host is not busy at all; there are only two VMs on it, and the second VM is an idle Windows server.

The PC sending the file is an idle Windows 7 PC with an SSD drive.

Thanks
 
Well, with it being SSD all round, both client and server, that basically takes the classic seek-contention performance issue out of the picture. You should be able to get the full performance of the drives (or at least very close to it).

The next obvious performance sink is the classic RAID 5 write performance hit. With 3 drives, each write can turn into 1 read and 2 writes. If the P440's write cache is fully enabled, that should greatly help to mitigate the R5 overhead (which is already mitigated quite a bit by virtue of being on a decent hardware RAID controller and using SSD drives).

Is the Flash Backed Write Cache (FBWC) enabled on the P440?

So, after all that, 20MByte/s / 200Mbit/s does seem like it could be unreasonably slow. Have you double checked everything about the ESX host's configuration, particularly the resource sharing? Is there anything there that could be reducing priority or putting some sort of cap on the throughput?

Unless there's something to be found in the above, I think we're back to looking at Samba and sysctl(8). The fact that iperf3 is hitting essentially full wire speed strongly points in the direction of Samba.
 
Here is a diagram of the setup. Seeing 115MB/s from Windows 7 to Windows Server 2012 in both directions tells me that the hardware is more than capable of transferring at wire speed.

There are no resource allocations configured on the host.

I did play with sysctl(8), but reverted everything back to defaults as I didn't see any improvement.

diag.jpg
 
Code:
# $FreeBSD: releng/10.3/etc/sysctl.conf 112200 2003-03-13 18:43:50Z mux $
#
#  This file is read when going to multi-user and its contents piped thru
#  ``sysctl'' to adjust kernel values.  ``man 5 sysctl.conf'' for details.
#

# Uncomment this to prevent users from seeing information about processes that
# are being run under another UID.
# security.bsd.see_other_uids=0


# net.inet.tcp.rfc1323=1
# kern.ipc.maxsockbuf=16777216

# net.inet.tcp.sendspace=1048576
# net.inet.tcp.recvspace=1048576


# set to at least 16MB for 10GE hosts
# kern.ipc.maxsockbuf=16777216

# set autotuning maximum to at least 16MB too
# net.inet.tcp.sendbuf_max=16777216
# net.inet.tcp.recvbuf_max=16777216

# enable send/recv autotuning
# net.inet.tcp.sendbuf_auto=1
# net.inet.tcp.recvbuf_auto=1

# increase autotuning step size
# net.inet.tcp.sendbuf_inc=16384
# net.inet.tcp.recvbuf_inc=524288

# turn off inflight limiting
# net.inet.tcp.inflight.enable=0

# set this on test/measurement hosts
# net.inet.tcp.hostcache.expire=1
 
I'm going to suggest tinkering with Samba's socket options. The size of socket buffers in particular can have a dramatic impact on TCP performance. FreeBSD 10.3 does have automatic sizing of buffers, but I suppose there's a chance that some interaction with Samba is not letting them grow to optimal size.

sysctl(8) on defaults should be good for this.

Restart Samba after changing config each time.

First, if you have any socket options = … in the config, try commenting it out and see how it performs with no special socket options.

Next, try socket options = TCP_NODELAY SO_SNDBUF=65536 SO_RCVBUF=65536, and see how it performs.

Next, try socket options = TCP_NODELAY SO_SNDBUF=262144 SO_RCVBUF=262144, and see how it performs.

Now, keep doubling the SO_SNDBUF and SO_RCVBUF, testing performance at each step, until you no longer see a useful gain. The max is kern.ipc.maxsockbuf, which should default to 2097152 (2MB).

You should keep the Samba log level at 2 or less. Higher levels can eat performance.
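Something along these lines in the [global] section of smb.conf is what I mean (a sketch; the buffer values shown are just one step in the doubling sequence described above):

Code:
[global]
    # double SO_SNDBUF/SO_RCVBUF on each test run: 65536 -> 131072 -> 262144 -> ...
    # (the ceiling is kern.ipc.maxsockbuf, 2097152 by default)
    socket options = TCP_NODELAY SO_SNDBUF=262144 SO_RCVBUF=262144
    # keep logging light while benchmarking
    log level = 1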
 
You are talking about the Samba client, not the server, right?

The socket options are for the SAMBA server.

Yup, I'm talking about the Samba server's socket options in its main config file. It's now recognised as a bad thing to set buffer size in there (since it is generally better for the OS to handle that automatically), but for the purposes of this diagnosis it is something that should be tried. On a low latency 1Gbit/s LAN, it shouldn't really need to break past the kernel's default 2MB max buffer size, and we're looking at what happens as it increases, and if the gain levels off at some point.
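For a rough sense of scale (my assumption: an RTT somewhere around 0.5 ms on this LAN), 1 Gbit/s is about 125 MByte/s, so the bandwidth-delay product is roughly 125 MByte/s × 0.0005 s ≈ 64 KByte, far below that 2 MByte ceiling.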
 
As far as I know, I am using the Samba client on FreeBSD to connect to a Windows share, so making Samba server config changes would probably not get me anything.
 
As far as I know, I am using the Samba client on FreeBSD to connect to a Windows share, so making Samba server config changes would probably not get me anything.
Oh, ok. I thought this was about running the FreeBSD 10.3 VM as a Samba server for a Windows client.

So, what SMB client are you using on FreeBSD? smbclient(1) from the Samba distribution? It is possible that the client has not had much effort put into making it a high-performance tool, as most of the focus with Samba is on making it work well as a file/print server and/or domain controller.

Yes, the socket options stuff I was suggesting is for Samba server.
 
So, what SMB client are you using on FreeBSD? smbclient(1) from the Samba distribution?

Yes, the SMB client that comes with FreeBSD.

I simply type smb://192.168.20.100 in the file manager to authenticate to the Windows PC, then copy the test file and paste it into the /tmp folder on the FreeBSD VM, which by the way is using ZFS. (Not sure if that matters.)
 
Yes, the SMB client that comes with FreeBSD.

I simply type smb://192.168.20.100 in the file manager to authenticate to the Windows PC, then copy the test file and paste it into the /tmp folder on the FreeBSD VM, which by the way is using ZFS. (Not sure if that matters.)

Ahh, ok, that's something quite different from directly using smbclient(1), the command-line tool supplied as part of the Samba distribution. It also doesn't sound like something that is part of the base FreeBSD system; it's more likely a bunch of stuff from ports(7).

Which file manager? Is it part of one of the many different desktop environments? Do you know what it is actually using as a SMB client?
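One way to take the desktop layer out of the picture would be to pull the file with smbclient(1) from the command line and compare the rate it reports (a sketch; the share name, user and file name are placeholders):

Code:
# fetch the 2GB test file straight into /tmp, bypassing the file manager
smbclient //192.168.20.100/share -U user -c 'get test2gb.bin /tmp/test2gb.bin'
# smbclient prints the average transfer rate when the get completes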
 