Network speeds

Hi All,

I am new(ish) to FreeBSD, but a long-time user and administrator of other Unixes. I have a simple home network: 4 clients and 1 fileserver, everything on gigabit Ethernet. The server has 8x 2 TB disks in RAIDZ, and locally I can write at about 350 MB/sec.

With the FreeBSD client and the Macs I never get more than 25 MB/sec using scp between machines, except between two clients (one Mac and one FreeBSD box with an Intel 1000 network card), where I get 32 MB/sec. Isn't this quite slow? What can I do to improve it?

Here is the output for one of the network controllers on the Mac:

Code:
en0: flags=8863<UP,BROADCAST,SMART,RUNNING,SIMPLEX,MULTICAST> mtu 1500
	options=2b<RXCSUM,TXCSUM,VLAN_HWTAGGING,TSO4>
	ether 3c:07:54:33:aa:1b 
	inet6 fe80::3e07:54ff:fe33:aa1b%en0 prefixlen 64 scopeid 0x4 
	inet 10.0.0.17 netmask 0xffffff00 broadcast 10.0.0.255
	media: autoselect (1000baseT <full-duplex,flow-control>)
	status: active

and on my file server

Code:
re0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
	options=389b<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,VLAN_HWCSUM,WOL_UCAST,WOL_MCAST,WOL_MAGIC>
	ether 00:25:22:b4:da:1b
	inet 10.0.0.67 netmask 0xffffff00 broadcast 10.0.0.255
	inet6 fe80::225:22ff:feb4:da1b%re0 prefixlen 64 scopeid 0x6 
	nd6 options=29<PERFORMNUD,IFDISABLED,AUTO_LINKLOCAL>
	media: Ethernet autoselect (1000baseT <full-duplex>)
	status: active

The switch between them is a Cisco SB200, an unmanaged 16-port gigabit switch.
 
What version of FreeBSD are you using?

One of the new features of 9.0 is High Performance SSH (HPN-SSH).
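
If you are on 9.0, the HPN knobs live in /etc/ssh/sshd_config. A minimal sketch (the option names come from the HPN patch set; verify them against sshd_config(5) on your system before relying on this):

Code:
# HPN-SSH tuning (FreeBSD 9.0 base OpenSSH with the HPN patches)
HPNDisabled no     # keep the HPN dynamic buffer sizing enabled
NoneEnabled yes    # allow the "none" cipher for bulk data after authentication
                   # only sensible on a trusted LAN; the client must request it too

Restart sshd after the change and test scp again.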
 
The server has a quad-core Intel CPU and the laptop I am using has a quad-core i7 with HT.
So I don't think the problem is CPU usage, though it does go up a bit.
 
Try using more scp instances until CPU usage reaches more than 75%, either on the client or on the server.
I assume your client and/or server processes aren't multithreaded, so CPU usage higher than 25% on a quad-core client means the workload could be eating an entire CPU core.
If you still believe CPU usage is not the problem, try forcing CPU affinity of the client process to one or two cores and see what happens.
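
On FreeBSD you can pin the transfer to specific cores with cpuset(1); something like this (file name, user and destination are just placeholders):

# cpuset -l 0-1 scp /tmp/bigfile.bin user@10.0.0.67:/tmp/

If throughput changes noticeably when you restrict or move the cores, the bottleneck is on the CPU side rather than the network.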

I had a similar problem: an FTP client wasn't capable of more than ~25 MBytes/sec, and the client computer (a single-core 3 GHz laptop) was fully loaded. Raising the MTU to something higher than 4000 solved it for me, and I could then reach more than 70 MBytes/s (the maximum hard disk speed of both client and server).

Conclusion: the server may be capable of sending and receiving more than 200 Mbit/s, but if your client/server applications are not fully multithreaded they end up saturating a single CPU core. Increasing the MTU helps by lowering the per-packet processing overhead at the network level.
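
If a larger MTU does help, you can make it persistent on the FreeBSD side in /etc/rc.conf; a sketch assuming the re0 addressing shown above:

Code:
# /etc/rc.conf -- set the address and jumbo MTU at boot
ifconfig_re0="inet 10.0.0.67 netmask 255.255.255.0 mtu 9000"

Every device in the path (clients, switch, server) has to support and be configured for the larger frame size, otherwise you will see drops.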
 
I have been trying almost everything, but the best speed I get is after enabling "jumbo frames" with ifconfig re0 mtu 9000:

Code:
Client connecting to 10.0.0.67, TCP port 5001
TCP window size: 65.0 KByte (default)
------------------------------------------------------------
[  3] local 10.0.0.94 port 30690 connected with 10.0.0.67 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec   657 MBytes   551 Mbits/sec

with the following in /etc/sysctl.conf:
Code:
net.inet.tcp.sendbuf_max=4194304
net.inet.tcp.recvbuf_max=4194304
net.inet.tcp.delayed_ack=1
net.inet.tcp.path_mtu_discovery=0
net.inet.tcp.recvbuf_inc=524288
net.inet.tcp.recvbuf_max=16777216
net.inet.tcp.sendbuf_inc=524288
net.inet.tcp.sendbuf_max=16777216
net.inet.tcp.sendspace=65536
net.inet.udp.maxdgram=57344
net.inet.udp.recvspace=65536
net.local.stream.recvspace=65536
net.local.stream.sendspace=65536

#net.inet.tcp.sendbuf_auto=1
#net.inet.tcp.recvbuf_auto=1
#net.inet.tcp.sendbuf_inc=16384
#net.inet.tcp.recvbuf_inc=524288
#net.inet.tcp.inflight.enable=0
#net.inet.tcp.hostcache.expire=1


#net.inet.tcp.sendbuf_max=16777216 # TCP send buffer space
#net.inet.udp.blackhole=1          # drop any UDP packets to closed ports
#net.inet.tcp.sendspace=131072
net.local.stream.sendspace=65536
net.local.stream.recvspace=65536
##net.inet.tcp.local_slowstart_flightsize=10
#net.inet.tcp.nolocaltimewait=1

#net.inet.tcp.delayed_ack=1
#net.inet.tcp.delacktime=100

#net.inet.tcp.mssdflt=1460
#net.inet.tcp.sendspace=78840
#net.inet.tcp.recvspace=78840
#net.inet.tcp.slowstart_flightsize=54

#kern.polling.burst_max=1000

Note that many of the parameters are commented out; I have tried these quite a few times, in different combinations of on and off.
I have no problem decreasing the performance all the way down to 1 Mbit/s, but only once did I see 667 Mbit/s, and that was only once.

Right now I am just moving up and down by about 5 Mbit/s with different settings.

Any help with what these parameters actually mean would be much appreciated.
It just appears that I am getting 55% of a gigabit network, when I should be getting at least 80-90%.

For a file server this is life or death, so to speak, and any extra speed is much appreciated.

Just for fun, here is my loader.conf file:

Code:
vm.kmem_size="7g"
vm.kmem_size_max="7g"
vfs.zfs.arc_min="512m"
vfs.zfs.arc_max="5376M"
vfs.zfs.vdev.min_pending="1"
vfs.zfs.vdev.max_pending="1"
vfs.zfs.zil_disable="1" 
net.inet.tcp.tcbhashsize="4096"
net.inet.tcp.hostcache.hashsize="1024"
vfs.zfs.prefetch_disable=0


This machine is a dual-core AMD machine:

Code:
hw.machine: amd64
hw.model: AMD E-350 Processor
hw.ncpu: 2
hw.machine_arch: amd64

and it has 4 GB of memory:
Code:
Virtual Memory:		(Total: 1074672492K Active: 828144K)
Real Memory:		(Total: 3347560K Active: 36380K)
Shared Virtual Memory:	(Total: 30112K Active: 10036K)
Shared Real Memory:	(Total: 9804K Active: 8120K)
Free Memory Pages:	4266732K
 
Q: Are your hard drives able to (theoretically) saturate 1 gigabit? If it's a laptop (5400 rpm), or even a single desktop drive on either end, I would not have thought so.

Is the copy many small files or one large file? Many small files involve a lot more disk seeking, which may reduce the client's ability to send data fast enough.
 
traustitj said:
I have been trying almost everything, but the best speed I get is after enabling "jumbo frames" with ifconfig re0 mtu 9000:

Code:
Client connecting to 10.0.0.67, TCP port 5001
TCP window size: 65.0 KByte (default)
------------------------------------------------------------
[  3] local 10.0.0.94 port 30690 connected with 10.0.0.67 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec   657 MBytes   551 Mbits/sec

Run top(1) during the test to see whether iperf is limited by the CPU, on both the server and the client.
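
On the FreeBSD side, top's per-CPU display makes a saturated core easy to spot, for example:

# top -SPH

(-S shows system processes, -P adds a per-CPU usage line, -H lists individual threads.)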

550 Mbit/s is a typical transfer rate for a laptop hard disk and more than half of a gigabit Ethernet connection, so it looks like MTU=9000 may fix your problem.
 
Yes, this did improve things a little, but when running iperf I am not reading or writing any disks, and neither my laptop nor my server is a $500 machine. I would really like to gain another 20-30%; then I would be happy, and I am learning these things along the way.

I think the default settings in FreeBSD are way too low, everywhere.
 
7200 rpm drives pretty much won't go any faster, and cache will have a negligible impact on throughput.

With jumbo frames you appear to be at or close to that maximum throughput. Jumbo frames aren't turned on by default because not all networking gear supports them.

In general, FreeBSD is tuned to be "safe" out of the box, which in an enterprise environment is preferable to "fast but unreliable".
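
If you want to know what one of those tunables actually does, sysctl can print the kernel's one-line description instead of the value, e.g.:

# sysctl -d net.inet.tcp.recvbuf_max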


Maybe try setting up a ramdisk on both ends and repeat the test?
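
On the FreeBSD side, a quick way is a memory-backed filesystem with mdmfs(8); a rough sketch (size and mount point are arbitrary):

Code:
# mkdir -p /mnt/ramdisk
# mdmfs -s 1g md /mnt/ramdisk
  ... run the copy test against /mnt/ramdisk ...
# umount /mnt/ramdisk

That takes the disks out of the equation entirely, so whatever limit remains is network or CPU. (mdconfig -l shows the md unit afterwards if you want to destroy it with mdconfig -d -u N.)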
 
So in short: using just the network with iperf, no disks involved, is 500-600 Mbit/s the maximum I can expect, and anything above that just a dream?
 
traustitj said:
... using just the network with iperf, is 500-600 Mbit/s the maximum I can expect... ?

I'm not sure. If you run multiple iperf instances (or a multithreaded iperf test) you will probably see either higher throughput or 100% CPU usage somewhere on the client or server.
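
With the iperf you already used, parallel streams are one flag away; for example, four streams against the server address from your output:

Code:
# iperf -s                        (on the server)
# iperf -c 10.0.0.67 -P 4 -t 10   (on the client)

If the four streams together get noticeably past the ~551 Mbit/s you saw, a single stream is being limited by per-connection buffers or a single CPU core rather than by the wire.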
 
Can you give the output of:

# dd if=/dev/mirror(graidX)/your-array of=/dev/null count=1024 bs=1M

if you are using software RAID, or /dev/daX if you are using a hardware RAID controller.

I had an issue like this a few weeks ago, and the problem turned out to be the RAID controller firmware and driver.
Anyway, you can test the read speed that way, or create a file and fill it with zeros from /dev/zero to test the write speed of your disk array.
Like this:

# touch /some/file.file
# dd if=/dev/zero of=/some/file.file count=(some number) bs=(some block size)
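
For example (the pool name is just a placeholder; the 8 GB size is meant to exceed your 4 GB of RAM so ZFS caching does not hide the real disk speed):

# dd if=/dev/zero of=/tank/testfile bs=1M count=8192
# dd if=/tank/testfile of=/dev/null bs=1M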

Just show us the output. And if you are using software RAID, you can also check dmesg right after that; if there is a delayed write on one of the disks, you will definitely notice it in dmesg.
And since it is a fileserver, I think it would also be worthwhile to test the disk speed on the clients.

Thank you.
 
Instead of doing a straight scp from source to destination, try eliminating a lot of the duplicated or serial disk I/O. See if this speeds things up for you:

# gzip -c /your/file | ssh user@your.destination.server "gunzip -c - > /your/file"

This usually speeds things up a bit, especially if the files you are moving are not already compressed. You avoid duplicate reads/writes on both sides of the transfer and do the compression on the fly.
 