Slow performance with Intel X540-T2 10Gb NIC

Hello,

I am experiencing slower than expected performance from the Intel X540-T2 NIC I installed in a new FreeBSD 10.3 server. It "felt" slow on some basic file transfers so I did some testing with iperf3. If the FreeBSD box runs as the iperf3 server, I see transfer speeds of only ~1.6 Gb/s. If the FreeBSD box runs as the iperf3 client, I see transfer speeds of ~2.75 Gb/s. In both cases I am using a Windows 10 PC as the other side of the iperf3 test and it also has an Intel X540-T2 NIC. I installed CentOS 7 on the server and repeated the iperf3 tests and observed speeds approaching 8 Gb/s which is what I would expect from this hardware.
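For concreteness, the tests looked something like this (the hostname is a placeholder for my FreeBSD box; iperf3's -R flag just reverses the direction so I didn't have to swap server and client):

```sh
# On the FreeBSD box, run the server side:
iperf3 -s

# On the Windows 10 PC, run the client side (default 10 s TCP test):
iperf3 -c freebsd-host

# Reverse the direction from the same client, so the FreeBSD box sends:
iperf3 -c freebsd-host -R
```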

I am new to FreeBSD and have a few vague ideas of how to proceed, but would really appreciate some guidance. The driver included in 10.3 (and the latest 11.0 snapshot) is 3.1.13-k; the latest listed on Intel's website is 3.1.14. I'm willing to install it if anybody feels strongly that it might help (and can help me figure out how to do it), but that seems like a long shot. I also realize there is a bevy of tuning options I might try, but I am just beginning to investigate those.
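For what it's worth, the sort of tuning I've seen suggested in various guides looks like the fragment below. These are real ix(4) loader tunables and TCP sysctls, but the values are just examples I've collected, not settings I have verified on this hardware:

```
# /boot/loader.conf -- example ix(4) knobs; values untested by me
hw.ix.max_interrupt_rate="31250"   # interrupt moderation
hw.ix.rxd="4096"                   # receive descriptors per ring
hw.ix.txd="4096"                   # transmit descriptors per ring

# /etc/sysctl.conf -- larger TCP buffers, often suggested for 10GbE
kern.ipc.maxsockbuf=16777216
net.inet.tcp.sendbuf_max=16777216
net.inet.tcp.recvbuf_max=16777216
```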

Thank you for any assistance.

Regards,
Chris
 
I feel a little like Frankenstein in a china shop, but I managed to test the latest Intel driver (v3.1.14). Unfortunately that did not produce noticeably different results. I am still getting about 2.75 Gb/s using iperf3.

For those who are interested, I found this article on creating a custom kernel:

https://www.digitalocean.com/commun...ize-and-recompile-your-kernel-on-freebsd-10-1.

I followed the steps and then copied /usr/src/sys/amd64/conf/GENERIC to a new file name and commented out the device entries for the Intel PRO/10GbE PCIe NICs. That allowed me to compile, install, and boot into a kernel that did not have the slightly older 3.1.13 driver already loaded. I couldn't figure out how to unload it, so this was my workaround. :rolleyes: Finally, I ran make load in the ix-3.1.14/src directory (previously downloaded from Intel). That loaded the new driver and I was able to use ifconfig to configure the interface. I know that was a compressed explanation, so if anybody wants more details, please let me know.
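In compressed form, the workaround went roughly like this (the kernel config name and paths are mine, and the IP address is just an example):

```sh
# Build a kernel without the in-tree ix(4) driver
cd /usr/src/sys/amd64/conf
cp GENERIC MYKERNEL        # MYKERNEL is whatever name you choose
# ...edit MYKERNEL and comment out the "device ix" / "device ixv" lines...
cd /usr/src
make buildkernel KERNCONF=MYKERNEL
make installkernel KERNCONF=MYKERNEL
shutdown -r now

# After reboot, build and load Intel's 3.1.14 driver
cd ix-3.1.14/src           # wherever you unpacked Intel's tarball
make
make load                  # loads the freshly built module
ifconfig ix0 192.0.2.10/24 up   # example address
```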

So, I'm still stuck with sluggish performance if anybody has some other ideas.

Regards,
Chris
 
I am experiencing slower than expected performance from the Intel X540-T2 NIC I installed in a new FreeBSD 10.3 server. It "felt" slow on some basic file transfers so I did some testing with iperf3.
[...]Thank you for any assistance.
This is between a pair of 10-STABLE (r303122) boxes, using X540-T1 cards (the single-port version of what you have):
Code:
(0:3) rz1:/sysprog/terry# iperf -c rz2
------------------------------------------------------------
Client connecting to rz2, TCP port 5001
TCP window size: 35.0 KByte (default)
------------------------------------------------------------
[  3] local 10.20.30.40 port 13682 connected with 10.20.30.41 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  11.5 GBytes  9.88 Gbits/sec
Absolutely no tuning of TCP parameters, network-related /boot/loader.conf settings, or anything else. The CPUs are Xeon E5620's, not particularly high-end.
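As an aside, those numbers line up once you account for iperf's units (GBytes means 2^30 bytes, Gbits/sec means 10^9 bits); a quick check:

```shell
# 11.5 GBytes transferred in 10.0 sec, converted to iperf's Gbits/sec:
awk 'BEGIN { printf "%.2f Gbits/sec\n", 11.5 * 2^30 * 8 / 10 / 1e9 }'
# prints 9.88 Gbits/sec
```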

When I've helped other people with this issue, I've always said that single-thread, un-tuned performance should be close to wire speed. If it isn't, something is wrong and tuning may help hide it, but there's still something wrong.

It may help to get the network cards and cabling out of the equation. Use # iperf -s -B 127.0.0.1 to have iperf(1) listen on the loopback address, and then try a test on the same system:
Code:
(0:4) rz1:/sysprog/terry# iperf -c localhost
------------------------------------------------------------
Client connecting to localhost, TCP port 5001
TCP window size: 47.8 KByte (default)
------------------------------------------------------------
[  3] local 127.0.0.1 port 55789 connected with 127.0.0.1 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  34.2 GBytes  29.3 Gbits/sec
That will show you the maximum speed your system can push packets to itself. If that number is below 10 Gbits/sec, adding a network card isn't going to make it any faster.
 