Low Bandwidth - Intel I350 Gigabit Network on 14.1-RELEASE-p3

Dear @ll,

I am experiencing very low TX bandwidth on my Intel I350 Gigabit Network Connection: with TSO and LRO disabled I get at most 212 Mbit/s, but I would expect to see at least 600 Mbit/s, which is what I get when I boot Alma Linux without disabling TSO and LRO.
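To be explicit, by "disabling TSO and LRO" I mean toggling the offload flags at runtime, roughly like this (a sketch; flag names as in ifconfig(8), nothing persistent):

Code:
# turn TSO (v4/v6) and LRO off on igb0 at runtime
ifconfig igb0 -tso -lro
# turn them back on
ifconfig igb0 tso lro
# check the resulting options line
ifconfig igb0 | grep options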

RX bandwidth, however, is fine at 615 Mbit/s on average.

The issue occurs with the in-kernel igb driver as well as with the intel-em-kmod-7.7.8 package.

Is anyone experiencing the same issue?

Here is the information I have been able to gather so far:

Code:
igb0@pci0:197:0:0:      class=0x020000 rev=0x01 hdr=0x00 vendor=0x8086 device=0x1521 subvendor=0x1043 subdevice=0x853b
    vendor     = 'Intel Corporation'
    device     = 'I350 Gigabit Network Connection'
    class      = network
    subclass   = ethernet

Code:
igb0: flags=1008843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST,LOWER_UP> metric 0 mtu 1500
    options=4e120bb<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,JUMBO_MTU,VLAN_HWCSUM,WOL_MAGIC,VLAN_HWFILTER,RXCSUM_IPV6,TXCSUM_IPV6,HWSTATS,MEXTPG>
   
    media: Ethernet autoselect (1000baseT <full-duplex>)
    status: active
    nd6 options=29<PERFORMNUD,IFDISABLED,AUTO_LINKLOCAL>

Code:
Name     Mtu Network            Address                                         Ipkts Ierrs Idrop     Ibytes    Opkts Oerrs     Obytes  Coll  Drop
igb0    1.5K <Link#2>           xx:xx:xx:xx:xx:xx                                 45M     0     0       2.8G      86M     0        48G     0     0
igb0       - xxxxxxxxxxxxxxxxx  xxxxxxxx.                                         37M     -     -       1.8G      72M     -        38G     -     -

Code:
dev.igb.0.iflib.driver_version: 2.5.19-fbsd

Code:
intel-em-kmod-7.7.8            Gigabit FreeBSD Base Drivers for Intel(R) Ethernet

Code:
# /boot/loader.conf
if_em_updated_load="YES"
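To verify which driver actually attaches, I look at the loaded modules and the iflib version string (a quick check; the module name matches the loader.conf knob above):

Code:
# is the kmod from the port loaded?
kldstat | grep if_em_updated
# which driver version is bound to the device?
sysctl dev.igb.0.iflib.driver_version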

Code:
FreeBSD hostname 14.1-RELEASE-p3 FreeBSD 14.1-RELEASE-p3 GENERIC amd64
 
Thanks for moving the thread, and please accept my apologies for posting in the wrong place.
 
What are you using for those 'benchmarks' and what kind of system is that?
If you are using iperf: it is *heavily* CPU-bound, so on weak CPUs it won't get anywhere near the actual forwarding bandwidth the system is able to push through. Congestion control isn't your problem here if this is running in a (small) LAN...

You might also want to try disabling all offloading on the NIC, as this often interferes with any kind of virtualization (e.g. vnet jails).
 
The CPU on 10.0.0.1 is an AMD EPYC 7502P, and on 10.0.0.2 it's an Intel equivalent.

For benchmarking I use the following:

Code:
# server: 10.0.0.1/24
iperf3 -s

# client: 10.0.0.2/24
# RX on 10.0.0.1 (client sends to server):
iperf3 -c 10.0.0.1 -b 0 -O 10 --time 30
# TX on 10.0.0.1 (reverse mode, server sends to client):
iperf3 -c 10.0.0.1 -b 0 -R -O 10 --time 30

For monitoring on 10.0.0.1 I use Prometheus node_exporter and:

Code:
vnstat -i igb0 -l

Both hosts are co-located servers, each with unmetered 1 Gbit/s access and no firewalls, but they are at least 2500 km apart and the traffic goes over the open Internet.

Disabling LRO, TSO, TSO4, TSO6, RXCSUM, TXCSUM, RXCSUM6 and TXCSUM6 on igb0@10.0.0.1 improved the situation to an average of about 440 Mbit/s at peak, but it hasn't been stable. I also tested several different MTU settings, but in the end settled on the defaults as I couldn't detect any improvement.
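For the record, that corresponds to roughly the following (a sketch; flag names per ifconfig(8), and it does not persist across a reboot):

Code:
# LRO, TSO (covers TSO4/TSO6) and all checksum offloads off in one go
ifconfig igb0 -lro -tso -rxcsum -txcsum -rxcsum6 -txcsum6
# back to the defaults configured in rc.conf
service netif restart igb0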

So far the best result I have managed to get is by using tcp_bbr and resetting igb0 to its defaults, i.e. keeping all that offloading enabled. Doing so I get a stable average bandwidth of at least 530 Mbit/s on both RX and TX using intel-em-kmod-7.7.8.
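In case someone wants to reproduce the BBR part, this is roughly how it is enabled (a sketch based on tcp_bbr(4); I assume the module shipped with the 14.1 GENERIC kernel):

Code:
# /boot/loader.conf - load the BBR TCP stack at boot
tcp_bbr_load="YES"

# runtime: make BBR the default TCP stack and check what is available
sysctl net.inet.tcp.functions_default=bbr
sysctl net.inet.tcp.functions_available

The functions_default line can also go into /etc/sysctl.conf so it survives a reboot.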

I will try to test tcp_bbr with just the in-kernel driver for the Intel I350 later.
 
All tests have been conducted without VPN.

The final setup will involve encryption, but this thread is about raw throughput on the TX side before adding any layer of encryption.

As it seems no one else is experiencing the same issue, most likely the problem sits on my side, somewhere between table and chair.
 