vmxnet3 poor sending performance

Reks

New Member


Messages: 4

Hey guys,

I have FreeBSD 12.0-RELEASE r341666 GENERIC amd64
running as a guest on ESXi 6.7. Hardware: Dell R330, 16 GB DDR4, Xeon E3-1235L v5, Chelsio T520-SO. No other VMs/guests on this machine.

When I run iperf3 with the client sending, I can't get more than 4 Gbit/s, but if I use reverse mode (server sending) I get about 9 Gbit/s (the link is 10 Gbit/s fibre).
I checked htop and everything looks OK.

netstat -an gives me:
Recv-Q about 134664 at 9 Gbit/s
Send-Q about 525600 at 4 Gbit/s
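For anyone wanting to reproduce this observation, the queue depths can be watched during an iperf3 run with something like the following sketch (the port number 5001 is taken from the iperf3 invocation below; adjust the grep pattern to your setup):

```shell
# Watch socket buffer fill for the iperf3 connection while it runs:
netstat -an -p tcp | grep 5001
# The second and third columns are Recv-Q and Send-Q in bytes; a Send-Q that
# stays full while throughput is low points at the transmit path rather than
# at the sending application.
```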

I use the vmx driver.

Any ideas? I have read many pages about this but found no solution (changing the driver to vmxf, setting -tso -lro, etc.).

Changing the MTU is not an option for me.

Code:
iperf3 -c XX.XX.XXX.125 -p 5001 -w1024k -i1
Connecting to host XX.XX.XXX.125, port 5001
[  5] local XX.XX.XXX.3 port 46104 connected to XX.XX.XXX.125 port 5001
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec   495 MBytes  4.15 Gbits/sec    0   1.66 MBytes       
[  5]   1.00-2.00   sec   487 MBytes  4.09 Gbits/sec    2    750 KBytes       
[  5]   2.00-3.00   sec   469 MBytes  3.93 Gbits/sec    8    249 KBytes       
[  5]   3.00-4.00   sec   472 MBytes  3.96 Gbits/sec    0    590 KBytes       
[  5]   4.00-5.00   sec   481 MBytes  4.03 Gbits/sec    4   2.00 MBytes       
[  5]   5.00-6.00   sec   514 MBytes  4.31 Gbits/sec   18    660 KBytes       
[  5]   6.00-7.00   sec   491 MBytes  4.12 Gbits/sec    8    719 KBytes       
[  5]   7.00-8.00   sec   475 MBytes  3.99 Gbits/sec   23    726 KBytes       
[  5]   8.00-9.00   sec   475 MBytes  3.99 Gbits/sec    0   1.32 MBytes       
[  5]   9.00-10.00  sec   502 MBytes  4.21 Gbits/sec    3    448 KBytes       
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec  4.75 GBytes  4.08 Gbits/sec   66             sender
[  5]   0.00-10.00  sec  4.75 GBytes  4.08 Gbits/sec                  receiver

My ifconfig settings:
Code:
root@bgp1:/usr/home/konrad # ifconfig vmx1
vmx1: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
    options=60039b<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,VLAN_HWCSUM,TSO4,TSO6,RXCSUM_IPV6,TXCSUM_IPV6>
    ether 00:50:56:83:ec:3a
    media: Ethernet autoselect
    status: active
    nd6 options=29<PERFORMNUD,IFDISABLED,AUTO_LINKLOCAL>
root@bgp1:/usr/home/konrad # ifconfig vmx1.992
vmx1.992: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
    description: EPIX_WAR-OpenPeering-EPIX
    options=600003<RXCSUM,TXCSUM,RXCSUM_IPV6,TXCSUM_IPV6>
    ether 00:50:56:83:ec:3a
    inet XX.XX.XXX.3 netmask 0xfffff800 broadcast XX.XX.XXX.255
    inet6 XXXXX vmx1.992 prefixlen 64 scopeid 0xc
    groups: vlan
    vlan: 992 vlanpcp: 0 parent interface: vmx1
    media: Ethernet autoselect
    status: active
    nd6 options=21<PERFORMNUD,AUTO_LINKLOCAL>

My sysctl.conf file:
Code:
kern.ipc.maxsockbuf=16777216    # (wscale  9)
kern.ipc.nmbclusters=16687532 #reks
net.inet.tcp.recvbuf_inc=4194304 # reks 65536    # (default 16384)
net.inet.tcp.recvbuf_max=16777216 #reks 4194304  # (default 2097152)
net.inet.tcp.recvspace=4194304 #reks 65536      # (default 65536)
net.inet.tcp.sendbuf_inc=4194304 #reks 65536    # (default 8192)
net.inet.tcp.sendbuf_max=16777216 #reks 4194304  # (default 2097152)
net.inet.tcp.sendspace=4194304 #reks 65536      # (default 32768)
net.link.ether.inet.log_arp_movements=0 # reks added this
net.inet.tcp.cc.algorithm=cdg  # (default newreno)
net.inet.tcp.cc.cdg.alpha_inc=1  # (default 0, experimental mode disabled)
net.inet.tcp.mssdflt=1448 #reks 1460   # Option 1 (default 536)
net.inet.tcp.minmss=536  # (default 216)
net.inet.tcp.cc.abe=1  # (default 0, disabled)
net.inet.tcp.rfc6675_pipe=1  # (default 0)
net.inet.tcp.syncache.rexmtlimit=0  # (default 3)
net.inet.ip.maxfragpackets=0     # (default 63474)
net.inet.ip.maxfragsperpacket=0  # (default 16)
net.inet6.ip6.maxfragpackets=0   # (default 507715)
net.inet6.ip6.maxfrags=0         # (default 507715)
net.inet.tcp.abc_l_var=44   # (default 2) if net.inet.tcp.mssdflt = 1460
net.inet.tcp.initcwnd_segments=44            # (default 10 for FreeBSD 11.2) if net.inet.tcp.mssdflt = 1460
net.inet.tcp.syncookies=0  # (default 1)
net.inet.tcp.isn_reseed_interval=4500  # (default 0, disabled)
net.inet.tcp.tso=0  # (default 1)
dev.igb.0.fc=0  # (default 3)
dev.igb.0.iflib.rx_budget=65535  # (default 0, which is 16 frames)
dev.igb.1.iflib.rx_budget=65535  # (default 0, which is 16 frames)
kern.random.fortuna.minpoolsize=128  # (default 64)
kern.random.harvest.mask=65887  # (default 66047, FreeBSD 12 with Intel Secure Key RNG)
hw.kbd.keymap_restrict_change=4   # disallow keymap changes for non-privileged users (default 0)
kern.ipc.shm_use_phys=1           # lock shared memory into RAM and prevent it from being paged out to swap (default 0, disabled)
kern.msgbuf_show_timestamp=1      # display timestamp in msgbuf (default 0)
kern.randompid=1                  # calculate PIDs by the modulus of an integer, set to one(1) to auto random (default 0)
net.bpf.optimize_writers=1        # bpf is write-only unless program explicitly specifies the read filter (default 0)
net.inet.icmp.drop_redirect=1     # no redirected ICMP packets (default 0)
net.inet.ip.check_interface=1     # verify packet arrives on correct interface (default 0)
net.inet.ip.portrange.first=32768 # use ports 32768 to portrange.last for outgoing connections (default 10000)
net.inet.ip.portrange.randomcps=9999 # use random port allocation if less than this many ports per second are allocated (default 10)
net.inet.ip.portrange.randomtime=1 # seconds to use sequential port allocation before switching back to random (default 45 secs)
net.inet.ip.random_id=1           # assign a random IP id to each packet leaving the system (default 0)
net.inet.ip.redirect=0            # do not send IP redirects (default 1)
net.inet.sctp.blackhole=2         # drop sctp packets destined for closed ports (default 0)
net.inet.tcp.blackhole=2          # drop tcp packets destined for closed ports (default 0)
net.inet.tcp.drop_synfin=1        # SYN/FIN packets get dropped on initial connection (default 0)
net.inet.tcp.fast_finwait2_recycle=1 # recycle FIN/WAIT states quickly, helps against DoS, but may cause false RST (default 0)
net.inet.tcp.fastopen.client_enable=0 # disable TCP Fast Open client side, enforce three way TCP handshake (default 1, enabled)
net.inet.tcp.fastopen.server_enable=0 # disable TCP Fast Open server side, enforce three way TCP handshake (default 0)
net.inet.tcp.finwait2_timeout=1000 # TCP FIN_WAIT_2 timeout waiting for client FIN packet before state close (default 60000, 60 sec)
net.inet.tcp.icmp_may_rst=0       # icmp may not send RST to avoid spoofed icmp/udp floods (default 1)
net.inet.tcp.keepcnt=2            # amount of tcp keep alive probe failures before socket is forced closed (default 8)
net.inet.tcp.keepidle=62000       # time before starting tcp keep alive probes on an idle, TCP connection (default 7200000, 7200 secs)
net.inet.tcp.keepinit=5000        # tcp keep alive client reply timeout (default 75000, 75 secs)
net.inet.tcp.msl=2500             # Maximum Segment Lifetime, time the connection spends in TIME_WAIT state (default 30000, 2*MSL = 60 sec)
net.inet.tcp.path_mtu_discovery=0 # disable for mtu=1500 as most hosts drop ICMP type 3 packets, but keep enabled for mtu=9000 (default 1)
net.inet.udp.blackhole=1          # drop udp packets destined for closed sockets (default 0)
security.bsd.hardlink_check_gid=1 # unprivileged processes may not create hard links to files owned by other groups, DISABLE for mailman (default 0)
security.bsd.hardlink_check_uid=1 # unprivileged processes may not create hard links to files owned by other users,  DISABLE for mailman (default 0)
security.bsd.see_other_gids=0     # groups only see their own processes. root can see all (default 1)
security.bsd.see_other_uids=0     # users only see their own processes. root can see all (default 1)
security.bsd.stack_guard_page=1   # insert a stack guard page ahead of growable segments, stack smashing protection (SSP) (default 0)
security.bsd.unprivileged_proc_debug=0 # unprivileged processes may not use process debugging (default 1)
security.bsd.unprivileged_read_msgbuf=0 # unprivileged processes may not read the kernel message buffer (default 1)
net.inet.ip.process_options=0
vfs.zfs.delay_min_dirty_percent=96  # write throttle when dirty "modified" data reaches 96% of dirty_data_max (default 60%)
vfs.zfs.dirty_data_max=12884901888  # dirty_data can use up to 12GB RAM, equal to dirty_data_max_max (default, 10% of RAM or up to 4GB)
vfs.zfs.dirty_data_sync=12348030976 # force commit Transaction Group (TXG) if dirty_data reaches 11.5GB (default 67108864, 64MB)
vfs.zfs.min_auto_ashift=12          # newly created pool ashift, set to 12 for 4K and 13 for 8k alignment, zdb (default 9, 512 byte, ashift=9)
vfs.zfs.top_maxinflight=128         # max number of outstanding I/Os per top-level vdev (default 32)
vfs.zfs.trim.txg_delay=2            # delay TRIMs by up to this many TXGs, trim.txg_delay * txg.timeout ~= 180 secs (default 32, 32*5secs=160 secs)
vfs.zfs.txg.timeout=90              # force commit Transaction Group (TXG) at 90 secs, increase to aggregate more data (default 5 sec)
vfs.zfs.vdev.aggregation_limit=1048576 # max bytes aggregated into a single sequential I/O, make divisible by largest pool recordsize (default 131072, 128KB)
vfs.zfs.vdev.write_gap_limit=0      # max gap between any two aggregated writes, 0 to minimize frags (default 4096, 4KB)
#hw.hn.enable_udp4cs=1              # Offload UDP/IPv4 checksum to network card (default 1)
#hw.hn.enable_udp6cs=1              # Offload UDP/IPv6 checksum to network card (default 1)
#hw.ixl.enable_tx_fc_filter=1       # filter out Ethertype 0x8808, flow control frames (default 1)
#net.bpf.optimize_writers=0         # bpf are write-only unless program explicitly specifies the read filter (default 0)
#net.bpf.zerocopy_enable=0          # zero-copy BPF buffers, breaks dhcpd ! (default 0)
#net.inet.icmp.bmcastecho=0         # do not respond to ICMP packets sent to IP broadcast addresses (default 0)
#net.inet.icmp.log_redirect=0       # do not log redirected ICMP packet attempts (default 0)
#net.inet.icmp.maskfake=0           # do not fake reply to ICMP Address Mask Request packets (default 0)
#net.inet.icmp.maskrepl=0           # replies are not sent for ICMP address mask requests (default 0)
#net.inet.ip.accept_sourceroute=0   # drop source routed packets since they can not be trusted (default 0)
#net.inet.ip.portrange.randomized=1 # randomize outgoing upper ports (default 1)
#net.inet.ip.process_options=1      # process IP options in the incoming packets (default 1)
#net.inet.ip.sourceroute=0          # if source routed packets are accepted the route data is ignored (default 0)
#net.inet.ip.stealth=0              # do not reduce the TTL by one(1) when a packets goes through the firewall (default 0)
#net.inet.tcp.always_keepalive=1    # tcp keep alive detection for dead peers, keepalive can be spoofed (default 1)
#net.inet.tcp.ecn.enable=1          # Explicit Congestion Notification (ECN) allowed for incoming and outgoing connections (default 2)
#net.inet.tcp.keepintvl=75000       # time between tcp.keepcnt keep alive probes (default 75000, 75 secs)
#net.inet.tcp.maxtcptw=50000        # max number of tcp time_wait states for closing connections (default ~27767)
#net.inet.tcp.nolocaltimewait=0     # remove TIME_WAIT states for the loopback interface (default 0)
#net.inet.tcp.reass.maxqueuelen=100 # Max number of TCP Segments per Reassembly Queue (default 100)
#net.inet.tcp.rexmit_min=30         # reduce unnecessary TCP retransmissions by increasing timeout, min+slop (default 30 ms)
#net.inet.tcp.rexmit_slop=200       # reduce the TCP retransmit timer, min+slop (default 200ms)
#net.inet.udp.maxdgram=16384        # Maximum outgoing UDP datagram size to match MTU of localhost (default 9216)
#net.inet.udp.recvspace=262144      # UDP receive space, HTTP/3 webserver, "netstat -sn -p udp" and increase if full socket buffers (default 42080)
#net.inet.tcp.functions_default=rack  # (default freebsd)
#net.inet.tcp.rack.tlpmethod=3  # (default 2, 0=no-de-ack-comp, 1=ID, 2=2.1, 3=2.2)

#net.inet.tcp.rack.data_after_close=0  # (default 1)
#net.inet.tcp.cc.algorithm=htcp  # (default newreno)
#net.inet.tcp.cc.htcp.adaptive_backoff=1  # (default 0 ; disabled)
#net.inet.tcp.cc.htcp.rtt_scaling=1  # (default 0 ; disabled)
#net.inet.tcp.cc.algorithm=cubic  # (default newreno)

#net.inet.ip.forwarding=1      # (default 0)
#net.inet.ip.fastforwarding=1  # (default 0)  FreeBSD 11 enabled fastforwarding by default
#net.inet6.ip6.forwarding=1    # (default 0)
#net.inet.raw.maxdgram=16384       # (default 9216)
#net.inet.raw.recvspace=16384      # (default 9216)
#net.local.stream.sendspace=16384  # (default 8192)
#net.local.stream.recvspace=16384  # (default 8192)
# net.inet.tcp.persmax=60000 # (default 60000)
# net.inet.tcp.persmin=5000  # (default 5000)
# net.inet.tcp.rexmit_drop_options=0  # (default 0)
# net.inet.tcp.do_tcpdrain=1 # (default 1)
#hw.mxge.max_slices="1"  # (default 1, which uses a single cpu core)
#hw.mxge.rss_hash_type="4"  # (default 4)
#hw.mxge.flow_control_enabled=0  # (default 1, enabled)
net.inet.ip.intr_queue_maxlen=2048 #reks ##  # (default 256)
net.route.netisr_maxqlen=2048 #reks ##       # (default 256)
#dev.igb.0.rx_processing_limit=-1  # (default 100)
#dev.igb.1.rx_processing_limit=-1  # (default 100)
#dev.igb.0.eee_disabled=1  # (default 0, enabled)
#dev.igb.1.eee_disabled=1  # (default 0, enabled)
#net.inet.ip.rtexpire=10      # (default 3600)
#net.inet.ip.rtminexpire=10  # (default 10  )
#net.inet.ip.rtmaxcache=128  # (default 128 )
kern.ipc.soacceptqueue=256 #reks 1024  # (default 128 ; same as kern.ipc.somaxconn)
#net.inet.tcp.rfc1323=1  # (default 1)
#net.inet.tcp.rfc3042=1  # (default 1)
#net.inet.tcp.rfc3390=1  # (default 1)
#net.inet.icmp.icmplim=1  # (default 200)
#net.inet.icmp.icmplim_output=0  # (default 1)
#net.inet.tcp.sack.enable=1  # (default 1)
#net.inet.tcp.hostcache.expire=3900  # (default 3600)
net.inet.tcp.delayed_ack=1 #reks 0   # (default 1)
#net.inet.tcp.delacktime=20   # (default 100)
#security.jail.allow_raw_sockets=1       # (default 0)
#security.jail.enforce_statfs=2          # (default 2)
#security.jail.set_hostname_allowed=0    # (default 1)
#security.jail.socket_unixiproute_only=1 # (default 1)
#security.jail.sysvipc_allowed=0         # (default 0)
#security.jail.chflags_allowed=0         # (default 0)
#kern.sched.interact=5 # (default 30)
#kern.sched.slice=3    # (default 12)
#kern.threads.max_threads_per_proc=9000
#kern.coredump=1             # (default 1)
#kern.sugid_coredump=1        # (default 0)
#kern.corefile="/tmp/%N.core" # (default %N.core)
#net.inet.tcp.keepidle=10000     # (default 7200000 )
#net.inet.tcp.keepintvl=5000     # (default 75000 )
#net.inet.tcp.always_keepalive=1 # (default 1)
#vfs.read_max=128
#kern.ipc.maxsockets = 25600
#net.inet.tcp.per_cpu_timers = 0
#kern.random.yarrow.gengateinterval=10  # default 10 [4..64]
#kern.random.yarrow.bins=10             # default 10 [2..16]
#kern.random.yarrow.fastthresh=192      # default 192 [64..256]
#kern.random.yarrow.slowthresh=256      # default 256 [64..256]
#kern.random.yarrow.slowoverthresh=2    # default 2 [1..5]
#kern.random.sys.seeded=1               # default 1
#kern.random.sys.harvest.ethernet=1     # default 1
#kern.random.sys.harvest.point_to_point=1 # default 1
#kern.random.sys.harvest.interrupt=1    # default 1
#kern.random.sys.harvest.swi=0          # default 0 and actually does nothing when enabled
#net.inet6.icmp6.nodeinfo=0
#net.inet6.ip6.use_tempaddr=1
#net.inet6.ip6.prefer_tempaddr=1
#net.inet6.icmp6.rediraccept=0
##net.inet6.ip6.accept_rtadv=0
##net.inet6.ip6.auto_linklocal=0
My loader.conf file
Code:
pf_load="YES"
pflog_load="YES"
if_igb_load="YES"
cc_cdg_load="YES"
vfs.zfs.dirty_data_max_max="12884901888"  # (default 4294967296, 4GB)
net.inet.tcp.hostcache.cachelimit="0"
kern.geom.label.disk_ident.enable="0" # (default 1) diskid/DISK-ABC0123...
kern.geom.label.gptid.enable="0"      # (default 1) gptid/123abc-abc123...
machdep.hyperthreading_allowed="0"  # (default 1, allow Hyper Threading (HT))
net.inet.tcp.soreceive_stream="1"  # (default 0)
net.isr.maxthreads="-1"  # (default 1, single threaded)
net.isr.bindthreads="1"  # (default 0, runs randomly on any one cpu core)
net.pf.source_nodes_hashsize="1048576"  # (default 32768)
 

SirDice

Administrator
Staff member
Moderator

Reaction score: 7,770
Messages: 30,912

Try turning off TSO on vmx1.
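For the record, that can be tried at runtime without a reboot (interface name as shown in the post above; the rc.conf line is a sketch of the usual way to make it persistent and would need merging with any existing vmx1 configuration):

```shell
ifconfig vmx1 -tso -tso6        # disable TCP segmentation offload (IPv4 and IPv6)
# Persistent variant in /etc/rc.conf (adjust to your existing vmx1 config):
# ifconfig_vmx1="up -tso -tso6"
```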
 

pos

Member

Reaction score: 10
Messages: 51

Do you have a router or a server? TSO and LRO are useless on a router, but not on a server. Theoretically... It looks like it is a server, as you have only one interface. Why did you turn off segmentation offload and receive offload? Just as a test? Turning them off shouldn't get you any performance boost...

And increasing the MTU will probably only gain you a few percent.

Don't change a lot of things at the same time without knowing what they do (I see your sysctl.conf). That can have weird side effects, at least if you want a performance gain. As an example, increasing a certain buffer could add latency. The side effects are not always easy to predict.

I get 9.2 Gbit/s in both directions over 7 router hops, with maybe just 4-5 changes from stock FreeBSD 12. Stock FreeBSD 12 is very good!

By the way, turning off flow control on the interface gave me better performance.
And since you have a Chelsio, if you do not need these features, turn them off:
hw.cxgbe.toecaps_allowed="0"
hw.cxgbe.rdmacaps_allowed="0"
hw.cxgbe.iscsicaps_allowed="0"
hw.cxgbe.fcoecaps_allowed="0"

They eat resources on your NIC, so disabling them gives better performance and fewer interrupts under load. If you are lucky it can give a 10-20% performance boost (at least for firewall throughput). If you can do that in VMware :)
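For reference, these capability knobs are boot-time tunables, so on a bare-metal FreeBSD host driving the Chelsio with cxgbe(4)/cxl they would go into /boot/loader.conf, e.g.:

```shell
# /boot/loader.conf -- disable optional Chelsio capabilities (cxgbe(4) tunables).
# Note: this only applies where FreeBSD owns the NIC directly; it does nothing
# inside a guest that only sees a vmx (VMXNET3) interface.
hw.cxgbe.toecaps_allowed="0"
hw.cxgbe.rdmacaps_allowed="0"
hw.cxgbe.iscsicaps_allowed="0"
hw.cxgbe.fcoecaps_allowed="0"
```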

You could also try increasing the Chelsio NIC driver's queue count if the number of cores is greater than 8 (use a queue count that is a power of 2).
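A sketch of what that looks like in /boot/loader.conf; the value 8 is a hypothetical example for an 8+ core box, and the exact tunable names depend on the FreeBSD release, so check cxgbe(4) for your version:

```shell
# /boot/loader.conf -- request 8 RX/TX queue pairs per 10G port (example value):
hw.cxgbe.nrxq="8"    # older releases spell this hw.cxgbe.nrxq10g
hw.cxgbe.ntxq="8"    # older releases spell this hw.cxgbe.ntxq10g
```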

/Peo
 
OP

Reks

New Member


Messages: 4

Hi Peo,

This isn't a server, it is my BGP router.
VMware uses the cxl driver for the Chelsio T520.
After some days and nights of testing many loader and sysctl options, I found some interesting things:

vmx0 is the interface to my network (only IP forwarding, no NAT etc.) [VMXNET3 virtual interface; real HW is an Intel I350, 1 Gb/s]
vmx1 is the external network (I use quagga) [VMXNET3 virtual interface; real HW is the Chelsio T520, 10 Gb/s]
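A quick way to flip the offload settings between the test runs below (runtime only; interface names as described above):

```shell
ifconfig vmx1 lro     # enable large receive offload on vmx1
ifconfig vmx1 -lro    # disable it again
ifconfig vmx1 -tso    # likewise for TCP segmentation offload
```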

If my vmx1 interface has options=60009b<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,VLAN_HWCSUM,RXCSUM_IPV6,TXCSUM_IPV6> (no LRO),
I get about 6 Gb/s from the world:
Code:
root@bgp1:/usr/home/konrad # iperf3 -c 11.125 -p 5001 -R
Connecting to host 11.125, port 5001
Reverse mode, remote host 11.125 is sending
[  5] local 11.3 port 51760 connected to 11.125 port 5001
[ ID] Interval           Transfer     Bitrate
[  5]   0.00-1.00   sec   645 MBytes  5.41 Gbits/sec                 
[  5]   1.00-2.00   sec   735 MBytes  6.16 Gbits/sec                 
[  5]   2.00-3.00   sec   731 MBytes  6.13 Gbits/sec                 
[  5]   3.00-4.00   sec   742 MBytes  6.23 Gbits/sec                 
[  5]   4.00-5.00   sec   736 MBytes  6.18 Gbits/sec                 
[  5]   5.00-6.00   sec   741 MBytes  6.22 Gbits/sec                 
[  5]   6.00-7.00   sec   748 MBytes  6.28 Gbits/sec                 
[  5]   7.00-8.00   sec   748 MBytes  6.28 Gbits/sec                 
[  5]   8.00-9.00   sec   728 MBytes  6.11 Gbits/sec                 
[  5]   9.00-10.00  sec   726 MBytes  6.09 Gbits/sec                 
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec  7.14 GBytes  6.13 Gbits/sec  36911             sender
[  5]   0.00-10.00  sec  7.11 GBytes  6.11 Gbits/sec                  receiver

iperf Done.
and about 4 Gb/s to the world:
Code:
root@bgp1:/usr/home/konrad # iperf3 -c 11.125 -p 5001
Connecting to host 11.125, port 5001
[  5] local 11.3 port 49199 connected to 11.125 port 5001
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec   439 MBytes  3.68 Gbits/sec    0    780 KBytes       
[  5]   1.00-2.00   sec   446 MBytes  3.74 Gbits/sec    0   1.82 MBytes       
[  5]   2.00-3.00   sec   463 MBytes  3.88 Gbits/sec    0    655 KBytes       
[  5]   3.00-4.00   sec   459 MBytes  3.85 Gbits/sec    0    299 KBytes       
[  5]   4.00-5.00   sec   458 MBytes  3.84 Gbits/sec    0    409 KBytes       
[  5]   5.00-6.00   sec   460 MBytes  3.86 Gbits/sec    0   3.74 MBytes       
[  5]   6.00-7.00   sec   428 MBytes  3.59 Gbits/sec    0   3.37 MBytes       
[  5]   7.00-8.00   sec   449 MBytes  3.77 Gbits/sec    0    997 KBytes       
[  5]   8.00-9.00   sec   437 MBytes  3.67 Gbits/sec    0   2.09 MBytes       
[  5]   9.00-10.00  sec   448 MBytes  3.76 Gbits/sec    0    640 KBytes       
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec  4.38 GBytes  3.77 Gbits/sec    0             sender
[  5]   0.00-10.00  sec  4.37 GBytes  3.76 Gbits/sec                  receiver

iperf Done.
From computers on my company network I can get ~930 Mbit/s in and out [1 Gb/s network in use].

But when I change the vmx1 options to include LRO, options=60049b<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,VLAN_HWCSUM,LRO,RXCSUM_IPV6,TXCSUM_IPV6>,

I get about 9 Gb/s from the world:
Code:
root@bgp1:/usr/home/konrad # iperf3 -c 11.125 -p 5001 -R
Connecting to host 11.125, port 5001
Reverse mode, remote host 11.125 is sending
[  5] local 11.3 port 52885 connected to 11.125 port 5001
[ ID] Interval           Transfer     Bitrate
[  5]   0.00-1.00   sec   903 MBytes  7.58 Gbits/sec                 
[  5]   1.00-2.00   sec  1.06 GBytes  9.07 Gbits/sec                 
[  5]   2.00-3.00   sec  1.06 GBytes  9.15 Gbits/sec                 
[  5]   3.00-4.00   sec  1.07 GBytes  9.20 Gbits/sec                 
[  5]   4.00-5.00   sec  1.06 GBytes  9.08 Gbits/sec                 
[  5]   5.00-6.00   sec  1.09 GBytes  9.37 Gbits/sec                 
[  5]   6.00-7.00   sec  1.08 GBytes  9.30 Gbits/sec                 
[  5]   7.00-8.00   sec  1.08 GBytes  9.30 Gbits/sec                 
[  5]   8.00-9.00   sec  1.06 GBytes  9.14 Gbits/sec                 
[  5]   9.00-10.00  sec  1.08 GBytes  9.30 Gbits/sec                 
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec  10.5 GBytes  9.06 Gbits/sec  2880             sender
[  5]   0.00-10.00  sec  10.5 GBytes  9.05 Gbits/sec                  receiver

iperf Done.
And still 4 Gb/s to the world:
Code:
root@bgp1:/usr/home/konrad # iperf3 -c 11.11.111.125 -p 5001
Connecting to host 11.11.111.125, port 5001
[  5] local 11.11.11.3 port 38879 connected to 11.1111.11.125 port 5001
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec   470 MBytes  3.94 Gbits/sec    0   1.32 MBytes       
[  5]   1.00-2.00   sec   482 MBytes  4.05 Gbits/sec    0   2.75 MBytes       
[  5]   2.00-3.00   sec   447 MBytes  3.75 Gbits/sec    0    389 KBytes       
[  5]   3.00-4.00   sec   492 MBytes  4.13 Gbits/sec    0    581 KBytes       
[  5]   4.00-5.00   sec   462 MBytes  3.88 Gbits/sec    0    518 KBytes       
[  5]   5.00-6.00   sec   477 MBytes  4.00 Gbits/sec    0    306 KBytes       
[  5]   6.00-7.00   sec   455 MBytes  3.81 Gbits/sec    0   1.71 MBytes       
[  5]   7.00-8.00   sec   449 MBytes  3.76 Gbits/sec    0   1.67 MBytes       
[  5]   8.00-9.00   sec   462 MBytes  3.88 Gbits/sec    0    776 KBytes       
[  5]   9.00-10.00  sec   463 MBytes  3.89 Gbits/sec    0    407 KBytes       
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec  4.55 GBytes  3.91 Gbits/sec    0             sender
[  5]   0.00-10.00  sec  4.54 GBytes  3.90 Gbits/sec                  receiver

iperf Done.
Another interesting thing is that my local network performance drops a lot:
from local computers I can get only about 500 Mbit/s from the world and 900 Mbit/s to it.
The opposite happens when I add LRO to vmx0: then upload (to the world) from the local network drops.

Conclusion:
LRO with VMXNET3 is a very good thing if you don't do IP forwarding, but in my case enabling it causes a huge drop in forwarding speed. The question now is: what is responsible for only 4 Gbit/s to the world from the BGP machine?
Btw: I don't see much difference in CPU usage with or without LRO enabled.
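A few hedged starting points for chasing the remaining 4 Gbit/s transmit ceiling (standard FreeBSD tools; the device names match the interfaces described above):

```shell
vmstat -i | grep vmx     # are interrupts spread across queues or stuck on one?
sysctl dev.vmx.1         # per-device/per-queue counters exposed by vmx(4)
netstat -I vmx1 -w 1     # per-second packet/error/drop rates while iperf3 runs
```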
 