LAGG - Bonding [FreeNAS]

Guys,

I have a FreeNAS box and I have been trying to set up a round-robin bond on it:

Code:
uname -a
FreeBSD host.domain 11.1-STABLE FreeBSD 11.1-STABLE #0 r321665+e0c4ca60dfc(freenas/11.1-stable): Wed May 30 14:18:20 EDT 2018     root@nemesis.tn.ixsystems.com:/freenas-11-releng/freenas/_BE/objs/freenas-11-releng/freenas/_BE/os/sys/FreeNAS.amd64  amd64

I can get bonds to work great, and I have tried other bond modes successfully. I just want to do a static round-robin bond.

I have done some round-robin bonding in Linux. Is FreeBSD just 100% different? I went as far as putting both machines right next to each other and matching cable to cable, port to port, NIC to NIC.

Out of 4x gigabit NICs... I get 13-25 Mbit/s out of iperf.
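For reference, the lagg itself is basically the lagg(4) man page example, just with four ports. Roughly this (the addresses here are examples, not my real subnet):

Code:
# bring the member ports up first
ifconfig em0 up
ifconfig em1 up
ifconfig em2 up
ifconfig em3 up
# then build the round-robin lagg over all four gigabit ports
ifconfig lagg0 create
ifconfig lagg0 laggproto roundrobin \
    laggport em0 laggport em1 laggport em2 laggport em3 \
    192.168.254.1/24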

Is round-robin broken in FreeBSD?
 
I am allowed to post about it in this forum though:

If you still decide to post a thread about any of these derived FreeBSD products, make sure to mention it in the topic title (e.g. [FreeNAS], [PC-BSD]) or in the first post!

The reason I am asking here: this is not the expected behavior, right?
 
So, just to be clear: I do not need a resolution to my issue. I would just like confirmation from someone who has done this recently in FreeBSD.

I.e., someone who has lagg-bonded more than one gigabit Ethernet link together, round-robin style, and had success with increased bandwidth.

I understand packet hashing and out-of-order limitations, but I am getting a max of 25 Mbit/s with this configuration.

Is this round robin different from Linux round-robin bonding? I have had a lot of success with that.
 
I'll have a go tonight, but one system is 10.2 and has no way to install iperf3, so I'll just have to flood-ping big packets and see where that gets me.
 
Code:
# time ping -f -s 10240 -c 10240 192.168.254.1
PING 192.168.254.1 (192.168.254.1): 10240 data bytes
.
--- 192.168.254.1 ping statistics ---
10240 packets transmitted, 10240 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 0.280/0.487/0.909/0.073 ms
5.15 real 0.13 user 0.30 sys


My math says that's 10240 packets * 10240 bytes * 8 bits / 5.15 seconds / 1024 (k) / 1024 (M) ≈ 155 Mbit/s.

The next test yielded 322 Mbit/s (I doubled the packet size).
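Same arithmetic, for anyone who wants to double-check it:

Code:
# 10240 packets x 10240 bytes x 8 bits over 5.15 seconds, in Mbit/s
echo "10240 * 10240 * 8 / 5.15 / 1024 / 1024" | bc -l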

Configuration on the 10.2 box:

Code:
# ifconfig lagg0
lagg0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
    options=403bb<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,JUMBO_MTU,VLAN_HWCSUM,TSO4,TSO6,VLAN_HWTSO>
    ether 0c:c4:7a:01:e3:9e
    inet 192.168.254.2 netmask 0xffffff00 broadcast 192.168.254.255
    nd6 options=29<PERFORMNUD,IFDISABLED,AUTO_LINKLOCAL>
    media: Ethernet autoselect
    status: active
    laggproto roundrobin lagghash l2,l3,l4
    laggport: igb3 flags=4<ACTIVE>
    laggport: igb2 flags=4<ACTIVE>
Configuration on the 11.2 box was identical, except the two NICs were em(4) devices.
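If you want it to survive a reboot on a plain FreeBSD box, the equivalent rc.conf lines would be roughly this (matching the igb2/igb3 lagg above; address as shown):

Code:
ifconfig_igb2="up"
ifconfig_igb3="up"
cloned_interfaces="lagg0"
ifconfig_lagg0="laggproto roundrobin laggport igb2 laggport igb3 192.168.254.2/24"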

Hope that's of some help.
 
From lagg(4):

roundrobin      Distributes outgoing traffic using a round-robin scheduler through all active ports and accepts incoming traffic from any active port. Using roundrobin mode can cause unordered packet arrival at the client. Throughput might be limited as the client performs CPU-intensive packet reordering.

Have you tried lacp instead?
 
Datapanic, I was trying to do what leebrown66 had posted. I am doing this to bridge two SANs together.

leebrown66, can you post your configuration lines? Can you also test using a single TCP stream somehow, or does that ping command accomplish this? Can you use the old iperf?
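Something like this is what I mean by a single TCP stream with the old iperf (address is an example):

Code:
# on one box
iperf -s
# on the other: one TCP stream (the default), 30 seconds
iperf -c 192.168.254.2 -t 30
# and for comparison, four parallel streams
iperf -c 192.168.254.2 -t 30 -P 4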

I will give everyone some background:

This was the last test I performed. It required me to pull one box out of another building and put it in the same building as the other:
  • 1 quad-port Intel gigabit NIC on each server
  • I directly connected em0 to em0, em1 to em1, em2 to em2, and em3 to em3 via patch cables
  • Set up the round-robin lagg identically on each server
I would get 25 Mbit/s
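One thing worth watching during a run like that is whether traffic actually leaves on all four ports; something like this shows it live:

Code:
# per-second packet/byte counters for one member NIC during a test
netstat -w 1 -I em0
# repeat for em1/em2/em3 in other terminals, or watch everything at once
systat -ifstat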

First Test:
  • I have a separate VLAN for each of em0, em1, em2, em3
  • I had the VLANs running across some preconfigured switch-to-switch laggs (3 switches in total between the boxes)
  • There is a 10 Gbit link between two of the switches
  • Set up the round-robin lagg identically on each server
I would get 25 Mbit/s

Second test: I eliminated the switch laggs because I did not know whether the single MAC address (presented by the round-robin lagg) across all of them was forcing all 4 links onto a single lagg port:
  • I have a separate VLAN for each of em0, em1, em2, em3
  • I patched the switches together directly with 10 Gbit SFP+ fiber, eliminating the switch-to-switch laggs
  • Set up the round-robin lagg identically on each server
I would get 25 Mbit/s

I tested the VLAN configuration by putting separate IP addresses on each NIC and doing iperf tests one at a time between them, confirming there was no mixing of layer 2 or layer 3 traffic.
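Roughly, that check was of this form (the addresses here are just examples, not my real subnets):

Code:
# give each member NIC / VLAN its own address, no lagg involved
ifconfig em0 inet 10.0.0.1/24      # server A
ifconfig em1 inet 10.0.1.1/24
# ...matching .2 addresses on server B, then test each path by itself
iperf -s                           # on server B
iperf -c 10.0.0.2 -t 30            # on server A, repeated per NIC/VLAN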

I even went as far as pulling each link, one at a time, and running iperf across the configured round-robin lagg. That is, I would remove em3 on both servers and run a test, then do the same with em2 and em1 until just em0 was left. I would always get 25 Mbit/s until only one link remained.

I.e., with a round-robin lagg of just em0 to em0, I would get the expected gigabit wire speed.
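For anyone repeating this, the members can also be dropped from the lagg in software instead of physically pulling cables:

Code:
# remove one member from the lagg on both servers, then re-run iperf
ifconfig lagg0 -laggport em3
# ...then em2, em1, re-testing after each removal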

I have another post ( https://forums.freenas.org/index.ph...nding-different-then-linux.69402/#post-477316 ) in the FreeNAS forum also.

I am not trying to double post; my goal is still to figure out whether round-robin bonding on FreeBSD is supposed to 'sum up link speeds'. I know I would never get the full 4 Gbit/s, and I expect that.
 
I always understood that you will never be able to exceed the speed of a single interface with a single session, regardless of how many interfaces there are. So if the lagg(4) consists of two gigabit interfaces, a single session (connection) will never exceed the limit of a single gigabit interface. However, you will be able to run two sessions (connections) at gigabit speed concurrently.
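For example, two concurrent single-stream tests along these lines (ports/addresses are examples) should be able to approach the aggregate, while each individual stream stays at or below a single link's speed:

Code:
# on the far box: one iperf server per port
iperf -s -p 5001 &
iperf -s -p 5002 &
# on the near box: two independent TCP sessions at the same time
# (with lacp/loadbalance the two flows may still hash onto the same member port)
iperf -c 192.168.254.1 -p 5001 -t 30 &
iperf -c 192.168.254.1 -p 5002 -t 30 &
wait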
 
webdawg: right out of the man page:
# ifconfig lagg0 create
# ifconfig lagg0 laggproto roundrobin laggport igb0 laggport igb1 192.168.254.1 netmask 255.255.255.0


I can't install anything on my 10.2 box.

Running two nc sessions over TCP I got 250 Mbit/s.
Running two nc sessions over UDP I got 1960 Mbit/s.
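For anyone who wants to reproduce that without installing anything, a test along those lines can be done with just nc and dd (ports, addresses and sizes are examples; FreeBSD's dd prints a bytes/sec summary when it finishes):

Code:
# receiver (run one per session, on different ports)
nc -l 5001 > /dev/null
# sender: push zeros through the lagg; dd's summary gives the throughput
dd if=/dev/zero bs=1m count=2000 | nc 192.168.254.1 5001
# ^C the nc afterwards; add -u to both ends for the UDP variant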

Roundrobin has to be TCP's worst nightmare though.

I redid the lagg using the lacp protocol:
ifconfig lagg0 destroy
ifconfig lagg0 create
ifconfig lagg0 laggproto lacp laggport igb2 laggport igb3 192.168.254.1/24


which was not much better. Referring back to the manpage indicates the hash may be sub-optimal, so:
sysctl net.link.lagg.0.use_flowid=0 (from lagg(4))
And I was able to get over 1900 Mbit/s with TCP.
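To read the setting back, or to change which headers the hash uses per lagg:

Code:
# check the current value
sysctl net.link.lagg.0.use_flowid
# hash layers can also be set per lagg (l2 = MAC, l3 = IP, l4 = port)
ifconfig lagg0 lagghash l2,l3,l4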
 
It comes back to the same question I have been trying to get answered, the one that caused me to start this thread:

http://louwrentius.com/achieving-450-mbs-network-file-transfers-using-linux-bonding.html
http://louwrentius.com/linux-network-interface-bonding-trunking-or-how-to-get-beyond-1-gbs.html
https://45drives.blogspot.com/2015/07/how-to-achieve-20gb-and-30gb-bandwidth.html
https://serverfault.com/questions/341702/does-linux-balance-rr-bond-mode-0-work-with-all-switches

Is the FreeBSD bonding not designed to do this? The description for balance-rr in the Linux bonding driver is:

Code:
balance-rr
This mode is the only mode that will permit a single TCP/IP connection to stripe
traffic across multiple interfaces. It is therefore the only mode that will allow a
single TCP/IP stream to utilize more than one interface's worth of throughput.
This comes at a cost, however: the striping often results in peer systems receiving
packets out of order, causing TCP/IP's congestion control system to kick in, often by
retransmitting segments.

That seems like almost the same description as the FreeBSD driver's.

I just think that lagg != Linux bonding here, and I just need to confirm it. I know what lagg is and how switches use it, etc. I just did not know that there was a lagg round-robin mode.
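For reference, the kind of Linux balance-rr setup I'm comparing against looks roughly like this with iproute2 (interface names and the address are examples):

Code:
# Linux side: round-robin (balance-rr / mode 0) bond over two NICs
ip link add bond0 type bond mode balance-rr
ip link set eth0 down && ip link set eth0 master bond0
ip link set eth1 down && ip link set eth1 master bond0
ip link set bond0 up
ip addr add 192.168.254.3/24 dev bond0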
 