LAGG LACP between 2 FreeBSD hosts

Hi guys,

Yesterday I tried connecting two FreeBSD 9-STABLE machines back-to-back in a lagg LACP configuration, using 2x 1Gbit igb interfaces (igb1 & igb3) on each machine (the idea is to get the combined throughput of both links - the machines are connected to a NAS).

What I did on both machines was:
Code:
ifconfig igb1 up
ifconfig igb3 up
ifconfig lagg0 create
ifconfig lagg0 up laggproto lacp laggport igb1 laggport igb3
ifconfig lagg0 10.1.0.16x (x = 0 on the first machine, 2 on the second)
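For what it's worth, to make a setup like this survive a reboot, the equivalent rc.conf entries would look something like the following (a sketch only - the address and netmask here are placeholders, adjust them to your network):
Code:
ifconfig_igb1="up"
ifconfig_igb3="up"
cloned_interfaces="lagg0"
ifconfig_lagg0="laggproto lacp laggport igb1 laggport igb3 10.1.0.160 netmask 255.255.255.0"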

Then, when I tried pinging ... nothing. The interface is up, active and both ports are working, according to ifconfig:
Code:
lagg0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
	options=401bb<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,JUMBO_MTU,VLAN_HWCSUM,TSO4,VLAN_HWTSO>
	ether 00:25:90:1a:2e:73
	inet 10.1.0.162 netmask 0xff000000 broadcast 10.255.255.255
	nd6 options=29<PERFORMNUD,IFDISABLED,AUTO_LINKLOCAL>
	media: Ethernet autoselect
	status: active
	laggproto lacp
	laggport: igb3 flags=1c<ACTIVE,COLLECTING,DISTRIBUTING>
	laggport: igb1 flags=1c<ACTIVE,COLLECTING,DISTRIBUTING>

I read the docs, which describe how to configure an LACP lagg interface, but they assume the other end of the cable connects to a switch. Hence my question: do I need to connect each machine to a switch and configure LACP on the switch, or should this simply work with the two boxes connected back-to-back?

PS: We have HP ProCurve 2910al-48g switches there.
 
To answer my own question, it works (for one thing, the broadcast address was wrong).

Now, the question I'm faced with is: why does it send traffic over only one interface while the other one stays idle? It can be either of the two interfaces, but the other one is always idle.

Checked with:
[cmd=]systat -if 1[/cmd]
 
da1 said:
Now, the question I'm faced with is: why does it send traffic over only one interface while the other one stays idle? It can be either of the two interfaces, but the other one is always idle.
802.3ad (LACP) does that to avoid out-of-sequence packets (consider what would happen with a longer path between the two systems, where packets could travel through different intermediate devices with different delays). For TCP, out-of-order packets just add overhead, since TCP can reassemble them into the correct sequence. For certain UDP applications, out-of-sequence delivery can break things completely.

To avoid this, link aggregation pins each traffic flow to an individual physical port. Depending on the aggregation method, the choice can be based on IP address (optionally including the TCP/UDP port number), MAC address, and so on.
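As a toy illustration of the idea (this is not the kernel's actual hash, and the port numbers are made up): the flow's identifying fields are hashed, and the result modulo the number of links picks the physical port, so every packet of that flow takes the same link:

```shell
# Hypothetical flow: traffic from source port 5001 to NFS port 2049.
# XOR-fold the two port numbers, then take the result modulo the
# number of links to pick which physical port carries the flow.
src_port=5001
dst_port=2049
nlinks=2
link=$(( (src_port ^ dst_port) % nlinks ))
echo "flow ${src_port}->${dst_port} uses link ${link}"
```

Because the hash inputs are constant for the life of the flow, every packet of that flow maps to the same link.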

Since you're exchanging traffic between a pair of systems, each session is going to use only one of the physical ports. Note that data might be received on one port but the replies sent via a different one, since the choice of which port to use is made by the sender.

Therefore, the traffic of a single data stream between the systems is going to be limited to the bandwidth of a single port.
 
Evidently, I was expecting the wrong thing in this case, but then how do I go about getting more throughput if I only have 1Gbit interfaces (I have 3x 1Gbit interfaces available)?
 
da1 said:
Evidently, I was expecting the wrong thing in this case, but then how do I go about getting more throughput if I only have 1Gbit interfaces (I have 3x 1Gbit interfaces available)?
If your application can be split into multiple simultaneous connections, they should each give you a Gbit or so. If the load balancing algorithm considers TCP/UDP ports, fiddling with the destination port number should help. If it only balances based on IP addresses, you may need to add additional IP addresses to each system to get each additional stream to prefer a different physical port.

LACP works best with one-to-many traffic. Getting a 70/30 load balance in those cases should be easy; getting it closer to 50/50 requires an ever-increasing amount of work.
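To see why many flows balance out while a single flow cannot, here's a toy sketch (again assuming a simple port-modulo hash, not the real algorithm): 100 client flows with consecutive source ports spread across 2 links, while any single flow always lands on just one of them:

```shell
# Count how 100 flows with consecutive source ports would spread
# across 2 links under a toy port-modulo hash.
link0=0
link1=0
for src in $(seq 50000 50099); do
  if [ $(( src % 2 )) -eq 0 ]; then
    link0=$(( link0 + 1 ))
  else
    link1=$(( link1 + 1 ))
  fi
done
echo "link0 carries ${link0} flows, link1 carries ${link1} flows"
```

With consecutive ports this toy hash happens to split exactly evenly; real traffic with arbitrary addresses and ports is lumpier, which is where figures like 70/30 come from.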

In the case of a box that (only or mostly) communicates with one other system over the same lagg group, the only easy way out is to go to 10Gbit. Looking at prices for new/used 10Gbit cards on eBay, they do seem to be coming down. Of course, it is important to pick ones that have good FreeBSD driver support.
 
Then 10Gbit seems the way to go, since this box receives ZFS snapshots over Ethernet (it's a backup box for the first box, which serves NFS - lots of reads/writes).
 