LACP Doesn't Aggregate Bandwidth

OK, I have a strange issue. I have two interfaces set up as follows:

Code:
lagg0: flags=8943<UP,BROADCAST,RUNNING,PROMISC,SIMPLEX,MULTICAST> metric 0 mtu 1500
        options=209b<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,VLAN_HWCSUM,WOL_MAGIC>
        ether 00:1c:c0:2a:c6:e1
        media: Ethernet autoselect
        status: active
        groups: lagg
        laggproto lacp
        lag id: [(8000,00-1C-C0-2A-C6-E1,0130,0000,0000),
                 (FFFF,00-13-46-3C-8F-18,0011,0000,0000)]
        laggport: re0 flags=1c<ACTIVE,COLLECTING,DISTRIBUTING> state=3D
                [(8000,00-1C-C0-2A-C6-E1,0130,8000,0002),
                 (FFFF,00-13-46-3C-8F-18,0011,00FF,0001)]
        laggport: em0 flags=1c<ACTIVE,COLLECTING,DISTRIBUTING> state=3D
                [(8000,00-1C-C0-2A-C6-E1,0130,8000,0001),
                 (FFFF,00-13-46-3C-8F-18,0011,00FF,0002)]

Failover works well and the other host is happy with the connection as well. However, the link does NOT aggregate the speeds of the two Gigabit links; for the most part it sends traffic mainly across one of the two NICs, occasionally swapping to prefer the other.
But the collective speed is NEVER beyond the capabilities of a single 1Gb NIC.


Anybody have input?
 
Oh yes, almost forgot!

Code:
[root@gw2 /home/thavinci]# uname -a
FreeBSD gw2 8.2-RELEASE-p2 FreeBSD 8.2-RELEASE-p2 #8: Wed Jul 20 20:37:02 SAST 2011     thavinci@gw2:/usr/src/sys/amd64/compile/thavinci  amd64
 
No matter which bonding protocol you use, a single connection between two hosts will never exceed the throughput of a single interface. This is due to the way the bonding protocols work: the driver hashes each frame's headers to pick a member port, so all traffic between two hosts goes over one interface.
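
To put it another way, the port selection works roughly like this sketch (Python, purely illustrative; which header fields are hashed and which hash function is used depend on the driver and its settings):

Code:
# A minimal sketch of how a lagg/LACP-style driver picks an egress port.
# Which header fields are hashed (MACs, IPs, TCP/UDP ports) varies per
# driver; the CRC below is just a stand-in for the real hash.
import zlib

def pick_port(src_mac, dst_mac, num_ports):
    """Hash the frame headers and map the result onto one member port."""
    key = "{}-{}".format(src_mac, dst_mac).encode()
    return zlib.crc32(key) % num_ports

# Every frame between the same two hosts hashes to the same value, so the
# whole conversation leaves through the same physical NIC:
print(pick_port("00:1c:c0:2a:c6:e1", "00:13:46:3c:8f:18", 2))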

The benefits to using bonding are two-fold:
  1. you can have multiple clients connect to the server, each getting the full throughput of a single NIC (as in, 4 GigE NICs bonded together on the server means 4 clients can each connect at 1 Gbps)
  2. connections fail over to the remaining interfaces if one dies

You will never get 1 connection between 1 client and the server to use more than 1 interface.
 
Misunderstood

Damn, then I must have misunderstood, because I picked LACP since it claimed to allow aggregated bandwidth as opposed to load balancing!

So there's no way I can achieve this?!

I have one Windows machine with 2x 1G NICs connected via crossover cables to two cards on the BSD box. Windows is using the Intel utility to allow LACP, and BSD the normal setup.
 