Hello-
I've set up a LAGG connection between a FreeBSD 9.2-RELEASE-p3 machine and a Cisco SG300-52 switch. Both sides indicate that the LAGG connection has been established and is functioning. However, I'm seeing network speeds that are much slower than when I had a single NIC connection.
I do daily overnight backups using different backup protocols. One robocopy backup via smbd that usually takes about 3 hours now takes about 5 hours to complete over the network. Another remote backup that uses rsync also takes more time (almost twice as long) to complete when using LAGG.
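To take smbd and rsync out of the picture, I'm planning to measure raw TCP throughput with iperf (from ports, benchmarks/iperf, installed on both ends; 192.168.xxx.xx stands in for the lagg0 address below). Since LACP hashes each flow to a single port, a single stream should top out at one NIC's speed, while parallel streams might spread across both:
Code:
# on the FreeBSD server
iperf -s

# on a client: a single stream for 30 seconds
iperf -c 192.168.xxx.xx -t 30

# then four parallel streams, which LACP can spread across ports
iperf -c 192.168.xxx.xx -t 30 -P 4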
Here's the network hardware configuration:
Code:
root@test:/root # dmesg | grep em0
em0: <Intel(R) PRO/1000 Network Connection 7.3.8> port 0x2000-0x201f mem 0xf8900000-0xf891ffff irq 17 at device 0.0 on pci3
em0: Using an MSI interrupt
em0: Ethernet address: 00:13:20:b0:48:1d
root@test:/root # dmesg | grep em1
em1: <Intel(R) PRO/1000 Legacy Network Connection 1.0.6> port 0x1200-0x123f mem 0xf8820000-0xf883ffff,0xf8800000-0xf881ffff irq 18 at device 2.0 on pci4
em1: Ethernet address: 90:e2:ba:15:9a:ba
root@test:/root #
I'm wondering if the issue is due to the different NIC hardware. The first one is built into the motherboard, while the other is a PCI card that, judging from dmesg, uses the legacy variant of the em driver.
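One thing I keep coming back to: if my math is right, a plain 32-bit/33 MHz PCI slot tops out at 32 bits x 33 MHz, or about 133 MB/s (roughly 1.06 Gbit/s), and that bandwidth is shared with every other device on the bus and covers both directions combined, so the em1 card may not be able to sustain gigabit line rate even on its own.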
My LAGG config:
Code:
# LAGG configuration
ifconfig_em0="up"
ifconfig_em1="up"
cloned_interfaces="lagg0"
ifconfig_lagg0="laggproto lacp laggport em0 laggport em1 192.168.xxx.xx/24"
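As a quick test, I'm thinking of pulling one port out of the lagg at a time (from the local console, not over ssh) and re-running a backup; if the speed recovers with only em0 in the lagg, that would point at the PCI card. If I read lagg(4) right, that would be something like:
Code:
# sketch: test with only the onboard NIC in the lagg
ifconfig lagg0 -laggport em1
# (run a backup, then put em1 back and test the other way around)
ifconfig lagg0 laggport em1
ifconfig lagg0 -laggport em0
Here's the current state of the interfaces: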
Code:
root@test:/root # ifconfig
em0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
options=4219b<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,VLAN_HWCSUM,TSO4,WOL_MAGIC,VLAN_HWTSO>
ether 00:13:20:b0:48:1d
inet6 fe80::213:20ff:feb0:481d%em0 prefixlen 64 scopeid 0x1
nd6 options=29<PERFORMNUD,IFDISABLED,AUTO_LINKLOCAL>
media: Ethernet autoselect (1000baseT <full-duplex>)
status: active
em1: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
options=209b<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,VLAN_HWCSUM,WOL_MAGIC>
ether 00:13:20:b0:48:1d
inet6 fe80::92e2:baff:fe15:9aba%em1 prefixlen 64 scopeid 0x7
nd6 options=29<PERFORMNUD,IFDISABLED,AUTO_LINKLOCAL>
media: Ethernet autoselect (1000baseT <full-duplex>)
status: active
lo0: flags=8049<UP,LOOPBACK,RUNNING,MULTICAST> metric 0 mtu 16384
options=600003<RXCSUM,TXCSUM,RXCSUM_IPV6,TXCSUM_IPV6>
inet6 ::1 prefixlen 128
inet6 fe80::1%lo0 prefixlen 64 scopeid 0x8
inet 127.0.0.1 netmask 0xff000000
nd6 options=21<PERFORMNUD,AUTO_LINKLOCAL>
lagg0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
options=209b<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,VLAN_HWCSUM,WOL_MAGIC>
ether 00:13:20:b0:48:1d
inet 192.168.xxx.xx netmask 0xffffff00 broadcast 192.168.xxx.255
inet6 fe80::213:20ff:feb0:481d%lagg0 prefixlen 64 scopeid 0x9
nd6 options=29<PERFORMNUD,IFDISABLED,AUTO_LINKLOCAL>
media: Ethernet autoselect
status: active
laggproto lacp lagghash l2,l3,l4
laggport: em1 flags=1c<ACTIVE,COLLECTING,DISTRIBUTING>
laggport: em0 flags=1c<ACTIVE,COLLECTING,DISTRIBUTING>
root@test:/root #
I see that em0 lists two options that em1 doesn't have: TSO4 and VLAN_HWTSO.
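Since lagg(4) takes the intersection of its ports' capabilities, I wonder whether it's worth disabling TSO4 and VLAN_HWTSO on em0 so both ports advertise identical options, if only to remove a variable. If I read ifconfig(8) correctly, that would be:
Code:
# sketch: turn off TSO4 and VLAN_HWTSO on em0 to match em1
ifconfig em0 -tso -vlanhwtso
# to make it persistent across reboots, in /etc/rc.conf:
# ifconfig_em0="up -tso -vlanhwtso"
Here's my current /etc/sysctl.conf: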
Code:
root@test:/root # cat /etc/sysctl.conf
# $FreeBSD: releng/9.1/etc/sysctl.conf 112200 2003-03-13 18:43:50Z mux $
#
# This file is read when going to multi-user and its contents piped thru
# ``sysctl'' to adjust kernel values. ``man 5 sysctl.conf'' for details.
#
# Uncomment this to prevent users from seeing information about processes that
# are being run under another UID.
#security.bsd.see_other_uids=0
# tunable zfs parameters
kern.maxvnodes=250000
#vfs.zfs.write_limit_override=268435456
#kern.maxfiles=16384
kern.maxfiles=204800
#kern.maxfilesperproc=16384
kern.maxfilesperproc=200000
#kern.ipc.maxsockbuf=2097152 # kernel socket buffer space
kern.ipc.nmbclusters=262144 # kernel mbuf space raised 275MB of kernel dedicated ram
kern.ipc.somaxconn=4096 # size of the listen queue for accepting new TCP connections
kern.ipc.maxsockets=204800 # increase the limit of the open sockets
# added 12/16/09 to improve system performance when writing millions of files
#vfs.read_max=256
#vfs.hirunningspace=2097152
# the following sysctl variables were suggested by this web site at
# Calomel.org. https://calomel.org/network_performance.html
#
#kern.ipc.maxsockbuf=2097152 # kernel socket buffer space
#kern.ipc.nmbclusters=262144 # kernel mbuf space raised 275MB of kernel dedicated ram
#kern.ipc.somaxconn=32768 # size of the listen queue for accepting new TCP connections
#kern.ipc.maxsockets=204800 # increase the limit of the open sockets
# kern.randompid=348 # randomized processes id's
# net.inet.icmp.icmplim=50 # reply to no more than 50 ICMP packets per sec
# net.inet.ip.process_options=0 # do not processes any TCP options in the TCP headers
# net.inet.ip.redirect=0 # do not allow ip header redirects
# net.inet.ip.rtexpire=2 # route cache expire in two seconds
# net.inet.ip.rtminexpire=2 # "
#net.inet.ip.rtmaxcache=256 # route cache entries increased
# net.inet.icmp.drop_redirect=1 # drop icmp redirects
net.inet.tcp.blackhole=2 # drop any TCP packets to closed ports
# net.inet.tcp.delayed_ack=0 # no need to delay ACK's
# net.inet.tcp.drop_synfin=1 # drop TCP packets which have SYN and FIN set
# net.inet.tcp.msl=7500 # close lost tcp connections in 7.5 seconds (default 30)
# net.inet.tcp.nolocaltimewait=1 # do not create TIME_WAIT state for localhost
# net.inet.tcp.path_mtu_discovery=0 # disable MTU path discovery
#net.inet.tcp.recvbuf_max=2097152 # TCP receive buffer space
#net.inet.tcp.recvbuf_max=16777216 # TCP receive buffer space
#net.inet.tcp.recvspace=8192 # decrease buffers for incoming data
#net.inet.tcp.sendbuf_max=16777216 # TCP send buffer space
#net.inet.tcp.sendbuf_max=2097152 # TCP send buffer space
#net.inet.tcp.sendspace=16384 # decrease buffers for outgoing data
net.inet.udp.blackhole=1 # drop any UDP packets to closed ports
# security.bsd.see_other_uids=0 # keeps users segregated to their own processes list
# security.bsd.see_other_gids=0 # "
#net.isr.direct=1
#net.isr.direct_force=1
#dev.em.0.rx_int_delay=200
#dev.em.0.tx_int_delay=200
#dev.em.0.rx_abs_int_delay=4000
#dev.em.0.tx_abs_int_delay=4000
#dev.em.0.rx_processing_limit=4096
##dev.em.0.rx_processing_limit=-1
#dev.em.1.rx_int_delay=200
#dev.em.1.tx_int_delay=200
#dev.em.1.rx_abs_int_delay=4000
#dev.em.1.tx_abs_int_delay=4000
#dev.em.1.rx_processing_limit=4096
##dev.em.1.rx_processing_limit=-1
root@test:/root #
I've experimented with the options in /etc/sysctl.conf but am unable to get a cleanly performing LAGG connection. On another FreeBSD server that uses two onboard NICs in a LAGG configuration, I'm seeing excellent network performance.
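To see whether traffic is actually being distributed across both ports during a backup, I've been watching the per-interface counters once per second in two terminals:
Code:
# watch each port's packet/byte counters while a backup runs
netstat -I em0 -w 1
netstat -I em1 -w 1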
Admittedly, this issue is on a much older machine, one that is roughly 8 years old. It functions quite well for its age, but I would be surprised if the slowdown were simply due to age.
Are the NICs too dissimilar hardware-wise to function well as a LAGG pair?
~Doug