redundant network card configuration

Hey guys,

I have a FreeBSD 8.1 machine. The machine has two network cards, both connected to a switch. The first network card is configured with an IP address and is accessible through the network. The second one is not.

Is it possible to configure the second network card as a standby card, so that when the first one fails the second becomes active?

Does someone know how I can do that?

Thanks
Jan
 
Hey, thanks for the link.

But I'm really not sure if I configured my network cards the right way.

I have network interfaces em0 and em1. em0 is configured with an IP address and is reachable through the network. I haven't touched em1 at all.

And then I configured the following:

# ifconfig lagg0 create
# ifconfig lagg0 up laggproto lacp laggport em0 laggport em1

Is that right? When em0 goes down, will em1 take over with the same IP address em0 had?
 
I'm sorry, I configured:

# ifconfig lagg0 create
# ifconfig lagg0 up laggproto failover laggport em0 laggport em1
 
The IP address needs to be set on the lagg0 interface, not the individual members.
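
For example (1.2.3.4 here is just a placeholder, adjust it to your network), something along these lines, with no address configured on em0 or em1 themselves:

```shell
# create the lagg, attach both NICs as members, and put the
# address on lagg0 itself rather than on em0/em1
ifconfig lagg0 create
ifconfig lagg0 laggproto failover laggport em0 laggport em1
ifconfig lagg0 1.2.3.4 netmask 255.255.255.224 up
```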
 
OK, I tried:

# ifconfig lagg0 create
# ifconfig lagg0 up laggproto failover laggport em0 laggport em1 1.2.3.4 netmask 255.255.255.224

That's how I read it in the man page.

But after a reboot the lagg0 interface is gone.

I'm sure, I'm doing just a little thing wrong ;).
 
You need to add some options to /etc/rc.conf:
Code:
ifconfig_em1="up"
ifconfig_em0="up"
cloned_interfaces="lagg0"
ifconfig_lagg0="laggproto failover laggport em0 laggport em1 inet 1.2.3.4 netmask 255.255.255.224"
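
After a reboot you can check that the lagg came up and which member port is active (the exact output format varies a bit between FreeBSD versions):

```shell
# shows the lagg's address, its laggproto, and which
# laggport is currently active and carrying traffic
ifconfig lagg0
```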
 
Next Problem:

When I shut down the switch port to which em0 is connected, em1 takes over and becomes the active network interface. That's the way it should be. But if I bring the switch port back up, the server is not reachable through the network. If I do a "clear mac address-table" on the switch, the link comes back up.

I know this is a switch problem but perhaps I can configure something on the server to fix this problem.
 
doublejay said:
Next Problem:

When I shut down the switch port to which em0 is connected, em1 takes over and becomes the active network interface. That's the way it should be. But if I bring the switch port back up, the server is not reachable through the network. If I do a "clear mac address-table" on the switch, the link comes back up.

I know this is a switch problem but perhaps I can configure something on the server to fix this problem.

Double-check if your switch supports LACP (Link Aggregation Control Protocol). If it does, then configure the 2 switch ports used by em0/em1 as an LACP trunk. And configure the lagg0 interface to use lacp instead of fail-over.

That way, you get both automatic fail-over and link aggregation for overall higher throughput (still limited to 1 Gbps per connection, but you can run 2 connections at once).
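
Roughly like this in /etc/rc.conf (untested sketch, reusing the placeholder address from earlier in the thread):

```shell
ifconfig_em0="up"
ifconfig_em1="up"
cloned_interfaces="lagg0"
ifconfig_lagg0="laggproto lacp laggport em0 laggport em1 1.2.3.4 netmask 255.255.255.224"
```

On the switch side, the two ports also need to be put into a single LACP channel group; the exact commands depend on the switch vendor.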
 
phoenix said:
That way, you get both automatic fail-over and link aggregation for overall higher throughput (still limited to 1 Gbps per connection, but you can run 2 connections at once).

I dunno, I ran LACP on a server I've got, but when one of the NICs (sorta) died, the connection died as well. So I'm a little skeptical sometimes when I see people talk about it. I know how it's supposed to work, but it didn't in my case, so I guess... blah blah... :/

And yes I know this post was sorta useless, but I just felt like complaining a little :p


On a side note, when you say LACP is limited to 1 Gbps per connection: do you mean that a single connection can have at most 1 Gbps, or just that a 1 Gbps link is still a 1 Gbps link?
 
Duh. :) It's physically impossible to run a 1 Gbps ethernet adapter at speeds above 1 Gbps. :)

Even if you combine multiple 1 Gbps NICs into a single trunk, each connection is still limited to 1 Gbps. A trunk just allows you to run multiple 1 Gbps connections at the same time, using only a single IP.
 
phoenix said:
Duh. :) It's physically impossible to run a 1 Gbps ethernet adapter at speeds above 1 Gbps. :)

Even if you combine multiple 1 Gbps NICs into a single trunk, each connection is still limited to 1 Gbps. A trunk just allows you to run multiple 1 Gbps connections at the same time, using only a single IP.

Of course, what I was saying was that a physical link at 1 Gbps will run at 1 Gbps... and when you bond multiple physical links, the effective speed is 1 Gbps * number of physical links. And if you were to send something through the bonded link, that something would be transferred at the effective speed... :)

I just mixed up some of what you said... when people say connection I usually think of the information, and when people say link I think of the physical wire... ;)
 
Bobbla said:
Of course, what I was saying was that a physical link at 1 Gbps will run at 1 Gbps... and when you bond multiple physical links, the effective speed is 1 Gbps * number of physical links.

Yes, correct.

And if you were to send something through the bonded link, that something would be transferred at the effective speed... :)

No, incorrect.

A single TCP connection between two systems will be limited to 1 Gbps, as it will only use 1 physical link. However, you can create "number of physical links" TCP connections, in order to get an aggregate throughput of "effective speed".
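
If you want to see this for yourself and have iperf (from ports) on both machines, something like the following shows the difference; `fileserver` is a placeholder host name:

```shell
# a single TCP stream: hashes onto one member link, so at most ~1 Gbps
iperf -c fileserver

# four parallel streams: these can be spread across the trunk members,
# depending on how the hash distributes them
iperf -c fileserver -P 4
```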
 
phoenix said:
A single TCP connection between two systems will be limited to 1 Gbps, as it will only use 1 physical link. However, you can create "number of physical links" TCP connections, in order to get an aggregate throughput of "effective speed".

Is the port number part of the calculation in FreeBSD? The (brief) description of the distribution algorithm in the Handbook doesn't mention ports.

Another thing to note is that regardless of algorithm two connections might very well end up going to the same interface, leaving other interfaces unused.
 
It depends on the bonding algorithm used. Some use MAC addresses, some use src+dst IP addresses, others use the full src:port/dst:port tuples. You'd have to either dig into the source or the docs on each algorithm to find out exactly how it works.
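
As a toy illustration of the idea (this is not the actual lagg(4) code, just the general shape of such an algorithm): hash the frame's addresses, take the result modulo the number of ports, and that pins each flow to one physical link:

```shell
#!/bin/sh
# Toy sketch of a MAC-based distribution hash -- NOT the real lagg(4)
# algorithm, just an illustration of why one flow stays on one link.
NPORTS=2

flow_port() {
    # $1 = source MAC, $2 = destination MAC
    sum=0
    for mac in "$1" "$2"; do
        # strip the colons and sum the hex bytes of both addresses
        for byte in $(echo "$mac" | tr ':' ' '); do
            sum=$((sum + 0x$byte))
        done
    done
    # the same address pair always maps to the same port index
    echo $((sum % NPORTS))
}

flow_port 00:11:22:33:44:55 aa:bb:cc:dd:ee:ff   # -> 0
flow_port 00:11:22:33:44:55 aa:bb:cc:dd:ee:fe   # -> 1, a different link
```

Two flows can of course still hash to the same value, which is the point made above: distribution is per-flow, not round-robin.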
 
error in post, should not use "inet" keyword

SirDice said:
You need to add some options to /etc/rc.conf:
Code:
ifconfig_em1="up"
ifconfig_em0="up"
cloned_interfaces="lagg0"
ifconfig_lagg0="laggproto failover laggport em0 laggport em1 inet 1.2.3.4 netmask 255.255.255.224"

There is a severe error here that cost me some time. If you put the "inet" keyword in there, failovers don't seem to ever happen. I tried it with "failover", "lacp" and "loadbalance", and with "inet" in there, it always failed. Without "inet", I was happy with the result. (Can a moderator edit the above post?)

See this thread, which is full of examples and never uses the "inet" keyword. http://forums.freebsd.org/showthread.php?t=16718

If I included the word "inet" in there, it would always do something like this:

It starts up saying one (failover) or more (lacp/loadbalance) laggports are active. The IP address shown in ifconfig is 67.215.77.132, which is not what I set, but it matches the annoying bogus value OpenDNS returns for any nonexistent DNS query... not sure why that IP ends up there. But I can ping the IP I configured (to my surprise, since it is not listed in ifconfig). Then if I unplug the network cable, I can no longer ping, even when other interfaces are shown as active in ifconfig.

When I remove "inet", it seems to work fine, although no matter which laggproto I use, there is a delay of a few seconds between the disconnection and the next successful attempt. (I don't know if that is normal.)

So I would correct the above to:

Code:
ifconfig_em1="up"
ifconfig_em0="up"
cloned_interfaces="lagg0"
ifconfig_lagg0="laggproto failover laggport em0 laggport em1 1.2.3.4 netmask 255.255.255.224"

It is a shame that the handbook page http://www.freebsd.org/doc/en_US.ISO8859-1/books/handbook/network-aggregation.html does not have any static ip examples.
 
The rc.d scripts might be grepping for "inet" for something else.

As for the Handbook, missing content can be submitted with a PR.
 