NIC recommendation

Intel NICs are better supported than Broadcom. There's a full-time Intel engineer that works on the FreeBSD driver, for example. :)
 
Is there a specific model I should look at? Should I prefer PCI Express over PCI? Will 1000 vs 100 make a difference for a web server?
 
Faster is generally better. :) But it all depends on the size of the pipe(s) upstream. For example, if you only have a 10 Mbps Internet connection, then anything over 10 Mbps will be overkill. :) If you have a 10 Gbps Internet connection, then a 100 Mbps NIC may not be adequate. :)

It also depends on the speed of your storage system. Having a gigabit NIC and a gigabit network connection won't really help too much if you are stuck with a single SATA disk that can't sustain more than 50 MBps of random reads.

It also depends on the number of simultaneous connections you are expecting. If this is a workgroup web server for 10 people to use, you don't need a skookum system. But, if this is going to be a public web server that you expect to get slashdotted on a regular basis, then there's no such thing as overkill. :D

IOW, look at the big picture (Internet connection, number of connections, speed of disk, amount of RAM, etc) to determine how fast of a NIC to get.
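If you want to sanity-check a couple of those pieces, stock FreeBSD tools will do. A minimal sketch; it assumes the NIC is em0 and the disk is ada0, so adjust to your hardware:

Code:
# Negotiated link speed of an existing NIC (assumes it is em0)
ifconfig em0 | grep media

# Rough, read-only transfer-rate test of a disk (assumes it is ada0)
diskinfo -t /dev/ada0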
 

UNIXgod said:
Will 1000 vs 100 make a difference for a web server?

Not on the public side, unless you have a pretty serious connection (and you expect to be hammered). I don't think it'll hurt either, though, if you're planning on upgrading in the future (when flying cars will have television wristwatches at 100 Pbit wireless).
 
This is what we have in most servers here:
Intel Corporation 80003ES2LAN Gigabit Ethernet Controller (Copper) (rev 01)

If you are not doing anything critical, pick up a lower or mid-range card.
 
I also noticed some cards are using the igb(4) driver. The price tends to be around the same as the em(4) card I linked above. Any reason to consider one Intel card over the other (based on drivers)? AFAIK the features look the same.
 
Code:
em0@pci0:1:0:0: class=0x020000 card=0x125f8086 chip=0x105f8086 rev=0x06 hdr=0x00
    vendor     = 'Intel Corporation'
    device     = 'PRO/1000 PF Family'
    class      = network
    subclass   = ethernet

em2@pci0:0:25:0:        class=0x020000 card=0x281e103c chip=0x10bd8086 rev=0x02 hdr=0x00
    vendor     = 'Intel Corporation'
    device     = 'Intel 82566DM Gigabit Ethernet Adapter (82566DM)'
    class      = network
    subclass   = ethernet

em0@pci0:4:0:0:	class=0x020000 card=0x01d11028 chip=0x109a8086 rev=0x01 hdr=0x00
    vendor     = 'Intel Corporation'
    device     = 'Intel PRO/1000 PL Network Adaptor (82573L)'
    class      = network
    subclass   = ethernet

em0@pci0:10:1:0:        class=0x020000 card=0x11798086 chip=0x10798086 rev=0x03 hdr=0x00
    vendor     = 'Intel Corporation'
    device     = 'Dual Port Gigabit Ethernet Controller (82546EB)'
    class      = network
    subclass   = ethernet

em0@pci0:0:25:0:	class=0x020000 card=0x30c5103c chip=0x10498086 rev=0x03 hdr=0x00
    vendor     = 'Intel Corporation'
    device     = 'Gigabit Network Connection Interface Controller (82566MM NIC)'
    class      = network
    subclass   = ethernet
Just a few examples of em cards working fine here.
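For reference, listings like those come straight from pciconf(8). A quick sketch of how to get the same view of whatever NICs a box has (nothing model-specific assumed):

Code:
# List all PCI devices with vendor/device strings, keep only the network ones
pciconf -lv | grep -B3 'class.*= network'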
 
em(4) is the old-style gigabit driver, igb(4) is the new-style driver. Eventually, all Intel NICs will use the igb driver, as very few new chipsets are released that are supported by em. All the fancy features go into the igb driver.

IOW, chipsets supported by em will be less expensive and considered legacy, but will still work fine (we use all em-based Intel NICs in our servers and firewalls).
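If you're not sure which of the two drivers ended up claiming a given card, the attach messages and the sysctl tree will tell you. A small sketch, assuming at least one em/igb device is present:

Code:
# Description of the first em and igb devices, if any
sysctl dev.em.0.%desc
sysctl dev.igb.0.%desc

# Or search the boot messages for the attach lines
dmesg | egrep '^(em|igb)[0-9]'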
 
Looking at the ET series, please excuse my dumb question. There is a dual-port and a quad-port version. Both use the same igb(4) driver.

Could one assume that the quad-port version 'doubles' the NIC's performance over the dual-port version?

For example, where several NATed jails and pf run on the same machine with any number of ifconfig aliases. Would the quad-port NIC make any difference over the dual-port one?

Intel advertises server virtualization optimizations on their info page:
http://www.intel.com/Products/Server/Adapters/Gb-ET-Dual-Port/Gb-ET-Dual-Port-overview.htm
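Not an answer on raw per-port throughput, but one way the extra ports on a quad card can pay off is link aggregation with lagg(4). A minimal /etc/rc.conf sketch, assuming the ports show up as igb0/igb1, the switch speaks LACP, and the address is made up:

Code:
ifconfig_igb0="up"
ifconfig_igb1="up"
cloned_interfaces="lagg0"
ifconfig_lagg0="laggproto lacp laggport igb0 laggport igb1 192.168.1.10/24"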
 
phoenix said:
There's a full-time Intel engineer that works on the FreeBSD driver, for example. :)
That's a really important thing.

I've used Intel NICs on many servers; they are the best for FreeBSD ;) I've heard many times that the 'em' and 'igb' drivers are very solid. In my experience they support pretty much everything you might be looking for (VLANs, polling, link0, hardware checksums, jumbo frames), and because it's all done in hardware they free your CPU from that busywork.
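Most of those features can be toggled per interface with ifconfig(8). A quick sketch, assuming the card is em0 and the rest of the network path can handle jumbo frames:

Code:
# Enable hardware checksums, TSO and VLAN tag offload
ifconfig em0 rxcsum txcsum tso vlanhwtag

# Jumbo frames (only if every switch/host on the segment supports them)
ifconfig em0 mtu 9000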
 
Intel NICs work really well for me too. The only little thing I can complain about is:

There are known performance issues with this driver when running UDP traffic
with Jumbo Frames.

/usr/src/sys/dev/e1000/README
 
vadim64 said:
What about PCIe NICs? Which are better for FreeBSD 7.x? Which need less driver-installation hassle?

The Intel card I got uses igb(4), which:

HISTORY
The igb device driver first appeared in FreeBSD 7.1.

It is working quite well for me btw. (though I am on 8.1)
 
We're using dual-port and quad-port Intel NICs (PCIe and PCI-X versions) without issues. These all use the em(4) driver.

We haven't been fortunate enough (yet) to get any NICs that use the igb(4) driver.
 
phoenix said:
We're using dual-port and quad-port Intel NICs (PCIe and PCI-X versions) without issues. These all use the em(4) driver.

We haven't been fortunate enough (yet) to get any NICs that use the igb(4) driver.

Well, have you seen the plethora of problems regarding Intel NICs on FreeBSD? We had a similar discussion on a German BSD forum, but with quite the opposite outcome. Intel still builds great hardware, but lousy drivers at the moment.
 
Depends on the chipset. All of our NICs use the older xx574 chipsets, and there are no issues with these. Most of the issues with the newer drivers are with the xx575 and newer chipsets.
 
When polling was turned on, I got a bunch of messages like this:

Code:
+igb0: Watchdog timeout -- resetting
+igb0: Queue(2) tdh = 8, hw tdt = 8
+igb0: TX(2) desc avail = 1022,Next TX to Clean = 6
+igb0: link state changed to DOWN
+igb0: Watchdog timeout -- resetting
+igb0: Queue(1) tdh = 5, hw tdt = 5
+igb0: TX(1) desc avail = 1019,Next TX to Clean = 0
+igb0: link state changed to DOWN
+igb0: link state changed to UP
+igb0: Watchdog timeout -- resetting
+igb0: Queue(1) tdh = 1, hw tdt = 1
+igb0: TX(1) desc avail = 1023,Next TX to Clean = 0
+igb0: link state changed to DOWN
+igb0: link state changed to UP
+igb0: Watchdog timeout -- resetting
+igb0: Queue(1) tdh = 2, hw tdt = 2
+igb0: TX(1) desc avail = 1022,Next TX to Clean = 0

It killed ssh and apache until I rebooted with polling turned off.
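For anyone hitting the same thing: polling needs DEVICE_POLLING compiled into the kernel and is then toggled per interface, so it can usually be backed out without a reboot. A sketch, assuming the interface is igb0:

Code:
# Turn polling on for one interface (kernel built with options DEVICE_POLLING)
ifconfig igb0 polling

# Turn it off again if you start seeing watchdog timeouts like the ones above
ifconfig igb0 -polling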
 