82598EB 10 Gigabit AT CX4 no carrier

Hi all,

I have a server with an Intel 82598EB AT CX4 10Gbit network card and an embedded Intel 1Gbit NIC. The embedded port works correctly, but I cannot use the 10Gbit one. FreeBSD seems to recognize the card correctly; pciconf -l -v shows:

Code:
ix0@pci0:9:0:0:	class=0x020000 card=0xaf8015d9 chip=0x10dd8086 rev=0x01 hdr=0x00
    vendor     = 'Intel Corporation'
    device     = '82598EB 10 Gigabit AT CX4 Network Connection'
    class      = network
    subclass   = ethernet

But ifconfig gives me:

Code:
ix0: flags=8802<BROADCAST,SIMPLEX,MULTICAST> metric 0 mtu 1500
	options=1bb<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,JUMBO_MTU,VLAN_HWCSUM,TSO4>
	ether 00:25:90:35:3e:ac
	media: Ethernet autoselect
	status: no carrier

But the cable is connected! The card's hardware, the switch, and the cable all work (I verified them with a Linux distribution), so I suspect a software problem. I have also downloaded and installed the latest FreeBSD driver from the Intel site, with no luck. The LEDs are blinking, but I cannot bring the interface up, so even assigning it an IP address does not help.

What else can I try? Thank you from a FreeBSD newbie...
 
I HAD exactly the same problem. Testing the 10Gbit ports always failed; ping, for example, did not work. While looking for the cause, I found the "no carrier" status message in the ifconfig output.


The problem situation (default install of FreeBSD 8.2-STABLE [rather than -RELEASE]):


pciconf -l -v output:
Code:
ix0@pci0:2:0:0: class=0x020000 card=0xaf8015d9 chip=0x10dd8086 rev=0x01 hdr=0x00
    vendor     = 'Intel Corporation'
    device     = '82598EB 10 Gigabit AT CX4 Network Connection'
    class      = network
    subclass   = ethernet
ix1@pci0:2:0:1: class=0x020000 card=0xaf8015d9 chip=0x10dd8086 rev=0x01 hdr=0x00
    vendor     = 'Intel Corporation'
    device     = '82598EB 10 Gigabit AT CX4 Network Connection'
    class      = network
    subclass   = ethernet

ifconfig output:
Code:
igb0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
        options=1bb<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,JUMBO_MTU,VLAN_HWCSUM,TSO4>
        ether 00:25:90:0e:db:ca
        inet 10.3.0.17 netmask 0xffffff00 broadcast 10.3.0.255
        media: Ethernet autoselect (1000baseT <full-duplex>)
        status: active
igb1: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
        options=1bb<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,JUMBO_MTU,VLAN_HWCSUM,TSO4>
        ether 00:25:90:0e:db:cb
        media: Ethernet autoselect
        status: no carrier
ix0: flags=8803<UP,BROADCAST,SIMPLEX,MULTICAST> metric 0 mtu 1500
        options=1bb<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,JUMBO_MTU,VLAN_HWCSUM,TSO4>
        ether 00:25:90:0b:53:8a
        inet 10.3.0.18 netmask 0xffffff00 broadcast 10.3.0.255
        media: Ethernet autoselect
        status: no carrier
ix1: flags=8802<BROADCAST,SIMPLEX,MULTICAST> metric 0 mtu 1500
        options=1bb<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,JUMBO_MTU,VLAN_HWCSUM,TSO4>
        ether 00:25:90:0b:53:8b
        media: Ethernet autoselect
        status: no carrier

uname -a output:
Code:
FreeBSD bcnas1.bc.local 8.2-STABLE-201105 FreeBSD 8.2-STABLE-201105 #0: Tue May 17 05:18:48 UTC 2011     
root@mason.cse.buffalo.edu:/usr/obj/usr/src/sys/GENERIC  amd64


grep "igb0" /var/log/dmesg.today output (the working 1gbit NIC):
Code:
igb0: <Intel(R) PRO/1000 Network Connection version - 2.2.3> port 0xdc00-0xdc1f mem 0xfade0000-0xfadfffff,0xfadc0000-
0xfaddffff,0xfad9c000-0xfad9ffff irq 28 at device 0.0 on pci1
igb0: Using MSIX interrupts with 9 vectors
igb0: [ITHREAD]
igb0: [ITHREAD]
igb0: [ITHREAD]
igb0: [ITHREAD]
igb0: [ITHREAD]
igb0: [ITHREAD]
igb0: [ITHREAD]
igb0: [ITHREAD]
igb0: [ITHREAD]
igb0: Ethernet address: 00:25:90:0e:db:ca
igb0: link state changed to UP


grep "ix[0-9]" /var/log/dmesg.today output (the 2 non-working 10gbit NICs):
Code:
ix0: <Intel(R) PRO/10GbE PCI-Express Network Driver, Version - 2.3.10> port 0xec00-0xec1f mem 0xfaf60000-
0xfaf7ffff,0xfafc0000-0xfaffffff,0xfaf5c000-0xfaf5ffff irq 24 at device 0.0 on pci2
ix0: Using MSIX interrupts with 9 vectors
ix0: [ITHREAD]
ix0: [ITHREAD]
ix0: [ITHREAD]
ix0: [ITHREAD]
ix0: [ITHREAD]
ix0: [ITHREAD]
ix0: [ITHREAD]
ix0: [ITHREAD]
ix0: [ITHREAD]
ix0: Ethernet address: 00:25:90:0b:53:8a
ix0: PCI Express Bus: Speed 2.5Gb/s Width x8
ix1: <Intel(R) PRO/10GbE PCI-Express Network Driver, Version - 2.3.10> port 0xe800-0xe81f mem 0xfaea0000-0xfaebffff,0xfaf00000-
0xfaf3ffff,0xfaf58000-0xfaf5bfff irq 34 at device 0.1 on pci2
ix1: Using MSIX interrupts with 9 vectors
ix1: RX Descriptors exceed system mbuf max, using default instead!
ix1: [ITHREAD]
ix1: [ITHREAD]
ix1: [ITHREAD]
ix1: [ITHREAD]
ix1: [ITHREAD]
ix1: [ITHREAD]
ix1: [ITHREAD]
ix1: [ITHREAD]
ix1: [ITHREAD]
ix1: Ethernet address: 00:25:90:0b:53:8b
ix1: PCI Express Bus: Speed 2.5Gb/s Width x8
ix0: Could not setup receive structures
ix0: Could not setup receive structures

The most interesting thing to me here was "ix0: Could not setup receive structures"; ix0 is the port with the cable attached. We also see "RX Descriptors exceed system mbuf max, using default instead!", but only for ix1, not ix0.


vmstat -z | grep -v 0\$ output:
Code:
ITEM                     SIZE     LIMIT      USED      FREE  REQUESTS  FAILURES

64 Bucket:                536,        0,      353,        4,      353,       92
128 Bucket:              1048,        0,     2114,        1,     2445,      235
mbuf_packet:              256,        0,    16376,     1928,   871107,        2
mbuf_cluster:            2048,    25600,    18304,     7296,    59520,        4
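
On FreeBSD, netstat -m gives a friendlier summary of the same mbuf pools than vmstat -z; its "denied" counters correspond to the FAILURES column above. A quick check might look like this (standard netstat(1)/sysctl(8) invocations; the numbers are of course system-specific):

Code:
# Summarize mbuf usage; nonzero "requests for mbufs denied" means the
# cluster pools are exhausted, the same condition behind the driver's
# "Could not setup receive structures" message.
netstat -m

# Show the current cluster limits the driver has to work within.
sysctl kern.ipc.nmbclusters kern.ipc.nmbjumbop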

sysctl -a | grep ix.[01] | grep -v ": 0" output:
Code:
dev.ix.0.%desc: Intel(R) PRO/10GbE PCI-Express Network Driver, Version - 2.3.10
dev.ix.0.%driver: ix
dev.ix.0.%location: slot=0 function=0
dev.ix.0.%pnpinfo: vendor=0x8086 device=0x10dd subvendor=0x15d9 subdevice=0xaf80 class=0x020000
dev.ix.0.%parent: pci2
dev.ix.0.flow_control: 3
dev.ix.0.enable_aim: 1
dev.ix.0.rx_processing_limit: 128
dev.ix.0.mac_stats.fc_crc: 3735928495
dev.ix.0.mac_stats.fc_last: 3735928559
dev.ix.1.%desc: Intel(R) PRO/10GbE PCI-Express Network Driver, Version - 2.3.10
dev.ix.1.%driver: ix
dev.ix.1.%location: slot=0 function=1
dev.ix.1.%pnpinfo: vendor=0x8086 device=0x10dd subvendor=0x15d9 subdevice=0xaf80 class=0x020000
dev.ix.1.%parent: pci2
dev.ix.1.flow_control: 3
dev.ix.1.enable_aim: 1
dev.ix.1.rx_processing_limit: 128
dev.ix.1.mac_stats.fc_crc: 3735928495
dev.ix.1.mac_stats.fc_last: 3735928559

The 82598EB chip is clearly listed in:
http://www.freebsd.org/releases/8.2R/hardware.html#ETHERNET

and:
http://www.freebsd.org/cgi/man.cgi?query=ixgbe&sektion=4&manpath=FreeBSD+8.2-RELEASE



The solution

Permanent fix:
Code:
cat << EOF >> /etc/sysctl.conf
kern.ipc.nmbclusters=262144
kern.ipc.nmbjumbop=262144
kern.ipc.nmbjumbo16=32000
kern.ipc.nmbjumbo9=64000
EOF

Temporary fix (untested):
Code:
sysctl kern.ipc.nmbclusters=262144
sysctl kern.ipc.nmbjumbop=262144
sysctl kern.ipc.nmbjumbo16=32000
sysctl kern.ipc.nmbjumbo9=64000

I have no idea what the jumbo numbers should be, so I simply multiplied the defaults by 10. Likely those pools don't matter at all until you raise the MTU on the interface to use jumbo frames.

Interesting things to read:

The ixgbe driver readme:
http://downloadmirror.intel.com/14688/eng/README.txt
Attempting to configure larger MTUs with a large numbers of processors may
generate the error message "ix0:could not setup receive structures"
--------------------------------------------------------------------------
When using the ixgbe driver with RSS autoconfigured based on the number of
cores (the default setting) and that number is larger than 4, increase the
memory resources allocated for the mbuf pool as follows:

Add to the sysctl.conf file for the system:

kern.ipc.nmbclusters=262144
kern.ipc.nmbjumbop=262144

About jumbo frames:
http://freebsd.1045724.n5.nabble.co...-setup-receive-structures-quot-td4303005.html
If you get this message its only for one reason, you don't have enough mbufs to
fill your rings. You must do one of two things, either reduce the number of queues,
or increase the relevant mbuf pool.

Increase the 9K mbuf cluster pool.
 
These values actually need to be set in /boot/loader.conf, as /etc/sysctl.conf is loaded too late for the driver initialization.
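
In other words, move the same tunables (values taken from the earlier post; adjust to taste) into the loader configuration:

Code:
# /boot/loader.conf -- loader tunables are applied before device
# drivers attach, unlike /etc/sysctl.conf, which is processed
# later during rc(8) startup.
kern.ipc.nmbclusters=262144
kern.ipc.nmbjumbop=262144
kern.ipc.nmbjumbo9=64000
kern.ipc.nmbjumbo16=32000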
 
In addition, if you're not pushing the interface hard, you can reduce the number of queues allocated with hw.ixgbe.num_queues=X (e.g. hw.ixgbe.num_queues=2), which will reduce your mbuf usage.
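
For example (the value 2 is just an illustration; hw.ixgbe.num_queues is a loader tunable, so it also belongs in /boot/loader.conf):

Code:
# /boot/loader.conf -- cap the ixgbe driver at 2 queue pairs instead
# of one per core, shrinking the mbufs needed to fill the RX rings.
hw.ixgbe.num_queues=2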
 
Finally, it seems the ix driver doesn't always initialize the link until the interface is configured/up, so if an unconfigured interface is showing "status: no carrier", a simple ifconfig ix0 up should be enough to kick it into action.
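
A quick sanity check along those lines, before touching any tunables:

Code:
ifconfig ix0 up   # force link initialization on an unconfigured port
ifconfig ix0      # status should go from "no carrier" to "active"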
 