Intel I350-T4V2 OEM Cisco

I have had a really hard time finding info on this topic. Any help appreciated.

I purchased an Intel I350-T4V2, but when I received it I found it was an OEM Cisco card manufactured by Intel. I couldn't enable SR-IOV on the card. SR-IOV lets a physical port expose multiple "Virtual Functions" (VFs) that can be handed out as if they were separate NICs. I should be able to split each 1 GbE port on this card into four virtual devices, but I can't get the devices to show up in /dev/iov.

When I put the I350 card into the server I get igb devices, but they all say "SR-IOV disabled" when I run pciconf -lvc.
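For reference, this is roughly the line to look for in the pciconf output; the device name and VF count here are from memory, not a verbatim capture:
Code:
# dump the capability list for one device; the SR-IOV extended
# capability shows whether IOV is enabled and how many VFs exist
pciconf -lvc igb0
...
    ecap 0010[160] = SR-IOV 1 IOV disabled, 0 VFs configured out of 8 supported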

Eventually I gave up and bought an Intel X550-T2. I had the same problems until I installed the Intel driver via
pkg install intel-ix-kmod

I had added a file at /etc/iovctl.conf
Code:
PF {
        device: ix0;
        num_vfs: 4;
}
DEFAULT {
        passthrough: true;
}
VF-0 {
        passthrough: false;
}
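For what it's worth, iovctl(8) can also apply a config file without a reboot; as I understand the man page, something like this creates and tears down the VFs on the fly:
Code:
# create the VFs described in the config file
iovctl -C -f /etc/iovctl.conf
# destroy them again (ix0 is the PF device named in the config)
iovctl -D -d ix0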
I had also added this line to /etc/rc.conf
Code:
iovctl_files="/etc/iovctl.conf"
and this line to /boot/loader.conf so the updated module loads at boot (loader variables like this belong in loader.conf, not rc.conf):
Code:
if_ix_updated_load="YES"
After a quick reboot it showed four virtual devices and "SR-IOV enabled" when I ran pciconf -lvc.
This card got ix devices and the /dev/iov folder appeared.

Now that this new card said "SR-IOV disabled" until I had the correct driver and "enabled" afterwards, I'm wondering if there is a driver for the I350 that I'm not finding. When I gave up I assumed the feature was unavailable on that card despite Intel documentation saying it was there. Now I have hope it's just missing software.

My thinking right now is that I wasted my money on the I350-T4V2 ($300!). If anyone has ever gotten that card to work I'd appreciate a heads up. Thanks!
 
Update. Right after I posted the above I realized that although I got a /dev/iov folder, I didn't get any devices in ifconfig.

I re-installed and used ports instead of packages to install intel-ix-kmod, and now I have devices in ifconfig as well.
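For the record, the ports route was just the usual build (assuming a checked-out ports tree):
Code:
# build and install the updated ix driver from the ports tree
cd /usr/ports/net/intel-ix-kmod
make install clean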

I then set up a jail with /etc/jail.conf
Code:
sysvmsg = new;          # prevents security issues with sysv memory being shared across jails and to host
sysvsem = new;
sysvshm = new;          # prevents problems with postgres starting
exec.clean;             # by default jails inherit environment variables from the parent; this stops that

#setup limits on CPU and RAM
exec.created="rctl -a 'jail:$name:pcpu:deny=200'";              # happens after jail is created but before jai>
exec.created+="rctl -a 'jail:$name:vmemoryuse:deny=7520MB'";
exec.release="rctl -r 'jail:$name:pcpu:'";                      # happens after jail is destroyed.
exec.release+="rctl -r 'jail:$name:vmemoryuse'";

exec.start="sh /etc/rc";
exec.stop="sh /etc/rc.shutdown";
mount.devfs;            # minimum devices list
allow.nomount;          # prevent jails from mounting filesystems

sriovtest {
    host.hostname = "net.-----.com";
    path="/zroot/jails/net";
    vnet;
    vnet.interface = "ixv0";
}

and then added this to /etc/rc.conf

Code:
hostname="host.----.com"
ifconfig_igb0="DHCP"
sshd_enable="YES"
# Set dumpdev to "AUTO" to enable crash dumps, "NO" to disable
dumpdev="AUTO"
zfs_enable="YES"
jail_enable="YES"

# SR-IOV
iovctl_files="/etc/iovctl.conf"

With the jail off, I set /zroot/jails/jailname/etc/rc.conf

Code:
sshd_enable="YES"
# Set dumpdev to "AUTO" to enable crash dumps, "NO" to disable
dumpdev="AUTO"

rtsold_enable="YES"
ifconfig_ixv0="inet 192.168.33.200 accept_rtadv"
defaultrouter="192.168.33.1"

And it all seems to work.
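A couple of quick checks confirmed the VF landed inside the jail and the rctl rules took (names are from my setup):
Code:
# start the jail and look for the VF interface inside it
service jail start sriovtest
jexec sriovtest ifconfig ixv0
# confirm the resource limits were applied
rctl | grep sriovtest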

$500 later, I now think I should have bought a 700 series card.
Unfortunately I have no idea how well the 550 card will work since I had to use the intel-ix-kmod port to get it working.
https://www.freshports.org/net/intel-ix-kmod/
The port documentation points out that using net/intel-ix-kmod could lead to instability.
I feel like the instability of the ix driver's SR-IOV support should have been documented in the ix(4) man page and also in the iovctl(8) man page.
If that were documented, I would have gone to the ixl(4) man page and realized that if I wanted full SR-IOV support I needed to get a 700 series card. Problem solved.

I verified that the cards that I bought had SR-IOV and that Intel supplies a driver for FreeBSD. It was not obvious that my use case would not be supported. Not sure how I could have prevented this mistake without better docs.
Unfortunately, when purchasing cards I was not able to find many due to the ongoing chip shortage.
I thought I was being smart by researching the least expensive card that would do what I want, then shopping for it. I did not realize the more expensive cards could be had for similar prices right now (and would be fully supported).

Also,
The Intel® Ethernet Network Adapter X710 has a suggested retail price of $195, and SR-IOV is fully supported by the ixl driver. Though good luck finding it for less than $400.
The ixl driver fully supports SR-IOV on 700 series cards. If you are going to buy a card, first see the ixl man page for details.
 
I saw some info on Chelsio as I was researching forum posts, but why would I believe them? I believed Intel and look what it got me. If some Chelsio cards support SR-IOV, shouldn't I be able to see that in the docs? Chelsio and Intel will tell me the card supports it, but the driver that FreeBSD publishes is what matters. Without confirmation from the docs I could end up in the same situation again: the manufacturer says the card supports it, FreeBSD publishes an official driver so you buy with confidence, and then, oops! SR-IOV isn't supported.

Consider this addition to man iovctl:
----------------
iovctl -- PCI SR-IOV configuration utility

SR-IOV DRIVER SUPPORT
ix(4) - No support by default. The ix driver supports all 10 gigabit network connections based on the 82599, 82598EB, X520, X540, and X550 series controllers, but the SR-IOV feature is not enabled by default due to potential instability. To enable experimental support, see the net/intel-ix-kmod port for details.
ixl(4) - Full support. Intel Ethernet 700 Series driver.
cxgb(4) - Supported? Unsupported? Chelsio T3 10 Gigabit Ethernet adapter driver.
cxgbe(4) - Supported? Unsupported? Chelsio T4-, T5-, and T6-based 100Gb, 40Gb, 25Gb, 10Gb, and 1Gb Ethernet adapter driver.
cxgbev(4) - Supported? Unsupported? Chelsio T4-, T5-, and T6-based 100Gb, 40Gb, 25Gb, 10Gb, and 1Gb Ethernet VF driver.
----------------

That would be a very welcome paragraph for those who want to use SR-IOV! I can't see any manufacturer being offended so long as the info is accurate. It seems to me this info could be published without controversy and would save a lot of headache.
 
Anyway, that wasn't what I came here to say.

I flashed the Intel X550-T2 with the latest EFI firmware to make it boot faster, and I have it pinging in a jail with the same config I posted above. I used Intel's BootUtil on a PC. The card had been booting really slowly with the "Combo" UEFI, PXE-enabled firmware. 10/10, would recommend.

However, I noticed a warning in /var/log/messages:

ixv0: set address: WARNING: network mask should be specified; using historical default

In the jail's /etc/rc.conf I altered the ifconfig line to include the netmask, and the warning went away:
ifconfig_ixv0="inet 192.168.33.200/24 accept_rtadv"

So I just wanted to say that.

Also, I have more servers to build. If anyone knows where I can find authoritative info on which SR-IOV-capable Chelsio cards are supported by iovctl, I would love to give them a try.

In researching cards I had seen another driver, for the Intel 800-series cards: the "ice" driver. But I don't know where to get it, and there doesn't seem to be a man page for it yet.
 
The ice driver is in the GENERIC kernel.

Current version is 1.37.7-k. The latest version available from Intel is 1.37.11.

There is indeed no man page for the driver in the source tree. When it was introduced back on 2020-05-26, the commit message said "A man page for this driver will be forthcoming.", but it looks like that has been forgotten.

I looked for an online edition but couldn't find one. However, the man page can easily be generated from Intel's driver tarball; just follow the instructions in the README.
 
Good to know T-Daemon. Thank you.

Yesterday I went through a very thorough procedure to try all the flash options on the Intel X550-T2 to make it boot faster. In the midst of this procedure the motherboard reset the SR-IOV options and I had to turn them back on. It occurred to me while I slept that more than one test may have been invalidated before I caught this problem, so I started over today. I turned off flash on the card and put it back in the server. It booted with SR-IOV working, no problem.

Then I turned the server off at the power supply for a bit and restarted. With flash off it booted right up: SR-IOV worked, the jail started at boot and took its virtual interface, and the jail pings. So I should have no surprises during boot.

I found a post that suggested turning off hardware acceleration to solve instability issues.
In the jail's /etc/rc.conf:
ifconfig_ixv0="inet 192.168.33.200/24 accept_rtadv -tso4 -tso6 -lro -vlanhwtso"
ifconfig_ixv0_ipv6="inet6 xxx accept_rtadv -tso4 -tso6 -lro -vlanhwtso"
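These offload flags can also be toggled at runtime to test them before committing anything to rc.conf; from the host, something like this (jail name from my setup):
Code:
# disable TSO, LRO, and VLAN hardware TSO on the jail's VF
jexec sriovtest ifconfig ixv0 -tso4 -tso6 -lro -vlanhwtso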

It makes some sense that if you're bypassing a lot of hardware with SR-IOV, then hardware acceleration isn't going to help you. So if it causes instability you may as well turn it off. But I'm no expert. I haven't had a problem yet, so I just stuck the commented-out solution in my jail's rc.conf with an explanation.

If anyone has current info on 13.2 let me know.
 
I installed a second jail today and not only did it not work, it also knocked out networking on the first jail. There seems to be some kind of conflict between the two VFs, which leads me to believe I may be wasting my time with this card.

I have an ASRock X570 Taichi motherboard with an AMD Ryzen 9 5950X 16-core processor. The BIOS seems to have all the right settings, but I won't know for sure until I have something that works.

Is there another way to share an interface to more than one jail but have each jail not see traffic from other jails on the same interface?

Any leads at all would be appreciated.
 
I spent another day on this, and now I have another expensive card that I can neither use nor return.

So I looked up the Intel X710 card. It turns out there is a firmware check that forces you to use an Intel-qualified SFP+ module, and there is no compatible Intel RJ-45 module for that card. That pretty much doubles the cost, because I would need a switch that translates the fiber cables to 1Gb Ethernet.

So it looks like I have to scrap this entire build plan and start over. Very unfortunate.
 
I am moving on with VNET and netgraph.

jail.conf:
Code:
net{
    host.hostname = "net.example.com";
    path="/zroot/jails/net";
    vnet;
    vnet.interface = ng0_net;
    exec.prestart="jng bridge net ix0";
    exec.prestop  = "ifconfig ng0_net -vnet net";
    exec.poststop = "jng shutdown net";
    devfs_ruleset = "11"; # rule to unhide bpf for DHCP
}
(second jail is the same as the first)
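In case it helps anyone: jng isn't in the PATH by default; it ships as an example script in the base system, and I believe copying it somewhere jail.conf can invoke it is all that's needed:
Code:
# jng ships with the base system as an example shell script;
# "jng bridge net ix0" creates an ng0_net netgraph interface
# bridged to ix0, which jail.conf then moves into the jail's vnet
install -m 555 /usr/share/examples/jails/jng /usr/local/sbin/jng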

the jail's /etc/rc.conf
Code:
ifconfig_ng0_net="SYNCDHCP"

The jails got networking through DHCP no problem. I need to figure out fixed IPs next.
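(I expect a fixed address is just the usual rc.conf form in place of SYNCDHCP; reusing the subnet from earlier, something like the following, where the .201 address is only an example:)
Code:
# static address instead of DHCP for the jail's netgraph interface
ifconfig_ng0_net="inet 192.168.33.201/24"
defaultrouter="192.168.33.1"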

Well, I'm trying to have two jails share an interface but not see each other's traffic. So once they were up and working, I checked to see the damage.
I tried pinging:
from jail1 to host: nothing heard in jail2
from jail1 to jail2: heard in jail2
from host to jail1: nothing heard in jail2
from host to jail2: heard in jail2

I set up bpf, which DHCP needs to work. I thought this would be a problem and leak all kinds of neighboring traffic into the jail. The jails see all kinds of IPv6 and broadcast traffic, but as far as I can tell they can't see anything on the host or in the other jails that they aren't supposed to see.
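The devfs_ruleset = "11" in jail.conf points at a custom rule in /etc/devfs.rules that unhides bpf inside the jail. Mine looks roughly like this; the rule name and number are arbitrary:
Code:
[devfsrules_jail_bpf=11]
add include $devfsrules_jail
add path 'bpf*' unhide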

My impression from reading the FreeBSD Mastery: Jails book and other resources was that this was going to be difficult or not performant. Boy, was that wrong. This was way easier than SR-IOV and seems to perform very well. I didn't have to set up a firewall or NAT or anything.
 
sko, they say they tested it on the X710 card and I have no reason to doubt them. Thank you very much for the recommendation.

I was going by the FreeBSD ixl(4) man page:
HARDWARE
Most adapters in the Intel Ethernet 700 Series with SFP+/SFP28/QSFP+ cages have firmware that requires that Intel qualified modules are used; these qualified modules are listed below. This qualification check cannot be disabled by the driver.

I've run some speed tests with iperf3 and I'm at 7.5% CPU per iperf3 process in the jails and 3.5% on the host. I had hoped SR-IOV would give me host-like CPU numbers, but I've run out of budget to get SR-IOV working. If anyone can verify that the numbers with SR-IOV are better than netgraph (lower CPU effort; I assume throughput will be the same), then I would of course buy a card and pursue this, but without confidence in a better outcome I can't justify further effort.
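For context, the numbers came from a plain TCP run along these lines (the server address is hypothetical; substitute a machine elsewhere on the LAN):
Code:
# on another machine on the LAN
iperf3 -s
# inside the jail (and again on the host for comparison)
jexec net iperf3 -c 192.168.33.50 -t 30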

Thank you everyone who replied.
VladiBG, I didn't get a notice that you posted. I had thirty days to return the X550 card. I got it working in a week or so, but only with one jail. Then I took a break to catch up on other stuff. When I got back to it a couple of days ago, I set up a second jail and it disconnected the first jail, but by then it was too late to return the card. A couple more days of effort yielded no progress, so I went the netgraph route.
 