Enabling SR-IOV on the Intel driver

This is a new server I just installed. It has around 10 bhyve VMs on it right now. I use the latest Intel ix driver from the intel-ix-kmod-3.3.6_1 package...

Network performance on the VMs dropped when moving over from KVM on Linux, due to the suboptimal bridge code in FreeBSD, so I want to use SR-IOV on the host.

Obviously I am missing something :) Therefore I'm asking you pros...


Code:
13:05:20 server3:/etc # cat /etc/iovctl.conf 
PF {
        device : "ix3"; 
        num_vfs : 4;
}

DEFAULT {
        passthrough : true;
}
13:05:29 server3:/etc #
Code:
13:05:05 server3:/etc # iovctl -f /etc/iovctl.conf -C
iovctl: Could not open device '/dev/iov/ix3': No such file or directory
13:05:15 server3:/etc #
It says SR-IOV is disabled.... See below...

From pciconf -lvc
Code:
ix3@pci0:5:0:1:    class=0x020000 card=0x061115d9 chip=0x10fb8086 rev=0x01 hdr=0x00
    vendor     = 'Intel Corporation'
    device     = '82599ES 10-Gigabit SFI/SFP+ Network Connection'
    class      = network
    subclass   = ethernet
    cap 01[40] = powerspec 3  supports D0 D3  current D0
    cap 05[50] = MSI supports 1 message, 64 bit, vector masks
    cap 11[70] = MSI-X supports 64 messages, enabled
                 Table in map 0x20[0x0], PBA in map 0x20[0x2000]
    cap 10[a0] = PCI-Express 2 endpoint max data 256(512) FLR NS
                 link x8(x8) speed 5.0(5.0) ASPM disabled(L0s)
    cap 03[e0] = VPD
    ecap 0001[100] = AER 1 0 fatal 0 non-fatal 1 corrected
    ecap 0003[140] = Serial 1 ac1f6bffff2df35e
    ecap 000e[150] = ARI 1
    ecap 0010[160] = SR-IOV 1 IOV disabled, Memory Space disabled, ARI disabled
                     0 VFs configured out of 64 supported
                     First VF RID Offset 0x0180, VF RID Stride 0x0002
                     VF Device ID 0x10ed
                     Page Sizes: 4096 (enabled), 8192, 65536, 262144, 1048576, 4194304
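
A quick way to see whether the loaded driver registered SR-IOV at all (the per-PF node under /dev/iov only appears when it did):
Code:
ls -l /dev/iov/
kldstat | grep -i ix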

What am I missing?


Thanks in advance

/Peo
 
Hm... It seems the newer Intel driver from the intel-ix-kmod-3.3.6_1 package does not support SR-IOV. After backing out this driver and using the system driver, the devices show up under /dev/iov. But the problem is that the built-in system Intel driver is not trustworthy with the AOC-STGN-i2S (Intel 82599ES SFP+) card: the GBIC does not come up when rebooting in FreeBSD. The GBIC only works at first power-on OR if hot-re-plugged. The newer driver always works with the GBIC, but lacks SR-IOV :)
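
For anyone following along, backing out was roughly this (the exact module name is whatever the port's pkg-message told you to load, so treat the loader.conf part as an assumption):
Code:
# Remove the port driver and fall back to the in-kernel ix(4) driver
pkg delete intel-ix-kmod
# ...then drop the port's *_load="YES" line from /boot/loader.conf and reboot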

So I replaced the 82599ES SFP+ card with a Chelsio T520-BT card I had on the shelf. It also shows the devices under /dev/iov... But... the Chelsio card does not support setting the MAC or promiscuous mode on the VFs. ONLY passthrough.

Gaaaahhh!
 
Code:
13:05:20 server3:/etc # cat /etc/iovctl.conf
PF {
        device : "ix3";
        num_vfs : 4;
}

DEFAULT {
        passthrough : true;
}

I think it is a good idea to explicitly define each of the VFs you enable via num_vfs : <value>. For example, at the bottom of the file:
VF-0 { passthrough : true; }
VF-1 { passthrough : true; }
VF-2 { passthrough : true; }
VF-3 { passthrough : true; }

My reasoning comes straight from the source:
https://github.com/freebsd/freebsd/blob/master/sys/sys/iov.h
On line 145, iov.h defines passthrough as defaulting to false, and if it stays false this may fail silently.
So explicitly define each VF and leave DEFAULT {} out of the iovctl.conf (or equivalent) file.
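A minimal sketch of what that looks like for the ix3 example above (all four VFs passed through, no DEFAULT section):
Code:
PF {
        device : "ix3";
        num_vfs : 4;
}
VF-0 { passthrough : true; }
VF-1 { passthrough : true; }
VF-2 { passthrough : true; }
VF-3 { passthrough : true; }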
That is my take on it.
I still have not tackled interfaces inside the VM (this is my second try at this).
From the excellent Chelsio tip here:
The t4vf0 comes from the cxgbev driver; after loading that I would expect to see virtual interfaces show up in the host.
I think I should be in good shape. What I did was pass through t4nex0's PCI address in /boot/loader.conf.
Once I did that, I had 16 new ppt virtual devices listed in pciconf.
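A sketch of that loader.conf entry; 5/0/0 is just a placeholder, use the bus/slot/function that pciconf -lv reports for t4nex0:
Code:
# /boot/loader.conf -- reserve the device for passthrough at boot
pptdevs="5/0/0"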
I can't try them on the host because {passthrough : false} crashes my computer.
Now I have to try the cxgbev driver inside the VM.
 
It also just occurred to me that I must pass one of these new VF ppt devices/PCI addresses through to my bhyve startup script too.
Duh. No wonder I didn't see any new interfaces inside a VM. Need to add both the device and the driver.
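Something like this in the bhyve invocation, with 5/16/1 as a placeholder for the VF's bus/slot/function from pciconf -lv:
Code:
# sketch: attach one VF ppt device to guest PCI slot 6;
# -S wires guest memory, which passthru requires
bhyve -S -c 2 -m 2G -s 0,hostbridge -s 6:0,passthru,5/16/1 ... myvm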
 
I only have the following in /boot/loader.conf.local
Code:
#For the driver itself
if_cxgbe_load="YES"
# If using VFs in the host itself 
# Shows up automatically as cxlv<num> in host (VF0 and VF1 below)
if_cxgbev_load="YES"

13:37:01 server3:~ # grep iov /etc/rc.conf
iovctl_files="/etc/iov/cxl0.conf"
13:37:06 server3:~ #
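That rc.conf variable makes the boot-time iovctl script run iovctl -C on each listed file; the same can be done by hand:
Code:
iovctl -C -f /etc/iov/cxl0.conf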
pciconf -lv correctly shows the VFs, matching the configuration file:
Code:
13:30:45 server3:~ # more /etc/iov/cxl0.conf 
PF {
        device : "cxl0"; 
        num_vfs : 8;
}
DEFAULT {
        passthrough : true;
}
VF-0 {
        passthrough : false;
}
VF-1 {
        passthrough : false;
}
The VF-0 and VF-1 NICs show up automatically on the hypervisor host, as I have loaded cxgbev via loader.conf.


If I assign one of the ppt VF devices from the pciconf -lv output to a FreeBSD VM, it shows up fine in the VM once I
load if_cxgbev_load="YES" in the VM's loader.conf.

But if a device is assigned to a Linux VM I get errors, as seen in https://forums.freebsd.org/threads/sr-iov-chelsio-error-in-guest.70653/

So I at least think I got it all right on the host side. I am not a FreeBSD pro, so please enlighten me if I got something wrong.
 
I found this in the Intel source and enabled it yesterday. After that I talked to the maintainer of the port (Sergey Kozlov), who listened to my request and added it, so the port can be used instead. Thanks Sergey! So yes, I knew this :)
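For anyone else who wants the port route now that the option is in, it is the standard ports procedure (a sketch; tick the SR-IOV option in the dialog):
Code:
cd /usr/ports/net/intel-ix-kmod
make config
make install clean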
 
The Chelsio card does not support setting the MAC
Word up. Same result here. No mac-addr in config on Chelsios.

AOC-STGN-i2S (Intel 82599ES SFP+) card.
I have this exact card, I believe. I am testing it out tonight.
Frustrated with Chelsio: 16 VFs, but only 4 are actually assignable to a VM.
Code:
May 7 21:27:46 freebsd2 kernel: command 0x3 in mailbox 0 timed out
May 7 21:27:46 freebsd2 kernel: mbox: 0320000000000001 0000000000000000 40c77182ffffffff 7dd36382ffffffff ce1a3b63bdecf566 5900000000000000 f0c67182ffffffff 4000000000000000
May 7 21:27:46 freebsd2 kernel: t4vf1: encountered fatal error, adapter stopped.
Anything over 4 VFs rolls over from t4vf0 to the next device, t4vf1, for the next set of 4 VFs.
They all fail with the above warning. So 4 VFs max??

So I had this $40 Intel OEM card and figured I would use my newfound knowledge.
So far it has worked exactly as it should. I have 64 VFs available too.
No silly ppt device passthrough trick needed.
I made an iovctl configuration file for ix0 and rebooted. pciconf is populated with the 16 VFs I assigned for testing.
So far so good.
Code:
ix0@pci0:2:0:0:    class=0x020000 card=0xffffffff chip=0x10fb8086 rev=0x01 hdr=0x00
    vendor     = 'Intel Corporation'
    device     = '82599ES 10-Gigabit SFI/SFP+ Network Connection'
    class      = network
    subclass   = ethernet
    bar   [10] = type Memory, range 64, base 0xfb280000, size 524288, enabled
    bar   [18] = type I/O Port, range 32, base 0xe020, size 32, enabled
    bar   [20] = type Memory, range 64, base 0xfb304000, size 16384, enabled
    cap 01[40] = powerspec 3  supports D0 D3  current D0
    cap 05[50] = MSI supports 1 message, 64 bit, vector masks
    cap 11[70] = MSI-X supports 64 messages, enabled
                 Table in map 0x20[0x0], PBA in map 0x20[0x2000]
    cap 10[a0] = PCI-Express 2 endpoint max data 256(512) FLR NS
                 link x4(x8) speed 5.0(5.0) ASPM disabled(L0s)
    cap 03[e0] = VPD
    ecap 0001[100] = AER 1 0 fatal 1 non-fatal 1 corrected
    ecap 0003[140] = Serial 1 000babfffff190f2
    ecap 000e[150] = ARI 1
    ecap 0010[160] = SR-IOV 1 IOV enabled, Memory Space enabled, ARI enabled
                     16 VFs configured out of 64 supported
                     First VF RID Offset 0x0080, VF RID Stride 0x0002
                     VF Device ID 0x10ed
                     Page Sizes: 4096 (enabled), 8192, 65536, 262144, 1048576, 4
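
For reference, the iovctl file for ix0 was along these lines (a sketch modeled on the ix3 example earlier in the thread, with num_vfs matching the 16 VFs shown above):
Code:
PF {
        device : "ix0";
        num_vfs : 16;
}
DEFAULT {
        passthrough : true;
}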
 
I have had great luck with that new driver. The problem I have is that I cannot get an IP with DHCP.
If I enable my em0 passthru network card it works fine.
Something is not getting passed through right. No firewall in use. Internal network.
When I configure a static IP it goes nowhere.
Code:
root@freebsd1:~ # ifconfig ixv0
ixv0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500 options=e507bb<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,JUMBO_MTU,VLAN_HWCSUM,TSO4,TSO6,LRO,VLAN_HWFILTER,VLAN_HWTSO,RXCSUM_IPV6,TXCSUM_IPV6>
        ether 9a:45:bf:9b:0c:aa
        inet 192.168.1.60 netmask 0xffffff00 broadcast 192.168.1.255
        media: Ethernet autoselect (10Gbase-T <full-duplex>)
        status: active
        nd6 options=29<PERFORMNUD,IFDISABLED,AUTO_LINKLOCAL>

root@freebsd1:~ # ping 192.168.1.1
PING 192.168.1.1 (192.168.1.1): 56 data bytes
ping: sendto: Host is down
ping: sendto: Host is down
^C
root@freebsd1:~ # netstat -rn
Routing tables

Internet:
Destination        Gateway            Flags     Netif Expire
default            192.168.1.1        UGS        ixv0
127.0.0.1          link#5             UH          lo0
192.168.1.0/24     link#1             U          ixv0
192.168.1.60       link#1             UHS         lo0
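
One thing still on my list to try (just a guess, not a verified fix): pinning a fixed MAC on the VF with the mac-addr VF parameter, so the guest's address stays stable across boots instead of being randomly assigned:
Code:
VF-0 {
        passthrough : true;
        mac-addr : "02:00:00:00:00:60";   # example locally-administered address
}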

I do have 24 VFs across 4 VMs.
The Intel card does allow SR-IOV on both of its interfaces.
So 128 VFs are possible.
 
I don't know if I should make a new post, but my issue is exactly the same as this topic discusses, so I put it here.
I cannot make SR-IOV work on an Intel 82599ES NIC.
I will summarize all the needed information related to this issue:
- According to this, SR-IOV is supported on such NICs.
- The system everything is running on is "FreeBSD 12.2-RELEASE-p1 GENERIC amd64"
- pciconf -lvc output:
Code:
ix1@pci0:5:0:1: class=0x020000 card=0x000c8086 chip=0x10fb8086 rev=0x01 hdr=0x00
    vendor     = 'Intel Corporation'
    device     = '82599ES 10-Gigabit SFI/SFP+ Network Connection'
    class      = network
    subclass   = ethernet
    cap 01[40] = powerspec 3  supports D0 D3  current D0
    cap 05[50] = MSI supports 1 message, 64 bit, vector masks
    cap 11[70] = MSI-X supports 64 messages, enabled
                 Table in map 0x20[0x0], PBA in map 0x20[0x2000]
    cap 10[a0] = PCI-Express 2 endpoint max data 128(512) FLR NS
                 link x4(x8) speed 5.0(5.0) ASPM disabled(L0s)
    cap 03[e0] = VPD
    ecap 0001[100] = AER 1 0 fatal 0 non-fatal 1 corrected
    ecap 0003[140] = Serial 1 00e0edffff9eba54
    ecap 000e[150] = ARI 1
    ecap 0010[160] = SR-IOV 1 IOV disabled, Memory Space disabled, ARI disabled
                     0 VFs configured out of 64 supported
                     First VF RID Offset 0x0180, VF RID Stride 0x0002
                     VF Device ID 0x10ed
                     Page Sizes: 4096 (enabled), 8192, 65536, 262144, 1048576, 4194304
- Motherboard Supermicro X9SCM-F with the latest BIOS; CPU: Intel(R) Xeon(R) CPU E3-1220 V2 @ 3.10GHz (3100.09-MHz K8-class CPU); VT-x enabled, VT-d enabled.

I have tried the net/intel-ix-kmod driver from ports (with the SR-IOV option enabled, of course). The driver works fine for the PF, but pciconf shows "SR-IOV 1 IOV disabled". I can, however, read the IOV configuration schema from the device:
Code:
root@gate:~ # cat /etc/iov/ix0.conf
PF {
    device: "ix0";
    num_vfs: 4;
}

DEFAULT {
    passthrough: false;
}
Code:
root@gate:~ # iovctl -S -f /etc/iov/ix0.conf
The following configuration parameters may be configured on the PF:
        num_vfs : uint16_t (required)
        device : string (required)

The following configuration parameters may be configured on a VF:
        passthrough : bool (default = false)
        mac-addr : unicast-mac (optional)
        mac-anti-spoof : bool (default = true)
        allow-set-mac : bool (default = false)
        allow-promisc : bool (default = false)
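Those VF parameters go into per-VF sections of the config file; e.g. a sketch with example values:
Code:
VF-0 {
        passthrough : false;
        mac-addr : "02:00:00:00:00:01";   # example unicast MAC
        allow-set-mac : true;
        allow-promisc : true;
}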
But when enabling SR-IOV I get an error:
Code:
root@gate:~ # iovctl -C -f /etc/iov/ix0.conf
iovctl: Failed to configure SR-IOV: No space left on device

Then I downloaded the latest FreeBSD ix driver source directly from the Intel site, as well as ixv. As stated, it is newer than the one in the intel-ix-kmod package (3.3.18 vs 3.3.14). I enabled SR-IOV in the makefile and built it (fixing a couple of minor compile errors in the process), moved the new binaries to /boot/kernel, renaming them to if_ix_updated.ko and if_ixv_updated.ko, and placed the following in loader.conf:
Code:
if_ix_updated_load="YES"
if_ixv_updated_load="YES"
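Roughly, a build like that goes as follows (a sketch only; the directory name and the PCI_IOV define that guards the SR-IOV code paths are assumptions about Intel's source drop, not verified against their Makefile):
Code:
# in the unpacked driver source
cd ixgbe-3.3.18/src
# enable SR-IOV: uncomment/add the PCI_IOV define in the Makefile, then
make
cp if_ix.ko /boot/kernel/if_ix_updated.ko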

And got nothing. I mean nothing new. The same "No space left on device". I can confirm that the new driver is loaded:
Code:
...
ix0: <Intel(R) PRO/10GbE PCI-Express Network Driver, Version - 3.3.18> port 0xe020-0xe03f mem 0xdd080000-0xdd0fffff,0xdd504000-0xdd507fff irq 19 at device 0.0 on pci3
ix0: Using MSI-X interrupts with 5 vectors
ix0: Ethernet address: 00:e0:ed:9e:ba:54
ix0: PCI Express Bus: Speed 5.0GT/s Width x4
ix1: <Intel(R) PRO/10GbE PCI-Express Network Driver, Version - 3.3.18> port 0xe000-0xe01f mem 0xdd000000-0xdd07ffff,0xdd500000-0xdd503fff irq 16 at device 0.1 on pci3
ix1: Using MSI-X interrupts with 5 vectors
ix1: Ethernet address: 00:e0:ed:9e:ba:55
ix1: PCI Express Bus: Speed 5.0GT/s Width x4
...

What is strange is that pciconf shows configured VFs after the iovctl -C run:
Code:
ix0@pci0:5:0:0: class=0x020000 card=0x000c8086 chip=0x10fb8086 rev=0x01 hdr=0x00
    vendor     = 'Intel Corporation'
    device     = '82599ES 10-Gigabit SFI/SFP+ Network Connection'
    class      = network
    subclass   = ethernet
    cap 01[40] = powerspec 3  supports D0 D3  current D0
    cap 05[50] = MSI supports 1 message, 64 bit, vector masks
    cap 11[70] = MSI-X supports 64 messages, enabled
                 Table in map 0x20[0x0], PBA in map 0x20[0x2000]
    cap 10[a0] = PCI-Express 2 endpoint max data 128(512) FLR NS
                 link x4(x8) speed 5.0(5.0) ASPM disabled(L0s)
    cap 03[e0] = VPD
    ecap 0001[100] = AER 1 0 fatal 0 non-fatal 1 corrected
    ecap 0003[140] = Serial 1 00e0edffff9eba54
    ecap 000e[150] = ARI 1
    ecap 0010[160] = SR-IOV 1 IOV disabled, Memory Space disabled, ARI disabled
                     4 VFs configured out of 64 supported
                     First VF RID Offset 0x0180, VF RID Stride 0x0002
                     VF Device ID 0x10ed
                     Page Sizes: 4096 (enabled), 8192, 65536, 262144, 1048576, 4194304

What else can I try? Thanks in advance.
 