Hyper-V: Intel X710-T2L SR-IOV support in a Hyper-V guest VM (FreeBSD 12.4 and 13.2)

I have an Intel X710-T2L installed in my Windows server, which runs FreeBSD in a Hyper-V VM. SR-IOV is enabled on both the LAN and WAN virtual switches. If I configure the VM to enable SR-IOV, I get the following:

Code:
pcib0: <Hyper-V PCI Express Pass Through> on vmbus0
pci0: <PCI bus> on pcib0
pci0: <network, ethernet> at device 2.0 (no driver attached)

...and I have no network access. Disabling SR-IOV restores the network functionality.

I did a 'make' and 'make install' of Intel's iavf FreeBSD Virtual Function Driver, version 3.0.31, under both FreeBSD 12.4 and 13.2; both failed in the same manner.

I'm wondering if I'm doing something wrong. Is there a way to get SR-IOV working with this NIC with a FreeBSD guest?

Thanks!
 
I can't test this setup myself, so I can only offer hints. Why did you have to compile it in the first place? I do see iavf(4) in the base system. Did you verify the module was loaded? It actually appears to be built into GENERIC: kldstat -v | grep iavf.
What does pciconf -lv say about that device?
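The two checks suggested above can be run as follows (a sketch; the exact output will vary by system):

```shell
# Verify whether iavf is present, either compiled into the kernel
# or loaded as a separate module
kldstat -v | grep iavf

# List PCI devices with vendor/device info; a leading "none0@..." entry
# means no driver attached to that device
pciconf -lv
```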
 
I compiled the Intel driver because I thought the built-in driver wasn't working.

This is the output of pciconf -lv run on the FreeBSD 13.2 VM while SR-IOV is enabled in the virtual NIC:

Code:
none0@pci1:0:2:0:       class=0x020000 rev=0x02 hdr=0x00 vendor=0x8086 device=0x1571 subvendor=0x8086 subdevice=0x0001
    vendor     = 'Intel Corporation'
    device     = 'Ethernet Virtual Function 700 Series'
    class      = network
    subclass   = ethernet

I just installed it fresh, and did not compile the Intel iavf driver.

When SR-IOV is enabled, it looks like I have access to the WAN (I can download using pkg and ping external sites), but can't ping hosts on my LAN, nor can I connect to the VM with SSH.

Here is the output of ifconfig:

Code:
lo0: flags=8049<UP,LOOPBACK,RUNNING,MULTICAST> metric 0 mtu 16384
        options=680003<RXCSUM,TXCSUM,LINKSTATE,RXCSUM_IPV6,TXCSUM_IPV6>
        inet6 ::1 prefixlen 128
        inet6 fe80::1%lo0 prefixlen 64 scopeid 0x1
        inet 127.0.0.1 netmask 0xff000000
        groups: lo
        nd6 options=21<PERFORMNUD,AUTO_LINKLOCAL>
hn0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
        options=8051b<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,TSO4,LRO,LINKSTATE>
        ether 00:15:5d:01:0e:08
        inet6 fe80::215:5dff:fe01:e08%hn0 prefixlen 64 scopeid 0x2
        inet 192.168.1.18 netmask 0xffffff00 broadcast 192.168.1.255
        media: Ethernet autoselect (10Gbase-T <full-duplex>)
        status: active
        nd6 options=23<PERFORMNUD,ACCEPT_RTADV,AUTO_LINKLOCAL>
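One way to narrow down a "WAN works but LAN doesn't" failure is to check routing and ARP state from the guest. A hedged sketch; the gateway address 192.168.1.1 is an assumption based on the 192.168.1.0/24 subnet shown in the ifconfig output above:

```shell
# Is there a connected route for the local subnet on the expected interface?
netstat -rn

# Do LAN hosts get ARP entries, or do they stay incomplete?
# Incomplete entries suggest layer-2 traffic isn't passing through the VF.
arp -an

# Can the (assumed) gateway be reached directly?
ping -c 3 192.168.1.1
```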
 
I compiled the Intel driver because I thought the built-in driver wasn't working.
If the module is built into the kernel, you won't be able to load any other version of it (it will complain that the module is already loaded).
It's a bit strange that it shows no driver attached even though the device is created.

Any chance the Windows host firewall is blocking traffic?
 
I just tried compiling a custom 13.2-RELEASE kernel, commenting out the iavf driver in GENERIC and copying that config to MYKERNEL. I then built it using:

Code:
# run from /usr/src
sudo make -j8 buildkernel KERNCONF=MYKERNEL WITHOUT_MODULES=iavf
sudo make -j8 installkernel KERNCONF=MYKERNEL WITHOUT_MODULES=iavf

I could then properly install the Intel iavf driver after building it. After confirming that the compiled module was loaded on the next reboot, I was still getting 'no driver attached' in the console with SR-IOV enabled for the virtual NIC. So it looks like the Intel virtual function driver doesn't work either.
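For completeness, after installing the out-of-tree driver the module still has to be loaded at boot. A typical /boot/loader.conf fragment looks like this; the module name if_iavf is an assumption about how Intel's driver installs itself, so check /boot/modules after 'make install':

```shell
# /boot/loader.conf -- load the separately built iavf module at boot
# (assumes Intel's driver installed as /boot/modules/if_iavf.ko)
if_iavf_load="YES"
```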
 