Using AMD CPU, PF driver gives error when loading with more than 62 VFs

I am running on a Supermicro server with two AMD EPYC 9374F 32-core processors (amd64 architecture). I have written a driver for my own PCIe device in an SR-IOV environment; the device supports up to 64 VFs.
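For context, the PF driver hooks into the kernel's pci_iov(4) framework in the usual way. The sketch below is only illustrative: the DRE_drv* names come from my dmesg output, the schemas are left empty, and the method table is trimmed to just the SR-IOV entries.
Code:
/*
 * Minimal sketch of the PF driver's SR-IOV hookup. The DRE_drv* names
 * are from the dmesg output below; everything else is illustrative.
 */
#include <sys/param.h>
#include <sys/bus.h>
#include <sys/nv.h>
#include <dev/pci/pcivar.h>
#include <dev/pci/pci_iov.h>
#include "pci_iov_if.h"

static int
DRE_drvIovInit(device_t dev, uint16_t num_vfs, const nvlist_t *config)
{
	device_printf(dev, "DRE_drvIovInit: Called with num_vfs %u.\n", num_vfs);
	/* Prepare per-VF state and enable VF resources in the device here. */
	return (0);
}

static int
DRE_drvIovAddVf(device_t dev, uint16_t vfnum, const nvlist_t *config)
{
	device_printf(dev, "DRE_drvIovAddVf: Called for vfnum %u.\n", vfnum);
	return (0);
}

static void
DRE_drvIovUnInit(device_t dev)
{
	device_printf(dev, "DRE_drvIovUnInit: Called.\n");
}

static device_method_t dre_drv_methods[] = {
	/* ...usual probe/attach/detach methods... */
	DEVMETHOD(pci_iov_init,   DRE_drvIovInit),
	DEVMETHOD(pci_iov_add_vf, DRE_drvIovAddVf),
	DEVMETHOD(pci_iov_uninit, DRE_drvIovUnInit),
	DEVMETHOD_END
};

/* Called from the PF attach routine to advertise SR-IOV support. */
static int
dre_drv_iov_attach(device_t dev)
{
	nvlist_t *pf_schema, *vf_schema;

	pf_schema = pci_iov_schema_alloc_node();
	vf_schema = pci_iov_schema_alloc_node();
	/* Driver-specific PF/VF parameters would be added to the schemas here. */
	return (pci_iov_attach(dev, pf_schema, vf_schema));
}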
I have set num_vfs to 62; my iovctl.conf is below:
Code:
PF {
    device : "dre_drv0";
    num_vfs : 62;
} 
DEFAULT {
    passthrough : true;
}
After this, sudo iovctl -C -f /etc/iovctl.conf loads the PF driver successfully. The dmesg output is below:
Code:
dre_drv0: DRE_drvIovInit: Called with num_vfs 62.
dre_drv0: DRE_drvIovAddVf: Called for vfnum 0.
dre_drv0: DRE_drvIovAddVf: Called for vfnum 1.
dre_drv0: DRE_drvIovAddVf: Called for vfnum 2.
dre_drv0: DRE_drvIovAddVf: Called for vfnum 3.
dre_drv0: DRE_drvIovAddVf: Called for vfnum 4.
dre_drv0: DRE_drvIovAddVf: Called for vfnum 5.
dre_drv0: DRE_drvIovAddVf: Called for vfnum 6.
dre_drv0: DRE_drvIovAddVf: Called for vfnum 7.
dre_drv0: DRE_drvIovAddVf: Called for vfnum 8.
dre_drv0: DRE_drvIovAddVf: Called for vfnum 9.
dre_drv0: DRE_drvIovAddVf: Called for vfnum 10.
dre_drv0: DRE_drvIovAddVf: Called for vfnum 11.
dre_drv0: DRE_drvIovAddVf: Called for vfnum 12.
dre_drv0: DRE_drvIovAddVf: Called for vfnum 13.
dre_drv0: DRE_drvIovAddVf: Called for vfnum 14.
dre_drv0: DRE_drvIovAddVf: Called for vfnum 15.
dre_drv0: DRE_drvIovAddVf: Called for vfnum 16.
dre_drv0: DRE_drvIovAddVf: Called for vfnum 17.
dre_drv0: DRE_drvIovAddVf: Called for vfnum 18.
dre_drv0: DRE_drvIovAddVf: Called for vfnum 19.
dre_drv0: DRE_drvIovAddVf: Called for vfnum 20.
dre_drv0: DRE_drvIovAddVf: Called for vfnum 21.
dre_drv0: DRE_drvIovAddVf: Called for vfnum 22.
dre_drv0: DRE_drvIovAddVf: Called for vfnum 23.
dre_drv0: DRE_drvIovAddVf: Called for vfnum 24.
dre_drv0: DRE_drvIovAddVf: Called for vfnum 25.
dre_drv0: DRE_drvIovAddVf: Called for vfnum 26.
dre_drv0: DRE_drvIovAddVf: Called for vfnum 27.
dre_drv0: DRE_drvIovAddVf: Called for vfnum 28.
dre_drv0: DRE_drvIovAddVf: Called for vfnum 29.
dre_drv0: DRE_drvIovAddVf: Called for vfnum 30.
dre_drv0: DRE_drvIovAddVf: Called for vfnum 31.
dre_drv0: DRE_drvIovAddVf: Called for vfnum 32.
dre_drv0: DRE_drvIovAddVf: Called for vfnum 33.
dre_drv0: DRE_drvIovAddVf: Called for vfnum 34.
dre_drv0: DRE_drvIovAddVf: Called for vfnum 35.
dre_drv0: DRE_drvIovAddVf: Called for vfnum 36.
dre_drv0: DRE_drvIovAddVf: Called for vfnum 37.
dre_drv0: DRE_drvIovAddVf: Called for vfnum 38.
dre_drv0: DRE_drvIovAddVf: Called for vfnum 39.
dre_drv0: DRE_drvIovAddVf: Called for vfnum 40.
dre_drv0: DRE_drvIovAddVf: Called for vfnum 41.
dre_drv0: DRE_drvIovAddVf: Called for vfnum 42.
dre_drv0: DRE_drvIovAddVf: Called for vfnum 43.
dre_drv0: DRE_drvIovAddVf: Called for vfnum 44.
dre_drv0: DRE_drvIovAddVf: Called for vfnum 45.
dre_drv0: DRE_drvIovAddVf: Called for vfnum 46.
dre_drv0: DRE_drvIovAddVf: Called for vfnum 47.
dre_drv0: DRE_drvIovAddVf: Called for vfnum 48.
dre_drv0: DRE_drvIovAddVf: Called for vfnum 49.
dre_drv0: DRE_drvIovAddVf: Called for vfnum 50.
dre_drv0: DRE_drvIovAddVf: Called for vfnum 51.
dre_drv0: DRE_drvIovAddVf: Called for vfnum 52.
dre_drv0: DRE_drvIovAddVf: Called for vfnum 53.
dre_drv0: DRE_drvIovAddVf: Called for vfnum 54.
dre_drv0: DRE_drvIovAddVf: Called for vfnum 55.
dre_drv0: DRE_drvIovAddVf: Called for vfnum 56.
dre_drv0: DRE_drvIovAddVf: Called for vfnum 57.
dre_drv0: DRE_drvIovAddVf: Called for vfnum 58.
dre_drv0: DRE_drvIovAddVf: Called for vfnum 59.
dre_drv0: DRE_drvIovAddVf: Called for vfnum 60.
dre_drv0: DRE_drvIovAddVf: Called for vfnum 61.
ppt0 at device 0.128 numa-domain 0 on pci9
ppt1 at device 0.129 numa-domain 0 on pci9
ppt2 at device 0.130 numa-domain 0 on pci9
ppt3 at device 0.131 numa-domain 0 on pci9
ppt4 at device 0.132 numa-domain 0 on pci9
ppt5 at device 0.133 numa-domain 0 on pci9
ppt6 at device 0.134 numa-domain 0 on pci9
ppt7 at device 0.135 numa-domain 0 on pci9
ppt8 at device 0.136 numa-domain 0 on pci9
ppt9 at device 0.137 numa-domain 0 on pci9
ppt10 at device 0.138 numa-domain 0 on pci9
ppt11 at device 0.139 numa-domain 0 on pci9
ppt12 at device 0.140 numa-domain 0 on pci9
ppt13 at device 0.141 numa-domain 0 on pci9
ppt14 at device 0.142 numa-domain 0 on pci9
ppt15 at device 0.143 numa-domain 0 on pci9
ppt16 at device 0.144 numa-domain 0 on pci9
ppt17 at device 0.145 numa-domain 0 on pci9
ppt18 at device 0.146 numa-domain 0 on pci9
ppt19 at device 0.147 numa-domain 0 on pci9
ppt20 at device 0.148 numa-domain 0 on pci9
ppt21 at device 0.149 numa-domain 0 on pci9
ppt22 at device 0.150 numa-domain 0 on pci9
ppt23 at device 0.151 numa-domain 0 on pci9
ppt24 at device 0.152 numa-domain 0 on pci9
ppt25 at device 0.153 numa-domain 0 on pci9
ppt26 at device 0.154 numa-domain 0 on pci9
ppt27 at device 0.155 numa-domain 0 on pci9
ppt28 at device 0.156 numa-domain 0 on pci9
ppt29 at device 0.157 numa-domain 0 on pci9
ppt30 at device 0.158 numa-domain 0 on pci9
ppt31 at device 0.159 numa-domain 0 on pci9
ppt32 at device 0.160 numa-domain 0 on pci9
ppt33 at device 0.161 numa-domain 0 on pci9
ppt34 at device 0.162 numa-domain 0 on pci9
ppt35 at device 0.163 numa-domain 0 on pci9
ppt36 at device 0.164 numa-domain 0 on pci9
ppt37 at device 0.165 numa-domain 0 on pci9
ppt38 at device 0.166 numa-domain 0 on pci9
ppt39 at device 0.167 numa-domain 0 on pci9
ppt40 at device 0.168 numa-domain 0 on pci9
ppt41 at device 0.169 numa-domain 0 on pci9
ppt42 at device 0.170 numa-domain 0 on pci9
ppt43 at device 0.171 numa-domain 0 on pci9
ppt44 at device 0.172 numa-domain 0 on pci9
ppt45 at device 0.173 numa-domain 0 on pci9
ppt46 at device 0.174 numa-domain 0 on pci9
ppt47 at device 0.175 numa-domain 0 on pci9
ppt48 at device 0.176 numa-domain 0 on pci9
ppt49 at device 0.177 numa-domain 0 on pci9
ppt50 at device 0.178 numa-domain 0 on pci9
ppt51 at device 0.179 numa-domain 0 on pci9
ppt52 at device 0.180 numa-domain 0 on pci9
ppt53 at device 0.181 numa-domain 0 on pci9
ppt54 at device 0.182 numa-domain 0 on pci9
ppt55 at device 0.183 numa-domain 0 on pci9
ppt56 at device 0.184 numa-domain 0 on pci9
ppt57 at device 0.185 numa-domain 0 on pci9
ppt58 at device 0.186 numa-domain 0 on pci9
ppt59 at device 0.187 numa-domain 0 on pci9
ppt60 at device 0.188 numa-domain 0 on pci9
ppt61 at device 0.189 numa-domain 0 on pci9
I then changed num_vfs in iovctl.conf to 64. But after I execute sudo iovctl -C -f /etc/iovctl.conf, it fails with the error message below:
Code:
dre_drv0: DRE_drvIovInit: Called with num_vfs 64.
dre_drv0: 0x2000000 bytes of rid 0x264 res 3 failed (0, 0xffffffffffffffff).
dre_drv0: DRE_drvIovUnInit: Called.
It seems like some memory resource cannot be allocated.
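For what it is worth, here is my back-of-the-envelope reading of the failing request, under two assumptions: res 3 is SYS_RES_MEMORY, and rid 0x264 is the first VF BAR (SR-IOV capability offset plus PCIR_SRIOV_BAR(0), which would put the capability at config offset 0x240 on this device). If the kernel is asking for one contiguous window covering the BAR copies of all VFs, the numbers work out like this:
Code:
/* Rough arithmetic on the failing VF BAR window request.
 * Assumption: the 0x2000000-byte request is per-VF BAR size * num_vfs. */
#include <stdio.h>

int
main(void)
{
	const unsigned long request = 0x2000000UL;	/* 32 MiB, the failing size */
	const unsigned long per_vf = request / 64;	/* implied per-VF BAR size */

	printf("per-VF BAR size:   %#lx (%lu KiB)\n", per_vf, per_vf / 1024);
	printf("window for 62 VFs: %#lx (%lu MiB)\n",
	    62 * per_vf, 62 * per_vf / (1024 * 1024));
	printf("window for 64 VFs: %#lx (%lu MiB)\n",
	    64 * per_vf, 64 * per_vf / (1024 * 1024));
	return (0);
}
So going from 62 to 64 VFs only needs one extra MiB of contiguous PCI memory space, which makes me suspect the MMIO window available behind the PF's bridge on the AMD systems is just slightly too small for the full 64-VF window, while the Intel system has more room; I have not confirmed that yet.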

I am using the same driver code on another server with an Intel CPU, and everything works fine there: the PF driver loads with 64 VFs on FreeBSD 14.

Can you help me figure out why it is unable to load with 64 VFs on the AMD CPUs? (I have also tried on an AMD 9274F.)
 
If you want a developer or two to look at this, the relevant mailing list(s) are a better bet than this forum. It is mostly users hanging out here, though the small number of devs is growing. Slowly, but still growing, and I think that is a good thing.
 