FreeBSD Thunderbolt 3 support for networking, RAID arrays, and GPUs

Does FreeBSD support Thunderbolt 3 networking yet? Can I connect my MacBook Pro to a FreeBSD server over a Thunderbolt 3 link and get things like an IP address, so I can SSH or HTTP to it, or mount NFS shares from it?
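
To be concrete about what I'm hoping for: if the Thunderbolt link ever shows up as an ordinary network interface, I'd expect the FreeBSD side to be configured like any other NIC. A minimal sketch, assuming a purely hypothetical interface name tbnet0 (the rest is stock ifconfig usage):

Code:
# hypothetical: assumes the TB3 link enumerates as a network interface, here called tbnet0
ifconfig tbnet0 inet 10.0.99.1/24 up
# from there the usual sshd/NFS setup on the FreeBSD side applies,
# and the Mac end would just ssh to or mount from 10.0.99.1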

Also, can the FreeBSD system connect to an external Thunderbolt 3 RAID enclosure, like https://eshop.macsales.com/shop/thunderbay-4/thunderbolt-3-raid-5, and still use ZFS with the drives in the enclosure? I'm not sure what kind of SATA adapters are inside it...
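
Ideally the enclosure could be put into a pass-through/JBOD mode so FreeBSD sees the individual disks (probably as da or ada devices, depending on the bridge chips inside) and ZFS can manage them directly. A rough sketch of what I'd hope works, with the device names purely an assumption:

Code:
# assumes the enclosure exposes the four disks individually, e.g. as da0-da3
camcontrol devlist
zpool create tank raidz1 da0 da1 da2 da3
zpool status tank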

And finally, can we use Thunderbolt 3 eGPU enclosures as well?
 
This is an old thread, but I'm wondering if there has been any progress here. I have Intel NUCs (NUC7i7BNH and NUC10i7FNH) with Thunderbolt 3 ports, and I'm considering getting an external Thunderbolt-to-PCIe enclosure for a 10Gbit network card and a SAS adapter card. The cards themselves are supported by FreeBSD, but I'm not sure whether they will work via TB3. Anyone know?
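
My plan, if I try it, would be to just plug the enclosure in and check whether the cards enumerate behind the Thunderbolt PCI bridges and whether the expected drivers attach. Something like the following, where the driver names are only my guess for the cards I have in mind (ix for Intel 10Gbit, mpr/mps for LSI SAS):

Code:
# does anything network- or SAS-like show up on the PCI bus behind the TB3 bridges?
pciconf -lv | grep -B4 -i network
pciconf -lv | grep -B4 -i sas
# did the expected drivers attach?
dmesg | grep -E '^(ix|mpr|mps)[0-9]'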
 
Search for it. If you can't find any reports that it is working, it is not working yet (and you will be the guinea pig if you try it).
 
I'd like to report that Thunderbolt is working here with external storage and an external monitor.

I have an OWC Express 4M2 external enclosure connected to a GIGABYTE BRIX PRO GB-BSi5-1135G7-BWUS.A. The enclosure provides a DisplayPort output for a monitor and hosts a ZFS pool used for /usr/obj builds; it is populated with four (4) Intel 670p 1TB drives.

Storage and the external monitor worked on 13.2 and are still working on 14.0-RELEASE-p3.

Latest success on:
Code:
FreeBSD host.network.net 14.0-RELEASE-p3 FreeBSD 14.0-RELEASE-p3 #4 20fae1e16:

dmesg:
Code:
pcib10: <ACPI PCI-PCI bridge> at device 7.2 on pci0
pci10: <ACPI PCI bus> on pcib10
pcib11: <ACPI PCI-PCI bridge> at device 7.3 on pci0
pci11: <ACPI PCI bus> on pcib11
xhci0: <Intel Tiger Lake-LP Thunderbolt 4 USB controller> mem 0x607f2b0000-0x607f2bffff at device 13.0 on pci0
xhci0: 32 bytes context size, 64-bit DMA
usbus0 on xhci0
usbus0: 5.0Gbps Super Speed USB v3.0
pci0: <serial bus, USB> at device 13.2 (no driver attached)
pci0: <serial bus, USB> at device 13.3 (no driver attached)
pcib12: <Intel Volume Management Device> mem 0x607c000000-0x607dffffff,0x50000000-0x51ffffff,0x607f100000-0x607f1fffff at device 14.0 on pci0
pci12: <PCI bus> on pcib12
xhci1: <Intel Tiger Lake-LP USB 3.2 controller> mem 0x607f2a0000-0x607f2affff at device 20.0 on pci0
xhci1: 32 bytes context size, 64-bit DMA
usbus1 on xhci1
usbus1: 5.0Gbps Super Speed USB v3.0

dmesg of storage devices:
Code:
nda0 at nvme0 bus 0 scbus2 target 0 lun 1
nda0: <INTEL SSDPEKNU020TZ 002C PHKA303200######>
nda0: Serial Number PHKA303200######
nda0: nvme version 1.4
nda0: 1953514MB (4000797360 512 byte sectors)
nda1 at nvme1 bus 0 scbus3 target 0 lun 1
nda1: <INTEL SSDPEKNU010TZ 002C PHKA22200######>
nda1: Serial Number PHKA22200######
nda1: nvme version 1.4
nda1: 976762MB (2000409264 512 byte sectors)
nda2 at nvme2 bus 0 scbus4 target 0 lun 1
nda2: <INTEL SSDPEKNU010TZ 002C PHKA223000######>
nda2: Serial Number PHKA223000######
nda2: nvme version 1.4
nda2: 976762MB (2000409264 512 byte sectors)
nda3 at nvme3 bus 0 scbus5 target 0 lun 1
nda3: <INTEL SSDPEKNU010TZ 002C PHKA22220######>
nda3: Serial Number PHKA22220######
nda3: nvme version 1.4
nda3: 976762MB (2000409264 512 byte sectors)
nda4 at nvme4 bus 0 scbus6 target 0 lun 1
nda4: <INTEL SSDPEKNU010TZ 002C PHKA22300######>
nda4: Serial Number PHKA22300######
nda4: nvme version 1.4
nda4: 976762MB (2000409264 512 byte sectors)
...
GEOM_ELI: Device nda0p4.eli created.
GEOM_ELI: Encryption: AES-XTS 256
GEOM_ELI:     Crypto: accelerated software
Trying to mount root from zfs:zroot/ROOT/default []...


Let me know if you need the complete dmesg or more hardware information. I never had to troubleshoot device detection, as long as the enclosure was powered on before FreeBSD booted.
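
For anyone who wants to check detection on their own box, the commands I'd reach for are nothing Thunderbolt-specific, just the usual device listings:

Code:
# NVMe controllers/namespaces the kernel sees (the enclosure drives show up here)
nvmecontrol devlist
# CAM view of the same disks (nda0-nda4 in my case)
camcontrol devlist
# PCI topology, including the Thunderbolt/USB4 bridges
pciconf -lv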
 
How are you using the four NVMe drives? (UFS/ZFS/VMs) How does the throughput look? My 2TB 670p M.2 drive is quick.
I just bought six M.2 PM983a drives for some ZFS experimentation.
I see zroot, so we know you are using ZFS.
A pair of mirrors or one big vdev?
 
I have one 2TB 670p, which got a fresh 13.2-BETA ZFS-on-root install. Then I repurposed the OWC Express 4M2 from my Mac environment and added the four 1TB 670p drives, since SSDs were really cheap earlier in 2023. Currently the external enclosure is one RAID-Z1 pool.

Code:
# zpool status
  pool: scratch
 state: ONLINE
config:

        NAME        STATE     READ WRITE CKSUM
        scratch     ONLINE       0     0     0
          raidz1-0  ONLINE       0     0     0
            nda1    ONLINE       0     0     0
            nda2    ONLINE       0     0     0
            nda3    ONLINE       0     0     0
            nda4    ONLINE       0     0     0

errors: No known data errors

  pool: zroot
 state: ONLINE
config:

        NAME          STATE     READ WRITE CKSUM
        zroot         ONLINE       0     0     0
          nda0p4.eli  ONLINE       0     0     0

errors: No known data errors
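
For comparison with the pair-of-mirrors layout asked about above, building the same four drives as two mirrored pairs would look roughly like this (standard zpool syntax, not something I actually ran; I went with the single raidz1 vdev shown above):

Code:
# alternative layout: two mirrored pairs (striped mirrors) instead of raidz1
zpool create scratch mirror nda1 nda2 mirror nda3 nda4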

Code:
# df -g
Filesystem         1G-blocks Used Avail Capacity  Mounted on
zroot/ROOT/default      1779   28  1750     2%    /
devfs                      0    0     0     0%    /dev
/dev/gpt/efiboot0          0    0     0     2%    /boot/efi
scratch                 2661    0  2661     0%    /scratch
zroot/var/mail          1750    0  1750     0%    /var/mail
zroot/var/log           1750    0  1750     0%    /var/log
zroot                   1750    0  1750     0%    /zroot
zroot/tmp               1750    0  1750     0%    /tmp
zroot/var/crash         1750    0  1750     0%    /var/crash
zroot/var/audit         1750    0  1750     0%    /var/audit
zroot/usr/home          1768   17  1750     1%    /usr/home
zroot/usr/ports         1751    0  1750     0%    /usr/ports
zroot/var/tmp           1750    0  1750     0%    /var/tmp
scratch/obj             2668    7  2661     0%    /scratch/obj
zroot/usr/src           1751    0  1750     0%    /usr/src

I have lots of space now for VMs, so I can set up a homelab. Creating a ZFS volume for each VM is a feature I look forward to using (see the sketch after the history output below). Here's some zpool history for the pool on the external enclosure:

Code:
# zpool history
History for 'scratch':
2023-03-12.19:34:22 zpool create scratch raidz nvd1 nvd2 nvd3 nvd4
2023-03-12.19:35:43 zfs create scratch/obj
2023-03-16.21:53:05 zpool import -c /etc/zfs/zpool.cache -a -N
...
2023-03-26.22:40:02 zfs create -V8G -o volmode=dev scratch/haikudisk0
....
2023-11-26.22:53:57 zpool import -c /etc/zfs/zpool.cache -a -N
2023-11-26.23:10:25 zpool upgrade scratch
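
The per-VM volumes I mentioned above would just be more of the same zfs create -V pattern as the haikudisk0 entry, roughly like this (size and name are placeholders; the zvol then appears under /dev/zvol/ for a bhyve VM to use as its disk):

Code:
# hypothetical per-VM volume, same pattern as the haikudisk0 line in the history
zfs create -V 32G -o volmode=dev scratch/vm-disk0
ls /dev/zvol/scratch/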

I haven't tested throughput yet, but I'm sure I won't be complaining about slow VM disk I/O.
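
If I do get around to measuring it, a quick-and-dirty check would be something like the following: a raw read test on one drive plus a streaming write through the pool. Not a rigorous benchmark, and with compression enabled the dd number will be optimistic; benchmarks/fio from ports would be the next step for anything more serious.

Code:
# raw read test on one of the enclosure drives (bypasses ZFS)
diskinfo -tv /dev/nda1
# rough streaming write through the pool
dd if=/dev/zero of=/scratch/ddtest bs=1m count=8192
rm /scratch/ddtest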
 