USB 3.1

As of today, USB 3 support is not available (AFAIK) on FreeBSD. Recently, I was looking at a USB 3.1 UVC camera that could offload much of the processor overhead via USB 3.1's DMA capability. On a medium-powered SoC/SBC, that could be quite a performance advantage. But then I stumbled upon this description, which exposes the (potentially) dark side of USB 3/3.1:

https://security.stackexchange.com/questions/118854/attacks-via-physical-access-to-usb-dma

Hopefully, when the FreeBSD developers eventually get around to fixing us up with USB 3/3.1 support, they can find a way to mitigate these pitfalls ...
 
USB 3.0 (aka USB 3.1 gen 1) works great on FreeBSD 9+, at least when it comes to mass storage. I've been using it for doing ZFS backups to an external drive on our home server for at least a year now. That server is currently running 10.3, but it started with 9.2.

You can search the source commit history for references to xhci(4) to see when the original support for USB 3.0 was added.
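For example, with a Git checkout of the FreeBSD src tree (the xhci driver currently lives under sys/dev/usb/controller/, as far as I can tell), something like this would show the earliest commits:

$ git clone https://git.freebsd.org/src.git freebsd-src
$ cd freebsd-src
# Oldest commits touching the xhci driver come first.
$ git log --reverse --oneline -- sys/dev/usb/controller/xhci.c | head -n 5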

No idea about USB 3.1 (aka USB 3.1 gen 2), as I don't have any chipsets that support that, nor devices that need that.
 
Thanks,

If that's the case, then maybe it's worth my while to look further into the new camera options I was considering, since I currently run the cameras on FreeBSD and didn't want to switch to Linux for the task. I get conflicting reports about the exact USB version needed for DMA-improved camera performance. Then there's the question of the UVC, PWC, and whatever other drivers would be needed, and how well they would play together.
 
Hello Everyone,

Does FreeBSD support the USB standards below?

USB2.0
USB3.0 / USB3.1 Gen 1
USB3.1 Gen 2

ANx
 
I cannot prove that FreeBSD is fully standard-conforming. But what I do know: I can plug USB 2.0 devices in, and they work at the higher speed. I have one USB 3.0 device (an external disk) which I can plug in, and it runs at full speed (reported 400 MB/s, and I've seen it run in excess of 100 MByte/s using dd, which is faster than USB 2.0 can manage). This is on FreeBSD 11.2, with a pretty generic Atom motherboard.
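A raw sequential read test of that sort can be done with something like the following (da0 is only an example device name; check dmesg or camcontrol devlist for your disk, and note it reads 4 GiB):

# Read 4 GiB straight from the device and let dd report the transfer rate.
$ dd if=/dev/da0 of=/dev/null bs=1M count=4096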
 

Hi SirDice,

I understand the meaning of the words in xhci(4), but, in fact, when connecting a USB 3.0 SSD: on Ubuntu 18.04/ext4 I managed to reach a write speed of 355.17 MB/s (~2840 Mb/s), while on FreeBSD 11.2/UFS2 I only managed to reach 146 MB/s (~1168 Mb/s).

The XHCI controller supports USB connection speeds up to 5.0Gbps when using a USB 3.0 compliant device.

Tested using a single huge file and rsync:

$ dd if=/dev/urandom of=hugefile.20g bs=1G count=20
$ rsync -a --progress hugefile.20g /mnt/SanDiskHD/

Does anyone know if the FS (ext4 vs UFS2) may be the critical factor here?
 
Tested using a single huge file and rsync:

$ dd if=/dev/urandom of=hugefile.20g bs=1G count=20
$ rsync -a --progress hugefile.20g /mnt/SanDiskHD/
This is a really bad way to test disk speeds.
 
The problem with using file copies as a benchmark is that your figures are skewed by things such as file system caches. You need to use a tool like benchmarks/bonnie++ with sensible parameters to get figures that are actually meaningful.
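For example, something along these lines should give usable numbers (install the port first with pkg install bonnie++; the mount point is the one from the post above, 16g is just a guess at roughly twice the machine's RAM so the cache cannot hide the real disk speed, and -u is only needed when running as root):

# -d: directory on the disk under test, -s: total file size, -r: RAM size in MiB, -u: user to run as.
$ bonnie++ -d /mnt/SanDiskHD -s 16g -r 8192 -u nobody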
 
Does anyone know if the FS (ext4 vs UFS2) may be the critical factor here?
It obviously is.

What are you trying to accomplish: Measure the speed of the USB interface? Or the read/write speed of the USB stick (which depends on the IO pattern)? Or the read/write speed of the file system on the USB stick (which depends heavily on the workload)?
 
It obviously is.

What are you trying to accomplish: Measure the speed of the USB interface? Or the read/write speed of the USB stick (which depends on the IO pattern)? Or the read/write speed of the file system on the USB stick (which depends heavily on the workload)?

I'm using an external USB disk, not a stick... In simple words, I'm trying to see how fast I can dump a large amount of data to a USB disk under FreeBSD. I do understand that I will never reach the full 5 Gbps because of losses caused by several things, but I need to know which combination of hardware/settings allows me to write as fast as possible to a USB/removable device.
 
In that case, the appropriate benchmark is actually all the way through the file system. So if your external USB disk is /dev/da0p1, and it is mounted at /mnt/ext (I simplified your example to make it easier to type), then the correct command for testing the bandwidth of one single file would be dd if=/dev/zero of=/mnt/ext/bigfile.test bs=1M count=16384, which creates a 16GiB file. Why did I copy from /dev/zero and not from a pre-existing file? Because I'm trying to test the speed of the USB port, the external disk, and the file system on the external disk, not the speed of the source file system.

Note that this test partially measures the buffer cache of the file system on the external disk: when dd finishes, part of the output file is still in the RAM cache and not on disk. I think this is a realistic test. If your real copy workload ends with an fsync() call on the external disk, then you need to write a small C or Python program that mimics that fsync, or call the fsync(1) command after running dd (and time the whole combination of dd + fsync).
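For instance, to time the write and the flush together (same paths as above; fsync(1) forces the cached data for that one file out to the disk before the timer stops):

$ time sh -c 'dd if=/dev/zero of=/mnt/ext/bigfile.test bs=1M count=16384 && fsync /mnt/ext/bigfile.test'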

I think in reality you might not dump one single huge file to the external disk, but rather multiple directories with small and medium-sized files. That's much harder to benchmark; you may simply have to time the actual execution.
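If you do time a real directory tree, one rough approach is something like this (the source path is made up; unmounting at the end, which needs root, forces everything out of the cache so the flush time is included in the measurement):

# /home/me/photos is only a placeholder for whatever tree you actually copy.
$ time sh -c 'rsync -a /home/me/photos /mnt/ext/ && umount /mnt/ext'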

Now, if it isn't fast enough, the question is: what is the bottleneck? That's much harder to answer. For fun, I just ran the test on my home machine. I have an external Seagate disk drive (the cheap 2.5" 4TB drive you can get for about $70 at Costco) connected via USB3. Reading directly from the raw disk drive (dd from /dev/da0p1) runs at about 110 MB/s, which is probably limited by the platter speed. Reading a single very large file through the ZFS file system runs at ~70 MB/s (probably limited by CPU overhead and smaller IOs; I have a very slow CPU, a 1 GHz Atom). Writing a similar large file through ZFS is even slower, about 34 MB/s; but then, on my machine the write speed of ZFS is always pretty bad (not a surprise, and doesn't bother me at all). After the write of a 16GiB file finishes, running the fsync command adds another 0.27 seconds, which tells me that ZFS didn't have much dirty data in the buffer cache.
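In case anyone wants to repeat the comparison, the two reads look roughly like this (device and file names are examples; the second command only means something if the file is not already sitting in the ARC/buffer cache):

# Raw read straight from the device, bypassing the filesystem.
$ dd if=/dev/da0p1 of=/dev/null bs=1M count=8192
# The same amount of data read back through the filesystem.
$ dd if=/mnt/ext/bigfile.test of=/dev/null bs=1M count=8192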
 
Don't use /dev/zero, especially when using ZFS. A lot of filesystems, ZFS included, will detect the all-zero blocks and write sparse files to disk or compress them away to nothing.

The original dd command with random data is better, but write it to a file on disk first. Then reboot to clear the caches. Then dd or copy or rsync the file to the USB device.

You'll be limited by the slowest part of the chain: the USB port, the USB-to-SATA bridge, the drive's internal SATA interface, and the drive speed itself. And the filesystem used will also affect throughput (ZFS will be slower than UFS, for example).
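Putting that together, a sketch of the whole procedure might look like this (paths and sizes are only examples; the reboot, or at least an unmount/remount of the source filesystem, is what keeps the source file from being served out of RAM):

# 1. Create ~20 GiB of random data on the internal disk.
$ dd if=/dev/urandom of=/home/me/hugefile.20g bs=1M count=20480
# 2. Reboot (or unmount/remount the source filesystem) to drop the caches.
# 3. Copy to the USB disk and force the data out before the timer stops.
$ time sh -c 'cp /home/me/hugefile.20g /mnt/SanDiskHD/ && fsync /mnt/SanDiskHD/hugefile.20g'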
 