He points out the obvious: ZFS has, on average, very good write performance, even for small files and metadata updates (for example, directory operations). He recommends it for web servers, which often write many small files. The flip side is poorer read performance, particularly for large files, since they are not laid out sequentially on disk.
That's interesting, as my experience has been somewhat the opposite.
For instance, one of the servers I administer records HD video from about two dozen security cameras, 24/7. I originally ran Linux on it with ext4, not because of any love of Linux (I would much rather have run FreeBSD), but simply because the recording software, ZoneMinder, didn't run on FreeBSD. Some time later, and after many headaches updating ZoneMinder, I switched to other recording software that thankfully runs under FreeBSD. I knew ZFS was probably a bad idea, but I wanted to try it anyway. Sure enough, after a few weeks, performance became abysmal, especially when old footage was automatically deleted to make room for new footage.
Note that this server
never shared physical disks between the OS and the footage, and has a
very high write:read ratio, well above 1:1. In fact, looking at it today, it's over 1100:1 in bytes/s and around 2650:1 in ops/s. Yes, much of the recorded footage is never read back, as it's typically watched live. Additionally, few if any ZFS features were used on the footage disks: no compression, no snapshots, no atime updates--nothing beyond fletcher4 checksums and zpool mirroring.
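For anyone curious, a setup like the one described above would look roughly like this; the pool and device names here are made up for illustration, not copied from my actual system:

```shell
# Mirrored pool for footage, separate from the OS disks
zpool create footage mirror /dev/ada1 /dev/ada2

# Turn off the features mentioned above
zfs set atime=off footage
zfs set compression=off footage

# fletcher4 is already the default checksum; shown only for clarity
zfs set checksum=fletcher4 footage
```

That is about as stripped-down as a ZFS pool gets, which is part of why the slow deletes surprised me.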
Since I had kind of expected this problem, I just switched to UFS2 with gmirror, and all the problems disappeared, as if by magic. Old footage is deleted in a fraction of a second instead of half a minute, and when we do need to pull up recorded footage to view it, it happens much more quickly. So as much as I like ZFS, I know it's not a tool for every situation, which is why I'm glad FreeBSD continues to support UFS.
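For comparison, the replacement setup is just as simple; again, device and mount names are hypothetical:

```shell
# Load the GEOM mirror module and create the mirror
gmirror load
gmirror label -v footage /dev/ada1 /dev/ada2

# UFS2 with soft updates on the mirror, then mount it
newfs -U /dev/mirror/footage
mount /dev/mirror/footage /footage
```

Nothing fancy, but for this write-heavy, delete-heavy workload it has been the right trade-off.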