Having automated, tested backups in place is the only way to squeeze out peak performance without making regrettable choices. You also need to consider your tolerance for data loss. If you back up once a day, are you OK potentially losing one day's data (if things go south, like a power supply failing at the wrong point) in order to get more performance? Can you recreate the data if it is lost? (Compilation outputs or downloads, for example.)
That said, there are a few reasonably simple things you can do with ZFS to squeeze more write performance out. For any of these, be sure to understand the implications for your workload.
If you can tolerate it — almost certainly a bad choice for a mission-critical database — disabling sync on a dataset (zfs set sync=disabled) hides the latency of bursty writes on any (raidz1/2, raid10) layout; I do this on /usr/obj, for example, as the contents can always be recreated if the power goes out. Having a UPS in place (and actively monitored with sysutils/apcupsd) can mitigate some [1] of the risk here.
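For example (the dataset name tank/usr/obj is just a placeholder; substitute your own):

```shell
# Disable synchronous write semantics on a scratch dataset
# (dataset name is a placeholder -- use your own).
zfs set sync=disabled tank/usr/obj

# Confirm the setting:
zfs get sync tank/usr/obj

# Restore the inherited default when you're done experimenting:
zfs inherit sync tank/usr/obj
```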
There's also the vfs.zfs.vdev.bio_flush_disable sysctl to consider. Setting it stops ZFS from asking the drives to flush data from their onboard cache to the medium; so long as you never lose power, this should be OK. Note that your power supply failing counts as losing power even if you have a UPS. Did I mention backups?
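On FreeBSD that looks like this (a sketch; the sysctl takes effect immediately, no reboot needed):

```shell
# Check the current value (0 = flushes enabled, the default):
sysctl vfs.zfs.vdev.bio_flush_disable

# Disable cache-flush requests on the running system:
sysctl vfs.zfs.vdev.bio_flush_disable=1

# To persist across reboots, add this line to /etc/sysctl.conf:
# vfs.zfs.vdev.bio_flush_disable=1
```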
If you're mainly doing large writes to big files, bumping up the recordsize can also help (fewer calls up and down the stack per MB written), and it improves compression as well. I'd put this as a near-zero-risk tuning option.
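Something like the following (dataset name is a placeholder; note the new recordsize only applies to blocks written after the change):

```shell
# Raise recordsize to 1M for a dataset holding large files.
# Existing data keeps its old record size until rewritten.
zfs set recordsize=1M tank/media
zfs get recordsize tank/media
```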
Compression itself (typically with lz4 or zstd-fast) can make writes faster if you have compressible data. A faster CPU can come into play in this case, too.
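For instance (again, the dataset name is a placeholder):

```shell
# Enable lightweight compression on a dataset:
zfs set compression=lz4 tank/data      # or compression=zstd-fast

# After writing some data, see how well it's compressing:
zfs get compressratio tank/data
```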
Benchmarking your workload (as ralphbsz mentioned) is always the best indicator as far as performance is concerned, and better than anyone's opinion (including my own) on a forum. Unfortunately, the performance of any filesystem degrades as the storage fills up and fragments, so take benchmark results from a fresh system as "best case" and go from there.
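A quick-and-dirty sequential write check might look like this (benchmarks/fio gives far better numbers; the path and sizes here are just examples):

```shell
# Write 256 MB and force it to stable storage before dd stops timing.
# conv=fsync makes dd call fsync(2) at the end, so the final cache
# flush is included in the throughput figure dd reports.
dd if=/dev/zero of=/tmp/zfs-bench.dat bs=1M count=256 conv=fsync
rm /tmp/zfs-bench.dat
```

Keep in mind /dev/zero is trivially compressible, so on a dataset with compression enabled you'll mostly be benchmarking your CPU; use /dev/urandom or a copy of real data for a fairer test.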
[1] In the event of a kernel crash, you're certainly more exposed to unexpected state (in written user data; not the zpool/ZFS filesystem itself) with a dataset's sync disabled. While I'd say there's always a chance of corruption or unexpected state after a crash, databases in particular go to a lot of trouble to make sure it doesn't happen, and disabling sync undoes much of that protection. Did I mention backups?