ZFS: zfs send/receive slow transfer speed

I'm trying to troubleshoot a slowdown when doing zfs send/receive between two machines.
Both servers are running FreeBSD 13.2 and are pretty fast hardware-wise.

When I transfer a file from A -> B via FTP, I get 700MB/sec.
When I transfer a file from A -> B via SCP using the aes256-gcm@openssh.com cipher, I get 350-400MB/sec.

When I do a zfs send/receive using "zfs-autobackup" (it essentially just manages snapshots and handles the SSH connection), I only get 120MB/sec if I stick to the SSH transport.
I can also configure zfs-autobackup to use netcat, like so:

zfs-autobackup -v --strip-path=1 --send-pipe "nc 192.168.1.5 8023" --recv-pipe "nc -l 8023" --ssh-source bkp3n veeam65 backups

Even with netcat, I'm still getting at most 120MB/sec:
>>> Transfer 6% 122MB/s (total 955624MB, 121 minutes left)

On the source server, the process runs as:
csh -c zfs send --large-block --embed --verbose --parsable --props backups/veeam65@veeam65-20230513160609 | nc 192.168.1.5 8023

On the backup server ( which does the pull ), it runs as:
/bin/sh -c nc -l 8023 | zfs recv -u -v -s backups/veeam65

Where is that ~120MB/sec limit coming from?

top shows moderate CPU usage:

Code:
  PID USERNAME  THR PRI NICE   SIZE    RES STATE   C   TIME   WCPU COMMAND
49857 root        1  26    0    13M  2452K select  7   1:06 10.84% nc
49856 root        5  24    0    24M    10M CPU14  14   0:48  8.07% zfs


Is zfs send/receive itself somehow limited to such slow speeds?
 
I also tried adding mbuffer into the mix:

receiving side: nc -l 8000 | mbuffer -s 128k -m 1G | pv | zfs recv -u -v -s backups/veeam65
sending side: zfs send --large-block --embed --verbose --parsable --props backups/veeam65@veeam65-20230513160609 | mbuffer -s 128k -m 1G | nc 192.168.1.5 8000

still getting around 120MB/sec:
summary: 20.5 GiByte in 3min 07.5sec - average of 112 MiB/s
 
Hmmm, tricky.

First suspect with ssh speed issues is always the cipher, but I see you have considered that. [When "-c none" got removed from ssh, I was so mad I wanted to spit.]

I'd collect some more data points.

Step through the changes, first eliminating zfs recv and then the network.

Pipe the data into cat >/dev/null on the receiving host.

Then pipe it into cat >/dev/null on the sending host.
 

Good test.

On the sending side I did this:

zfs send --large-block --embed --verbose --parsable --props backups/veeam65@veeam65-20230513160609 | mbuffer -s 128k -m 1G | pv -rtab | cat > /dev/null

summary: 2235 MiByte in 19.8sec - average of 113 MiB/s

So the source is definitely the bottleneck but I can't figure out why.

One idea: the source is made up of HDDs. Is it possible it's reading from one HDD at a time, sequentially? That might explain the ~120MB/sec ceiling, which is about what a single HDD can do.

The source is a server with 50 mirror vdevs (100 HDDs), and files are striped across them, which might explain the high transfer speeds when doing FTP/SCP.
But maybe zfs send/receive reads the disks one at a time? Just speculating here.
 
Adding a few more performance tests I ran on the hardware:

dd if=/dev/zero of=test1.img bs=5M count=2000
2000+0 records in
2000+0 records out
10485760000 bytes transferred in 4.698187 secs (2231873698 bytes/sec)


cat test1.img | pv -rtab | cat > /dev/null
9.77GiB 0:00:18 [ 546MiB/s] [ 546MiB/s]

The backplane is 6Gbps, so 546MB/sec is pretty good, keeping in mind that I have other operations going on.
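For reference, a rough ceiling for a 6Gbps backplane, assuming a single SAS lane with 8b/10b encoding (~20% line overhead) — just back-of-envelope arithmetic, not a measurement:

```shell
# 6 Gbit/s * 0.8 (8b/10b encoding) -> bytes/s -> MiB/s
awk 'BEGIN { printf "%.0f\n", 6e9 * 0.8 / 8 / 1048576 }'   # → 572
```

So ~546MB/sec is indeed close to what one lane can deliver.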

It's just zfs send that's slow, for some reason.
 
sysctl -ad vfs.zfs.send
The variables are sort of explained in zfs(4).
You may try fiddling with those and see what happens.
 
Keep in mind that zfs send has to gather metadata first. On pools with many datasets and snapshots this can take several minutes, during which throughput is very limited (the sending side simply hasn't gathered enough data to send). Towards the end of a transfer it's also mostly metadata and housekeeping, so speeds will drop again.

Also, just to be safe, as VladiBG already asked: is this over 10Gbit Ethernet? I.e. are we *really* talking about MB/sec here, and not mixing in Mbps figures on what is actually a 1Gbit link? (A 1Gbit link would perfectly match the ~700 figure if that is really Mbps with some overhead, and would also match ~120MB/sec for a transfer over netcat...)
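The arithmetic behind that suspicion, as a quick sanity check:

```shell
# A fully saturated 1Gbit link, ignoring protocol overhead:
awk 'BEGIN { printf "%.0f\n", 1e9 / 8 / 1048576 }'   # → 119 (MiB/s)
```

~119MiB/s is suspiciously close to the observed 120MB/sec.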
 
LAN speed is irrelevant by now; it is slow even when sending to /dev/null on localhost.
https://zfs-discuss.zfsonlinux.narkive.com/YR1Pc6zp/zfs-send-very-slow-queue-depth-of-1 (this may have changed as the above post is 8 years old but...)

Using /dev/zero for filesystem benchmarks with ZFS is bogus anyway, because ZFS will compress that test file down to what it actually is: zero. At least use /dev/random to generate test payloads, to avoid benchmarking compression and other filesystem-level optimizations...
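The point about zeros is easy to demonstrate with any compressor (gzip here as a stand-in for LZ4; the exact size will vary):

```shell
# 16 MiB of zeros compresses down to a few KiB:
dd if=/dev/zero bs=1M count=16 2>/dev/null | gzip -c | wc -c
```

A zero-filled test file costs ZFS almost no disk IO at all.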

However, those ~550MB/s avg figures (with peaks >700MB/s) are pretty much what I also get from a 2x2mirror pool of HPE/SanDisk SSDs attached to a simple SAS3008 HBA when sending random datasets/zvols through mbuffer to /dev/null. So this might really be /dev/null restricting those 'benchmarks' (or mbuffer...)
I do actually get higher numbers (up to ~800-900MB/s avg) when sending datasets between hosts, especially from/to nvme-backed pools over their otherwise unutilized mgmt link (10Gbit). But depending on utilization of the pools this will drop rapidly to 1/2 or even lower speeds. It seems ZFS highly favours any local operations over 'zfs send' (which is a good thing IMO).


anyhow - 122MB/s still looks *highly* suspiciously like a single-Gbit link... (maybe sending/receiving on the wrong interface of the host?)
 
Look at post #4:
He zfs sends to /dev/null; there is no network connection involved and the speed is 110MB/s (the speed of a single spindle?).
 

sorry, I must have overlooked/misread that.

But still, /dev/null shouldn't be *that* slow, and ZFS always utilizes all vdevs in parallel (and takes data from whichever provider in a mirror is fastest). I also don't get why there is so much piping going on, as this has a major impact on throughput:
Code:
# zfs send -Rce vms_jails/vms/asa/asa-disk0@test | mbuffer -s 128k -m 1G | pv -rtab | cat > /dev/null
in @  596 MiB/s, out @  596 MiB/s, 3382 MiB total, buffer   0% full^C14GiB 0:00:06 [ 578MiB/s] [ 535MiB/s]
in @  604 MiB/s, out @  604 MiB/s, 3566 MiB total, buffer   0% full
mbuffer: error: outputThread: error writing to <stdout> at offset 0xdee30000: Broken pipe
mbuffer: warning: error during output to <stdout>: Broken pipe
summary: 3566 MiByte in  6.6sec - average of  544 MiB/s

# zfs send -Rce vms_jails/vms/asa/asa-disk0@test | pv -rtab > /dev/null
^C25GiB 0:00:05 [ 836MiB/s] [ 869MiB/s]

The test in #4 (resulting in 113MiB/s) was piping through mbuffer, pv and cat; the test in #5 (resulting in 546MiB/s) ran only through pv (and the second cat could also have been eliminated). So the two aren't very comparable, given that mbuffer is single-threaded and heavily CPU-bound (as I found out on a hopelessly underpowered NAS once...).
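A quick way to convince yourself that each stage matters: every extra process copies every byte through another kernel pipe buffer. The byte count is identical either way; only the throughput differs (timings are machine-dependent, so compare with time(1) yourself):

```shell
# Same 256 MiB of data, with zero vs. two extra copy stages:
dd if=/dev/zero bs=1M count=256 2>/dev/null | wc -c               # → 268435456
dd if=/dev/zero bs=1M count=256 2>/dev/null | cat | cat | wc -c   # → 268435456
```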


Running zpool iostat -v <poolname> 1 on the HDD host during the zfs send might also give some insight, e.g. heavy IO load. Gathering ZFS metadata for large sends (e.g. lots of snapshots) on HDDs completely thrashes the disks with random IO, so performance during metadata collection is horrible until ZFS can finally move on to pushing raw, sequentially written on-disk bytes down the line.
 
The speculation is that zfs send will serially read data from the disks, and as such it can't exceed the throughput of a single disk.
That was indeed the case several years ago (see the narkive link), but I'm not sure it's still the case now.
Other posts I've seen say that if the data is already in ARC/L2ARC, performance will be a lot better.
 
Output of gstat -pao while running? Watch for individual drives hitting >95%, and in what columns. (You might add -I 30s to get a 30s sample — or longer — if you like.)

Other thoughts:

Other load on the system during tests? Scrub isn’t running, right?

Also note that caching (the boon of performance, but the bane of benchmarking) will change the behavior at the start of the send when it is run multiple times (and not run long enough to flush out the data needed at the start). This makes it easy to "show" that a particular thing improved performance; try to be consistent and run enough data that you're not overly impacted by this.

The throughput will fluctuate as it moves between large/contiguous blocks (bulk transfers) and small ones (metadata updates).

How fragmented or full (zpool list) is the pool? zfs send has to read in roughly transaction order, not physical on-disk layout (unlike a modern scrub with coalescing).

Is compression used on the pool? What kind? If so, sending with -c avoids having to decompress before sending.

Is encryption used on the pool? GELI or native?

Does zpool status -s show any “slow” devices?
 
Oh, and what type of IO generated the data saved on the pool? Database transactions? Large files written out and rarely modified? BitTorrent-style writes (famously brutal on ZFS filesystems)?

What is the recordsize set to?
 
For the initial network tests: yes, this was over a 10Gbps network. The source host is a server with 100+ HDDs (plus expansion chassis) limited to a 6Gbps backplane, however. Nevertheless, the subsequent tests were done locally by piping to /dev/null, so the network shouldn't matter.

The server has an Intel S4510 7.68TB SSD as a special device holding metadata.

The data was generated by Veeam. No backup jobs are running in Veeam while I'm doing the tests.

I also ran gstat ad nauseam and can't see any particular disk being bogged down. The recordsize is the default 128k; compression is lz4.
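Back-of-envelope (my numbers, so treat as a rough check): at 128KiB records, ~120MiB/s is not many reads per second, and spread over ~50 vdevs each mirror only needs ~19 of them - the same order of magnitude as the per-disk ops/s in the gstat output. That would point at a shallow send queue depth rather than any disk being maxed out.

```shell
# 120 MiB/s at 128 KiB per record:
awk 'BEGIN { printf "%.0f\n", 120 * 1024 / 128 }'   # → 960 records/s
```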

I also tried sending raw for kicks (there is no encryption) and the results are the same:

Code:
zfs send --verbose --raw backups/veeam@sat | mbuffer -s 128k -m 1G | pv -rtab > /dev/null
09:46:04   3.85G   backups/veeam@sat917 MiB total, buffer   0% full3.84GiB 0:00:35 [94.2MiB/s] [ 112MiB/s]
09:46:05   3.96G   backups/veeam@sat004 MiB total, buffer   0% full3.94GiB 0:00:36 [93.1MiB/s] [ 111MiB/s]

Code:
gstat -pao
dT: 1.066s  w: 1.000s
 L(q)  ops/s    r/s   kBps   ms/r    w/s   kBps   ms/w    o/s   ms/o   %busy Name
    0     20     19   2763   19.0      0      0    0.0      1    0.0   33.9| da10
    0     21     20   2523    3.1      0      0    0.0      1   19.6    4.1| da0
    0     15     14   1802    0.8      0      0    0.0      1    7.0    1.7| da1
    0     23     23   2883    1.6      0      0    0.0      1    8.3    2.2| da2
    0     20     19   2403    3.3      0      0    0.0      1   20.0    5.0| da3
    0     19     18   2282    1.8      0      0    0.0      1    0.0    2.6| da4
    0     21     20   2523    1.8      0      0    0.0      1    0.0    1.6| da5
    0     20     19   2403    6.0      0      0    0.0      1    0.0    4.9| da6
    0     21     20   2782    3.0      0      0    0.0      1    0.0    2.5| da7
    0     20     19   2403    2.9      0      0    0.0      1    0.0    3.2| da8
    0     16     15   1922    3.2      0      0    0.0      1    0.0    1.8| da9
    0     14     13   1682    1.4      0      0    0.0      1    0.0    1.6| da11
    0     14     13   2042    2.2      0      0    0.0      1    9.5    3.2| da12
    0     10      9   1201    5.7      0      0    0.0      1   44.4    6.2| da13
    0     18     17   2162    3.6      0      0    0.0      1   16.9    4.2| da14
    0     10      9   1201    2.1      0      0    0.0      1   24.3    3.6| da15
    0     26     25   3244    4.0      0      0    0.0      1    0.0    4.5| da16
    0     12     11   1442    5.5      0      0    0.0      1    0.0    3.4| da17
    0      8      8    961    4.2      0      0    0.0      1   23.1    3.3| da18
    0     16     15   2042    3.8      0      0    0.0      1   16.7    4.2| da19
    0      9      8   1081    7.0      0      0    0.0      1   41.4    6.5| da20
    0     17     16   2643    3.7      0      0    0.0      1   31.4    5.3| da21
    0     11     10   1442    2.7      0      0    0.0      1    0.0    1.0| da22
    0     12     11   1442    2.2      0      0    0.0      1    0.0    0.9| da23
    0     13     13   1682    1.2      0      0    0.0      0    0.0    1.3| da24
    0     13     13   1682    3.6      0      0    0.0      0    0.0    2.0| da25
    0     18     17   2403    1.3      0      0    0.0      1   22.5    3.8| da26
    0      5      4    481   10.8      0      0    0.0      1   21.4    3.7| da27
    0     18     17   2282    1.9      0      0    0.0      1   42.9    5.6| da28
    0     14     13   1682    3.4      0      0    0.0      1   40.8    5.8| da29
    0     11     10   1321    2.6      0      0    0.0      1    0.0    1.7| da30
    0     22     21   2883    2.1      0      0    0.0      1    0.0    1.6| da31
    0     17     17   2282    7.8      0      0    0.0      0    0.0    5.8| da32
    0     11     11   1442   42.9      0      0    0.0      0    0.0   40.0| da33
    0     15     14   1802    2.0      0      0    0.0      1   37.4    4.6| da34
    0     16     15   2042   22.8      0      0    0.0      1   41.6   35.6| da35
    0      6      0      0    0.0      4     15    0.4      2   15.3    2.9| da126
    0      8      0      0    0.0      5     19   11.8      3   20.2    7.1| da127
    0      8      0      0    0.0      5     19   11.9      3   20.5    7.2| da128
    0      1      0      0    0.0      0      0    0.0      1    7.0    0.7| da129
    0      1      0      0    0.0      0      0    0.0      1   34.5    3.2| da130

Code:
top output

last pid: 30477;  load averages:  0.10,  0.08,  0.07                                                          up 250+16:29:16 09:55:44
119 processes: 1 running, 118 sleeping
CPU:  0.0% user,  0.0% nice,  0.0% system,  0.0% interrupt, 99.9% idle
Mem: 8336K Active, 458M Inact, 432K Laundry, 92G Wired, 32G Free
ARC: 71G Total, 6379M MFU, 61G MRU, 5164K Anon, 454M Header, 3292M Other
     62G Compressed, 74G Uncompressed, 1.20:1 Ratio
Swap: 2048M Total, 48M Used, 2000M Free, 2% Inuse
 
zpool iostat -v backups 1 shows that at most it's reading in bursts of ~120-130MB/sec.
I also eliminated the cat from the equation and piped straight to /dev/null: same speed. I let it run for a few minutes to allow it to 'gather speed'; no change.

Code:
zfs send --verbose --raw backups/veeam@sat | mbuffer -s 128k -m 1G | pv -rtab > /dev/null
summary: 31.9 GiByte in  4min 41.7sec - average of  116 MiB/s

                         capacity     operations     bandwidth
pool                  alloc   free   read  write   read  write
--------------------  -----  -----  -----  -----  -----  -----
backups                194T  56.0T   1023     58   136M   233K
  mirror-0            4.43T  1.03T     38      1  5.02M  7.65K
    multipath/port0       -      -     16      0  2.03M  3.83K
    multipath/port1       -      -     21      0  2.99M  3.83K
  mirror-1            4.44T  1.01T     37      1  4.90M  7.65K
    multipath/port2       -      -     15      0  1.91M  3.83K
    multipath/port3       -      -     21      0  2.99M  3.83K
  mirror-2            4.45T  1.01T     37      1  4.78M  7.65K
    multipath/port4       -      -     18      0  2.39M  3.83K
    multipath/port5       -      -     19      0  2.39M  3.83K
  mirror-3            4.45T  1.00T     38      1  4.90M  7.65K
    multipath/port6       -      -     21      0  2.87M  3.83K
    multipath/port7       -      -     16      0  2.03M  3.83K
  mirror-4            4.40T  1.05T     37      1  4.90M  7.65K
    multipath/port8       -      -     12      0  1.67M  3.83K
    multipath/port9       -      -     24      0  3.23M  3.83K
  mirror-5            4.42T  1.03T     32      0  4.06M      0
    multipath/port10      -      -     21      0  2.63M      0
    multipath/port11      -      -     11      0  1.43M      0
  mirror-6            4.43T  1.02T     32      1  4.06M  7.65K
    multipath/port12      -      -     18      0  2.27M  3.83K
    multipath/port13      -      -     14      0  1.79M  3.83K
  mirror-7            4.40T  1.05T     32      1  4.18M  7.65K
    multipath/port14      -      -     13      0  1.67M  3.83K
    multipath/port15      -      -     19      0  2.51M  3.83K
  mirror-8            4.45T  1.00T     25      1  3.84M  7.65K
    multipath/port16      -      -     17      0  2.76M  3.82K
    multipath/port17      -      -      8      0  1.08M  3.82K
  mirror-9            4.37T  1.09T     31      1  3.94M  7.65K
    multipath/port18      -      -     15      0  1.91M  3.82K
    multipath/port19      -      -     16      0  2.03M  3.82K
  mirror-10           4.38T  1.07T     33      1  4.30M  7.65K
    multipath/port20      -      -     21      0  2.75M  3.82K
    multipath/port21      -      -     12      0  1.55M  3.82K
  mirror-11           4.40T  1.05T     29      3  4.30M  15.3K
    multipath/port22      -      -     16      1  2.15M  7.65K
    multipath/port23      -      -     13      1  2.15M  7.65K
  mirror-12           4.36T  1.09T     24      0  4.18M      0
    multipath/port24      -      -     14      0  2.15M      0
    multipath/port25      -      -     10      0  2.03M      0
  mirror-13           4.40T  1.05T     33      0  4.18M      0
    multipath/port26      -      -     20      0  2.51M      0
    multipath/port27      -      -     13      0  1.67M      0
  mirror-14           4.37T  1.08T     30      1  4.06M  7.65K
    multipath/port28      -      -     21      0  2.75M  3.82K
    multipath/port29      -      -      9      0  1.31M  3.82K
  mirror-15           4.38T  1.07T     28      1  3.82M  7.65K
    multipath/port30      -      -     18      0  2.51M  3.82K
    multipath/port31      -      -     10      0  1.31M  3.82K
  mirror-16           4.38T  1.07T     30      0  3.82M      0
    multipath/port32      -      -     20      0  2.51M      0
    multipath/port33      -      -     10      0  1.31M      0
  mirror-17           4.40T  1.05T     30      1  3.94M  7.65K
    multipath/port34      -      -     12      0  1.67M  3.82K
    multipath/port35      -      -     18      0  2.27M  3.82K
  mirror-18           4.19T  1.27T     25      0  3.82M      0
    da147                 -      -     15      0  2.51M      0
    da148                 -      -     10      0  1.31M      0
  mirror-19           4.28T  1.18T     33      1  4.30M  7.65K
    da149                 -      -     15      0  1.91M  3.82K
    da150                 -      -     18      0  2.39M  3.82K
  mirror-20           4.20T  1.26T     33      1  4.30M  7.65K
    da151                 -      -     15      0  1.91M  3.82K
    da152                 -      -     18      0  2.39M  3.82K
  mirror-21           4.26T  1.20T     23      0  3.97M      0
    da153                 -      -     16      0  2.89M      0
    da154                 -      -      7      0  1.08M      0
  mirror-22           4.27T  1.18T     43      1  5.62M  7.65K
    da155                 -      -     21      0  2.63M  3.82K
    da156                 -      -     21      0  2.99M  3.82K
  mirror-23           4.27T  1.18T     27      1  4.18M  7.65K
    da157                 -      -     15      0  2.39M  3.82K
    da158                 -      -     12      0  1.79M  3.82K
  mirror-24           2.88T   763G     36      0  4.66M      0
    da160                 -      -     23      0  3.11M      0
    da161                 -      -     12      0  1.55M      0
  mirror-25           2.88T   762G     43      0  6.45M      0
    da159                 -      -     26      0  3.94M      0
    da162                 -      -     17      0  2.51M      0
  mirror-26           2.91T   736G     47      0  6.21M      0
    da163                 -      -     28      0  3.82M      0
    da164                 -      -     19      0  2.39M      0
  mirror-27           2.91T   729G     48      0  6.33M      0
    da165                 -      -     31      0  4.06M      0
    da166                 -      -     17      0  2.27M      0
  mirror-28           2.93T   713G     45      0  5.74M      0
    da167                 -      -     21      0  2.63M      0
    da168                 -      -     24      0  3.11M      0
  mirror-29           2.82T   821G     46      0  5.86M      0
    da169                 -      -     22      0  2.87M      0
    da170                 -      -     23      0  2.99M      0
  mirror-30           2.83T   813G      0      0      0      0
    da171                 -      -      0      0      0      0
    da172                 -      -      0      0      0      0
  mirror-31           2.81T   834G      0      0      0      0
    da173                 -      -      0      0      0      0
    da174                 -      -      0      0      0      0
  mirror-32           2.81T   835G      0      0      0      0
    da175                 -      -      0      0      0      0
    da176                 -      -      0      0      0      0
  mirror-33           2.81T   830G      0      0      0      0
    da177                 -      -      0      0      0      0
    da178                 -      -      0      0      0      0
  mirror-34           2.79T   859G      0      0      0      0
    da179                 -      -      0      0      0      0
    da180                 -      -      0      0      0      0
  mirror-35           2.79T   859G      0      0      0      0
    da181                 -      -      0      0      0      0
    da182                 -      -      0      0      0      0
  mirror-36           2.81T   837G      0      0      0      0
    da190                 -      -      0      0      0      0
    da183                 -      -      0      0      0      0
  mirror-37           3.04T   597G      0      0      0      0
    da184                 -      -      0      0      0      0
    da191                 -      -      0      0      0      0
  mirror-38           2.81T   837G      0      0      0      0
    da185                 -      -      0      0      0      0
    da186                 -      -      0      0      0      0
  mirror-39           2.78T   863G      0      0      0      0
    da187                 -      -      0      0      0      0
    da188                 -      -      0      0      0      0
  mirror-40           4.09T  1.36T      0      2      0  11.5K
    da146                 -      -      0      0      0  3.82K
    da126                 -      -      0      0      0  3.82K
    da145                 -      -      0      0      0  3.82K
  mirror-41           3.83T  1.63T      0      1      0  7.65K
    da127                 -      -      0      0      0  3.82K
    da128                 -      -      0      0      0  3.82K
  mirror-42           3.83T  1.63T      0      1      0  7.65K
    da129                 -      -      0      0      0  3.82K
    da130                 -      -      0      0      0  3.82K
  mirror-43           3.83T  1.63T      0      1      0  7.65K
    da131                 -      -      0      0      0  3.82K
    da132                 -      -      0      0      0  3.82K
  mirror-44           3.83T  1.63T      3      1   390K  7.65K
    da133                 -      -      3      0   390K  3.82K
    da134                 -      -      0      0      0  3.82K
  mirror-45           3.83T  1.63T      0      1      0  7.65K
    da135                 -      -      0      0      0  3.82K
    da136                 -      -      0      0      0  3.82K
  mirror-46           3.83T  1.62T      0      1      0  7.65K
    da137                 -      -      0      0      0  3.82K
    da138                 -      -      0      0      0  3.82K
  mirror-47           3.83T  1.63T      0      1      0  7.65K
    da139                 -      -      0      0      0  3.82K
    da140                 -      -      0      0      0  3.82K
  mirror-48           3.83T  1.63T      0      1      0  7.65K
    da141                 -      -      0      0      0  3.82K
    da142                 -      -      0      0      0  3.82K
  mirror-49           3.83T  1.62T      0      1      0  7.65K
    da143                 -      -      0      0      0  3.82K
    da144                 -      -      0      0      0  3.82K
special                   -      -      -      -      -      -
  ada0                5.43T  1.55T      1      0  7.65K      0
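Reading that iostat output back (back-of-envelope, not a measurement): roughly 30 mirrors are reading at ~4-6MB/s each while the rest sit idle, so the reads are spread across vdevs but each vdev is barely loaded:

```shell
# ~30 active mirrors at ~4.5 MB/s each:
awk 'BEGIN { printf "%.0f\n", 30 * 4.5 }'   # → 135
```

That ~135MB/s lines up with the 136M pool total, i.e. the reads are parallel but shallow rather than hammering a single disk.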
 
Other than looking at zpool list for usage/fragmentation, or zpool status -s, as I suggested above, zpool iostat -lv backups 60 might reveal if particular devices are misbehaving, or you could use the /usr/local/share/dtrace-toolkit/hotkernel dtrace script from sysutils/dtrace-toolkit to see if there are clues to what the kernel is doing.

zpool iostat -r (potentially with pool name and interval) can also show you what IO sizes are being issued.
 
I also tried a different dataset which holds regular files, vs. the Veeam files which might be sparse or whatnot. Same results. I also ran zpool iostat -lv backups 60 and nothing jumped out at me. Not sure I want to debug the kernel on this production machine :|

Code:
NAME                   SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
backups                250T   194T  56.0T        -         -    51%    77%  1.60x    ONLINE  -
  mirror-0            5.45T  4.43T  1.03T        -         -    52%  81.2%      -    ONLINE
    multipath/port0       -      -      -        -         -      -      -      -    ONLINE
    multipath/port1       -      -      -        -         -      -      -      -    ONLINE
  mirror-1            5.45T  4.44T  1.01T        -         -    52%  81.5%      -    ONLINE
    multipath/port2       -      -      -        -         -      -      -      -    ONLINE
    multipath/port3       -      -      -        -         -      -      -      -    ONLINE
  mirror-2            5.45T  4.45T  1.01T        -         -    52%  81.5%      -    ONLINE
    multipath/port4       -      -      -        -         -      -      -      -    ONLINE
    multipath/port5       -      -      -        -         -      -      -      -    ONLINE
  mirror-3            5.45T  4.45T  1.00T        -         -    53%  81.7%      -    ONLINE
    multipath/port6       -      -      -        -         -      -      -      -    ONLINE
    multipath/port7       -      -      -        -         -      -      -      -    ONLINE
  mirror-4            5.45T  4.40T  1.05T        -         -    52%  80.8%      -    ONLINE
    multipath/port8       -      -      -        -         -      -      -      -    ONLINE
    multipath/port9       -      -      -        -         -      -      -      -    ONLINE
  mirror-5            5.45T  4.42T  1.03T        -         -    52%  81.1%      -    ONLINE
    multipath/port10      -      -      -        -         -      -      -      -    ONLINE
    multipath/port11      -      -      -        -         -      -      -      -    ONLINE
  mirror-6            5.45T  4.43T  1.02T        -         -    52%  81.2%      -    ONLINE
    multipath/port12      -      -      -        -         -      -      -      -    ONLINE
    multipath/port13      -      -      -        -         -      -      -      -    ONLINE
  mirror-7            5.45T  4.40T  1.05T        -         -    52%  80.8%      -    ONLINE
    multipath/port14      -      -      -        -         -      -      -      -    ONLINE
    multipath/port15      -      -      -        -         -      -      -      -    ONLINE
  mirror-8            5.45T  4.45T  1.00T        -         -    52%  81.6%      -    ONLINE
    multipath/port16      -      -      -        -         -      -      -      -    ONLINE
    multipath/port17      -      -      -        -         -      -      -      -    ONLINE
  mirror-9            5.45T  4.37T  1.09T        -         -    52%  80.0%      -    ONLINE
    multipath/port18      -      -      -        -         -      -      -      -    ONLINE
    multipath/port19      -      -      -        -         -      -      -      -    ONLINE
  mirror-10           5.45T  4.38T  1.07T        -         -    52%  80.4%      -    ONLINE
    multipath/port20      -      -      -        -         -      -      -      -    ONLINE
    multipath/port21      -      -      -        -         -      -      -      -    ONLINE
  mirror-11           5.45T  4.40T  1.05T        -         -    52%  80.8%      -    ONLINE
    multipath/port22      -      -      -        -         -      -      -      -    ONLINE
    multipath/port23      -      -      -        -         -      -      -      -    ONLINE
  mirror-12           5.45T  4.36T  1.09T        -         -    51%  80.0%      -    ONLINE
    multipath/port24      -      -      -        -         -      -      -      -    ONLINE
    multipath/port25      -      -      -        -         -      -      -      -    ONLINE
  mirror-13           5.45T  4.40T  1.05T        -         -    52%  80.7%      -    ONLINE
    multipath/port26      -      -      -        -         -      -      -      -    ONLINE
    multipath/port27      -      -      -        -         -      -      -      -    ONLINE
  mirror-14           5.45T  4.37T  1.08T        -         -    52%  80.1%      -    ONLINE
    multipath/port28      -      -      -        -         -      -      -      -    ONLINE
    multipath/port29      -      -      -        -         -      -      -      -    ONLINE
  mirror-15           5.45T  4.38T  1.07T        -         -    52%  80.4%      -    ONLINE
    multipath/port30      -      -      -        -         -      -      -      -    ONLINE
    multipath/port31      -      -      -        -         -      -      -      -    ONLINE
  mirror-16           5.45T  4.38T  1.07T        -         -    52%  80.4%      -    ONLINE
    multipath/port32      -      -      -        -         -      -      -      -    ONLINE
    multipath/port33      -      -      -        -         -      -      -      -    ONLINE
  mirror-17           5.45T  4.40T  1.05T        -         -    52%  80.7%      -    ONLINE
    multipath/port34      -      -      -        -         -      -      -      -    ONLINE
    multipath/port35      -      -      -        -         -      -      -      -    ONLINE
  mirror-18           5.45T  4.19T  1.27T        -         -    50%  76.8%      -    ONLINE
    da147                 -      -      -        -         -      -      -      -    ONLINE
    da148                 -      -      -        -         -      -      -      -    ONLINE
  mirror-19           5.45T  4.28T  1.18T        -         -    51%  78.4%      -    ONLINE
    da149                 -      -      -        -         -      -      -      -    ONLINE
    da150                 -      -      -        -         -      -      -      -    ONLINE
  mirror-20           5.45T  4.20T  1.26T        -         -    50%  76.9%      -    ONLINE
    da151                 -      -      -        -         -      -      -      -    ONLINE
    da152                 -      -      -        -         -      -      -      -    ONLINE
  mirror-21           5.45T  4.26T  1.20T        -         -    50%  78.0%      -    ONLINE
    da153                 -      -      -        -         -      -      -      -    ONLINE
    da154                 -      -      -        -         -      -      -      -    ONLINE
  mirror-22           5.45T  4.27T  1.18T        -         -    51%  78.3%      -    ONLINE
    da155                 -      -      -        -         -      -      -      -    ONLINE
    da156                 -      -      -        -         -      -      -      -    ONLINE
  mirror-23           5.45T  4.27T  1.18T        -         -    51%  78.3%      -    ONLINE
    da157                 -      -      -        -         -      -      -      -    ONLINE
    da158                 -      -      -        -         -      -      -      -    ONLINE
  mirror-24           3.62T  2.88T   763G        -         -    54%  79.5%      -    ONLINE
    da160                 -      -      -        -         -      -      -      -    ONLINE
    da161                 -      -      -        -         -      -      -      -    ONLINE
  mirror-25           3.62T  2.88T   762G        -         -    54%  79.5%      -    ONLINE
    da159                 -      -      -        -         -      -      -      -    ONLINE
    da162                 -      -      -        -         -      -      -      -    ONLINE
  mirror-26           3.62T  2.91T   736G        -         -    54%  80.2%      -    ONLINE
    da163                 -      -      -        -         -      -      -      -    ONLINE
    da164                 -      -      -        -         -      -      -      -    ONLINE
  mirror-27           3.62T  2.91T   729G        -         -    54%  80.4%      -    ONLINE
    da165                 -      -      -        -         -      -      -      -    ONLINE
    da166                 -      -      -        -         -      -      -      -    ONLINE
  mirror-28           3.62T  2.93T   713G        -         -    55%  80.8%      -    ONLINE
    da167                 -      -      -        -         -      -      -      -    ONLINE
    da168                 -      -      -        -         -      -      -      -    ONLINE
  mirror-29           3.62T  2.82T   821G        -         -    53%  77.9%      -    ONLINE
    da169                 -      -      -        -         -      -      -      -    ONLINE
    da170                 -      -      -        -         -      -      -      -    ONLINE
  mirror-30           3.62T  2.83T   813G        -         -    54%  78.1%      -    ONLINE
    da171                 -      -      -        -         -      -      -      -    ONLINE
    da172                 -      -      -        -         -      -      -      -    ONLINE
  mirror-31           3.62T  2.81T   834G        -         -    54%  77.5%      -    ONLINE
    da173                 -      -      -        -         -      -      -      -    ONLINE
    da174                 -      -      -        -         -      -      -      -    ONLINE
  mirror-32           3.62T  2.81T   835G        -         -    53%  77.5%      -    ONLINE
    da175                 -      -      -        -         -      -      -      -    ONLINE
    da176                 -      -      -        -         -      -      -      -    ONLINE
  mirror-33           3.62T  2.81T   830G        -         -    54%  77.6%      -    ONLINE
    da177                 -      -      -        -         -      -      -      -    ONLINE
    da178                 -      -      -        -         -      -      -      -    ONLINE
  mirror-34           3.62T  2.79T   859G        -         -    54%  76.9%      -    ONLINE
    da179                 -      -      -        -         -      -      -      -    ONLINE
    da180                 -      -      -        -         -      -      -      -    ONLINE
  mirror-35           3.62T  2.79T   859G        -         -    54%  76.8%      -    ONLINE
    da181                 -      -      -        -         -      -      -      -    ONLINE
    da182                 -      -      -        -         -      -      -      -    ONLINE
  mirror-36           3.62T  2.81T   837G        -         -    54%  77.4%      -    ONLINE
    da190                 -      -      -        -         -      -      -      -    ONLINE
    da183                 -      -      -        -         -      -      -      -    ONLINE
  mirror-37           3.62T  3.04T   597G        -         -    55%  83.9%      -    ONLINE
    da184                 -      -      -        -         -      -      -      -    ONLINE
    da191                 -      -      -        -         -      -      -      -    ONLINE
  mirror-38           3.62T  2.81T   837G        -         -    54%  77.5%      -    ONLINE
    da185                 -      -      -        -         -      -      -      -    ONLINE
    da186                 -      -      -        -         -      -      -      -    ONLINE
  mirror-39           3.62T  2.78T   863G        -         -    54%  76.8%      -    ONLINE
    da187                 -      -      -        -         -      -      -      -    ONLINE
    da188                 -      -      -        -         -      -      -      -    ONLINE
  mirror-40           5.45T  4.09T  1.36T        -         -    48%  75.1%      -    ONLINE
    da146                 -      -      -        -         -      -      -      -    ONLINE
    da126                 -      -      -        -         -      -      -      -    ONLINE
    da145                 -      -      -        -         -      -      -      -    ONLINE
  mirror-41           5.45T  3.83T  1.63T        -         -    46%  70.2%      -    ONLINE
    da127                 -      -      -        -         -      -      -      -    ONLINE
    da128                 -      -      -        -         -      -      -      -    ONLINE
  mirror-42           5.45T  3.83T  1.63T        -         -    48%  70.2%      -    ONLINE
    da129                 -      -      -        -         -      -      -      -    ONLINE
    da130                 -      -      -        -         -      -      -      -    ONLINE
  mirror-43           5.45T  3.83T  1.63T        -         -    48%  70.2%      -    ONLINE
    da131                 -      -      -        -         -      -      -      -    ONLINE
    da132                 -      -      -        -         -      -      -      -    ONLINE
  mirror-44           5.45T  3.83T  1.63T        -         -    44%  70.2%      -    ONLINE
    da133                 -      -      -        -         -      -      -      -    ONLINE
    da134                 -      -      -        -         -      -      -      -    ONLINE
  mirror-45           5.45T  3.83T  1.63T        -         -    48%  70.2%      -    ONLINE
    da135                 -      -      -        -         -      -      -      -    ONLINE
    da136                 -      -      -        -         -      -      -      -    ONLINE
  mirror-46           5.45T  3.83T  1.62T        -         -    46%  70.2%      -    ONLINE
    da137                 -      -      -        -         -      -      -      -    ONLINE
    da138                 -      -      -        -         -      -      -      -    ONLINE
  mirror-47           5.45T  3.83T  1.63T        -         -    47%  70.2%      -    ONLINE
    da139                 -      -      -        -         -      -      -      -    ONLINE
    da140                 -      -      -        -         -      -      -      -    ONLINE
  mirror-48           5.45T  3.83T  1.63T        -         -    49%  70.2%      -    ONLINE
    da141                 -      -      -        -         -      -      -      -    ONLINE
    da142                 -      -      -        -         -      -      -      -    ONLINE
  mirror-49           5.45T  3.83T  1.62T        -         -    46%  70.2%      -    ONLINE
    da143                 -      -      -        -         -      -      -      -    ONLINE
    da144                 -      -      -        -         -      -      -      -    ONLINE
special                   -      -      -        -         -      -      -      -  -
  ada0                6.98T  5.43T  1.55T        -         -    76%  77.8%      -    ONLINE
spare                     -      -      -        -         -      -      -      -  -
  da189                   -      -      -        -         -      -      -      -     AVAIL
zroot                  236G  37.9G   198G        -         -    28%    16%  1.00x    ONLINE  -
  ada1p3               236G  37.9G   198G        -         -    28%  16.1%      -    ONLINE
 
Looking at these results, could the transfer be slow because some of these mirrors are past 80% capacity?
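To check that systematically rather than by eye, a quick script can flag every vdev over a capacity threshold from `zpool list -v` output. This is a minimal sketch; it assumes the CAP column is the 8th whitespace-separated field, as in the paste above, and the sample lines are copied from that output:

```python
# Flag vdevs above a capacity threshold in `zpool list -v` output.
# Assumes CAP is the 8th field (index 7), as in the output pasted above.

sample = """\
mirror-16  5.45T  4.38T  1.07T  -  -  52%  80.4%  -  ONLINE
mirror-18  5.45T  4.19T  1.27T  -  -  50%  76.8%  -  ONLINE
mirror-37  3.62T  3.04T   597G  -  -  55%  83.9%  -  ONLINE
"""

def vdevs_over(text, threshold=80.0):
    """Return (vdev, cap%) pairs at or above the threshold."""
    hot = []
    for line in text.splitlines():
        fields = line.split()
        # Only rows with a percentage in the CAP column are vdev rows.
        if len(fields) >= 8 and fields[7].endswith("%"):
            cap = float(fields[7].rstrip("%"))
            if cap >= threshold:
                hot.append((fields[0], cap))
    return hot

print(vdevs_over(sample))  # → [('mirror-16', 80.4), ('mirror-37', 83.9)]
```

In the full listing above, roughly a third of the mirrors are at or above 80%, which is the region where ZFS allocation starts to get noticeably more expensive.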

Code:
zpool iostat -lv backups 60

                        capacity     operations     bandwidth    total_wait     disk_wait    syncq_wait    asyncq_wait  scrub   trim
pool                  alloc   free   read  write   read  write   read  write   read  write   read  write   read  write   wait   wait
--------------------  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----
backups                194T  56.0T    899    126   118M  8.41M    8ms    7ms    7ms    2ms    2us   28us    1ms    5ms      -      -
  mirror-0            4.43T  1.03T     28      1  3.73M   135K    5ms   17ms    5ms    9ms      -    3us  610us   13ms      -      -
    multipath/port0       -      -     14      0  1.86M  67.7K    3ms    2ms    2ms    1ms      -    3us  241us    1ms      -      -
    multipath/port1       -      -     14      0  1.87M  67.7K    8ms   33ms    7ms   17ms      -    2us  981us   27ms      -      -
  mirror-1            4.44T  1.01T     27      1  3.57M   146K    6ms    7ms    6ms    6ms      -    2us  601us    2ms      -      -
    multipath/port2       -      -     13      0  1.77M  73.1K    2ms    1ms    2ms    1ms      -    2us  270us  922us      -      -
    multipath/port3       -      -     13      0  1.80M  73.1K   10ms   13ms    9ms   11ms      -    2us  924us    3ms      -      -
  mirror-2            4.45T  1.01T     29      1  3.90M   167K    6ms   15ms    5ms    4ms      -    2us  912us   19ms      -      -
    multipath/port4       -      -     14      0  1.98M  83.6K    3ms    3ms    3ms    1ms      -    2us  386us    3ms      -      -
    multipath/port5       -      -     14      0  1.92M  83.6K    9ms   28ms    8ms    8ms      -    2us    1ms   35ms      -      -
  mirror-3            4.45T  1.00T     31      2  4.06M   177K    7ms    7ms    5ms    3ms      -    2us    1ms    6ms      -      -
    multipath/port6       -      -     15      1  1.97M  88.7K   10ms   12ms    8ms    5ms      -    3us    1ms    9ms      -      -
    multipath/port7       -      -     15      1  2.09M  88.7K    4ms    2ms    3ms    1ms      -    2us  620us    2ms      -      -
  mirror-4            4.40T  1.05T     28      1  3.72M   180K    3ms    8ms    3ms    4ms      -    2us  608us    8ms      -      -
    multipath/port8       -      -     14      0  1.92M  90.2K    4ms   15ms    3ms    7ms      -    2us  855us   14ms      -      -
    multipath/port9       -      -     13      0  1.79M  90.2K    3ms    2ms    2ms    1ms      -    2us  343us    1ms      -      -
  mirror-5            4.42T  1.03T     31      1  4.16M  96.2K    7ms    8ms    6ms    5ms      -    2us  901us    8ms      -      -
    multipath/port10      -      -     15      0  2.13M  48.1K    9ms   15ms    8ms    9ms      -    2us  861us   16ms      -      -
    multipath/port11      -      -     15      0  2.03M  48.1K    5ms    1ms    4ms    1ms      -    2us  942us    1ms      -      -
  mirror-6            4.43T  1.02T     29      1  3.81M  76.6K    4ms    3ms    3ms    2ms      -    2us  479us    1ms      -      -
    multipath/port12      -      -     13      0  1.81M  38.3K    3ms    1ms    2ms  707us      -    2us  482us  615us      -      -
    multipath/port13      -      -     15      0  2.00M  38.3K    4ms    6ms    4ms    4ms      -    2us  476us    3ms      -      -
  mirror-7            4.40T  1.05T     28      1  3.71M  72.3K    5ms    4ms    4ms    3ms      -    2us  435us    5ms      -      -
    multipath/port14      -      -     13      0  1.77M  36.2K    3ms    1ms    2ms  866us      -    3us  342us  372us      -      -
    multipath/port15      -      -     14      0  1.94M  36.2K    7ms    8ms    6ms    5ms      -    2us  520us    9ms      -      -
  mirror-8            4.45T  1.00T     30      1  4.10M   169K    9ms   19ms    7ms    7ms      -    2us    1ms   21ms      -      -
    multipath/port16      -      -     14      0  1.94M  84.3K    4ms    3ms    4ms    1ms      -    2us  846us    4ms      -      -
    multipath/port17      -      -     16      0  2.16M  84.3K   13ms   35ms   11ms   13ms      -    2us    2ms   40ms      -      -
  mirror-9            4.37T  1.09T     27      1  3.59M   176K   10ms   12ms   10ms    7ms      -    2us  788us    9ms      -      -
    multipath/port18      -      -     13      0  1.80M  88.0K   14ms   13ms   13ms    8ms      -    2us  990us    9ms      -      -
    multipath/port19      -      -     13      0  1.80M  88.0K    7ms   11ms    6ms    6ms      -    2us  587us    9ms      -      -
  mirror-10           4.38T  1.07T     28      1  3.77M   136K    9ms    8ms    8ms    5ms      -    2us    1ms    6ms      -      -
    multipath/port20      -      -     13      0  1.82M  67.9K   11ms    7ms    9ms    4ms      -    3us    1ms    4ms      -      -
    multipath/port21      -      -     14      0  1.95M  67.9K    7ms   10ms    7ms    6ms      -    2us  678us    8ms      -      -
  mirror-11           4.40T  1.05T     29      1  3.87M   159K   10ms   19ms    8ms   10ms      -    2us    1ms   14ms      -      -
    multipath/port22      -      -     14      0  1.94M  79.6K   13ms   17ms   11ms   10ms      -    2us    2ms   11ms      -      -
    multipath/port23      -      -     14      0  1.94M  79.6K    7ms   21ms    6ms   10ms      -    2us    1ms   17ms      -      -
  mirror-12           4.36T  1.09T     28      1  3.71M   112K    7ms    7ms    6ms    4ms      -    2us  791us    5ms      -      -
    multipath/port24      -      -     15      0  1.96M  56.0K    7ms    7ms    6ms    4ms      -    2us  998us    6ms      -      -
    multipath/port25      -      -     13      0  1.74M  56.0K    6ms    7ms    5ms    4ms      -    2us  559us    5ms      -      -
  mirror-13           4.40T  1.05T     31      0  4.09M   106K   12ms   13ms   11ms    8ms      -    2us    1ms   12ms      -      -
    multipath/port26      -      -     15      0  1.97M  53.1K   11ms   12ms   10ms    8ms      -    2us  960us    9ms      -      -
    multipath/port27      -      -     16      0  2.12M  53.1K   14ms   15ms   12ms    9ms      -    2us    1ms   16ms      -      -
  mirror-14           4.37T  1.08T     28      1  3.68M   180K   13ms   22ms   11ms    4ms      -    2us    1ms   25ms      -      -
    multipath/port28      -      -     14      0  1.94M  90.2K   15ms   29ms   13ms    5ms      -    2us    1ms   32ms      -      -
    multipath/port29      -      -     13      0  1.74M  90.2K   10ms   16ms    8ms    3ms      -    2us    2ms   18ms      -      -
  mirror-15           4.38T  1.07T     26      1  3.44M   173K   13ms   25ms   11ms    9ms      -    2us    2ms   23ms      -      -
    multipath/port30      -      -     13      0  1.70M  86.3K   19ms   24ms   15ms    8ms      -    3us    3ms   23ms      -      -
    multipath/port31      -      -     13      0  1.74M  86.3K    8ms   25ms    7ms    9ms      -    2us  847us   24ms      -      -
  mirror-16           4.38T  1.07T     26      1  3.48M   106K   11ms    8ms    9ms    6ms      -    2us    1ms    4ms      -      -
    multipath/port32      -      -     12      0  1.64M  53.0K   14ms    8ms   12ms    6ms      -    2us    1ms    5ms      -      -
    multipath/port33      -      -     14      0  1.84M  53.0K    7ms    9ms    7ms    7ms      -    2us  384us    4ms      -      -
  mirror-17           4.40T  1.05T     28      0  3.77M  79.9K   13ms    5ms   12ms    5ms      -    2us    1ms    2ms      -      -
    multipath/port34      -      -     13      0  1.78M  40.0K    7ms    6ms    6ms    5ms      -    2us  719us    2ms      -      -
    multipath/port35      -      -     15      0  2.00M  40.0K   19ms    4ms   17ms    4ms      -    2us    2ms    2ms      -      -
  mirror-18           4.19T  1.27T     26      1  3.62M   122K   18ms   12ms   16ms    6ms      -    2us    2ms    9ms      -      -
    da147                 -      -     13      0  1.86M  60.9K   21ms   12ms   19ms    7ms      -    2us    2ms    9ms      -      -
    da148                 -      -     12      0  1.75M  60.9K   15ms   11ms   13ms    6ms      -    2us    2ms    9ms      -      -
  mirror-19           4.28T  1.18T     26      1  3.54M   186K   10ms   14ms    9ms   10ms      -    3us    1ms    7ms      -      -
    da149                 -      -     13      0  1.88M  93.2K   11ms   14ms   10ms   11ms      -    3us    1ms    6ms      -      -
    da150                 -      -     12      0  1.67M  93.2K    8ms   15ms    7ms    9ms      -    3us    1ms    8ms      -      -
  mirror-20           4.20T  1.26T     25      1  3.36M   218K    8ms   37ms    7ms   13ms      -    2us  659us   44ms      -      -
    da151                 -      -     13      0  1.79M   109K    6ms   37ms    5ms   14ms      -    2us  804us   43ms      -      -
    da152                 -      -     11      0  1.56M   109K   10ms   38ms    9ms   13ms      -    2us  496us   45ms      -      -
  mirror-21           4.26T  1.20T     28      1  3.71M   128K    7ms   22ms    6ms    9ms      -    2us  916us   18ms      -      -
    da153                 -      -     14      0  1.87M  63.9K    7ms   25ms    6ms   10ms      -    2us  569us   19ms      -      -
    da154                 -      -     14      0  1.83M  63.9K    8ms   20ms    7ms    8ms      -    2us    1ms   16ms      -      -
  mirror-22           4.27T  1.18T     26      1  3.42M   198K    9ms   73ms    8ms   14ms      -    3us  806us   96ms      -      -
    da155                 -      -     13      0  1.79M  99.2K    9ms   72ms    8ms   14ms      -    3us    1ms   97ms      -      -
    da156                 -      -     12      0  1.63M  99.2K    9ms   74ms    8ms   14ms      -    2us  491us   95ms      -      -
  mirror-23           4.27T  1.18T     30      0  4.01M  80.3K   10ms   12ms    8ms    5ms      -    2us    1ms   15ms      -      -
    da157                 -      -     15      0  2.04M  40.2K   11ms    5ms    8ms    3ms      -    2us    2ms    5ms      -      -
    da158                 -      -     15      0  1.97M  40.2K   10ms   18ms    8ms    7ms      -    2us    1ms   25ms      -      -
  mirror-24           2.88T   763G     35      0  5.04M  94.4K    4ms    5ms    3ms    4ms      -    2us  681us    1ms      -      -
    da160                 -      -     17      0  2.56M  47.2K    4ms    3ms    3ms    3ms      -    2us  700us    1ms      -      -
    da161                 -      -     17      0  2.49M  47.2K    4ms    6ms    3ms    6ms      -    2us  662us    1ms      -      -
  mirror-25           2.88T   762G     37      1  5.15M  82.7K   14ms    7ms   10ms    4ms      -    2us    4ms    5ms      -      -
    da159                 -      -     18      0  2.49M  41.4K   20ms   15ms   14ms    7ms      -    2us    6ms   11ms      -      -
    da162                 -      -     19      0  2.66M  41.4K    9ms  638us    6ms  498us      -    2us    2ms  239us      -      -
  mirror-26           2.91T   736G     34      0  4.60M  78.4K    3ms    2ms    2ms    2ms      -    2us  486us    2ms      -      -
    da163                 -      -     17      0  2.38M  39.2K    3ms    2ms    2ms    2ms      -    3us  494us  919us      -      -
    da164                 -      -     16      0  2.22M  39.2K    3ms    3ms    2ms    2ms      -    2us  477us    3ms      -      -
  mirror-27           2.91T   729G     34      0  4.66M  99.0K    2ms    6ms    2ms    3ms      -    2us  400us    8ms      -      -
    da165                 -      -     17      0  2.30M  49.5K    2ms    6ms    2ms    3ms      -    2us  415us    9ms      -      -
    da166                 -      -     17      0  2.36M  49.5K    2ms    6ms    2ms    3ms      -    2us  386us    8ms      -      -
  mirror-28           2.93T   713G     34      0  4.61M  91.4K    3ms    7ms    2ms    3ms      -    2us  379us   10ms      -      -
    da167                 -      -     17      0  2.42M  45.7K    3ms    8ms    2ms    3ms      -    3us  388us   14ms      -      -
    da168                 -      -     16      0  2.19M  45.7K    3ms    5ms    2ms    3ms      -    2us  370us    5ms      -      -
  mirror-29           2.82T   821G     33      0  4.58M   109K    3ms    2ms    2ms    1ms      -    2us  440us    1ms      -      -
    da169                 -      -     16      0  2.23M  54.7K    3ms    2ms    2ms    1ms      -    3us  429us    1ms      -      -
    da170                 -      -     17      0  2.35M  54.7K    2ms    2ms    2ms    1ms      -    2us  451us    1ms      -      -
  mirror-30           2.83T   813G      0      0      0  81.8K      -    4ms      -    3ms      -    2us      -    2ms      -      -
    da171                 -      -      0      0      0  40.9K      -    3ms      -    3ms      -    2us      -  984us      -      -
    da172                 -      -      0      0      0  40.9K      -    4ms      -    3ms      -    2us      -    2ms      -      -
  mirror-31           2.81T   834G      0      1      0   111K      -    4ms      -    2ms      -    2us      -    2ms      -      -
    da173                 -      -      0      0      0  55.4K      -    3ms      -    2ms      -    3us      -    2ms      -      -
    da174                 -      -      0      0      0  55.4K      -    4ms      -    2ms      -    2us      -    3ms      -      -
  mirror-32           2.81T   835G      0      0      0  67.8K      -   12ms      -   12ms      -    2us      -    1ms      -      -
    da175                 -      -      0      0      0  33.9K      -   12ms      -   12ms      -    2us      -    1ms      -      -
    da176                 -      -      0      0      0  33.9K      -   12ms      -   12ms      -    2us      -    1ms      -      -
  mirror-33           2.81T   830G      0      1      0  93.8K      -    2ms      -    2ms      -    3us      -  546us      -      -
    da177                 -      -      0      0      0  46.9K      -    2ms      -    2ms      -    3us      -  623us      -      -
    da178                 -      -      0      0      0  46.9K      -    2ms      -    2ms      -    2us      -  465us      -      -
  mirror-34           2.79T   859G      0      1      0   147K      -    5ms      -    3ms      -    2us      -    2ms      -      -
    da179                 -      -      0      0      0  73.3K      -    5ms      -    3ms      -    3us      -    2ms      -      -
    da180                 -      -      0      0      0  73.3K      -    5ms      -    3ms      -    2us      -    2ms      -      -
  mirror-35           2.79T   859G      0      0      0  77.8K      -    3ms      -    3ms      -    2us      -    1ms      -      -
    da181                 -      -      0      0      0  38.9K      -    3ms      -    3ms      -    2us      -    1ms      -      -
    da182                 -      -      0      0      0  38.9K      -    3ms      -    3ms      -    2us      -  985us      -      -
  mirror-36           2.81T   837G      0      1      0  86.2K      -    3ms      -    3ms      -    2us      -    1ms      -      -
    da190                 -      -      0      0      0  43.1K      -    1ms      -  889us      -    2us      -    1ms      -      -
    da183                 -      -      0      0      0  43.1K      -    6ms      -    5ms      -    2us      -  812us      -      -
  mirror-37           3.04T   597G      0      1      0   124K      -    4ms      -    2ms      -    2us      -    3ms      -      -
    da184                 -      -      0      0      0  62.1K      -    2ms      -    1ms      -    2us      -    2ms      -      -
    da191                 -      -      0      0      0  62.1K      -    5ms      -    3ms      -    2us      -    4ms      -      -
  mirror-38           2.81T   837G      0      0      0  86.3K      -    4ms      -    2ms      -    2us      -    6ms      -      -
    da185                 -      -      0      0      0  43.2K      -    4ms      -    2ms      -    2us      -    5ms      -      -
    da186                 -      -      0      0      0  43.2K      -    4ms      -    2ms      -    2us      -    6ms      -      -
  mirror-39           2.78T   863G      0      0     68  96.6K   12ms    4ms   12ms    3ms    3us    2us      -    4ms      -      -
    da187                 -      -      0      0      0  48.3K      -    4ms      -    3ms      -    3us      -    4ms      -      -
    da188                 -      -      0      0     68  48.3K   12ms    4ms   12ms    3ms    3us    2us      -    4ms      -      -
  mirror-40           4.09T  1.36T      0      2      0   495K      -   10ms      -    2ms      -  102us      -   11ms      -      -
    da146                 -      -      0      0      0   165K      -   18ms      -    3ms      -  199us      -   20ms      -      -
    da126                 -      -      0      0      0   165K      -    5ms      -    1ms      -   33us      -    5ms      -      -
    da145                 -      -      0      0      0   165K      -    7ms      -    2ms      -   76us      -    8ms      -      -
  mirror-41           3.83T  1.63T      0      1      0   274K      -    8ms      -    3ms      -   81us      -    8ms      -      -
    da127                 -      -      0      0      0   137K      -    8ms      -    3ms      -   81us      -    9ms      -      -
    da128                 -      -      0      0      0   137K      -    8ms      -    3ms      -   80us      -    7ms      -      -
  mirror-42           3.83T  1.63T      0      1      0   195K      -    5ms      -    3ms      -    2us      -    4ms      -      -
    da129                 -      -      0      0      0  97.6K      -    6ms      -    3ms      -    3us      -    4ms      -      -
    da130                 -      -      0      0      0  97.6K      -    5ms      -    2ms      -    2us      -    3ms      -      -
  mirror-43           3.83T  1.63T      0      2      0   258K      -   10ms      -    3ms      -    2us      -    8ms      -      -
    da131                 -      -      0      1      0   129K      -    7ms      -    2ms      -    2us      -    5ms      -      -
    da132                 -      -      0      1      0   129K      -   13ms      -    3ms      -    2us      -   11ms      -      -
  mirror-44           3.83T  1.63T      0      1      0   167K      -    2ms      -    2ms      -   26us      -    1ms      -      -
    da133                 -      -      0      0      0  83.5K      -    3ms      -    2ms      -   14us      -    1ms      -      -
    da134                 -      -      0      0      0  83.5K      -    2ms      -    2ms      -   37us      -    1ms      -      -
  mirror-45           3.83T  1.63T      0      1      0   213K      -   12ms      -    3ms      -    2us      -    9ms      -      -
    da135                 -      -      0      0      0   107K      -    9ms      -    3ms      -    2us      -    6ms      -      -
    da136                 -      -      0      0      0   107K      -   14ms      -    3ms      -    2us      -   12ms      -      -
  mirror-46           3.83T  1.62T      0      1      0   205K      -    5ms      -    3ms      -   88us      -    4ms      -      -
    da137                 -      -      0      0      0   103K      -    5ms      -    3ms      -   60us      -    3ms      -      -
    da138                 -      -      0      0      0   103K      -    5ms      -    3ms      -  115us      -    5ms      -      -
  mirror-47           3.83T  1.63T      0      1      0   260K      -   24ms      -   12ms      -  631us      -   19ms      -      -
    da139                 -      -      0      0      0   130K      -   22ms      -   12ms      -  422us      -   18ms      -      -
    da140                 -      -      0      0      0   130K      -   27ms      -   11ms      -  841us      -   20ms      -      -
  mirror-48           3.83T  1.63T      0      1      0   268K      -   14ms      -    4ms      -  474us      -   16ms      -      -
    da141                 -      -      0      0      0   134K      -   13ms      -    5ms      -  841us      -   12ms      -      -
    da142                 -      -      0      0      0   134K      -   16ms      -    3ms      -  107us      -   19ms      -      -
  mirror-49           3.83T  1.62T      0      1      0   270K      -    7ms      -    4ms      -    3us      -    6ms      -      -
    da143                 -      -      0      0      0   135K      -    7ms      -    4ms      -    3us      -    5ms      -      -
    da144                 -      -      0      0      0   135K      -    8ms      -    4ms      -    3us      -    7ms      -      -
special                   -      -      -      -      -      -      -      -      -      -      -      -      -      -      -      -
  ada0                5.43T  1.55T      3     58  20.7K  1.07M  178us    2ms  174us  140us    2us      -      -    2ms      -      -
 
Before worrying about anything else, I would get some redundancy on your special device. You've got a single standard-endurance flash drive there, and it is a single point of failure for the whole pool: if the special vdev is lost, the pool is lost with it.

From zpoolconcepts(7): ('**'-emphasis mine)
Code:
     special  A device dedicated solely for allocating various kinds of
              internal metadata, and optionally small file blocks.  **The
              redundancy of this device should match the redundancy of the
              other normal devices in the pool.**  If more than one special
              device is specified, then allocations are load-balanced between
              those devices.
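The fix is to turn the lone special device into a mirror by attaching a second drive to it. A hedged sketch, assuming your pool is `backups`, the existing special device is `ada0` (per the listing above), and `ada2` is a hypothetical spare SSD; substitute your actual device name:

```shell
# Attach a second device to the existing special device, converting it
# into a mirror (resilvering starts automatically):
zpool attach backups ada0 ada2

# Confirm the special vdev now shows as mirror-N with both members ONLINE:
zpool status backups
```

Pick a drive of at least the same size and comparable endurance; the special vdev absorbs all metadata writes, so a low-endurance device will wear out first.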
 