Is this a bug in the mps driver?

My ZFS pool is set up as a raidz2 pool as follows:

Code:
        NAME          STATE     READ WRITE CKSUM
        myzfs         ONLINE       0     0     0
          raidz2-0    ONLINE       0     0     0
            gpt/akjc  ONLINE       0     0     0
            gpt/0773  ONLINE       0     0     0
            gpt/6062  ONLINE       0     0     0
            gpt/2651  ONLINE       0     0     0
            gpt/gvkc  ONLINE       0     0     0
            gpt/ja7h  ONLINE       0     0     0
        logs
          gpt/log0    ONLINE       0     0     0

The six disks are connected to an onboard SAS2008 HBA using the mps(4) driver, while the log device is connected to an ahci(4) interface.

Recently, I tried to write a DTrace script to measure disk I/Os. I found that whenever there were sync writes, the ahci driver would send one sync command to its drive, while the mps driver would send two sync commands to each drive in the pool, increasing the latencies of those disks significantly. My question is: should this be considered a bug in the mps driver, and should I report it?
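
For reference, here is a minimal sketch of the kind of script I used (not the exact one). It hooks g_io_request() at the GEOM layer; note that the numeric value of BIO_FLUSH comes from sys/bio.h and has changed between releases, so verify it against your kernel's headers:

Code:
/* count-flushes.d -- count cache-flush (BIO_FLUSH) requests per provider */
#pragma D option quiet

inline int MY_BIO_FLUSH = 0x05;   /* check sys/bio.h; older kernels used 0x10 */

fbt::g_io_request:entry
/args[0]->bio_cmd == MY_BIO_FLUSH/
{
        /* args[1] is the g_consumer; its provider carries the disk name */
        @flush[stringof(args[1]->provider->name)] = count();
}

tick-10s
{
        printa("%-10s %@8d flushes\n", @flush);
        trunc(@flush);
}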
 
I am using the latest driver. Further instrumentation suggests that this really is the correct behaviour for ZFS: as far as I can tell, during each transaction group commit ZFS flushes the write caches of all leaf vdevs once before writing the uberblocks (so everything they reference is on stable storage) and once more afterwards, which accounts for the two sync commands per disk:

Code:
1145715382631222 vdev_config_sync
1145715382677212 ada0                               <--- sync begins
1145715382712384 da0
1145715382747498 da3
1145715382755185 da2
1145715382787783 da5
1145715382794004 da1
1145715382825821 da4
1145715383691279 dsl_pool_sync_context
1145715383726360 dsl_pool_sync_context
1145715383968588 dsl_pool_sync_context
1145715383973194 dsl_pool_sync_context
1145715384176831 dsl_pool_sync_context
1145715384207206 dsl_pool_sync_context
1145715384413634 dsl_pool_sync_context
1145715384418129 dsl_pool_sync_context
1145715384647256 dsl_pool_sync_context
1145715384658957 dsl_pool_sync_context
1145715391385137 zfs_sync
1145715549664998 vdev_label_sync_list
1145715549673206 vdev_uberblock_sync_list
1145715549676373 vdev_uberblock_sync
1145715549678501 vdev_uberblock_sync                <----- sync uberblock
1145715549702698 vdev_uberblock_sync
1145715549711938 vdev_uberblock_sync
1145715549720420 vdev_uberblock_sync
1145715549739658 vdev_uberblock_sync
1145715549748262 vdev_uberblock_sync
1145715553397569 da0                                <---- another round of sync
1145715553406088 da3
1145715553415079 da2
1145715553433392 da5
1145715553439616 da1
1145715553445770 da4
1145715624215422 vdev_label_sync_list
1145715624224469 dsl_pool_sync_done
1145715624231452 vdev_sync_done
1145715624250372 metaslab_sync_done
1145715624257012 metaslab_sync_reassess
1145715624340958 vdev_sync_done
1145715624345313 metaslab_sync_done
1145715624386954 metaslab_sync_done
1145715624417512 metaslab_sync_done
1145715624424065 metaslab_sync_reassess

The first column is the timestamp; the second is either a disk that received a sync command or a ZFS function on the sync path that was entered.
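
For completeness, a trace like the one above can be produced along these lines. The fbt probe list below is only a representative subset of the functions in my trace; the exact ZFS function names depend on the version, and BIO_FLUSH again needs checking against sys/bio.h:

Code:
/* trace-sync.d -- timestamped trace of ZFS sync-path entries and disk flushes */
#pragma D option quiet

inline int MY_BIO_FLUSH = 0x05;   /* check sys/bio.h; older kernels used 0x10 */

/* ZFS sync-path functions of interest (adjust to taste) */
fbt::vdev_config_sync:entry,
fbt::zfs_sync:entry,
fbt::vdev_label_sync_list:entry,
fbt::vdev_uberblock_sync_list:entry,
fbt::vdev_uberblock_sync:entry,
fbt::dsl_pool_sync_done:entry
{
        printf("%d %s\n", timestamp, probefunc);
}

/* cache flushes actually issued to the disks */
fbt::g_io_request:entry
/args[0]->bio_cmd == MY_BIO_FLUSH/
{
        printf("%d %s\n", timestamp, stringof(args[1]->provider->name));
}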
 