Hi all,
I have a performance problem with ZFS that I just can't seem to resolve.
I have an external four-bay eSATA enclosure with four disks in it, which I'm using essentially as a NAS on my home network. Each of these drives, when accessed directly with dd, can easily exceed 100 MBps throughput. When I create a ZFS pool on them, in any configuration, performance tanks dramatically. This is true of all four drives, which are different models from four different manufacturers (Seagate, Samsung, WD, Hitachi).
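For reference, the raw-device read test I've been running is basically the following (ada1 is a placeholder for the actual device name):

Code:
# dd if=/dev/ada1 of=/dev/null bs=1m count=10000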
My end goal is a pool containing four disks in two mirrors (essentially RAID 10). The performance seems limited to roughly 60-70 MBps regardless of configuration. I've tried the following (rough example commands for each layout are sketched after the list):
- A pool containing a single disk.
- A pool containing two disks striped.
- A pool containing two disks in a mirror.
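The layouts above were created roughly like this (ada1 through ada4 are placeholders for the actual device names):

Code:
# zpool create test ada1                               # single disk
# zpool create test ada1 ada2                          # two-disk stripe
# zpool create test mirror ada1 ada2                   # two-disk mirror
# zpool create test mirror ada1 ada2 mirror ada3 ada4  # the end goal, two mirrors striped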
A single disk under ZFS will sustain about 60-70 MBps on reads. Two disks in a ZFS mirror or stripe will together sustain the same roughly 60-70 MBps (each disk only reads at about 30 MBps). If I launch two instances of dd, one reading from each of those two raw devices, they will saturate the SATA bus. I have tried both giving the whole disks to ZFS to manage and using GPT partitions to ensure 4K alignment on two of the drives; neither made a significant difference.
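Concretely, the parallel raw reads and the 4K-aligned GPT attempt looked roughly like this (device names and labels are examples):

Code:
# dd if=/dev/ada1 of=/dev/null bs=1m &   # two raw reads in parallel will saturate the bus
# dd if=/dev/ada2 of=/dev/null bs=1m &
# gpart create -s gpt ada1               # 4K-aligned partition attempt on two of the drives
# gpart add -t freebsd-zfs -a 4k -l disk1 ada1
# zpool create test mirror gpt/disk1 gpt/disk2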
CPU: Intel(R) Core(TM)2 CPU 4300 @ 1.80 GHz (1794.23 MHz K8-class CPU)
Real memory = 6442450944 (6144 MB)
Installed version of FreeBSD is 9.1-RELEASE.
I'm about to migrate this server either to FreeBSD 10 or to Linux. I'd prefer to keep it on FreeBSD, but I don't see this performance issue with ZFS on Linux. As it stands, my ZFS array can't even saturate my gigabit Ethernet, which is unacceptable.
Below are some ZFS details. I know atime is enabled on this mount, but I'm testing on a single file, so it's only one access-time update, and the other ZFS mirror in that chassis has atime disabled.
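(For what it's worth, turning atime off on the test dataset would just be the usual one-liner; I haven't done it here because the test is a single file.)

Code:
# zfs set atime=off test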
Thanks in advance for your time and thoughts!
Code:
# zfs get all test
NAME  PROPERTY              VALUE                  SOURCE
test  type                  filesystem             -
test  creation              Wed Feb 26 16:00 2014  -
test  used                  9.77G                  -
test  available             2.67T                  -
test  referenced            9.77G                  -
test  compressratio         1.00x                  -
test  mounted               yes                    -
test  quota                 none                   default
test  reservation           none                   default
test  recordsize            128K                   default
test  mountpoint            /test                  default
test  sharenfs              off                    default
test  checksum              on                     default
test  compression           off                    default
test  atime                 on                     default
test  devices               on                     default
test  exec                  on                     default
test  setuid                on                     default
test  readonly              off                    default
test  jailed                off                    default
test  snapdir               hidden                 default
test  aclmode               discard                default
test  aclinherit            restricted             default
test  canmount              on                     default
test  xattr                 off                    temporary
test  copies                1                      default
test  version               5                      -
test  utf8only              off                    -
test  normalization         none                   -
test  casesensitivity       sensitive              -
test  vscan                 off                    default
test  nbmand                off                    default
test  sharesmb              off                    default
test  refquota              none                   default
test  refreservation        none                   default
test  primarycache          all                    default
test  secondarycache        all                    default
test  usedbysnapshots       0                      -
test  usedbydataset         9.77G                  -
test  usedbychildren        240K                   -
test  usedbyrefreservation  0                      -
test  logbias               latency                default
test  dedup                 off                    default
test  mlslabel                                     -
test  sync                  standard               default
test  refcompressratio      1.00x                  -
test  written               9.77G                  -
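The eSATA controller (from the boot messages) and the ZFS-related sysctl output:

Code: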
ahci0: <Marvell 88SE912x AHCI SATA controller> port 0xdce0-0xdce7,0xdcd8-0xdcdb,0xdce8-0xdcef,0xdcdc-0xdcdf,0xdcf0-0xdcff mem 0xdfbff800-0xdfbfffff irq 16 at device 0.0 on pci1
ahci0: AHCI v1.20 with 8 6Gbps ports, Port Multiplier not supported
vfs.zfs.l2c_only_size: 0
vfs.zfs.mfu_ghost_data_lsize: 2289741312
vfs.zfs.mfu_ghost_metadata_lsize: 412580864
vfs.zfs.mfu_ghost_size: 2702322176
vfs.zfs.mfu_data_lsize: 30138880
vfs.zfs.mfu_metadata_lsize: 1067008
vfs.zfs.mfu_size: 86690304
vfs.zfs.mru_ghost_data_lsize: 1084065792
vfs.zfs.mru_ghost_metadata_lsize: 356906496
vfs.zfs.mru_ghost_size: 1440972288
vfs.zfs.mru_data_lsize: 2139996160
vfs.zfs.mru_metadata_lsize: 535059456
vfs.zfs.mru_size: 2708894720
vfs.zfs.anon_data_lsize: 0
vfs.zfs.anon_metadata_lsize: 0
vfs.zfs.anon_size: 2408448
vfs.zfs.l2arc_norw: 1
vfs.zfs.l2arc_feed_again: 1
vfs.zfs.l2arc_noprefetch: 1
vfs.zfs.l2arc_feed_min_ms: 200
vfs.zfs.l2arc_feed_secs: 1
vfs.zfs.l2arc_headroom: 2
vfs.zfs.l2arc_write_boost: 8388608
vfs.zfs.l2arc_write_max: 8388608
vfs.zfs.arc_meta_limit: 1262219264
vfs.zfs.arc_meta_used: 2000077656
vfs.zfs.arc_min: 631109632
vfs.zfs.arc_max: 5048877056
vfs.zfs.dedup.prefetch: 1
vfs.zfs.mdcomp_disable: 0
vfs.zfs.write_limit_override: 0
vfs.zfs.write_limit_inflated: 19024920576
vfs.zfs.write_limit_max: 792705024
vfs.zfs.write_limit_min: 33554432
vfs.zfs.write_limit_shift: 3
vfs.zfs.no_write_throttle: 0
vfs.zfs.zfetch.array_rd_sz: 1048576
vfs.zfs.zfetch.block_cap: 256
vfs.zfs.zfetch.min_sec_reap: 2
vfs.zfs.zfetch.max_streams: 8
vfs.zfs.prefetch_disable: 0
vfs.zfs.mg_alloc_failures: 8
vfs.zfs.check_hostid: 1
vfs.zfs.recover: 0
vfs.zfs.txg.synctime_ms: 1000
vfs.zfs.txg.timeout: 5
vfs.zfs.vdev.cache.bshift: 16
vfs.zfs.vdev.cache.size: 0
vfs.zfs.vdev.cache.max: 16384
vfs.zfs.vdev.write_gap_limit: 4096
vfs.zfs.vdev.read_gap_limit: 32768
vfs.zfs.vdev.aggregation_limit: 131072
vfs.zfs.vdev.ramp_rate: 2
vfs.zfs.vdev.time_shift: 6
vfs.zfs.vdev.min_pending: 4
vfs.zfs.vdev.max_pending: 10
vfs.zfs.vdev.bio_flush_disable: 0
vfs.zfs.cache_flush_disable: 0
vfs.zfs.zil_replay_disable: 0
vfs.zfs.zio.use_uma: 0
vfs.zfs.snapshot_list_prefetch: 0
vfs.zfs.version.zpl: 5
vfs.zfs.version.spa: 28
vfs.zfs.version.acl: 1
vfs.zfs.debug: 0
vfs.zfs.super_owner: 0
security.jail.param.allow.mount.zfs: 0
security.jail.mount_zfs_allowed: 0
kstat.zfs.misc.xuio_stats.onloan_read_buf: 0
kstat.zfs.misc.xuio_stats.onloan_write_buf: 0
kstat.zfs.misc.xuio_stats.read_buf_copied: 0
kstat.zfs.misc.xuio_stats.read_buf_nocopy: 0
kstat.zfs.misc.xuio_stats.write_buf_copied: 0
kstat.zfs.misc.xuio_stats.write_buf_nocopy: 404258
kstat.zfs.misc.zfetchstats.hits: 175179193
kstat.zfs.misc.zfetchstats.misses: 5846446
kstat.zfs.misc.zfetchstats.colinear_hits: 1250
kstat.zfs.misc.zfetchstats.colinear_misses: 5845196
kstat.zfs.misc.zfetchstats.stride_hits: 174390568
kstat.zfs.misc.zfetchstats.stride_misses: 1348
kstat.zfs.misc.zfetchstats.reclaim_successes: 28295
kstat.zfs.misc.zfetchstats.reclaim_failures: 5816901
kstat.zfs.misc.zfetchstats.streams_resets: 82
kstat.zfs.misc.zfetchstats.streams_noresets: 788536
kstat.zfs.misc.zfetchstats.bogus_streams: 0
kstat.zfs.misc.arcstats.hits: 91927308
kstat.zfs.misc.arcstats.misses: 1396356
kstat.zfs.misc.arcstats.demand_data_hits: 84422170
kstat.zfs.misc.arcstats.demand_data_misses: 10883
kstat.zfs.misc.arcstats.demand_metadata_hits: 5929721
kstat.zfs.misc.arcstats.demand_metadata_misses: 614442
kstat.zfs.misc.arcstats.prefetch_data_hits: 12958
kstat.zfs.misc.arcstats.prefetch_data_misses: 646102
kstat.zfs.misc.arcstats.prefetch_metadata_hits: 1562459
kstat.zfs.misc.arcstats.prefetch_metadata_misses: 124929
kstat.zfs.misc.arcstats.mru_hits: 4522308
kstat.zfs.misc.arcstats.mru_ghost_hits: 313472
kstat.zfs.misc.arcstats.mfu_hits: 85829777
kstat.zfs.misc.arcstats.mfu_ghost_hits: 237884
kstat.zfs.misc.arcstats.allocated: 1947469
kstat.zfs.misc.arcstats.deleted: 963406
kstat.zfs.misc.arcstats.stolen: 1126801
kstat.zfs.misc.arcstats.recycle_miss: 435272
kstat.zfs.misc.arcstats.mutex_miss: 1804
kstat.zfs.misc.arcstats.evict_skip: 1123669
kstat.zfs.misc.arcstats.evict_l2_cached: 0
kstat.zfs.misc.arcstats.evict_l2_eligible: 115656109056
kstat.zfs.misc.arcstats.evict_l2_ineligible: 10684319744
kstat.zfs.misc.arcstats.hash_elements: 311004
kstat.zfs.misc.arcstats.hash_elements_max: 322470
kstat.zfs.misc.arcstats.hash_collisions: 1187880
kstat.zfs.misc.arcstats.hash_chains: 89713
kstat.zfs.misc.arcstats.hash_chain_max: 12
kstat.zfs.misc.arcstats.p: 3910849857
kstat.zfs.misc.arcstats.c: 4171730979
kstat.zfs.misc.arcstats.c_min: 631109632
kstat.zfs.misc.arcstats.c_max: 5048877056
kstat.zfs.misc.arcstats.size: 4170212696
kstat.zfs.misc.arcstats.hdr_size: 73015904
kstat.zfs.misc.arcstats.data_size: 2797993472
kstat.zfs.misc.arcstats.other_size: 1299203320
kstat.zfs.misc.arcstats.l2_hits: 0
kstat.zfs.misc.arcstats.l2_misses: 0
kstat.zfs.misc.arcstats.l2_feeds: 0
kstat.zfs.misc.arcstats.l2_rw_clash: 0
kstat.zfs.misc.arcstats.l2_read_bytes: 0
kstat.zfs.misc.arcstats.l2_write_bytes: 0
kstat.zfs.misc.arcstats.l2_writes_sent: 0
kstat.zfs.misc.arcstats.l2_writes_done: 0
kstat.zfs.misc.arcstats.l2_writes_error: 0
kstat.zfs.misc.arcstats.l2_writes_hdr_miss: 0
kstat.zfs.misc.arcstats.l2_evict_lock_retry: 0
kstat.zfs.misc.arcstats.l2_evict_reading: 0
kstat.zfs.misc.arcstats.l2_free_on_write: 0
kstat.zfs.misc.arcstats.l2_abort_lowmem: 0
kstat.zfs.misc.arcstats.l2_cksum_bad: 0
kstat.zfs.misc.arcstats.l2_io_error: 0
kstat.zfs.misc.arcstats.l2_size: 0
kstat.zfs.misc.arcstats.l2_hdr_size: 0
kstat.zfs.misc.arcstats.memory_throttle_count: 76
kstat.zfs.misc.arcstats.l2_write_trylock_fail: 0
kstat.zfs.misc.arcstats.l2_write_passed_headroom: 0
kstat.zfs.misc.arcstats.l2_write_spa_mismatch: 0
kstat.zfs.misc.arcstats.l2_write_in_l2: 0
kstat.zfs.misc.arcstats.l2_write_io_in_progress: 0
kstat.zfs.misc.arcstats.l2_write_not_cacheable: 96729
kstat.zfs.misc.arcstats.l2_write_full: 0
kstat.zfs.misc.arcstats.l2_write_buffer_iter: 0
kstat.zfs.misc.arcstats.l2_write_pios: 0
kstat.zfs.misc.arcstats.l2_write_buffer_bytes_scanned: 0
kstat.zfs.misc.arcstats.l2_write_buffer_list_iter: 0
kstat.zfs.misc.arcstats.l2_write_buffer_list_null_iter: 0
kstat.zfs.misc.vdev_cache_stats.delegations: 0
kstat.zfs.misc.vdev_cache_stats.hits: 0
kstat.zfs.misc.vdev_cache_stats.misses: 0