Hello,
I've been having some issues with ZFS performance.
I'm running FreeBSD 8.0-RC1 on an AMD X2 3000 with 4 GB of DDR RAM.
The hard drives are three 1 TB Caviar Greens, attached to three different Silicon Image PCI-E controllers.
Code:
serenity# zpool status tank
  pool: tank
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          raidz1    ONLINE       0     0     0
            ad6     ONLINE       0     0     0
            ad8     ONLINE       0     0     0
            ad16    ONLINE       0     0     0
No matter what configuration of drives I use, I get the same performance. I've split the RAID across all three controllers, and I've also tried putting all the drives on a single controller.
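For reference, each time I rebuilt the pool it was roughly like this (a sketch; the only thing I changed between runs was which controller each drive was cabled to):
Code:
serenity# zpool destroy tank
serenity# zpool create tank raidz ad6 ad8 ad16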
Code:
serenity# zpool iostat -v
               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
tank        10.1G  2.71T      0     76  22.9K  6.87M
  raidz1    10.1G  2.71T      0     76  22.9K  6.87M
    ad6         -      -      0     42  12.9K  3.44M
    ad8         -      -      0     42  13.3K  3.44M
    ad16        -      -      0     42  16.9K  3.44M
----------  -----  -----  -----  -----  -----  -----
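(If I'm reading this right, the writes at least add up: each disk does about 3.44 MB/s, or 10.3 MB/s total, against 6.87 MB/s of pool bandwidth; 6.87 x 3/2 is about 10.3, which is just the parity overhead of a 3-disk raidz1.)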
During a file copy I noticed the following:
gstat shows all drives at around 30% utilization.
Memory stats show about 90% of RAM still available.
The file transfer is very "jerky," as if it were filling a too-small cache, moving about 14 MB at a time.
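To watch the bursts, I just sampled the pool once a second during a copy (nothing fancy, an interval argument on iostat):
Code:
serenity# zpool iostat tank 1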
Here are my sysctl values:
Code:
serenity# sysctl -a |grep zfs
vfs.zfs.arc_meta_limit: 210805120
vfs.zfs.arc_meta_used: 308232
vfs.zfs.mdcomp_disable: 0
vfs.zfs.arc_min: 105402560
vfs.zfs.arc_max: 843220480
vfs.zfs.zfetch.array_rd_sz: 1048576
vfs.zfs.zfetch.block_cap: 256
vfs.zfs.zfetch.min_sec_reap: 2
vfs.zfs.zfetch.max_streams: 8
vfs.zfs.prefetch_disable: 1
vfs.zfs.recover: 0
vfs.zfs.txg.synctime: 5
vfs.zfs.txg.timeout: 30
vfs.zfs.scrub_limit: 10
vfs.zfs.vdev.cache.bshift: 16
vfs.zfs.vdev.cache.size: 10485760
vfs.zfs.vdev.cache.max: 16384
vfs.zfs.vdev.aggregation_limit: 131072
vfs.zfs.vdev.ramp_rate: 2
vfs.zfs.vdev.time_shift: 6
vfs.zfs.vdev.min_pending: 4
vfs.zfs.vdev.max_pending: 35
vfs.zfs.cache_flush_disable: 0
vfs.zfs.zil_disable: 0
vfs.zfs.version.zpl: 3
vfs.zfs.version.vdev_boot: 1
vfs.zfs.version.spa: 13
vfs.zfs.version.dmu_backup_stream: 1
vfs.zfs.version.dmu_backup_header: 2
vfs.zfs.version.acl: 1
vfs.zfs.debug: 0
vfs.zfs.super_owner: 0
kstat.zfs.misc.arcstats.hits: 7557
kstat.zfs.misc.arcstats.misses: 3209
kstat.zfs.misc.arcstats.demand_data_hits: 2198
kstat.zfs.misc.arcstats.demand_data_misses: 457
kstat.zfs.misc.arcstats.demand_metadata_hits: 5359
kstat.zfs.misc.arcstats.demand_metadata_misses: 2752
kstat.zfs.misc.arcstats.prefetch_data_hits: 0
kstat.zfs.misc.arcstats.prefetch_data_misses: 0
kstat.zfs.misc.arcstats.prefetch_metadata_hits: 0
kstat.zfs.misc.arcstats.prefetch_metadata_misses: 0
kstat.zfs.misc.arcstats.mru_hits: 6928
kstat.zfs.misc.arcstats.mru_ghost_hits: 2718
kstat.zfs.misc.arcstats.mfu_hits: 629
kstat.zfs.misc.arcstats.mfu_ghost_hits: 459
kstat.zfs.misc.arcstats.deleted: 123629
kstat.zfs.misc.arcstats.recycle_miss: 132460
kstat.zfs.misc.arcstats.mutex_miss: 24
kstat.zfs.misc.arcstats.evict_skip: 0
kstat.zfs.misc.arcstats.hash_elements: 1688
kstat.zfs.misc.arcstats.hash_elements_max: 7560
kstat.zfs.misc.arcstats.hash_collisions: 3385
kstat.zfs.misc.arcstats.hash_chains: 24
kstat.zfs.misc.arcstats.hash_chain_max: 2
kstat.zfs.misc.arcstats.p: 13926912
kstat.zfs.misc.arcstats.c: 105402560
kstat.zfs.misc.arcstats.c_min: 105402560
kstat.zfs.misc.arcstats.c_max: 843220480
kstat.zfs.misc.arcstats.size: 8173896
kstat.zfs.misc.arcstats.hdr_size: 363584
kstat.zfs.misc.arcstats.l2_hits: 0
kstat.zfs.misc.arcstats.l2_misses: 0
kstat.zfs.misc.arcstats.l2_feeds: 0
kstat.zfs.misc.arcstats.l2_rw_clash: 0
kstat.zfs.misc.arcstats.l2_writes_sent: 0
kstat.zfs.misc.arcstats.l2_writes_done: 0
kstat.zfs.misc.arcstats.l2_writes_error: 0
kstat.zfs.misc.arcstats.l2_writes_hdr_miss: 0
kstat.zfs.misc.arcstats.l2_evict_lock_retry: 0
kstat.zfs.misc.arcstats.l2_evict_reading: 0
kstat.zfs.misc.arcstats.l2_free_on_write: 0
kstat.zfs.misc.arcstats.l2_abort_lowmem: 0
kstat.zfs.misc.arcstats.l2_cksum_bad: 0
kstat.zfs.misc.arcstats.l2_io_error: 0
kstat.zfs.misc.arcstats.l2_size: 0
kstat.zfs.misc.arcstats.l2_hdr_size: 0
kstat.zfs.misc.arcstats.memory_throttle_count: 2827
kstat.zfs.misc.vdev_cache_stats.delegations: 2016
kstat.zfs.misc.vdev_cache_stats.hits: 3430
kstat.zfs.misc.vdev_cache_stats.misses: 1355
serenity#
According to an article I found, updates have been committed to FreeBSD that automatically allocate an appropriate amount of RAM for caching, so that specific tuning is no longer needed.
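For reference, the manual tuning the article describes as obsolete is the usual ARC/kmem sizing in /boot/loader.conf, something like the following (illustrative values, not something I have set):
Code:
vm.kmem_size="1536M"
vfs.zfs.arc_min="128M"
vfs.zfs.arc_max="768M"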
I'm not sure what the issue is; I'm at a loss.
Thank you for your help!