How do I test my ZFS speed?
SirDice said:
# /usr/bin/time -h dd if=/dev/zero of=sometestfile bs=1024 count=10
That will test write speed.
# /usr/bin/time -h dd if=sometestfile of=/dev/null bs=1024 count=10
Tests read speed.
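A bs=1024 count=10 run only moves 10 KB, which finishes far too quickly to measure anything; it really just shows the syntax. As a rough sketch (the file name, block size, and count are only examples), a multi-gigabyte run with a larger block size gives a more usable number:
# /usr/bin/time -h dd if=/dev/zero of=sometestfile bs=1m count=4096
# /usr/bin/time -h dd if=sometestfile of=/dev/null bs=1m count=4096
Also keep in mind that /dev/zero is trivially compressible, so with ZFS compression enabled the write figure will be inflated.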
# /usr/bin/time -h dd if=/dev/zero of=sometestfile bs=1024 count=30000
result: 43214651 bytes/sec, so 41.2 MB/s.
# /usr/bin/time -h dd if=sometestfile of=/dev/null bs=1024 count=30000
result: 115284629 bytes/sec, so 109.9 MB/s.
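(dd reports raw bytes per second; dividing by 1048576 gives MB/s: 43214651 / 1048576 ≈ 41.2 and 115284629 / 1048576 ≈ 109.9.)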
phoenix said:
Depends on how your pool is set up. Are you using mirrored or raidz vdevs? Are you using more than one vdev? What types of disks are you using (IDE, SATA, SCSI, SAS, SSD, etc.)? What are the disks connected to (onboard controller, PCI controller, PCI-X/PCIe controller, RAID controller, USB/FireWire/eSATA, etc.)? How much RAM is in the system? What are kmem_max and zfs_arc_max set to? What speed is the CPU? Are you using ZFS compression?
Without knowing those details, we can't say whether that is good, bad, or indifferent.
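(Those last two are loader tunables set in /boot/loader.conf. A minimal sketch, with placeholder values that would only suit a box with a few GB of RAM, looks like:
vm.kmem_size="1024M"
vm.kmem_size_max="1024M"
vfs.zfs.arc_max="512M"
Treat the numbers as examples; they need to be sized to the RAM actually in the machine.)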
For comparison, my storage servers can do 550 MBytes/sec write and 5.5 GBytes/sec read, using ZFS. But I'm betting they're much heftier boxes than the one you are testing.
Also, you shouldn't use dd for benchmarking, except as a quick first test. Have a look at the bonnie++ and iozone benchmarking tools for more in-depth benchmarks. (See my zfs how-to thread for an example of using iozone.)
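Both tools are in the ports tree as benchmarks/bonnie++ and benchmarks/iozone; assuming a checked-out ports tree, a typical install is:
# cd /usr/ports/benchmarks/bonnie++ && make install clean
# cd /usr/ports/benchmarks/iozone && make install clean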
server# zpool status tank
  pool: tank
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          raidz1    ONLINE       0     0     0
            ad6     ONLINE       0     0     0
            ad8     ONLINE       0     0     0
            ad10    ONLINE       0     0     0

errors: No known data errors
server# bonnie++ -u root -d /tank/foo/ -s 3552M -n 10:102400:1024:1024
Using uid:0, gid:0.
Writing a byte at a time...done
Writing intelligently...done
Rewriting...done
Reading a byte at a time...done
Reading intelligently...done
start 'em...done...done...done...done...done...
Create files in sequential order...done.
Stat files in sequential order...done.
Delete files in sequential order...done.
Create files in random order...done.
Stat files in random order...done.
Delete files in random order...done.
Version 1.93d       ------Sequential Output------ --Sequential Input- --Random-
Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
server        3552M    42  87 70457  37 38908  21   138  99 154701  36 165.5   9
Latency               882ms    1487ms    3123ms   74887us     373ms     449ms
Version 1.93d       ------Sequential Create------ --------Random Create--------
server              -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
files:max:min        /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
10:102400:1024/1024   724  45   812  18  5081  69   752  47   132   2  6547  96
Latency              3507ms   19311us   10634us    1237ms     250ms    7204us
1.93c,1.93d,server,1,1244675237,3552M,,42,87,70457,37,38908,21,138,99,154701,36,165.5,9,10,102400,1024,,1024,724,45,812,18,5081,69,752,47,132,2,6547,96,882ms,1487ms,3123ms,74887us,373ms,449ms,3507ms,19311us,10634us,1237ms,250ms,7204us
server# iozone -M -e -+u -T -t 32 -r 128k -s 40960 -i 0 -i 1 -i 2 -i 8 -+p 70 -C
phoenix said:
Not bad: 75 MBytes/sec write and 95 MBytes/sec read (for iozone), and in the same ballpark from bonnie++. Sounds about right for a single raidz vdev with 2 GB of RAM.
Adding more RAM may improve things, especially on the read side. The only way to improve the write speed would be to add more raidz vdevs to the pool.
For comparison, my home box (3x 120 GB SATA drives in a single raidz1 vdev) only gets 5.5 MBytes/sec write and 8.5 MBytes/sec read, using the same iozone command as you did. OUCH!! It's a 32-bit FreeBSD 7.1 install with 2 GB of RAM and only 128 MB for zfs_arc_max.
phoenix said:
Correct. If you have the space, you could add more hard drives to the system, configure them as a raidz vdev, and add that vdev to the pool. For example, if you added 3 drives and they came up as ad12, ad14, and ad16, then you could use:
# zpool add tank raidz1 ad12 ad14 ad16
Afterwards, the output of zpool status would look something like:
Code:
server# zpool status tank
  pool: tank
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          raidz1    ONLINE       0     0     0
            ad6     ONLINE       0     0     0
            ad8     ONLINE       0     0     0
            ad10    ONLINE       0     0     0
          raidz1    ONLINE       0     0     0
            ad12    ONLINE       0     0     0
            ad14    ONLINE       0     0     0
            ad16    ONLINE       0     0     0

errors: No known data errors
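Once the second vdev is in place, you can watch per-vdev throughput to confirm that new writes are being spread across both raidz1 vdevs (the 5-second interval here is just an example):
# zpool iostat -v tank 5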