Test ZFS speed

# /usr/bin/time -h dd if=/dev/zero of=sometestfile bs=1024 count=10
That will test write speed

# /usr/bin/time -h dd if=sometestfile of=/dev/null bs=1024 count=10
Tests read speed.
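(A note on sizing: bs=1024 count=10 only moves 10 KB, so treat those lines as syntax examples. For a number that means anything, write far more data than your RAM can cache, e.g. something like the line below, which pushes roughly 4 GB; the size is just an example.)
# /usr/bin/time -h dd if=/dev/zero of=sometestfile bs=1m count=4096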
 
SirDice said:
# /usr/bin/time -h dd if=/dev/zero of=sometestfile bs=1024 count=10
That will test write speed

# /usr/bin/time -h dd if=sometestfile of=/dev/null bs=1024 count=10
Tests read speed.

When you say "sometestfile", is that where I put my pool name? I don't see where you put the name of the pool to be tested. I have my OS installed on another HD, and I have 3x 500 GB HDs in a zpool named tank.
 
I cd'd into /tank (my zpool) and ran:
Code:
# /usr/bin/time -h dd if=/dev/zero of=sometestfile bs=1024 count=30000
result: 43214651 bytes/sec, so 41.2 MB/s.
# /usr/bin/time -h dd if=sometestfile of=/dev/zero bs=1024 count=30000
result: 115284629 bytes/sec, so 109.9 MB/s.
Is that any good?
 
Depends on how your pool is set up. Are you using mirrored or raidz vdevs? Are you using more than one vdev? What types of disks are you using (IDE, SATA, SCSI, SAS, SSD, etc.)? What are the disks connected to (onboard controller, PCI controller, PCI-X/PCIe controller, RAID controller, USB/FireWire/eSATA, etc.)? How much RAM is in the system? What are kmem_max and zfs_arc_max set to? What speed is the CPU? Are you using ZFS compression?

Without knowing those details, we can't say whether or not that is good, bad, or indifferent.
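(kmem_size_max and arc_max are sysctl OIDs on FreeBSD, so you can check the current values directly; hw.physmem is included here just to show total RAM.)
# sysctl hw.physmem vm.kmem_size_max vfs.zfs.arc_max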

For comparison, my storage servers can do 550 MBytes/sec write and 5.5 GBytes/sec read, using ZFS. But I'm betting they're much heftier boxes than the one you are testing. :)

Also, you shouldn't use dd for benchmarking, except as a quick first test. Have a look at the bonnie++ and iozone benchmarking tools for more in-depth benchmarks. (See my zfs how-to thread for an example of using iozone.)
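For instance, one way to run iozone (the file size, record size, and target path here are only examples; make the file at least a couple of times larger than RAM so the ARC can't cache the whole thing):
Code:
# iozone -i 0 -i 1 -s 4g -r 128k -f /tank/iozone.tmp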
 
phoenix said:
Depends on how your pool is set up. Are you using mirrored or raidz vdevs? Are you using more than one vdev? What types of disks are you using (IDE, SATA, SCSI, SAS, SSD, etc.)? What are the disks connected to (onboard controller, PCI controller, PCI-X/PCIe controller, RAID controller, USB/FireWire/eSATA, etc.)? How much RAM is in the system? What are kmem_max and zfs_arc_max set to? What speed is the CPU? Are you using ZFS compression?

Without knowing those details, we can't say whether or not that is good, bad, or indifferent.

For comparison, my storage servers can do 550 MBytes/sec write and 5.5 GBytes/sec read, using ZFS. But I'm betting they're much heftier boxes than the one you are testing. :)

Also, you shouldn't use dd for benchmarking, except as a quick first test. Have a look at the bonnie++ and iozone benchmarking tools for more in-depth benchmarks. (See my zfs how-to thread for an example of using iozone.)
Code:
server# zpool status tank
  pool: tank
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          raidz1    ONLINE       0     0     0
            ad6     ONLINE       0     0     0
            ad8     ONLINE       0     0     0
            ad10    ONLINE       0     0     0

errors: No known data errors

My drives are 3 WD 500 GB SATA drives, 7200 RPM, connected directly to my MB controller. The system has 2 GB of RAM (1 stick, not dual channel), vm.kmem_size_max: 4509713203, vfs.zfs.arc_max: 377282560. My CPU is an Athlon 6000+ running at 3.1 GHz. No compression.
 
SuperMiguel, please post your system output in [code] tags; I can't keep editing every single post.
 
Code:
server# bonnie++ -u root -d /tank/foo/ -s 3552M -n 10:102400:1024:1024
Using uid:0, gid:0.
Writing a byte at a time...done
Writing intelligently...done
Rewriting...done
Reading a byte at a time...done
Reading intelligently...done
start 'em...done...done...done...done...done...
Create files in sequential order...done.
Stat files in sequential order...done.
Delete files in sequential order...done.
Create files in random order...done.
Stat files in random order...done.
Delete files in random order...done.
Version 1.93d       ------Sequential Output------ --Sequential Input- --Random-
Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
server        3552M    42  87 70457  37 38908  21   138  99 154701  36 165.5   9
Latency               882ms    1487ms    3123ms   74887us     373ms     449ms
Version 1.93d       ------Sequential Create------ --------Random Create--------
server              -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
files:max:min        /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
10:102400:1024/1024   724  45   812  18  5081  69   752  47   132   2  6547  96
Latency              3507ms   19311us   10634us    1237ms     250ms    7204us
1.93c,1.93d,server,1,1244675237,3552M,,42,87,70457,37,38908,21,138,99,154701,36,165.5,9,10,102400,1024,,1024,724,45,812,18,5081,69,752,47,132,2,6547,96,882ms,1487ms,3123ms,74887us,373ms,449ms,3507ms,19311us,10634us,1237ms,250ms,7204us
 
Not bad. 75 MBytes/s write and 95 MBytes/s read (for iozone), and in the same ballpark from bonnie++. Sounds about right for a single raidz vdev with 2 GB of RAM.

Adding more RAM may improve things, especially on the read side. The only way to improve the write speed would be to add more raidz vdevs to the pool.

For comparison, my home box (3x 120 GB SATA drives in single raidz1 vdev) only gets 5.5 MBytes/s write and 8.5 MBytes/s read. OUCH!! Using the same iozone command as you did. It's a 32-bit FreeBSD 7.1 install with 2 GB of RAM and only 128 MB for zfs_arc_max.
 
phoenix said:
Not bad. 75 MBytes/s write and 95 MBytes/s read (for iozone), and in the same ballpark from bonnie++. Sounds about right for a single raidz vdev with 2 GB of RAM.

Adding more RAM may improve things, especially on the read side. The only way to improve the write speed would be to add more raidz vdevs to the pool.

For comparison, my home box (3x 120 GB SATA drives in single raidz1 vdev) only gets 5.5 MBytes/s write and 8.5 MBytes/s read. OUCH!! Using the same iozone command as you did. It's a 32-bit FreeBSD 7.1 install with 2 GB of RAM and only 128 MB for zfs_arc_max.

More raidz vdevs in the pool = more hard drives?
I'm thinking of adding 6 GB of RAM next weekend, so that will put me at 8 GB.
 
Correct. If you have the space, you could add more hard drives to the system, configure them as a raidz vdev, and add that vdev to the pool. For example, if you added 3 drives and they came up as ad12, ad14, and ad16, then you could use:
# zpool add tank raidz1 ad12 ad14 ad16

Afterwards, the output of zpool status would look something like:
Code:
server# zpool status tank
  pool: tank
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          raidz1    ONLINE       0     0     0
            ad6     ONLINE       0     0     0
            ad8     ONLINE       0     0     0
            ad10    ONLINE       0     0     0
          raidz1    ONLINE       0     0     0
            ad12    ONLINE       0     0     0
            ad14    ONLINE       0     0     0
            ad16    ONLINE       0     0     0

errors: No known data errors
 
phoenix said:
Correct. If you have the space, you could add more hard drives to the system, configure them as a raidz vdev, and add that vdev to the pool. For example, if you added 3 drives and they came up as ad12, ad14, and ad16, then you could use:
# zpool add tank raidz1 ad12 ad14 ad16

Afterwards, the output of zpool status would look something like:
Code:
server# zpool status tank
  pool: tank
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          raidz1    ONLINE       0     0     0
            ad6     ONLINE       0     0     0
            ad8     ONLINE       0     0     0
            ad10    ONLINE       0     0     0
          raidz1    ONLINE       0     0     0
            ad12    ONLINE       0     0     0
            ad14    ONLINE       0     0     0
            ad16    ONLINE       0     0     0

errors: No known data errors

I might do that in a few weeks. Do you think the memory upgrade is worth it, from 2 GB to 8 GB? How much would you guess my read speed will increase, 5%, 10%?
 
I couldn't tell you for sure, but it should increase quite a bit, depending on the workload. With more RAM in the box, more memory can be devoted to the ZFS ARC (Adaptive Replacement Cache, i.e. the filesystem cache), so there's a higher chance that the needed data will already be in RAM and not have to be read from disk.
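If you do add the RAM, the ARC ceiling is a boot-time tunable set in /boot/loader.conf. Something along these lines is a reasonable sketch for an 8 GB amd64 box, but the values below are purely illustrative and the safe limits depend on your FreeBSD version:
Code:
# /boot/loader.conf: illustrative values only, size them to your RAM
vm.kmem_size="6144M"
vm.kmem_size_max="6144M"
vfs.zfs.arc_max="4096M"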
 