
ZFS Performance and tuning

Discussion in 'General' started by Ungaro, Jun 17, 2010.

  1. Ungaro

    Ungaro New Member

    Messages:
    9
    Likes Received:
    0
    Hi,

    I've been using ZFS on my home server for two days, and I have spent a lot of time on it.
    Here is my configuration:
    - AMD Sempron 140
    - MB MSI K9N6PGM2-V
    - 3GB DDR2 PC2-6400
    - 2 x 1TB Seagate SATA2
    - 1 x 80GB Maxtor IDE ATA-133

    I installed FreeBSD 8.0-p3 (64-bit) on the 80GB disk, and I set up the two SATA disks
    as a ZFS mirror (RAID1) with the following command:
    Code:
    zpool create share mirror ad4 ad6
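    Once the pool exists, it is worth confirming the mirror layout actually took effect. A quick check, assuming the pool is named share:

    ```shell
    # Show the vdev layout: both disks should appear under a "mirror" entry
    zpool status share

    # A two-disk mirror exposes roughly one disk's worth of capacity (~1TB here)
    zpool list share
    ```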
    Then I benchmarked my server by copying some big files, and then a lot of small files. The performance was really bad in both cases!

    I read a lot of threads about that, and I tried several configurations by modifying /boot/loader.conf. The "best" performance I could get is with these parameters:

    Code:
    vfs.zfs.prefetch_disable=1
    vm.kmem_size="1536M"
    vfs.zfs.arc_min="1024M"
    vfs.zfs.arc_max="1536M"
    vfs.zfs.vdev.min_pending=2
    vfs.zfs.vdev.max_pending=8
    vfs.zfs.txg.timeout=5
    
    When I copy a big file, a movie for instance, I get around 27MB/s,
    and around 18MB/s for a lot of small files.

    Maybe my parameters are not so good, but I can't find anything that would help me configure it better. Or maybe ZFS is not for me, and I should switch back to UFS2, or to Linux with ext4, which was working well.

    Does anyone have any ideas?
     
  2. olav

    olav New Member

    Messages:
    349
    Likes Received:
    0
    How do you benchmark?
    Do you have compression enabled?
     
  3. Ungaro

    Ungaro New Member

    Messages:
    9
    Likes Received:
    0
    Compression is disabled, and to benchmark I copy some files over my gigabit network from my computer via NFS.
     
  4. olav

    olav New Member

    Messages:
    349
    Likes Received:
    0
    Try benchmarking with dd first. Then you know where to start looking.

    It could be bad network cabling, wrong NFS settings, or bad SATA cables. And you're not using a PCI SATA controller, right?
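    As a rough local starting point, something like the following dd sketch works. The paths here are assumptions: TESTDIR defaults to /tmp so the commands run anywhere, but point it at the pool's mountpoint (e.g. /share) for a real measurement of the ZFS disks:

    ```shell
    # Hypothetical test location; set TESTDIR=/share (the pool mountpoint)
    # to measure the ZFS disks instead of the system disk.
    TESTDIR="${TESTDIR:-/tmp}"
    TESTFILE="$TESTDIR/zfs-dd-test"

    # Sequential write: 64 MiB here for a quick run; use a file several
    # times larger than RAM so caching cannot hide the real disk speed.
    dd if=/dev/zero of="$TESTFILE" bs=1M count=64

    # Sequential read back, discarding the data; dd reports bytes/sec.
    dd if="$TESTFILE" of=/dev/null bs=1M

    rm -f "$TESTFILE"
    ```

    If the local numbers are fine, the bottleneck is the network or NFS, not ZFS.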
     
  5. Ungaro

    Ungaro New Member

    Messages:
    9
    Likes Received:
    0
    Yes, I'm going to try with dd.

    It can't be a network cabling problem or NFS settings, because the same configuration was working well under Debian :)
    And no, I'm not using the motherboard RAID controller.
     
  6. danbi

    danbi New Member

    Messages:
    227
    Likes Received:
    0
    Try the most generic tuning first: comment out everything else ZFS-related, and add

    Code:
    vm.kmem_size="5G"
    to /boot/loader.conf.
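    After a reboot, one way to confirm the setting took effect (assuming the FreeBSD sysctl names of that era):

    ```shell
    # Kernel memory limit actually in use
    sysctl vm.kmem_size
    # ARC ceiling ZFS picked up from loader.conf (if set)
    sysctl vfs.zfs.arc_max
    ```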
    You are best off using the motherboard SATA ports, with the AHCI driver if supported. Add

    Code:
    ahci_load="YES"
    to /boot/loader.conf. The SATA ports on the motherboard are likely to be the fastest you will ever get (unless they are not well supported).

    You may also compare UFS vs. ZFS on the same server by NFS-exporting a filesystem from your third disk.
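    For the ZFS side of that comparison, the export can go through the sharenfs property (a sketch, assuming the pool from earlier in the thread is named share and NFS services are enabled in /etc/rc.conf):

    ```shell
    # Export the ZFS filesystem over NFS via mountd
    zfs set sharenfs=on share
    # Confirm the property
    zfs get sharenfs share
    ```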

    It is expected that writing to ZFS over NFS will be slower. This is because of the ZIL and the synchronous writes NFS performs. You may get much better performance with a separate ZIL device (such as flash memory of some sort). To test this, you may try

    # sysctl vfs.zfs.zil_disable=1

    Just don't forget to revert it back!
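    If that test shows synchronous writes are the bottleneck, a dedicated log device can be added instead of leaving the ZIL disabled. A sketch, where da1 is a hypothetical flash/SSD device name, not one from this thread:

    ```shell
    # Attach a hypothetical fast device as a separate intent log
    zpool add share log da1
    # The log vdev should now appear in the pool layout
    zpool status share
    ```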

    I would not compare ZFS with ext4 on any account. Better safe than sorry.
    You may also try copying the same files locally on the server, to see how much NFS and the remote machine contribute.
     
  7. wonslung

    wonslung New Member

    Messages:
    850
    Likes Received:
    0
    NFS is going to perform slower... that's a given.

    You should check the filesystem performance locally first; chances are you will find the problem isn't due to ZFS at all, but due to your network protocol.

    You might find Samba performs better... it did for me when I was using FreeBSD as my home server (I've switched my ZFS servers to OpenSolaris recently).

    Also, you should look into adding as much RAM as possible... for a ZFS machine, RAM is king. But I'm willing to bet the problem is just NFS and not ZFS.
     
  8. Ungaro

    Ungaro New Member

    Messages:
    9
    Likes Received:
    0
    Ok, I made some changes to my /boot/loader.conf settings:

    Code:
    vfs.zfs.prefetch_disable=1
    vm.kmem_size="3096M"
    ahci_load="YES"
    
    The performance seems better when writing (60MB/s), but not when reading (35MB/s). Pretty curious!
     
  9. fgordon

    fgordon New Member

    Messages:
    33
    Likes Received:
    0
    Maybe that's because the system can always cache writes, but caching reads only helps if you've read the data at least once before...

    So with very large amounts of data (many GBytes or even TBytes), reading should be faster than writing.
     
  10. Ungaro

    Ungaro New Member

    Messages:
    9
    Likes Received:
    0
    Is there a way to disable caching? I don't need it: my server is a home storage server that is only used occasionally, so I think caching is useless.
     
  11. t1066

    t1066 Member

    Messages:
    169
    Likes Received:
    0
    From the man page,

    # zfs set primarycache=var <filesystem>

    where var can be none, metadata or all.
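    For example, to keep only metadata in the ARC for the pool discussed in this thread (assuming it is named share):

    ```shell
    # Cache only metadata, not file data, in the ARC for this dataset
    zfs set primarycache=metadata share
    # Confirm the property
    zfs get primarycache share
    ```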
     
  12. Matty

    Matty New Member

    Messages:
    162
    Likes Received:
    0
    I don't think it would hurt to keep using the cache either.
     
  13. phoenix

    phoenix Moderator Staff Member Moderator

    Messages:
    3,404
    Likes Received:
    0
    Why would you ever want to disable caching? Doing so will send disk performance through the floor (as in, it would be horrible).
     
  14. boblog

    boblog New Member

    Messages:
    4
    Likes Received:
    0
    Disabling the prefetcher nukes read performance. I had the same symptom before: faster writes than reads. Enabling the prefetcher fixed that right up.
     
  15. Ungaro

    Ungaro New Member

    Messages:
    9
    Likes Received:
    0
    I can't enable the prefetcher: I've only got 3GB of RAM installed, which is not enough (4GB recommended), and my mobo is full (no free slot to add another 1GB).
     
  16. Ungaro

    Ungaro New Member

    Messages:
    9
    Likes Received:
    0
    Here is my solution: I moved to Debian! ZFS is certainly a powerful filesystem, but I can't tune it, so it's not for me.
    So I moved to Debian stable today, and I put my SATA disks in RAID1 (with the mobo controller).
    I'm going to run some read/write tests to compare against ZFS with my last parameters. I'll report back later.
     
  17. wonslung

    wonslung New Member

    Messages:
    850
    Likes Received:
    0
    ZFS really shines on newer hardware... you can think of it like a sliding scale: the newer your hardware is, the better ZFS is going to look compared to other options.

    I ultimately moved to Solaris for my home servers because of the newer ZFS features, but when I was using FreeBSD, it worked very well with around 8GB of RAM, a decent multi-core 64-bit CPU, and several drives.

    I know people using it on machines with 2GB of RAM who have it working well, but at that level of RAM I think UFS is going to perform better. They use it for the other features, not for the performance.