ZFS Performance and tuning


Postby Ungaro » 17 Jun 2010, 17:07

Hi,

I've been using ZFS on my home server for two days and have spent a lot of time on it.
Here is my configuration:
- AMD Sempron 140
- MB MSI K9N6PGM2-V
- 3GB DDR2 PC2-6400
- 2 x 1TB Seagate SATA2
- 1 x 80GB Maxtor IDE ATA-133

I installed FreeBSD 8.0-p3 (64-bit) on the 80 GB disk, and I set up the two SATA disks
as a ZFS mirror (RAID 1) with the following command:
Code:
zpool create share mirror ad4 ad6
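
A quick sanity check after creating the pool (a sketch; this assumes the command above succeeded and the pool is named "share"):

```shell
# Both disks should show as ONLINE under a mirror vdev:
zpool status share

# Capacity and health at a glance:
zpool list share
```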


Then I benchmarked my server by copying some big files, and then a lot of small files. The performance was really bad in both cases!

I read a lot of threads about this and tried several configurations by modifying /boot/loader.conf. The "best" performance I could get is with these parameters:

Code:
vfs.zfs.prefetch_disable=1
vm.kmem_size="1536M"
vfs.zfs.arc_min="1024M"
vfs.zfs.arc_max="1536M"
vfs.zfs.vdev.min_pending=2
vfs.zfs.vdev.max_pending=8
vfs.zfs.txg.timeout=5


When I copy a big file, a movie for instance, I get about 27 MB/s,
and 18 MB/s for a lot of small files.

Maybe my parameters are not so good, but I can't find anything that would help me configure it better. Or maybe ZFS is not for me, and I should switch back to UFS2, or to Linux with ext4, which was working well.

Does anyone have any ideas?
Ungaro
Junior Member
 
Posts: 9
Joined: 27 Jan 2010, 15:05

Postby olav » 17 Jun 2010, 20:41

How do you benchmark?
Do you have compression enabled?
olav
Member
 
Posts: 349
Joined: 23 Apr 2010, 19:39
Location: Norway, Stavanger

Postby Ungaro » 17 Jun 2010, 20:59

Compression is disabled, and to benchmark I copy some files over my gigabit network from my computer, using NFS.

Postby olav » 17 Jun 2010, 21:08

Try benchmarking with dd first. Then you know where to start looking.

It could be bad network cabling, wrong NFS settings, or bad SATA cables. And you're not using a PCI SATA controller, right?
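
A minimal local dd benchmark along those lines (a sketch; it assumes the pool is mounted at /share and there is room for a few GB of test data):

```shell
# Sequential write: stream zeros into a file on the pool,
# bypassing NFS and the network entirely.
dd if=/dev/zero of=/share/ddtest bs=1m count=4096

# Sequential read: read the file back.  Part of it may be served
# from the ARC, so use a file larger than RAM for an honest number.
dd if=/share/ddtest of=/dev/null bs=1m

# Clean up the test file.
rm /share/ddtest
```

dd prints the throughput at the end of each run; comparing those figures with the NFS numbers shows whether the bottleneck is the disks or the network path.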

Postby Ungaro » 17 Jun 2010, 21:13

Yes, I'm gonna try with dd.

It can't be a network cabling problem or NFS settings, because the same configuration was working well under Debian :)
And right, I'm not using the motherboard RAID controller.

Postby danbi » 18 Jun 2010, 07:51

Try the most generic tuning first; comment out all other ZFS-related settings. Add

Code:
vm.kmem_size="5G"


to [FILE]/boot/loader.conf[/FILE].
You are best off using the motherboard SATA ports, with the AHCI driver if supported. Add

Code:
ahci_load="YES"


to [FILE]/boot/loader.conf[/FILE]. The SATA ports on the motherboard are likely to be the fastest you will ever get (unless they are poorly supported).
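
One caveat worth checking (an assumption based on how the ahci(4) driver behaves on FreeBSD 8): once AHCI is active, the disks attach as ada devices instead of ad4/ad6. ZFS locates the pool by its on-disk labels, so it should still import, but the device names shown for the pool will change:

```shell
# List disks as seen by CAM; with AHCI they appear as ada0/ada1:
camcontrol devlist

# The pool should import by label despite the renamed devices:
zpool status share
```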

You may try to compare UFS vs. ZFS on the same server by NFS-exporting a filesystem from your third disk.

It is expected that writing to ZFS over NFS will be slower. This is because of the ZIL and the synchronous writes NFS performs. You may get much better performance with a separate ZIL device (such as flash memory of some sort). To test this, you may try

[CMD="#"]sysctl vfs.zfs.zil_disable=1[/CMD]

Just don't forget to revert it back!
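
The test cycle would look something like this (a sketch; vfs.zfs.zil_disable exists only on these older ZFS versions, and running without a ZIL risks losing recent writes on a crash, so this is for benchmarking only):

```shell
# Disable the ZIL: synchronous NFS writes are now acknowledged
# from RAM, which is fast but unsafe for real data.
sysctl vfs.zfs.zil_disable=1

# ...rerun the NFS copy benchmark here...

# Revert as soon as the test is finished.
sysctl vfs.zfs.zil_disable=0
```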

I would not compare ZFS with ext4 on any account; better safe than sorry.
You may also try copying the same files locally on the server, to isolate the influence of NFS and the remote machine.
danbi
Member
 
Posts: 227
Joined: 25 Apr 2010, 09:32
Location: Varna, Bulgaria

Postby wonslung » 18 Jun 2010, 09:31

NFS is going to perform slower... that's a given.

You should check the filesystem performance locally first; chances are you will find the problem isn't due to ZFS at all but due to your network protocol.

You might find Samba performs better... it did for me when I was using FreeBSD as my home server (I've switched my ZFS servers to OpenSolaris recently).

Also, you should look into adding as much RAM as possible... for a ZFS machine, RAM is king. But I'm willing to bet the problem is just NFS and not ZFS.
wonslung
Member
 
Posts: 850
Joined: 07 May 2009, 00:15

Postby Ungaro » 18 Jun 2010, 17:06

OK, I made some changes to my /boot/loader.conf settings:

Code:
vfs.zfs.prefetch_disable=1
vm.kmem_size="3096M"
ahci_load="YES"


Performance seems to be better when writing (60 MB/s), but not when reading (35 MB/s). Pretty curious!

Postby fgordon » 22 Jun 2010, 06:36

Maybe because the system can always cache writes, whereas read caching only works if you've read the data at least once before...

So with very large amounts of data, many gigabytes or even terabytes, reading should be slower than writing.
fgordon
Junior Member
 
Posts: 33
Joined: 28 Mar 2010, 11:44

Postby Ungaro » 23 Jun 2010, 07:48

Is there a way to disable caching? I don't think I need it, because my server is a home storage server that is used only occasionally, so caching seems useless.

Postby t1066 » 23 Jun 2010, 09:16

From the man page:

[CMD="#"]zfs set primarycache=var <filesystem>[/CMD]

where var can be none, metadata, or all.
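
Applied to the pool from the first post, that would look like this (a sketch; "share" is the filesystem name):

```shell
# Cache only metadata in the ARC; file data is no longer cached:
zfs set primarycache=metadata share

# Confirm the property took effect:
zfs get primarycache share
```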
t1066
Member
 
Posts: 154
Joined: 07 Jun 2010, 16:49

Postby Matty » 23 Jun 2010, 11:04

Ungaro wrote:Is there a way to disable caching? I don't think I need it, because my server is a home storage server that is used only occasionally, so caching seems useless.


I don't think it would hurt to keep using the cache either.
Matty
Member
 
Posts: 162
Joined: 18 Nov 2008, 07:17
Location: Breda, The Netherlands

Postby phoenix » 23 Jun 2010, 19:30

Why would you ever want to disable caching? Doing so will send disk performance through the floor (as in, it would be horrible).
Freddie

Help for FreeBSD: Handbook, FAQ, man pages, mailing lists.
phoenix
MFC'd
 
Posts: 3349
Joined: 17 Nov 2008, 05:43
Location: Kamloops, BC, Canada

Postby boblog » 24 Jun 2010, 17:02

Disabling the prefetcher nukes read performance. I had the same behaviour: faster writes than reads. Enabling the prefetcher fixed that right up.
boblog
Junior Member
 
Posts: 4
Joined: 18 Jun 2010, 17:05

Postby Ungaro » 24 Jun 2010, 19:32

I can't enable the prefetcher: I've got only 3 GB of RAM installed, which is not enough (4 GB recommended), and my motherboard is full (no free slot to add one more GB).

Postby Ungaro » 25 Jun 2010, 17:12

Here is my solution: I moved to Debian! ZFS is certainly a powerful filesystem, but I can't tune it; it's not for me.
So I moved today to Debian stable, and I put my SATA disks in RAID 1 (with the motherboard controller).
I'm going to run some read/write tests to compare with ZFS and my last parameters. I'll report back later.

Postby wonslung » 03 Jul 2010, 13:48

ZFS really shines on newer hardware... you can think of it like a sliding scale: the newer your hardware is, the better ZFS is going to look compared to other options.


I ultimately moved to OpenSolaris for my home servers because of the newer ZFS features, but when I was using FreeBSD it worked very well with around 8 GB of RAM, a decent multi-core 64-bit CPU, and several drives.

I know people using it on machines with 2 GB of RAM who have it working well, but at that level of RAM I think UFS is going to perform better. They use it for the other features, not the performance.

