Performance issues: raidz over SATA on 8.0-RELEASE

Hi, I hope this is in the right place.

I've spent the best part of this week getting my home server rebuilt and set up, and I'm currently moving the data back across from backups. It seems to be going more than a little slowly, and I'm hoping you guys can help. It used to be a Linux box, but the motherboard was giving out, so almost everything except the drives and case got replaced.

My setup:
MB: Intel D410PT mini-ITX w/ Intel Atom, 64-bit single core @ 1.66 GHz
1 GB RAM
4x 500 GB SATA drives (2 on the onboard controller, 2 via a SiI3114 PCI card)
Partitioned: 1 GB UFS /boot and 1 GB swap on each, with the rest of the space used for raidz1 across them all
FreeBSD 8.0-RELEASE amd64

In /boot/loader.conf:

Code:
vm.kmem_size_max="512M"
vm.kmem_size="512M"
vfs.zfs.arc_max="100M"


The install process was... arduous, to say the least. First of all, it seems my motherboard, despite being a recent model, doesn't like GUID Partition Tables. It also didn't want to boot entirely via the PCI SATA card, so it took me a day or two of frustration to actually get the system installed.
I would like to set up the UFS /boot partition to be mirrored with one or all of the other drives, but that is a lower priority than getting the system back up and shiny.

What I'm mostly concerned about is the write performance of the array. I've done some tests with dd, as well as monitoring with zpool iostat, and my average write speeds seem to be somewhere between 2.5 and 3.5 MB/s.
Amusingly (or not), when testing with dd I got an average of 3.17 MB/s, but with "zpool iostat zroot 1" running, dd reported just 2.78 MB/s.
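For reference, my dd tests were along these lines (the path, block size, and count here are illustrative rather than my exact invocation):

Code:
dd if=/dev/zero of=/zroot/testfile bs=1m count=1024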
Finding stats from similarly specced systems online is proving tricky, but I think I'm right in thinking these are surprisingly low write speeds, aren't I?
I'm not sure where exactly the bottleneck is. As I write this, I'm copying from a USB-attached disk at similar rates: the CPU is at 10-15%, only about 250 MB of physical RAM is in use, and there's just 5 MB in swap. I'm wondering if the problem could be the SiI3114 PCI SATA card (and if so, whether the problem is the card itself or the PCI bus).

Any suggestions would be welcomed.
 
It could just be your CPU, as ZFS is doing software RAID and checksums, etc. Although I take your point that CPU utilization isn't very high.
So presumably you have some UFS file systems on the 2 internal drives? Have you tried doing a like-for-like write performance comparison? If UFS is just as slow, then it's obviously not the fault of ZFS.
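E.g. run the same dd write test against a UFS mount and then against the pool (the paths here are just examples):

Code:
dd if=/dev/zero of=/path/on/ufs/testfile bs=1m count=512
dd if=/dev/zero of=/zroot/testfile bs=1m count=512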
On the card: I've read this is a slow card. Just how slow I don't know, but I would imagine it can go a lot faster than you are seeing. As for the bus, it's plain PCI, but I'm sure even that can't possibly be the bottleneck at these speeds.
I have got some Dell servers using SiI3124 cards and Xeon CPUs a few years old; performance isn't great really, but it's good enough, and the pools I have only consist of 2 disks (mirrored), so it's normal not to have amazing performance.
 
A couple of other things: there seems to be a lot of noise about disks with 4K sectors, and problems if things aren't configured correctly. It would be worth checking that out; I think the first thing is to check whether your drives have 4K sectors, and whether they are emulating 512-byte sectors for compatibility or not.
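On FreeBSD you can check the reported sector size with diskinfo (the device name here is just an example; yours may be ad4, ad6, da0, etc. depending on the driver):

Code:
diskinfo -v ad4

Bear in mind that 4K drives emulating 512-byte sectors will still report 512 here, so it's worth checking the drive model's spec sheet as well.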
Another thing would be to wait for FreeBSD 8.1, due out very soon, which I believe (can someone confirm) should support zpool version 19 or higher, which allows the use of unmirrored ZIL log devices for improved performance. That would allow you to use a single USB or other flash device to boost your performance.
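For reference, attaching a log device is a one-liner (the device name is assumed, and on pool versions before 19 losing an unmirrored log device can make the pool unimportable, hence the version caveat):

Code:
zpool add zroot log da0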
 
AndyUKG said:
Another thing would be to wait for FreeBSD 8.1, due out very soon, which I believe (can someone confirm) should support zpool version 19

The latest demo ISO (http://mfsbsd.vx.sk/iso/8.1rc1-zfsv16.iso) uses zpool v16, and zpool v19 will _not_ be included with the 8.1 release.
AndyUKG said:
That would allow you to use a single USB or other flash device to boost your performance.

Please show me a USB stick that is faster than a hard disk. Lower latency, maybe; faster in terms of MB/s, no way.
Only an SSD makes real sense as a cache or ZIL log device, because these devices should be _faster_ than your hard drives.
 
Matty said:
The latest demo ISO (http://mfsbsd.vx.sk/iso/8.1rc1-zfsv16.iso) uses zpool v16, and zpool v19 will _not_ be included with the 8.1 release.

That's a shame. I'd read that the latest version of ZFS would be included in 8.1, or perhaps it was just the latest bug fixes?

Matty said:
Please show me a USB stick that is faster than a hard disk. Lower latency, maybe; faster in terms of MB/s, no way.
Only an SSD makes real sense as a cache or ZIL log device, because these devices should be _faster_ than your hard drives.

I've seen a few people claiming good performance with USB flash disks; this guy is talking about a USB SanDisk Cruzer. I've just seen a few people mentioning it, though; I haven't done thorough research into exactly what level of product could really help.

http://www.abisen.com/blog/?p=76
 
I tried it myself. The latency and IOPS were great, but still only about 15 MB/s transfer, if I remember correctly. My RAID 10 gives me 190 MB/s write and 280+ MB/s read with a 4-disk setup, so what's the point in adding it?

If you look closely at the video, he is impressed with the w/s (writes per second).
But try it yourself: put a USB stick in your USB 2.0 port, copy a 100 MB file to it, and tell me if it's fast. Well, I can give you the answer right now: no, it is not.
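A quick way to measure it, assuming the stick shows up as da0 (writing to the raw device is destructive, so use a scratch stick):

Code:
dd if=/dev/zero of=/dev/da0 bs=1m count=100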
 
Well, for whatever reason, this guy is only getting 3.5 MB/s, so it would be an improvement! Perhaps there is a more fundamental issue on his system causing the low speed; I'm not sure what else to suggest to him, though.
I too have seen very low performance with ZFS, eSATA, and 2x 5400 RPM disks with many small files (maildir email directories). So the right low-end USB drive might even help me... My performance is sufficient for my requirements, though, so no plans to test this at the moment.
 
For lots of small files like email, the USB cache might help, but for, say, DVD images it wouldn't do much good. Just try it; you can add and remove a cache vdev without any problems. A ZIL log device can't be removed, so be careful with that one.
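Adding and later removing a cache device is just (device name assumed):

Code:
zpool add zroot cache da0
zpool remove zroot da0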
 
I can't remember where I read it, but I set my ZFS pool to use a USB drive for metadata cache only.

Code:
zfs set secondarycache=metadata tankname

That way you aren't storing data on the USB drive, just metadata, and it cuts some I/O off your spinning disks. It also doesn't matter if something blows up, since ZFS will fall back to reading from the pool itself. Probably not a huge increase in performance, but USB drives are cheap :)
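You can confirm the setting afterwards with:

Code:
zfs get secondarycache tankname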
 
It's important to note that raidz will get TERRIBLE random write performance.

This is because of the way it lays the data down as a single variable-width block across all drives (basically limiting the entire raidz vdev to the random write/read performance of the slowest single drive).


It's also important to note that 1 GB of RAM is really not enough for ZFS. It's enough for ZFS to function, but it's not nearly enough for ZFS to function well.

I wouldn't even run ZFS without 4 GB of RAM... and honestly, I don't run it without 8.
 