Hello all, I'm a long-time Linux admin who started with FreeBSD just a week ago.
I have two servers with Intel S5000-series motherboards. Both have only 2 GB of RAM; one uses the onboard Intel embedded RAID controller and the other has a 3ware 9550.
These are the cards I've been dealt; it wasn't my choice to put such a small amount of memory in them.
I just cannot get any decent ZFS performance out of these servers, and after searching around and trying the various tunables, I am no closer.
I'll concentrate on the machine without the 3ware card, as I've done most of my testing on it.
It has two RE2 400 GB drives mirrored as the UFS2 system volume, and four RE2 400 GB drives as the ZFS pool.
I created the pool:
zpool create data raidz ad8 ad10 ad12 ad14
I then ran some simple dd tests, which gave me writes around 30-35 MB/s and reads sometimes even slower. I then used bonnie++ as a better test, with very similar results.
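For reference, the dd tests were along these lines (the mount point and file size here are my own choices, not gospel; the key point is that the file must be much larger than RAM, so the ARC can't cache the whole thing on a 2 GB machine):

```shell
#!/bin/sh
# Sequential write/read test against the ZFS mount.
# TESTDIR and COUNT are assumptions: point TESTDIR at the pool's
# mountpoint; COUNT is in MiB (4096 MiB = 4 GiB, well above 2 GB RAM).
TESTDIR=${TESTDIR:-/data}
COUNT=${COUNT:-4096}

# Write test: stream zeros into a file on the pool
dd if=/dev/zero of="$TESTDIR/zfs-ddtest" bs=1048576 count="$COUNT"

# Read test: stream the file back out, discarding the data
dd if="$TESTDIR/zfs-ddtest" of=/dev/null bs=1048576

# Clean up the test file
rm -f "$TESTDIR/zfs-ddtest"
```

dd prints the elapsed time and bytes/sec on completion, which is where the 30-35 MB/s figures come from.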
I then tried the recommended amd64 ZFS tunings in /boot/loader.conf:
Code:
vm.kmem_size_max="1024M"
vm.kmem_size="1024M"
vfs.zfs.arc_max="100M"
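One more loader.conf tunable I've seen suggested for low-memory machines, though I can't yet say whether it helps in my case, so treat it as something to experiment with rather than a known fix:

```
# /boot/loader.conf -- disables ZFS file-level prefetch; commonly
# suggested for boxes with less than 4 GB RAM (untested assumption here)
vfs.zfs.prefetch_disable="1"
```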
Now it gets a little weird and inconsistent: if I try the benchmarks again, bonnie++ reports around 65-130 MB/s writes and about the same on reads, and it repeats this reliably, but with large variation within that range.
If I then do a simple copy of a 4 GB file from the UFS2 system volume to the ZFS mount, it's terribly slow: 25 MB/s or less.
Then, though only sometimes, subsequent benchmarks waver between 65-90 MB/s all the way down to 25 MB/s again. A reboot seems to clean things up as far as the benchmarks are concerned, bringing tests back to 130 MB/s.
I'm currently trying to borrow some memory from another server to get up to 4 GB, but I still can't understand why I get such poor and inconsistent performance from ZFS: the same drives in a Linux software RAID 5 will consistently give me 150-200 MB/s throughput, and copying from volume to volume is very fast.
So, are there any definitive tunings posted anywhere? I've only found bits and pieces: some limiting the write speed, some limiting the ARC and kernel pools, but nothing so far that seems to make it 'work'.
What am I expecting? Probably 150 MB/s throughput as a ballpark. I don't much care what the benchmarks report, they're just benchmarks; copying files, both large and small, is what matters, and I've never achieved anything acceptable so far.