@RusDyr
To find out the exact difference between ZFS and UFS for yourself, you can benchmark striped ZFS mirrors (without log or cache devices) against gstriped gmirrors for a completely fair comparison. Then, as peetaur was getting at, comes the next question: "What do I really need?" You can then compare your initial results against one raidz vdev, two raidz vdevs, one raidz2, two raidz2s, and so on. Please post your results in a new thread called something like "Comparative benchmark between ZFS mirrors and gstriped gmirrors". Use my MO to create a RAM-disk and fill up a big file of random data, then use that with dd to test write speed; the setup is sketched just below. From there it's fine to read from that random data file and write to another file on the ZFS or UFS file system, like:
# dd if=/mnt/ram/randfile of=/foo/bar/randfile bs=1m
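In case the MO is unclear, here is a minimal sketch of what I mean; disk names, pool/device names and sizes are only examples, so adjust to your system. First the two layouts being compared, assuming four disks:

# zpool create tank mirror da0 da1 mirror da2 da3

versus

# gmirror load
# gmirror label gm0 da0 da1
# gmirror label gm1 da2 da3
# gstripe load
# gstripe label st0 /dev/mirror/gm0 /dev/mirror/gm1
# newfs -U /dev/stripe/st0

And the RAM-disk part:

# mdconfig -a -t malloc -s 4g -u 0
# newfs -U /dev/md0
# mkdir -p /mnt/ram
# mount /dev/md0 /mnt/ram
# dd if=/dev/random of=/mnt/ram/randfile bs=1m count=3072

mdconfig creates a 4 GB malloc-backed RAM-disk as /dev/md0, and the last dd fills it with 3 GB of random data, which you then read back in the write test above.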
Also install benchmarks/bonnie++ from ports and test:
# bonnie++ -d /foo/bar -u 0 -s Xg
-d /foo/bar (the directory where the ZFS or UFS file system is mounted)
-u 0 (if you're running as root)
-s Xg ("X" should be double the size of your RAM; see the example below)
I would love to see those numbers.
Also keep in mind the tips I gave you about gpart and gnop; they have been big performance enhancers for me personally.
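For anyone who missed those tips, I am assuming we are talking about the 4K alignment trick; the core of it goes roughly like this, with disk, label and pool names as examples:

# gpart create -s gpt ada0
# gpart add -t freebsd-zfs -a 4k -l disk0 ada0
# gnop create -S 4096 /dev/gpt/disk0
# zpool create tank /dev/gpt/disk0.nop
# zpool export tank
# gnop destroy /dev/gpt/disk0.nop
# zpool import tank

gpart -a 4k aligns the partition on a 4 KB boundary, and the temporary .nop provider makes ZFS create the pool with ashift=12 for 4 KB-sector drives; the setting survives the export/import after the gnop is destroyed.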
@peetaur
I quite like rants. It's about the only way to find out what support and sales won't tell you. How many times have you heard "Our products are not good at this and that" or "Our products have N bugs that muck up this and that in this way"?
(Luckily, I am in charge of this, so I can decide whether or not to throw away ESXi; Do you have the same control?)
No. We are running 300-400 VMs in an HP blade chassis with a NetApp NAS serving NFS to VMware and SMB to our users, farming about 200 TB. We are, however, planning a much more price-efficient solution for a gigantic video archive running on Supermicro hardware and FreeBSD or FreeNAS, which I will be in charge of.
Oh by the way... now gstat doesn't show much load on the ZIL during a sync Linux client write. I don't know why it stopped, but my best guess is that it is because I destroyed the pool and recreated it. The old one was an old version that was upgraded to v28. The new one was created v28.
Aha! So we have the same behaviour. That is so strange. A big regression, I'd say.
(another quirk I found in upgraded pools vs created as v28 is that you really can't remove the log... You can remove log vdevs, or run with the log OFFLINE, but the last one won't go away.)
Big bummer. I wonder how a power outage would affect the pool running in that state... Best to create a new pool at v28 and send/recv between them.
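Something along these lines (pool and snapshot names are only examples):

# zfs snapshot -r oldpool@migrate
# zfs send -R oldpool@migrate | zfs recv -Fd newpool

-R replicates all descendant file systems, snapshots and properties, -F rolls the receiving side back if needed, and -d preserves the dataset layout under the new pool.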
/Sebulon