ZFS and Bonnie++ results

I am trying to track down whether I have a problem. Basically, the Bonnie results look good except for the "Per Char" benchmarks. With a single drive on UFS I get about 500 KB/s, but with a ZFS 3-disk stripe I only get about 135 KB/s for sequential output. I have several systems running ZFS and all of them show the same issue. Hardware varies from Dell R200s with SAS6ir cards to Dell R610s with H200s, and all of them seem to max out at around 135 KB/s. Other than those two tests, ZFS outperforms our other hardware RAIDs. These are not 4K drives, but I did test with everything aligned and it made no difference. I also tried with the disk cache enabled and disabled; that made no difference for this test either. Does anyone have any idea why this is? Below is the output from one server with six 10k SAS drives in a RAID 10 configuration.

Code:
Version  1.97       ------Sequential Output------ --Sequential Input- --Random-
Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
em02           24G   130  99 485372  95 312260  87   344  99 1070812  95  1319  24
Latency               106ms    5545us     149ms   61254us     407ms     159ms
Version  1.97       ------Sequential Create------ --------Random Create--------
em02            -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16 17896  96 +++++ +++ 19303  96 26814  96 +++++ +++ 21697  97
Latency             16973us     135us     181us   14612us      66us     142us
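
For anyone who wants to reproduce the comparison, here is a minimal sketch (mount points, pool name and sizes are only examples, not the commands actually used above): the same bonnie++ run against a single-drive UFS mount and the ZFS stripe, with the file size forced to twice RAM so both runs stay out of cache.
Code:
# Hypothetical paths and labels; -s 24576 forces a 24 GB test file,
# -r 12288 tells bonnie++ this box has 12 GB of RAM, -m labels the result rows.
bonnie++ -d /mnt/ufs-single -s 24576 -r 12288 -u root -m ufs-single
bonnie++ -d /tank/bench -s 24576 -r 12288 -u root -m zfs-stripe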
 
A good start would be to post the bonnie++ command you are using.

You should also make sure you are passing the RAM size (-r) so the test file is large enough to get out of the cache.

Additionally you can dump it to HTML to make it more readable:

Code:
bonnie++ -d /mnt/Datastore/tmp -r 16384 -u 0 -q | bon_csv2html | tee /root/bonnie_benchmark1.html
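
A quick way to sanity-check the RAM point (a sketch using standard FreeBSD sysctls): compare physical memory and the ARC ceiling against the test file size; the file bonnie++ writes should be at least twice the RAM size.
Code:
# Physical RAM and the ZFS ARC cap, in bytes; the bonnie++ file size
# (-s, or the size it derives from -r) should comfortably exceed both.
sysctl hw.physmem
sysctl vfs.zfs.arc_max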
 
I did some profiling with benchmarks/iozone last week, both raw raidz2 and over NFSv3. On a raidz2 pool created from 6 HDDs I am getting read and write speeds of over 300 MB/s; NFSv3 with a Red Hat client is more like 90 MB/s. I am particularly interested in a performance comparison with a Red Hat server running XFS on soft RAID 6 and NFSv4. From the tests I have done, XFS on soft RAID 6 created from the same 6 HDDs appears to be 10-15% better than ZFS (I have turned on lz4 compression on the ZFS dataset). As for NFSv3 vs Linux NFSv4, I got FreeBSD 10.1 tuned to about 10% better sequential read speeds than Red Hat and about 5% lower write speeds. The server has 48 cores, 384 GB of RAM and, most importantly, a 1 Gigabit LAN controller just like the clients, so resources are not the problem. Please send me a PM if you want to discuss performance issues further.
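
For reference, a rough sketch of the kind of iozone run described above (the path, file size and record size are assumptions, not the exact command used): sequential write and read directly against the raidz2 pool.
Code:
# -i 0 = sequential write/rewrite, -i 1 = sequential read/reread;
# the file size and record length here are illustrative only.
iozone -i 0 -i 1 -s 64g -r 128k -f /pool/bench/iozone.tmp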
 
I am simply running bonnie++ with no options. The machine above has 12 GB of RAM and is creating a 24 GB file, so we should be well out of cache. To me the "per char" tests are just more CPU-dependent and probably have no real value outside of testing. I have a ZFS volume on my desktop which has a much faster CPU and gets better results, but still nothing compared to a single UFS drive. I have never seen performance issues in real-life use, so I am not all that concerned; I am more looking for a reason why it is so poor on those specific tests.
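
One way to see the CPU-bound effect being described (not exactly what bonnie++ does internally, since its per-char test goes through buffered putc(), but the same principle): write the same amount of data with a 1-byte block size versus a 1 MB block size and compare the times. The path is only an example.
Code:
# 4 MB written one byte at a time vs. in 1 MB chunks; the first run is
# dominated by per-call overhead, not by how fast the pool can write.
time dd if=/dev/zero of=/tank/bench/onebyte bs=1 count=4194304
time dd if=/dev/zero of=/tank/bench/onemeg bs=1m count=4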
 
Here are the results of running bonnie++ with no arguments. This is a ZFS mirror on top of geli using two Samsung 850 EVOs (one SATA, one mSATA) in a Lenovo X220. When setting up the vdevs I used an ashift of 12 (4 KiB blocks). As you can see, the Per Chr results are low here too.
Code:
Version      1.97   ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
phe.ftfl.ca     16G   148  99 775583  93 559480  91   499  99 1694873  85 +++++ +++
Latency             90604us    7601us   17003us   18675us   23453us   12136us
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
files:max:min        /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
phe.ftfl.ca      16 +++++ +++ +++++ +++ 25340  99 17556  91 +++++ +++ 16585  98
Latency             11600us    4448us     844us   17727us     211us     360us
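
For reference, a sketch of how a mirror like this is typically put together on FreeBSD (device names are examples, and this is not necessarily the exact procedure used above): geli providers initialised with 4 KiB sectors, plus the sysctl that keeps new vdevs from being created with an ashift below 12.
Code:
# Example devices only; geli reports 4096-byte sectors and the sysctl
# floors newly created vdevs at ashift=12.
sysctl vfs.zfs.min_auto_ashift=12
geli init -s 4096 /dev/ada0p3
geli init -s 4096 /dev/ada1p3
geli attach /dev/ada0p3
geli attach /dev/ada1p3
zpool create tank mirror ada0p3.eli ada1p3.eli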
 