ZFS / UFS / Soft Updates / GJournal / Bonnie Performance

I have done some tests with UFS/ZFS using the bonnie benchmark.

Here are the results, if you are interested.

software:
OS: FreeBSD 7-CURRENT 200708 snapshot
benchmark: [font="Courier New"]bonnie -s 2048[/font]
CFLAGS: [font="Courier New"]-O2 -fno-strict-aliasing -pipe -s[/font]
CPUTYPE: [font="Courier New"]athlon-mp[/font]
scheduler: ULE
hardware:
CPU: (single) Athlon XP 2000+ [ 12.5 x 1333MHz ]
MEM: 1 GB DDR 266MHz CL2
FSB Ratio: 1:1
MOTHERBOARD: AMD 760 MPX
HDD: (single) Maxtor 6L160P0 ATA/133
legend:
Code:
    GJ - GJournal
    SU - Soft Updates
  lzjb - zfs set compression=lzjb ${POOL}
gzip-* - zfs set compression=gzip-* ${POOL}
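
For reference, the UFS variants above could be prepared roughly like this (a sketch only; device and mount point names are hypothetical):

Code:
# UFS with Soft Updates (SU) - enabled at newfs(8) time
newfs -U /dev/ad0s1d

# UFS with GJournal (GJ) - journal kept on the same provider
gjournal load
gjournal label ad0s1d
newfs -J /dev/ad0s1d.journal

# the .noatime / .noatime.async mount variants
mount -o noatime /dev/ad0s1d /mnt/test
mount -o noatime,async /dev/ad0s1d.journal /mnt/test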

colors:
Code:
 [B][color="Green"]GREEN[/color][/B] - first
[B][color="Orange"]ORANGE[/color][/B] - second
   [color="Red"][B]RED[/B][/color] - third

Code:
                     ---------Sequential Output----------   -----Sequential Input---   --Random--
                     -Per Char-   --Block---   -Rewrite--   -Per Char-    ---Block---  --Seeks---
                     K/sec %CPU   K/sec %CPU   K/sec %CPU   K/sec %CPU    K/sec %CPU   /sec  %CPU
UFS                  44735 64.9   [B][color="Red"]46970[/color][/B] 18.0   15565  7.0   41166 54.9    47447 12.9   173.9  1.1
UFS.noatime          45524 66.0   [B][color="Orange"]47032[/color][/B] 18.1   15397  7.0   40431 54.3    46874 12.8   177.8  1.1
UFS.noatime.async    [color="Red"][B]45621[/B][/color] 66.4   46510 17.8   15432  7.0   41227 55.4    47501 12.9   174.0  1.1
UFS_SU               45294 66.5   42729 17.5   15563  7.1   39849 53.4    43410 11.9   167.4  1.0
UFS_SU.noatime       [B][color="Orange"]45998[/color][/B] 67.6   42278 17.3   15378  6.9   39169 51.7    44086 12.0   166.6  1.0
UFS_SU.noatime.async [color="Green"][B]46125[/B][/color] 67.7   43361 17.7   15520  7.0   39132 52.4    43598 11.9   169.0  1.0
UFS_GJ               18357 27.5   18079  7.5   10931  4.7   40076 52.9    46950 13.3   [color="red"][B]181.1[/B][/color]  1.2
UFS_GJ.noatime       18140 27.1   16990  7.1   10973  4.7   39837 53.4    47476 13.4   169.4  1.1
UFS_GJ.noatime.async 17942 26.9   17586  7.3   11107  4.8   38021 51.1    47414 13.2   171.4  1.1
ZFS                  32858 64.1   30611 20.4   15401 10.0   39544 60.3    47483 11.0    65.5  0.8
ZFS.noatime          32463 64.5   29860 20.8   14992  9.8   40286 62.0    47717 12.9    65.3  0.7
ZFS.comp=lzjb        40061 78.8   [color="#008000"][B]86064[/B][/color] 61.5   [B][color="#008000"]55270[/color][/B] 42.2   [B][color="#008000"]51819[/color][/B] 79.8   [B][color="#008000"]132028[/color][/B] 50.1   138.0  3.2
ZFS.comp=gzip-1      25843 49.2   38214 26.8   [color="Orange"][B]25772[/B][/color] 30.7   [color="Red"][B]45479[/B][/color] 77.2   [color="#ff0000"][B]102446[/B][/color] 54.4   [color="Orange"][B]354.7[/B][/color] 21.0
ZFS.comp=gzip-9      19968 38.2   22995 16.3   [color="Red"][B]19615[/B][/color] 25.2   [color="Orange"][B]46752[/B][/color] 84.6   [color="#ffa500"][B]102759[/B][/color] 63.0   [color="Green"][B]740.6[/B][/color] 62.6

ZFS_DEF: default ZFS/FreeBSD settings for a 1GB/i386 system

Code:
kern.maxvnodes:           70235
vfs.zfs.prefetch_disable: 0
vfs.zfs.arc_max:          167772160
vm.kmem_size_max:         335544320
vfs.zfs.zil_disable:      0
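
These defaults can be checked on a running system with sysctl(8), for example:

Code:
# sysctl kern.maxvnodes vfs.zfs.arc_max vm.kmem_size_max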

ZFS_TUNE: tuned settings recommended here: http://wiki.freebsd.org/ZFSTuningGuide

Code:
kern.maxvnodes:           50000
vfs.zfs.prefetch_disable: 1
vfs.zfs.arc_max:          104857600
vm.kmem_size_max:         402653184
vfs.zfs.zil_disable:      0 / 1
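
A /boot/loader.conf sketch applying the tuned values above (a sketch only; kern.maxvnodes can also be changed at runtime via sysctl(8)):

Code:
kern.maxvnodes="50000"
vfs.zfs.prefetch_disable="1"
vfs.zfs.arc_max="104857600"
vm.kmem_size_max="402653184"
vfs.zfs.zil_disable="1"    # tested with both 0 and 1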

ZFS results:

Code:
                                ---------Sequential Output----------   -----Sequential Input---   --Random--
                                -Per Char-   --Block---   -Rewrite--   -Per Char-    ---Block---  --Seeks---
                                K/sec %CPU   K/sec %CPU   K/sec %CPU   K/sec %CPU    K/sec %CPU   /sec  %CPU
ZFS_DEF                         32858 64.1   30611 20.4   15401 10.0   39544 60.3    47483 11.0    65.5  0.8
ZFS_TUNE                        35637 68.1   30117 20.2   18787  9.9   35982 47.9    48953  9.3    66.3  0.7
ZFS_TUNE.zil=disabled           38353 74.9   31409 21.1   20198 10.6   35449 48.6    48207  9.6    65.6  0.7

ZFS_DEF.comp=lzjb               40061 78.8   86064 61.5   55270 42.2   51819 79.8   132028 50.1   138.0  3.2
ZFS_TUNE.comp=lzjb              40228 75.6   89397 59.1   50634 40.1   54886 91.4   156476 80.1   127.6  2.9
ZFS_TUNE.comp=lzjb.zil=disabled 40536 76.4   83370 57.4   52601 41.8   54335 92.1   151080 80.2   133.3  2.9


kernel config:

Code:
cpu		I686_CPU
ident		VERMADEN

options	SCHED_ULE		# ULE scheduler
options 	PREEMPTION		# Enable kernel thread preemption
options 	INET			# InterNETworking
options 	FFS			# Berkeley Fast Filesystem
options 	SOFTUPDATES		# Enable FFS soft updates support
options 	UFS_ACL		# Support for access control lists
options 	UFS_DIRHASH		# Improve performance on big directories
options 	UFS_GJOURNAL		# Enable gjournal-based UFS journaling
options 	GEOM_PART_GPT		# GUID Partition Tables.
options 	GEOM_LABEL		# Provides labelization
options 	COMPAT_43TTY		# BSD 4.3 TTY compat [KEEP THIS!]
options 	SCSI_DELAY=5000	# Delay (in ms) before probing SCSI
options 	SYSVSHM		# SYSV-style shared memory
options 	SYSVMSG		# SYSV-style message queues
options 	SYSVSEM		# SYSV-style semaphores
options 	_KPOSIX_PRIORITY_SCHEDULING # POSIX P1003_1B real-time extensions
options 	KBD_INSTALL_CDEV	# install a CDEV entry in /dev
options 	ADAPTIVE_GIANT	# Giant mutex is adaptive.
options 	STOP_NMI		# Stop CPUS using NMI instead of IPI

# SMP kernel
options 	SMP			# Symmetric MultiProcessor Kernel
device		apic			# I/O APIC

# Bus support
device		eisa
device		pci

# Floppy drives
device		fdc

# ATA and ATAPI devices
device		ata
device		atadisk		# ATA disk drives
device		atapicd		# ATAPI CDROM drives
device		atapifd		# ATAPI floppy drives
options 	ATA_STATIC_ID	# Static device numbering

# SCSI peripherals
device		scbus		# SCSI bus (required for SCSI)
device		da		# Direct Access (disks)
device		cd		# CD
device		pass		# Passthrough device (direct SCSI access)

# Keyboard and the PS/2 mouse
device		atkbdc		# AT keyboard controller
device		atkbd		# AT keyboard
device		psm		# PS/2 mouse
device		kbdmux		# keyboard multiplexer

# Syscons console driver
device		sc
device		vga		# VGA video card driver
device		splash		# Splash screen and screen saver support

# Add suspend/resume support for the i8254.
device		pmtimer

# NIC
device		miibus		# MII bus support
device		fxp		# Intel EtherExpress PRO/100B (82557, 82558)

# Pseudo devices
device		loop		# Network loopback
device		random		# Entropy device
device		ether		# Ethernet support
device		pty		# Pseudo-ttys (telnet etc)
device		md		# Memory "disks"

# Berkeley Packet Filter
device		bpf		# Berkeley packet filter

I added this old thread here, since it was originally posted on [font="Courier New"]bsdforums.org[/font] [RIP] and is still present on other UNIX sites/forums, but not here.
 
On ZFS, lzjb is faster than gzip. This is because gzip compresses data better than lzjb, at the cost of speed, AFAIK.

Who knows, maybe on a faster machine gzip would be faster than lzjb ;)
 
I see one huge advantage at home even without any benchmark: ZFS literally flies while building world or updating the source/ports tree, compared to UFS+SU.
 
I've tried using gzip-1 as well. Although it's still slower than lzjb, it's faster than the default gzip (which is gzip-6) and has a nice compression ratio.
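
For example, setting the level and checking the achieved ratio (the dataset name is just a placeholder):

Code:
# zfs set compression=gzip-1 tank/test
# zfs get compression,compressratio tank/test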
 
Here are my current results on the new box:

Create:
# zfs create basefs/test
# zfs set mountpoint=/test basefs/test


Options:
# zfs set compression=[on|off] basefs/test
# zfs set checksum=[on|off] basefs/test
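
The settings can be verified afterwards with:

Code:
# zfs get compression,checksum basefs/test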


Code:
# cd /test && bonnie -s 8192 (this machine has 3GB RAM)
              -------Sequential Output-------- ---Sequential Input-- --Random--
              -Per Char- --Block--- -Rewrite-- -Per Char-  --Block--- --Seeks---
Machine    MB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU  K/sec %CPU  /sec %CPU 
         8192 36165 36.9 46683 16.4 20419  9.7 74582 78.0  94540 13.8  75.1  1.8 ZFS checksum=on  compression=off (default)
         8192 36325 37.1 43597 15.6 19792  9.5 72155 75.4  83432 12.8  58.6  1.6 ZFS checksum=off compression=off 
         8192 36345 37.0 45016 16.5 23312 10.5 69788 72.5  84694 12.8  67.9  1.2 ZFS checksum=off compression=on
         8192 56174 57.8 94827 31.1 71615 28.9 81527 88.6 301633 59.8 113.7  1.3 ZFS checksum=on  compression=lzjb
         8192 58430 59.1 90259 29.3 79894 32.2 82658 89.6 324807 64.3 150.5  1.4 ZFS checksum=off compression=lzjb

/boot/loader.conf
Code:
# modules
zfs_load="YES"
ahci_load="YES"

# zfs tuning
vm.kmem_size=536870912          # 512 MB
vm.kmem_size_max=536870912      # 512 MB
vfs.zfs.vdev.cache.size=8388608 #   8 MB
vfs.zfs.arc_max=67108864        #  64 MB
vfs.zfs.prefetch_disable=0      # enable prefetch

# page share factor per proc
vm.pmap.shpgperproc=512

# default 1000
kern.hz=100

# avoid additional 128 interrupts per second per core
hint.atrtc.0.clock=0

# do not power devices without driver
hw.pci.do_power_nodriver=3

# ahci power management
hint.ahcich.0.pm_level=5
hint.ahcich.1.pm_level=5
hint.ahcich.2.pm_level=5
hint.ahcich.3.pm_level=5
hint.ahcich.4.pm_level=5
hint.ahcich.5.pm_level=5

/etc/sysctl.conf
Code:
# fs
vfs.read_max=32
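
The same value can also be applied to a running system without a reboot:

Code:
# sysctl vfs.read_max=32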
 
Well, without 8-stable (lots of improvements for FreeBSD-related ZFS bugs) and decent memory (or even decent hardware) you will certainly just see a glimpse of ZFS performance. It's nice to see ZFS run on such low specs, but it creates a distorted image of this great filesystem and it is barely comparable in my opinion.
 
I'm throwing in some benchmarks showing the other side of ZFS. :)

$ uname -a
Code:
FreeBSD freebsd.* 8.0-RELEASE-p2 FreeBSD 8.0-RELEASE-p2 #6: Thu Jan 21 05:16:55 CET 2010     marie@freebsd.*:/usr/obj/usr/src/sys/ServeWho  amd64

# zpool status
Code:
  pool: storage
 state: ONLINE
 scrub: scrub completed after 2h4m with 0 errors on Sun Jan 24 22:11:49 2010
config:

        NAME        STATE     READ WRITE CKSUM
        storage     ONLINE       0     0     0
          raidz1    ONLINE       0     0     0
            ad10    ONLINE       0     0     0
            ad12    ONLINE       0     0     0
            ad14    ONLINE       0     0     0
            ad16    ONLINE       0     0     0
        spares
          ad6       AVAIL

errors: No known data errors

Those disks are all Western Digital Greenpower 1.5TB disks, with "idle3 timer" set to 25.5s.

Specifications:
Code:
CPU: Intel Core 2 Duo E7400 2.8GHz, Socket 775, 3MB, FSB 1066, Boxed
Motherboard: MSI P45 NEO-F, P45, Socket-775, DDR2, 1600FSB, ATX, ICH10, PCI-Ex(2.0)x16
RAM: 2x Corsair Value S. PC5300 DDR2 4GB Kit w/two matched Value Select 2048MB  (8GB total)

Benchmark:
# zfs create storage/test
# cd /storage/test && bonnie -s 8192
Code:
              -------Sequential Output-------- ---Sequential Input-- --Random--
              -Per Char-   --Block--- -Rewrite-- -Per Char-  --Block--- --Seeks---
Machine    MB K/sec  %CPU   K/sec %CPU K/sec  %CPU  K/sec  %CPU K/sec  %CPU  /sec   %CPU
         8192 [color="DarkGreen"]142033[/color] 84.5  [color="Red"]156967[/color] 31.2  90154 21.6 154796  72.2 245225 22.3  164.2  0.8 (compression=off checksum=on)
         8192 [color="SandyBrown"]133396[/color] 76.9  [color="DarkGreen"]270523[/color] 54.9 [color="DarkGreen"]226394[/color] 50.5 [color="SandyBrown"]184542[/color]  83.0 [color="DarkGreen"]735736[/color] 63.8  168.2  0.6 (compression=lzjb checksum=on)
         8192 [color="Red"]103418[/color] 58.8  [color="SandyBrown"]178906[/color] 36.1 [color="SandyBrown"]149135[/color] 32.7 [color="DarkGreen"]202452[/color]  89.9 [color="Red"]673094[/color] 56.6  226.8  0.8 (compression=gzip-3 checksum=on)
         8192  85329 49.2  118054 23.9 107935 24.0 [color="Red"]168482[/color]  74.9 [color="SandyBrown"]689845[/color] 54.5  828.8  2.5 (compression=gzip-6 checksum=on)
         8192  82837 47.6  111416 23.2 [color="Red"]108160[/color] 23.8 155636  69.6 663867 52.9 1605.9  4.4 (compression=gzip-9 checksum=on)


The system had 4-6GB free ram at all times.
 
oliverh said:
Well, without 8-stable (lots of improvements for FreeBSD-related ZFS bugs) and decent memory (or even decent hardware) you will certainly just see a glimpse of ZFS performance.

I will be changing my current storage setup, since I own the new "deathstar" series disk, a Western Digital Caviar Green to be precise. Maybe I will end up with RAID5 (raidz1) on 3 disks, or maybe some mirror on two bigger disks. I am currently looking for some 'not so green' drives, maybe two more Caviar Blue 640GB for example.

I would like to get RE3 drives, but they are very pricey ...

oliverh said:
It's nice to see ZFS run on such low specs, but it creates a distorted image of this great filesystem and it is barely comparable in my opinion.
You mean i386, loader.conf tuning, system RAM, or using it on just one disk?
 
Really helpful thread, vermaden. :)

Do you mind if I use your values to tune my i386 box? Mine are a bit too restrictive, I believe, and on many concurrent file operations the system lags.
 
@volatilevoid

Thanks mate, I haven't played a lot with these values. I should probably put them in some for loop and test all night with a script to find which are best, but these just seem reasonable. I am also curious what oliverh will say about which of these settings limits ZFS that much.

Also, post your 'restrictive' settings; I am curious what other people use.
 
volatilevoid said:
I'd guess that tuning on amd64 is much easier...

Thanks for sharing.

The best thing about running ZFS on amd64 is that it does not need tuning at all ;)
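
In other words, on amd64 the ZFS-related part of /boot/loader.conf can be as minimal as this (assuming enough RAM, of course):

Code:
zfs_load="YES"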
 
vermaden said:
I will be changing my current storage setup, since I own the new "deathstar" series disk, a Western Digital Caviar Green to be precise. Maybe I will end up with RAID5 (raidz1) on 3 disks, or maybe some mirror on two bigger disks. I am currently looking for some 'not so green' drives, maybe two more Caviar Blue 640GB for example.
you could check out the new samsung F3 1TB disks
 
vermaden said:
Thanks for sharing.

The best thing about running ZFS on amd64 is that it does not need tuning at all ;)

Well, you have to tune under certain conditions, even on OpenSolaris.

But to answer your initial question:

CPU: (single) Athlon XP 2000+ [ 12.5 x 1333MHz ]
MEM: 1 GB DDR 266MHz CL2
FSB Ratio: 1:1
MOTHERBOARD: AMD 760 MPX
HDD: (single) Maxtor 6L160P0 ATA/133

With those specs you're using ZFS in low-power mode (in terms of secure operation and performance). To unleash its power, and to understand my point, compare it to a Porsche: it isn't a car for city traffic, so it isn't of much use in such an environment. Anything is possible, but many things are barely reasonable. ZFS is a filesystem designed for really big servers or comparable workstations.
 
oliverh said:
Well, you have to tune under certain conditions, even on OpenSolaris.

But to answer your initial question:

CPU: (single) Athlon XP 2000+ [ 12.5 x 1333MHz ]
MEM: 1 GB DDR 266MHz CL2
FSB Ratio: 1:1
MOTHERBOARD: AMD 760 MPX
HDD: (single) Maxtor 6L160P0 ATA/133

With those specs you're using ZFS in low-power mode (in terms of secure operation and performance). To unleash its power, and to understand my point, compare it to a Porsche: it isn't a car for city traffic, so it isn't of much use in such an environment. Anything is possible, but many things are barely reasonable. ZFS is a filesystem designed for really big servers or comparable workstations.

Yes, that hardware is ancient; I have not owned it for a long time. My current setup provides these results, but I must get some more disks (and get rid of the WD Green):
http://forums.freebsd.org/showpost.php?p=64121&postcount=5

Matty said:
I see, but these are the F1 drives, not the F3 (Samsung Spinpoint F3 HD103SJ).

If you look at the pic http://www.xbitlabs.com/images/storage/1tb-14hdd-roundup/p13.jpg of the Samsung drive, you can see its manufacture date: 2007.11.

edit: will try at home. Got 4x 1TB of the named F3s in raidz1.
Thanks for the info. I am only a little scared of these new F3s, since they are somehow very low on power, and I am curious whether they incorporate some 'green' shit like the Caviar Greens from WD do:
http://tomshardware.com/reviews/2tb-hdd-7200,2430-10.html

Also, the random access time is not as good as on the WD Caviar Blue: 11.9-12.4 ms vs 13.5-13.9 ms (more I/O operations on the WD), but the F3 has much better MB/s transfers. Hard to decide ...
 
I ended up ordering 3 x Samsung Spinpoint F3 HD103SJ 1TB. After all the reviews, they seem a better choice than the others, with a lot lower power consumption and lower temperature. Thanks for the suggestion, Matty ;)
 
vermaden said:
I ended up ordering 3 x Samsung Spinpoint F3 HD103SJ 1TB. After all the reviews, they seem a better choice than the others, with a lot lower power consumption and lower temperature. Thanks for the suggestion, Matty ;)

I couldn't find any reviews about multithreading on these drives, which is too bad because I would really like to know how they perform.

edit: well, there is one, but it's in German: http://www.ocaholic.ch/xoops/html/modules/smartsection/item.php?itemid=369&page=6
tested with iozone -Rb test_xk.out -i0 -i1 -i2 -+n -r xk -s4g -t2

too bad there is no comparison with some WD disks
 
Matty said:
I couldn't find any reviews about multithreading on these drives, which is too bad because I would really like to know how they perform.

(...)

too bad there is no comparison with some WD disks

I have also found these:
http://bit-tech.net/hardware/storage/2009/10/06/samsung-spinpoint-f3-1tb-review/9
http://tomshardware.com/charts/2009-3.5-desktop-hard-drive-charts/IOMeter-2006.07.27,1039.html

They are really good (better than WD Black/RE) when it comes to performance per watt. Raw I/O operations are faster on the WD Black/RE, but raw interface performance is faster on the Samsung F3.

Also, only the 2TB version of the WD Black/RE uses 500GB platters (same as the Samsung F3); older WDs use 320GB platters (3 of them in the 1TB WD Black).

The only 'bad' thing is that the Samsung F3 lacks the 5-year warranty that WD Black/RE drives have ... (only 3 years for Samsung)

WD Caviar Black 1TB (WD1001FALS)


Samsung F3 1TB (HD103SJ)
 
diskinfo(8) results for the Samsung F3 1TB (HD103SJ)

Code:
# dmesg | grep ada0
ada0 at ahcich0 bus 0 target 0 lun 0
ada0: <SAMSUNG HD103SJ 1AJ100E4> ATA/ATAPI-8 SATA 2.x device
ada0: 300.000MB/s transfers
ada0: 953869MB (1953525168 512 byte sectors: 16H 63S/T 16383C)
ada0: Native Command Queueing enabled

Code:
# diskinfo -c -v -t ada0
ada0    
        512             # sectorsize
        1000204886016   # mediasize in bytes (932G)
        1953525168      # mediasize in sectors
        1938021         # Cylinders according to firmware.
        16              # Heads according to firmware.
        63              # Sectors according to firmware.
        S246J90Z131004  # Disk ident.

I/O command overhead:
        time to read 10MB block      0.077151 sec       =    0.004 msec/sector
        time to read 20480 sectors   2.211735 sec       =    0.108 msec/sector
        calculated command overhead                     =    0.104 msec/sector

Seek times:
        Full stroke:      250 iter in   5.234394 sec =   20.938 msec
        Half stroke:      250 iter in   3.918627 sec =   15.675 msec
        Quarter stroke:   500 iter in   6.541610 sec =   13.083 msec
        Short forward:    400 iter in   1.145674 sec =    2.864 msec
        Short backward:   400 iter in   2.402746 sec =    5.992 msec
        Seq outer:       2048 iter in   0.161329 sec =    0.079 msec
        Seq inner:       2048 iter in   0.216652 sec =    0.106 msec
Transfer rates:
        outside:       102400 kbytes in   0.697989 sec =   146707 kbytes/sec
        middle:        102400 kbytes in   0.832913 sec =   122942 kbytes/sec
        inside:        102400 kbytes in   1.297791 sec =    78903 kbytes/sec

Will post some ZFS benchmarks later ...
 
RAID0 on 3 x Samsung F3 1TB (HD103SJ)

Simple bonnie benchmark performance:
Code:
[B]raw#[/B] [color="Blue"]gstripe status[/color]
       Name  Status  Components
stripe/raw       UP  ada0s2
                     ada1s2
                     ada2s2

[B]bonnie#[/B] [color="Blue"]bonnie -s 2560m[/color]
              -------Sequential Output-------- ---Sequential Input-- --Random--
              -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
Machine    MB  K/sec %CPU  K/sec %CPU  K/sec %CPU  K/sec %CPU   K/sec %CPU    /sec  %CPU
         2560 105848 77.5 255758 46.2  53981 11.5  87115 82.3  214039 26.5  8254.0  16.3 UFS
         2560 110135 78.7 255036 46.6  52714 11.3  86784 82.0  215048 27.2 10383.1  20.4 UFS (SoftUpdates)
         2560  77014 60.5 114061 24.4 110470 27.5 110183 98.0 1286423 99.7 49717.9 180.7 UFS (GJournal/async)
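
For reference, a stripe like the one above could be created with gstripe(8) (a sketch; the slice names are taken from the status output):

Code:
# gstripe load
# gstripe label -v raw ada0s2 ada1s2 ada2s2
# newfs /dev/stripe/raw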

Simple raw device performance:
Code:
[B]write#[/B] [color="Blue"]dd < /dev/zero > /dev/stripe/raw bs=4M count=256[/color]
256+0 records in
256+0 records out
1073741824 bytes transferred in 4.187841 secs (256395083 bytes/sec) [B][250MB/s][/B]

[B]read#[/B] [color="Blue"]dd > /dev/null < /dev/stripe/raw bs=4M count=256[/color]
256+0 records in
256+0 records out
1073741824 bytes transferred in 3.785693 secs (283631498 bytes/sec) [B][280MB/s][/B]


RAID5 (zfs raidz) on 3 x Samsung F3 1TB (HD103SJ)

Simple bonnie benchmark performance:
Code:
[B]zfs#[/B] [color="#0000ff"]zpool create basefs raidz ada0s3 ada1s3 ada2s3[/color]
[B]zfs#[/B] [color="Blue"]zpool status[/color]
  pool: basefs
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        basefs      ONLINE       0     0     0
          raidz1    ONLINE       0     0     0
            ada0s3  ONLINE       0     0     0
            ada1s3  ONLINE       0     0     0
            ada2s3  ONLINE       0     0     0

errors: No known data errors

[B]zfs#[/B] [color="Blue"]zpool list[/color]
NAME     SIZE   USED  AVAIL    CAP  HEALTH  ALTROOT
basefs  2.72T  3.00G  2.72T     0%  ONLINE  -

[B]zfs#[/B] [color="Blue"]zfs list[/color]
NAME        USED  AVAIL  REFER  MOUNTPOINT
basefs     2.00G  1.78T  2.00G  /basefs

[B]zfs#[/B] [color="Blue"]df -h /basefs[/color]
Filesystem    Size    Used   Avail Capacity  Mounted on
basefs        1.8T    2.0G    1.8T     0%    /basefs

[B]zfs#[/B] [color="Blue"]cd /basefs && bonnie -s 8192m[/color]
              -------Sequential Output-------- ---Sequential Input-- --Random--
              -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
Machine    MB K/sec %CPU K/sec  %CPU K/sec %CPU K/sec %CPU K/sec  %CPU   /sec %CPU
	 8192 46592 49.4  70095 21.9 44199 19.9 73329 77.9 151731 24.8   97.8  1.5 checksum=on  compression=off
	 8192 50684 51.1  76920 24.0 47233 19.7 85592 92.9 153819 24.3  115.0  1.2 checksum=off compression=off
	 8192 59356 59.3 103940 32.5 84086 34.0 83807 89.0 348380 55.3  157.1  1.9 checksum=on  compression=on (lzjb)
	 8192 58047 58.3 102645 32.1 83974 34.0 84356 89.6 353521 56.8  159.5  1.9 checksum=off compression=on (lzjb)
	 8192 43438 43.6  66016 20.6 49970 19.8 78088 81.5 256126 40.0  247.0  2.6 checksum=on  compression=gzip-1
	 8192 42704 43.1  65948 20.6 50832 20.1 77435 81.9 256208 40.0  255.0  2.5 checksum=off compression=gzip-1
	 8192 36383 36.7  45631 15.6 41276 17.1 76290 82.0 250496 42.0 1353.5  8.1 checksum=on  compression=gzip-9
	 8192 36896 37.1  46299 14.4 41364 17.0 77537 81.7 259652 40.6 1236.4  7.4 checksum=off compression=gzip-9

Simple raw device performance:
Code:
[B]write#[/B] [color="Blue"]dd < /dev/zero > /basefs/FILE bs=4M count=512[/color]
512+0 records in
512+0 records out
2147483648 bytes transferred in 23.038062 secs (93214596 bytes/sec) [B][90MB/s][/B]

[B]read#[/B] [color="Blue"]dd > /dev/null < /basefs/FILE bs=4M count=512[/color]
512+0 records in
512+0 records out
2147483648 bytes transferred in 13.027241 secs (164845622 bytes/sec) [B][160MB/s][/B]
 