ZFS performance on HP MicroServer N54L

Hi all!

How can I increase the performance of my box? Is it possible at all?

My system:
# uname -srm
Code:
FreeBSD 9.2-RELEASE-p2 amd64
HDD:
# camcontrol devlist
Code:
<VB0250EAVER HPG9>                 at scbus0 target 0 lun 0 (ada0,pass0)
<WDC WD30EZRX-00MMMB0 80.00A80>    at scbus1 target 0 lun 0 (ada1,pass1)
<WDC WD30EZRX-00MMMB0 80.00A80>    at scbus2 target 0 lun 0 (ada2,pass2)
<WDC WD30EZRX-00DC0B0 80.00A80>    at scbus3 target 0 lun 0 (ada3,pass3)
dmesg.boot:
Code:
    Copyright (c) 1992-2013 The FreeBSD Project.
    Copyright (c) 1979, 1980, 1983, 1986, 1988, 1989, 1991, 1992, 1993, 1994
            The Regents of the University of California. All rights reserved.
    FreeBSD is a registered trademark of The FreeBSD Foundation.
    FreeBSD 9.2-RELEASE-p2 #0 r258792: Sun Dec  1 20:03:49 MSK 2013
        vovas@proliant:/usr/obj/usr/src/sys/PROLIANT amd64
    FreeBSD clang version 3.3 (tags/RELEASE_33/final 183502) 20130610
    CPU: AMD Turion(tm) II Neo N54L Dual-Core Processor (2196.38-MHz K8-class CPU)
      Origin = "AuthenticAMD"  Id = 0x100f63  Family = 0x10  Model = 0x6  Stepping = 3
      Features=0x178bfbff<FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CLFLUSH,MMX,FXSR,SSE,SSE2,HTT>
      Features2=0x802009<SSE3,MON,CX16,POPCNT>
      AMD Features=0xee500800<SYSCALL,NX,MMX+,FFXSR,Page1GB,RDTSCP,LM,3DNow!+,3DNow!>
      AMD Features2=0x837ff<LAHF,CMP,SVM,ExtAPIC,CR8,ABM,SSE4A,MAS,Prefetch,OSVW,IBS,SKINIT,WDT,NodeId>
      TSC: P-state invariant
    real memory  = 4294967296 (4096 MB)
    avail memory = 3975405568 (3791 MB)
    Event timer "LAPIC" quality 400
    ACPI APIC Table: <HP     ProLiant>
    FreeBSD/SMP: Multiprocessor System Detected: 2 CPUs
    FreeBSD/SMP: 1 package(s) x 2 core(s)
     cpu0 (BSP): APIC ID:  0
     cpu1 (AP): APIC ID:  1
    ioapic0 <Version 2.1> irqs 0-23 on motherboard
    kbd1 at kbdmux0
    acpi0: <HP ProLiant> on motherboard
    acpi0: Power Button (fixed)
    acpi0: reservation of fee00000, 1000 (3) failed
    acpi0: reservation of ffb80000, 80000 (3) failed
    acpi0: reservation of fec10000, 20 (3) failed
    acpi0: reservation of fed80000, 1000 (3) failed
    acpi0: reservation of 0, a0000 (3) failed
    acpi0: reservation of 100000, d7f00000 (3) failed
    cpu0: <ACPI CPU> on acpi0
    cpu1: <ACPI CPU> on acpi0
    attimer0: <AT timer> port 0x40-0x43 irq 0 on acpi0
    Timecounter "i8254" frequency 1193182 Hz quality 0
    Event timer "i8254" frequency 1193182 Hz quality 100
    atrtc0: <AT realtime clock> port 0x70-0x71 irq 8 on acpi0
    Event timer "RTC" frequency 32768 Hz quality 0
    hpet0: <High Precision Event Timer> iomem 0xfed00000-0xfed003ff on acpi0
    Timecounter "HPET" frequency 14318180 Hz quality 950
    Event timer "HPET" frequency 14318180 Hz quality 550
    Event timer "HPET1" frequency 14318180 Hz quality 450
    Timecounter "ACPI-safe" frequency 3579545 Hz quality 850
    acpi_timer0: <32-bit timer at 3.579545MHz> port 0x808-0x80b on acpi0
    pcib0: <ACPI Host-PCI bridge> port 0xcf8-0xcff on acpi0
    pci0: <ACPI PCI bus> on pcib0
    pcib1: <ACPI PCI-PCI bridge> at device 1.0 on pci0
    pci1: <ACPI PCI bus> on pcib1
    vgapci0: <VGA-compatible display> port 0xe000-0xe0ff mem 0xf0000000-0xf7ffffff,0xfe8f0000-0xfe8fffff,0xfe700000-0xfe7fffff irq 18 at device 5.0 on pci1
    pcib2: <ACPI PCI-PCI bridge> irq 18 at device 6.0 on pci0
    pci2: <ACPI PCI bus> on pcib2
    bge0: <HP NC107i PCIe Gigabit Server Adapter, ASIC rev. 0x5784100> mem 0xfe9f0000-0xfe9fffff irq 18 at device 0.0 on pci2
    bge0: CHIP ID 0x05784100; ASIC REV 0x5784; CHIP REV 0x57841; PCI-E
    miibus0: <MII bus> on bge0
    brgphy0: <BCM5784 10/100/1000baseT PHY> PHY 1 on miibus0
    brgphy0:  10baseT, 10baseT-FDX, 100baseTX, 100baseTX-FDX, 1000baseT, 1000baseT-master, 1000baseT-FDX, 1000baseT-FDX-master, auto, auto-flow
    bge0: Ethernet address: 28:92:4a:34:dc:6b
    ahci0: <ATI IXP700 AHCI SATA controller> port 0xd000-0xd007,0xc000-0xc003,0xb000-0xb007,0xa000-0xa003,0x9000-0x900f mem 0xfe6ffc00-0xfe6fffff irq 19 at device 17.0 on pci0
    ahci0: AHCI v1.20 with 6 3Gbps ports, Port Multiplier supported
    ahcich0: <AHCI channel> at channel 0 on ahci0
    ahcich1: <AHCI channel> at channel 1 on ahci0
    ahcich2: <AHCI channel> at channel 2 on ahci0
    ahcich3: <AHCI channel> at channel 3 on ahci0
    ahcich4: <AHCI channel> at channel 4 on ahci0
    ahcich5: <AHCI channel> at channel 5 on ahci0
    ohci0: <AMD SB7x0/SB8x0/SB9x0 USB controller> mem 0xfe6fe000-0xfe6fefff irq 18 at device 18.0 on pci0
    usbus0 on ohci0
    ehci0: <AMD SB7x0/SB8x0/SB9x0 USB 2.0 controller> mem 0xfe6ff800-0xfe6ff8ff irq 17 at device 18.2 on pci0
    usbus1: EHCI version 1.0
    usbus1 on ehci0
    ohci1: <AMD SB7x0/SB8x0/SB9x0 USB controller> mem 0xfe6fd000-0xfe6fdfff irq 18 at device 19.0 on pci0
    usbus2 on ohci1
    ehci1: <AMD SB7x0/SB8x0/SB9x0 USB 2.0 controller> mem 0xfe6ff400-0xfe6ff4ff irq 17 at device 19.2 on pci0
    usbus3: EHCI version 1.0
    usbus3 on ehci1
    pci0: <serial bus, SMBus> at device 20.0 (no driver attached)
    isab0: <PCI-ISA bridge> at device 20.3 on pci0
    isa0: <ISA bus> on isab0
    pcib3: <ACPI PCI-PCI bridge> at device 20.4 on pci0
    pci3: <ACPI PCI bus> on pcib3
    ohci2: <AMD SB7x0/SB8x0/SB9x0 USB controller> mem 0xfe6fc000-0xfe6fcfff irq 18 at device 22.0 on pci0
    usbus4 on ohci2
    ehci2: <AMD SB7x0/SB8x0/SB9x0 USB 2.0 controller> mem 0xfe6ff000-0xfe6ff0ff irq 17 at device 22.2 on pci0
    usbus5: EHCI version 1.0
    usbus5 on ehci2
    amdtemp0: <AMD CPU On-Die Thermal Sensors> on hostb4
    acpi_button0: <Power Button> on acpi0
    sc0: <System console> at flags 0x100 on isa0
    sc0: VGA <16 virtual consoles, flags=0x300>
    vga0: <Generic ISA VGA> at port 0x3c0-0x3df iomem 0xa0000-0xbffff on isa0
    ppc0: cannot reserve I/O port range
    acpi_throttle0: <ACPI CPU Throttling> on cpu0
    hwpstate0: <Cool`n'Quiet 2.0> on cpu0
    Timecounters tick every 1.000 msec
    usbus0: 12Mbps Full Speed USB v1.0
    usbus1: 480Mbps High Speed USB v2.0
    usbus2: 12Mbps Full Speed USB v1.0
    usbus3: 480Mbps High Speed USB v2.0
    usbus4: 12Mbps Full Speed USB v1.0
    usbus5: 480Mbps High Speed USB v2.0
    ugen0.1: <ATI> at usbus0
    uhub0: <ATI OHCI root HUB, class 9/0, rev 1.00/1.00, addr 1> on usbus0
    ugen1.1: <ATI> at usbus1
    uhub1: <ATI EHCI root HUB, class 9/0, rev 2.00/1.00, addr 1> on usbus1
    ugen2.1: <ATI> at usbus2
    uhub2: <ATI OHCI root HUB, class 9/0, rev 1.00/1.00, addr 1> on usbus2
    ugen3.1: <ATI> at usbus3
    uhub3: <ATI EHCI root HUB, class 9/0, rev 2.00/1.00, addr 1> on usbus3
    ugen4.1: <ATI> at usbus4
    uhub4: <ATI OHCI root HUB, class 9/0, rev 1.00/1.00, addr 1> on usbus4
    ugen5.1: <ATI> at usbus5
    uhub5: <ATI EHCI root HUB, class 9/0, rev 2.00/1.00, addr 1> on usbus5
    (aprobe0:ahcich0:0:0:0): SETFEATURES ENABLE SATA FEATURE. ACB: ef 10 00 00 00 40 00 00 00 00 02 00
    (aprobe0:ahcich0:0:0:0): CAM status: ATA Status Error
    (aprobe0:ahcich0:0:0:0): ATA status: 51 (DRDY SERV ERR), error: 04 (ABRT )
    (aprobe0:ahcich0:0:0:0): RES: 51 04 00 00 00 40 00 00 00 02 00
    (aprobe0:ahcich0:0:0:0): Retrying command
    (aprobe0:ahcich0:0:0:0): SETFEATURES ENABLE SATA FEATURE. ACB: ef 10 00 00 00 40 00 00 00 00 02 00
    (aprobe0:ahcich0:0:0:0): CAM status: ATA Status Error
    (aprobe0:ahcich0:0:0:0): ATA status: 51 (DRDY SERV ERR), error: 04 (ABRT )
    (aprobe0:ahcich0:0:0:0): RES: 51 04 00 00 00 40 00 00 00 02 00
    (aprobe0:ahcich0:0:0:0): Error 5, Retries exhausted
    ada0 at ahcich0 bus 0 scbus0 target 0 lun 0
    ada0: <VB0250EAVER HPG9> ATA-8 SATA 2.x device
    ada0: 300.000MB/s transfers (SATA 2.x, UDMA5, PIO 8192bytes)
    ada0: Command Queueing enabled
    ada0: 238475MB (488397168 512 byte sectors: 16H 63S/T 16383C)
    ada0: Previously was known as ad4
    ada1 at ahcich1 bus 0 scbus1 target 0 lun 0
    ada1: <WDC WD30EZRX-00MMMB0 80.00A80> ATA-8 SATA 3.x device
    ada1: 300.000MB/s transfers (SATA 2.x, UDMA6, PIO 8192bytes)
    ada1: Command Queueing enabled
    ada1: 2861588MB (5860533168 512 byte sectors: 16H 63S/T 16383C)
    ada1: quirks=0x1<4K>
    ada1: Previously was known as ad6
    ada2 at ahcich2 bus 0 scbus2 target 0 lun 0
    ada2: <WDC WD30EZRX-00MMMB0 80.00A80> ATA-8 SATA 3.x device
    ada2: 300.000MB/s transfers (SATA 2.x, UDMA6, PIO 8192bytes)
    ada2: Command Queueing enabled
    ada2: 2861588MB (5860533168 512 byte sectors: 16H 63S/T 16383C)
    ada2: quirks=0x1<4K>
    ada2: Previously was known as ad8
    ada3 at ahcich3 bus 0 scbus3 target 0 lun 0
    ada3: <WDC WD30EZRX-00DC0B0 80.00A80> ATA-9 SATA 3.x device
    ada3: 300.000MB/s transfers (SATA 2.x, UDMA6, PIO 8192bytes)
    ada3: Command Queueing enabled
    ada3: 2861588MB (5860533168 512 byte sectors: 16H 63S/T 16383C)
    ada3: quirks=0x1<4K>
    ada3: Previously was known as ad10
    SMP: AP CPU #1 Launched!
    Timecounter "TSC-low" frequency 1098191930 Hz quality 800
    uhub4: 4 ports with 4 removable, self powered
    uhub0: 5 ports with 5 removable, self powered
    uhub2: 5 ports with 5 removable, self powered
    Root mount waiting for: usbus5 usbus3 usbus1
    Root mount waiting for: usbus5 usbus3 usbus1
    uhub5: 4 ports with 4 removable, self powered
    uhub1: 5 ports with 5 removable, self powered
    uhub3: 5 ports with 5 removable, self powered
    Trying to mount root from ufs:/dev/ada0p2 [rw]...
    ZFS NOTICE: Prefetch is disabled by default if less than 4GB of RAM is present;
                to enable, add "vfs.zfs.prefetch_disable=0" to /boot/loader.conf.
    ZFS filesystem version: 5
    ZFS storage pool version: features support (5000)
Zpool status:
# zpool status
Code:
      pool: storage
     state: ONLINE
      scan: scrub canceled on Mon Oct 21 22:31:17 2013
    config:

            NAME           STATE     READ WRITE CKSUM
            storage        ONLINE       0     0     0
              raidz1-0     ONLINE       0     0     0
                gpt/disk1  ONLINE       0     0     0
                gpt/disk2  ONLINE       0     0     0
                gpt/disk3  ONLINE       0     0     0

    errors: No known data errors
vfs.zfs.prefetch_disable:
# sysctl -a | grep vfs.zfs.prefetch_disable
Code:
vfs.zfs.prefetch_disable: 0
ashift:
# zdb | grep ashift
Code:
                ashift: 12
Tests:
# vmstat -P 5
Code:
 procs      memory      page                    disks     faults         cpu0     cpu1
 r b w     avm    fre   flt  re  pi  po    fr  sr ad0 ad1   in   sy   cs us sy id us sy id
 0 0 0   4594M   174M    54   0   0   0 14586  48   0   0 5406 8745 11757  4  7 89  3  8 89
 0 0 0   4594M   177M     0   0   0   0 22412   0   0 283 2685 1760 7611 19  7 74 14 12 74
 2 0 0   4594M   170M     0   0   0   0 24410   0   0 287 2750 1686 7904 20  9 71 17 12 71
 0 0 0   4594M   167M     0   0   0   0 21649   0   3 265 2584 1728 7500 16  9 75 12 12 77
 0 0 0   4594M   176M     1   0   0   0 22064   0   0 270 2627 2209 7245 24  8 68 11 12 78
 0 0 0   4594M   172M     0   0   0   0 21098   0   2 256 2576 1720 7293 17  7 76 13 11 76
iostat:
# iostat -d -n5 5
Code:
            ada0             ada1             ada2             ada3            pass0
  KB/t tps  MB/s   KB/t tps  MB/s   KB/t tps  MB/s   KB/t tps  MB/s   KB/t tps  MB/s
 15.99   4  0.06  79.03 178 13.77  78.93 178 13.73  78.35 179 13.72   0.38   0  0.00
 14.67   1  0.01  93.13 264 23.99  94.25 271 24.98  94.18 270 24.87   0.00   0  0.00
  0.00   0  0.00  91.22 264 23.51  91.47 270 24.13  90.44 263 23.21   0.00   0  0.00
 25.00   2  0.04  88.90 277 24.04  88.46 276 23.82  88.28 281 24.19   0.00   0  0.00
  0.00   0  0.00  90.97 274 24.32  91.99 271 24.36  90.87 270 23.96   0.00   0  0.00
  0.00   0  0.00  91.55 280 25.03  91.28 276 24.58  90.41 296 26.15   0.00   0  0.00
  0.00   0  0.00  92.77 276 25.00  91.71 273 24.46  90.41 270 23.83   0.00   0  0.00

# zpool iostat 5
Code:
               capacity     operations    bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
storage     3,77T  4,35T    290     77  34,4M  4,35M
storage     3,77T  4,35T    601      0  74,4M      0
storage     3,77T  4,35T    597      0  74,1M      0
storage     3,77T  4,35T    562      0  69,5M      0
storage     3,77T  4,35T    600      0  74,5M      0
storage     3,77T  4,35T    588      0  73,0M      0
storage     3,77T  4,35T    592      0  73,2M      0
storage     3,77T  4,35T    655      0  81,3M      0
storage     3,77T  4,35T    680      0  84,4M      0
storage     3,77T  4,35T    648      0  80,5M      0
storage     3,77T  4,35T    608      0  75,3M      0
/etc/sysctl.conf
Code:
net.inet.tcp.cc.algorithm=htcp
net.inet.tcp.hostcache.expire=3900
kern.ipc.somaxconn=1024
net.inet.tcp.mssdflt=1460
net.inet.tcp.nolocaltimewait=1
net.inet.tcp.experimental.initcwnd10=1
net.inet.tcp.rfc1323=1
net.inet.tcp.rfc3390=1
net.inet.tcp.sendspace=262144
net.inet.tcp.recvspace=262144
net.inet.tcp.sendbuf_inc=262144
net.inet.tcp.recvbuf_inc=262144
net.inet.tcp.syncache.rexmtlimit=1
net.inet.tcp.syncookies=0
net.inet.ip.check_interface=1         # verify packet arrives on correct interface (default 0)
net.inet.ip.portrange.randomized=1    # randomize outgoing upper ports (default 1)
net.inet.ip.process_options=0         # IP options in the incoming packets will be ignored (default 1)
net.inet.ip.random_id=1               # assign a random IP_ID to each packet leaving the system (default 0)
net.inet.ip.redirect=0                # do not send IP redirects (default 1)
net.inet.ip.accept_sourceroute=0      # drop source routed packets since they can not be trusted (default 0)
net.inet.ip.sourceroute=0             # if source routed packets are accepted the route data is ignored (default 0)
net.inet.ip.stealth=1                 # do not reduce the TTL by one(1) when a packets goes through the firewall (default 0)
net.inet.icmp.bmcastecho=0            # do not respond to ICMP packets sent to IP broadcast addresses (default 0)
net.inet.icmp.maskfake=0              # do not fake reply to ICMP Address Mask Request packets (default 0)
net.inet.icmp.maskrepl=0              # replies are not sent for ICMP address mask requests (default 0)
net.inet.icmp.log_redirect=0          # do not log redirected ICMP packet attempts (default 0)
net.inet.icmp.drop_redirect=1         # no redirected ICMP packets (default 0)
net.inet.icmp.icmplim=10              # number of ICMP/RST packets/sec to limit returned packet bursts during a DoS. (default 200)
net.inet.icmp.icmplim_output=1        # show "Limiting open port RST response" messages (default 1)
#net.inet.tcp.delayed_ack=1           # always employ delayed ack, 6 packets get 1 ack to increase bandwidth (default 1)
net.inet.tcp.drop_synfin=1            # SYN/FIN packets get dropped on initial connection (default 0)
#net.inet.tcp.ecn.enable=0            # explicit congestion notification (ecn) warning: some ISP routers abuse it (default 0)
net.inet.tcp.fast_finwait2_recycle=1  # recycle FIN/WAIT states quickly (helps against DoS, but may cause false RST) (default 0)
net.inet.tcp.icmp_may_rst=0           # icmp may not send RST to avoid spoofed icmp/udp floods (default 1)
#net.inet.tcp.maxtcptw=15000          # max number of tcp time_wait states for closing connections (default 5120)
net.inet.tcp.msl=3000                 # 3s maximum segment life waiting for an ACK in reply to a SYN-ACK or FIN-ACK (default 30000)
net.inet.tcp.path_mtu_discovery=0     # disable MTU discovery since most ICMP type 3 packets are dropped by others (default 1)
net.inet.tcp.rfc3042=0                # disable limited transmit mechanism which can slow burst transmissions (default 1)
net.inet.tcp.sack.enable=1            # TCP Selective Acknowledgments are needed for high throughput (default 1)
net.inet.udp.blackhole=1              # drop udp packets destined for closed sockets (default 0)
net.inet.tcp.blackhole=2              # drop tcp packets destined for closed ports (default 0)
kern.maxvnodes=250000
/boot/loader.conf
Code:
aio_load="YES"
autoboot_delay="3"
cc_htcp_load="YES"
amdtemp_load="YES"
#vfs.zfs.write_limit_override=268435456
vfs.zfs.prefetch_disable=0
vfs.zfs.arc_max="2048M"
 
What performance problems are you seeing? I've got two 2-disk mirrors in an N54L box and my speed is limited by the gigabit Ethernet port.

I see you're using RAIDZ1 - you could speed things up by using 2 mirrors. Also, adding RAM will help.
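
On the RAM point: the /boot/loader.conf posted above sets vfs.zfs.arc_max="2048M", i.e. a 2 GB ARC cap, so extra memory won't be used for caching unless that cap is raised as well. A rough sketch, assuming a hypothetical upgrade to 8 GB and leaving roughly 2 GB for the rest of the system:
Code:
# /boot/loader.conf -- example value for an 8 GB machine; adjust to taste
vfs.zfs.arc_max="6144M"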
 
Assuming you're looking for more network throughput: the bottleneck might be on the other end of the Ethernet cable.
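
A quick way to rule the network in or out is to measure raw TCP throughput separately from the disks, e.g. with benchmarks/iperf from ports (the hostname below is a placeholder):
Code:
# on the MicroServer
iperf -s
# on the client
iperf -c microserver.example.net -t 30
If that tops out near ~940 Mbit/s, gigabit Ethernet is the ceiling rather than ZFS.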
 
worldi said:
How did you determine these numbers?
# dd if=/dev/zero of=/storage/test.hdd bs=1G count=40
Code:
40+0 records in
40+0 records out
42949672960 bytes transferred in 92.870659 secs (462467623 bytes/sec)
# dd of=/dev/zero if=/storage/test.hdd bs=1G count=40
Code:
40+0 records in
40+0 records out
42949672960 bytes transferred in 49.442305 secs (868682657 bytes/sec)
 
Um... ~440 megabytes per second write speed isn't actually bad :)

That's 42949672960 bytes in 92.87 seconds, i.e. roughly 440 MiB/s (about 460 MB/s decimal) to write 40 GiB.
 
/dev/zero is completely useless for performance testing here: with compression enabled (this pool uses lz4, as the zfs get output later in the thread shows), long runs of zeroes compress to almost nothing, so those numbers mostly measure CPU and caching rather than the disks.

You're better off trying with /dev/random (or any incompressible data), or even better use iozone or bonnie++.
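
If you want to stick with dd(1), one option is to stage an incompressible file once and reuse it, since /dev/random itself is often too slow to keep the disks busy. A rough sketch (sizes and paths are only examples; the source disk's read speed becomes the floor for the write test):
Code:
# build ~4 GiB of incompressible data on the system disk
dd if=/dev/random of=/var/tmp/random.src bs=1M count=4096
# write test: copy it onto the pool
dd if=/var/tmp/random.src of=/storage/test.hdd bs=1M
# read test: read it back, discarding the data
dd if=/storage/test.hdd of=/dev/null bs=1M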
 
# bonnie++ -d /storage -s 64000 -r 4000 -u root
Code:
Version  1.97       ------Sequential Output------ --Sequential Input- --Random-
Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
izmc.zapto.o 64000M   109  99 296197  76 249299  67   263  96 680394  77 369.7  18
Latency               130ms    2562ms    3149ms     239ms     102ms     147ms
Version  1.97       ------Sequential Create------ --------Random Create--------
izmc.zapto.org      -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16 21368  94 18115  28 16969  94  4851  22 +++++ +++ 19703  96
Latency             18975us     387ms    2784us    2386ms     159us     302us
1.97,1.97,izmc.zapto.org,1,1389712438,64000M,,109,99,296197,76,249299,67,263,96,680394,77,369.7,18,16,,,,,21368,94,18115,28,16969,94,4851,22,+++++,+++,19703,96,130ms,2562ms,3149ms,239ms,102ms,147ms,18975us,387ms,2784us,2386ms,159us,302us
# zfs-stats -a
Code:
------------------------------------------------------------------------
ZFS Subsystem Report                            Tue Jan 14 19:16:22 2014
------------------------------------------------------------------------

System Information:

        Kernel Version:                         902001 (osreldate)
        Hardware Platform:                      amd64
        Processor Architecture:                 amd64

        ZFS Storage pool Version:               5000
        ZFS Filesystem Version:                 5

FreeBSD 9.2-RELEASE-p2 #0 r258792: Sun Dec 1 20:03:49 MSK 2013 vovas
19:16  up  7:27, 2 users, load averages: 5,71 2,59 1,13

------------------------------------------------------------------------

System Memory:

        5.13%   195.94  MiB Active,     1.54%   58.95   MiB Inact
        65.80%  2.45    GiB Wired,      0.10%   3.77    MiB Cache
        27.42%  1.02    GiB Free,       0.01%   540.00  KiB Gap

        Real Installed:                         4.00    GiB
        Real Available:                 96.56%  3.86    GiB
        Real Managed:                   96.57%  3.73    GiB

        Logical Total:                          4.00    GiB
        Logical Used:                   72.90%  2.92    GiB
        Logical Free:                   27.10%  1.08    GiB

Kernel Memory:                                  2.16    GiB
        Data:                           99.41%  2.15    GiB
        Text:                           0.59%   13.15   MiB

Kernel Memory Map:                              3.08    GiB
        Size:                           68.03%  2.10    GiB
        Free:                           31.97%  1008.74 MiB

------------------------------------------------------------------------

ARC Summary: (THROTTLED)
        Memory Throttle Count:                  283

ARC Misc:
        Deleted:                                8.76m
        Recycle Misses:                         6.84k
        Mutex Misses:                           1.71k
        Evict Skips:                            788.88k

ARC Size:                               77.59%  2.12    GiB
        Target Size: (Adaptive)         77.57%  2.12    GiB
        Min Size (Hard Limit):          12.50%  349.44  MiB
        Max Size (High Water):          8:1     2.73    GiB

ARC Size Breakdown:
        Recently Used Cache Size:       93.72%  1.99    GiB
        Frequently Used Cache Size:     6.28%   136.18  MiB

ARC Hash Breakdown:
        Elements Max:                           101.25k
        Elements Current:               79.76%  80.75k
        Collisions:                             5.59m
        Chain Max:                              17
        Chains:                                 21.74k

------------------------------------------------------------------------

ARC Efficiency:                                 29.97m
        Cache Hit Ratio:                71.95%  21.57m
        Cache Miss Ratio:               28.05%  8.41m
        Actual Hit Ratio:               69.06%  20.70m

        Data Demand Efficiency:         96.49%  15.00m
        Data Prefetch Efficiency:       9.45%   8.50m

        CACHE HITS BY CACHE LIST:
          Anonymously Used:             3.34%   721.20k
          Most Recently Used:           46.14%  9.95m
          Most Frequently Used:         49.83%  10.75m
          Most Recently Used Ghost:     0.25%   53.00k
          Most Frequently Used Ghost:   0.43%   92.98k

        CACHE HITS BY DATA TYPE:
          Demand Data:                  67.11%  14.47m
          Prefetch Data:                3.72%   803.17k
          Demand Metadata:              28.87%  6.23m
          Prefetch Metadata:            0.30%   64.01k

        CACHE MISSES BY DATA TYPE:
          Demand Data:                  6.26%   526.41k
          Prefetch Data:                91.52%  7.69m
          Demand Metadata:              1.79%   150.84k
          Prefetch Metadata:            0.42%   35.46k

------------------------------------------------------------------------

L2ARC is disabled

------------------------------------------------------------------------

File-Level Prefetch: (HEALTHY)

DMU Efficiency:                                 70.03m
        Hit Ratio:                      92.72%  64.93m
        Miss Ratio:                     7.28%   5.10m

        Colinear:                               5.10m
          Hit Ratio:                    0.36%   18.54k
          Miss Ratio:                   99.64%  5.08m

        Stride:                                 60.22m
          Hit Ratio:                    99.94%  60.18m
          Miss Ratio:                   0.06%   37.61k

DMU Misc:
        Reclaim:                                5.08m
          Successes:                    7.86%   399.06k
          Failures:                     92.14%  4.68m

        Streams:                                4.76m
          +Resets:                      0.13%   6.24k
          -Resets:                      99.87%  4.75m
          Bogus:                                0

------------------------------------------------------------------------

VDEV cache is disabled

------------------------------------------------------------------------

ZFS Tunables (sysctl):
        kern.maxusers                           384
        vm.kmem_size                            4005072896
        vm.kmem_size_scale                      1
        vm.kmem_size_min                        0
        vm.kmem_size_max                        329853485875
        vfs.zfs.arc_max                         2931331072
        vfs.zfs.arc_min                         366416384
        vfs.zfs.arc_meta_used                   213789352
        vfs.zfs.arc_meta_limit                  732832768
        vfs.zfs.l2arc_write_max                 8388608
        vfs.zfs.l2arc_write_boost               8388608
        vfs.zfs.l2arc_headroom                  2
        vfs.zfs.l2arc_feed_secs                 1
        vfs.zfs.l2arc_feed_min_ms               200
        vfs.zfs.l2arc_noprefetch                1
        vfs.zfs.l2arc_feed_again                1
        vfs.zfs.l2arc_norw                      1
        vfs.zfs.anon_size                       450248704
        vfs.zfs.anon_metadata_lsize             0
        vfs.zfs.anon_data_lsize                 0
        vfs.zfs.mru_size                        1694060032
        vfs.zfs.mru_metadata_lsize              72515072
        vfs.zfs.mru_data_lsize                  1609170944
        vfs.zfs.mru_ghost_size                  153367552
        vfs.zfs.mru_ghost_metadata_lsize        3552256
        vfs.zfs.mru_ghost_data_lsize            149815296
        vfs.zfs.mfu_size                        85296128
        vfs.zfs.mfu_metadata_lsize              69534720
        vfs.zfs.mfu_data_lsize                  786432
        vfs.zfs.mfu_ghost_size                  2096538112
        vfs.zfs.mfu_ghost_metadata_lsize        743841280
        vfs.zfs.mfu_ghost_data_lsize            1352696832
        vfs.zfs.l2c_only_size                   0
        vfs.zfs.dedup.prefetch                  1
        vfs.zfs.nopwrite_enabled                1
        vfs.zfs.mdcomp_disable                  0
        vfs.zfs.no_write_throttle               0
        vfs.zfs.write_limit_shift               3
        vfs.zfs.write_limit_min                 33554432
        vfs.zfs.write_limit_max                 518406656
        vfs.zfs.write_limit_inflated            12441759744
        vfs.zfs.write_limit_override            0
        vfs.zfs.prefetch_disable                0
        vfs.zfs.zfetch.max_streams              8
        vfs.zfs.zfetch.min_sec_reap             2
        vfs.zfs.zfetch.block_cap                256
        vfs.zfs.zfetch.array_rd_sz              1048576
        vfs.zfs.top_maxinflight                 32
        vfs.zfs.resilver_delay                  2
        vfs.zfs.scrub_delay                     4
        vfs.zfs.scan_idle                       50
        vfs.zfs.scan_min_time_ms                1000
        vfs.zfs.free_min_time_ms                1000
        vfs.zfs.resilver_min_time_ms            3000
        vfs.zfs.no_scrub_io                     0
        vfs.zfs.no_scrub_prefetch               0
        vfs.zfs.mg_alloc_failures               8
        vfs.zfs.write_to_degraded               0
        vfs.zfs.check_hostid                    1
        vfs.zfs.recover                         0
        vfs.zfs.deadman_synctime                1000
        vfs.zfs.deadman_enabled                 1
        vfs.zfs.txg.synctime_ms                 1000
        vfs.zfs.txg.timeout                     5
        vfs.zfs.vdev.cache.max                  16384
        vfs.zfs.vdev.cache.size                 0
        vfs.zfs.vdev.cache.bshift               16
        vfs.zfs.vdev.trim_on_init               1
        vfs.zfs.vdev.max_pending                10
        vfs.zfs.vdev.min_pending                4
        vfs.zfs.vdev.time_shift                 29
        vfs.zfs.vdev.ramp_rate                  2
        vfs.zfs.vdev.aggregation_limit          131072
        vfs.zfs.vdev.read_gap_limit             32768
        vfs.zfs.vdev.write_gap_limit            4096
        vfs.zfs.vdev.bio_flush_disable          0
        vfs.zfs.vdev.bio_delete_disable         0
        vfs.zfs.vdev.trim_max_bytes             2147483648
        vfs.zfs.vdev.trim_max_pending           64
        vfs.zfs.zil_replay_disable              0
        vfs.zfs.cache_flush_disable             0
        vfs.zfs.zio.use_uma                     0
        vfs.zfs.sync_pass_deferred_free         2
        vfs.zfs.sync_pass_dont_compress         5
        vfs.zfs.sync_pass_rewrite               2
        vfs.zfs.snapshot_list_prefetch          0
        vfs.zfs.super_owner                     0
        vfs.zfs.debug                           0
        vfs.zfs.version.ioctl                   3
        vfs.zfs.version.acl                     1
        vfs.zfs.version.spa                     5000
        vfs.zfs.version.zpl                     5
        vfs.zfs.trim.enabled                    1
        vfs.zfs.trim.txg_delay                  32
        vfs.zfs.trim.timeout                    30
        vfs.zfs.trim.max_interval               1

------------------------------------------------------------------------
# dd if=/dev/random of=/storage/test.hdd bs=1G count=40
Code:
40+0 records in
40+0 records out
42949672960 bytes transferred in 965.907122 secs (44465634 bytes/sec)
# dd of=/dev/random if=/storage/test.hdd bs=1G count=40
Code:
40+0 records in
40+0 records out
42949672960 bytes transferred in 323.977933 secs (132569748 bytes/sec)
 
# zfs get all storage
Code:
NAME     PROPERTY              VALUE                      SOURCE
storage  type                  filesystem                 -
storage  creation              Sun Aug 25  7:52 2013      -
storage  used                  2,51T                      -
storage  available             2,82T                      -
storage  referenced            2,51T                      -
storage  compressratio         1.00x                      -
storage  mounted               yes                        -
storage  quota                 none                       default
storage  reservation           none                       default
storage  recordsize            128K                       default
storage  mountpoint            /storage                   default
storage  sharenfs              off                        default
storage  checksum              fletcher4                  local
storage  compression           lz4                        local
storage  atime                 off                        local
storage  devices               on                         default
storage  exec                  on                         default
storage  setuid                on                         default
storage  readonly              off                        default
storage  jailed                off                        default
storage  snapdir               hidden                     default
storage  aclmode               discard                    default
storage  aclinherit            restricted                 default
storage  canmount              on                         default
storage  xattr                 off                        temporary
storage  copies                1                          default
storage  version               5                          -
storage  utf8only              off                        -
storage  normalization         none                       -
storage  casesensitivity       sensitive                  -
storage  vscan                 off                        default
storage  nbmand                off                        default
storage  sharesmb              off                        default
storage  refquota              none                       default
storage  refreservation        none                       default
storage  primarycache          all                        default
storage  secondarycache        all                        default
storage  usedbysnapshots       0                          -
storage  usedbydataset         2,51T                      -
storage  usedbychildren        200M                       -
storage  usedbyrefreservation  0                          -
storage  logbias               latency                    default
storage  dedup                 off                        default
storage  mlslabel                                         -
storage  sync                  standard                   default
storage  refcompressratio      1.00x                      -
storage  written               2,51T                      -
storage  logicalused           2,52T                      -
storage  logicalreferenced     2,52T                      -
Dedup is off
 
Still haven't seen anything that indicates a performance problem yet. Expectations exceeding reality of hardware choice?
 
raidz vdevs are generally limited to the write I/O of a single disk. Most SATA drives give you approx 100 MBps of write throughput. A 3-disk raidz1 with 85 MBps of write throughput is not "bad" or "slow".

raidz vdevs generally give read throughput roughly equivalent to the aggregate of the data disks. For a 3-disk raidz1 (meaning 2 data disks) you should be able to get approx 200 MBps of read throughput. Getting only 45 MBps is troubling.

However, the make-up of a raidz vdev (meaning the number of drives) makes a difference. The "sweet spot" for raidz1 is (IIRC) 6 disks, and for raidz2 it's 8 disks. There's an old Sun blog (not sure if it's still accessible) that covered the math behind this. Using odd numbers of disks in a raidz vdev leads to poor performance, and using numbers other than the "sweet spot" leads to good but not great performance.

For the best throughput, use mirror vdevs or multiple raidz vdevs in a single pool. If you can dig up another disk, switching to a pool using 2x mirror vdevs will give you a lot more IOPS and throughput.
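
For reference, with a fourth data disk that layout would look something like this (gpt/disk4 is a hypothetical label for the extra drive; a raidz1 vdev can't be reshaped in place, so the existing pool would have to be backed up, destroyed, and re-created):
Code:
# two striped mirror vdevs instead of one 3-disk raidz1
zpool create storage mirror gpt/disk1 gpt/disk2 mirror gpt/disk3 gpt/disk4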
 
Thanks for the detailed information, phoenix :stud
throAU said:
Still haven't seen anything that indicates a performance problem yet. Expectations exceeding reality of hardware choice?
Yes :)
Thanks for help, guys!
 