FreeBSD 8, AMD64, ZFS Storage server


Postby dejvnull » 29 Dec 2009, 17:25

Hi!

We are getting 2 new storage servers and have chosen to go the FreeBSD 8/ZFS route. We have looked into a lot of enterprise storage (Sun/NetApp/EMC/Equallogic), but just feel that the bang for the buck is quite low. We know, of course, that the service agreements on enterprise storage are in a whole different ballpark than what we are planning for. Anyway...

Just wanted to see if anyone has a similar setup or has had any problems with the hardware listed below. We have planned on an AMD 6-core CPU running the amd64 version of FreeBSD, but we can also switch to an Intel 6-core.

Motherboard: Supermicro H8DI3+-F

CPU: AMD 6-core Istanbul 2.2 GHz

RAM: 8x2GB or 8x4GB DDR2 RAM REG ECC

Disk controller: Adaptec RAID 51645 (20-port) with BBU

Disk cache HW: Adaptec MaxIQ SSD kit

OS boot: 2 x 150GB WD VelociRaptor 10K 2.5" drives

ZFS ZIL: 1 x Intel X25-E SSD, 32GB SATA II, SLC

Hard drives: 16 x WD RE4 Green 2TB

Extra NIC: Intel Gigabit ET quad-port server adapter

Chassis: SuperMicro 3U storage server chassis

Please feel free to drop a post about this hardware, or about running FreeBSD 8 on amd64 with ZFS.
I have been searching the forums and will continue my search, but any info is great.

Best regards to you all and have a nice New Year!

/Dave

Postby phoenix » 29 Dec 2009, 20:57

Ditch the RAID controller if you are using ZFS for everything. Just pick up multiple 8-port controllers (LSI has some nice ones, fully supported in FreeBSD).

You don't need a single ~$1500 RAID controller, when you can use multiple $300 controllers, and put them on separate PCI-X/PCIe buses. One of the main goals of ZFS is to get away from massive, expensive, overly-redundant RAID controllers.

If you are going to split the ZIL onto a separate device, then you *MUST* make it a mirrored vdev. If the ZIL device ever dies, the entire pool goes with it!! ZFSv13 (in FreeBSD 8) doesn't support the removal of ZIL devices.
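
For illustration, attaching a mirrored log vdev is a one-liner; the pool and device names below are just placeholders:

Code:
# Add a mirrored log (ZIL) vdev built from two SSDs (names are examples only):
zpool add tank log mirror ada2 ada3

# Confirm that the log shows up as a mirrored vdev:
zpool status tank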

And, if you search the forums, you'll find my thread on rsbackup, which is our storage server setup using FreeBSD and ZFS. :)

You also don't need 10K RPM drives for the OS. In fact, you can put the entire FreeBSD OS (/ and /usr) onto 2 GB CompactFlash. The data drives should be the fast ones, not the OS drive. Especially since most of it will be in RAM once you boot. :) In fact, using CompactFlash or SD for the OS would give you more data storage space.

2x CF for the OS.
2x SSD for the ZIL.
18x SATA drives for storage.
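
For what it's worth, a rough sketch of what a layout like the one above could look like on the pool side (all device names are invented, and splitting 18 drives into two 9-drive raidz2 vdevs is only one of several options):

Code:
# Hypothetical layout: 18 data drives as two raidz2 vdevs, plus a mirrored log:
zpool create storage \
    raidz2 da0 da1 da2 da3 da4 da5 da6 da7 da8 \
    raidz2 da9 da10 da11 da12 da13 da14 da15 da16 da17 \
    log mirror ada0 ada1
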
Freddie


Postby wonslung » 02 Jan 2010, 12:27

I've heard that about the ZIL a few times, but I'm curious whether it is confirmed or not.

According to the ZFS best practices wiki, while it IS impossible to remove a ZIL device once it has been added, it also says that in the event of ZIL device failure, ZIL function drops back to the main pool.

This is confusing....

Postby tooker » 03 Jan 2010, 21:55

phoenix
LSI has some nice ones, fully supported in FreeBSD


Could you suggest one that is supported in 8.0-RELEASE? What about the 8204ELP?
I would like to build a raidz zpool with a cheap LSI controller and Intel expanders (AXX6DRV3GEXP & AXX4DRV3GEXP).

P.S. MB is Intel S3000AH & Case SC5300

Postby phoenix » 04 Jan 2010, 01:18

Here are the results of my research through various manufacturer websites, various online shopping sites, and various FreeBSD mailing list archives and man pages. Taken from an e-mail to our hardware tech.

Code:
The following cards are supported by FreeBSD and Linux (lots of good reviews
from FreeBSD users), and are in the $100-300 range.  Unfortunately, newegg.ca,
ncix.com, and cdw.com all show them as backordered.  :(

LSI SAS 3081E-R    8-port PCIe,  2 mini-SAS connectors
LSI SAS 3080X-R    8-port PCI-X, 2 mini-SAS connectors

SuperMicro AOC-USAS-L8i   8-port PCIe, 2 mini-SAS connectors
SuperMicro AOC-USASLP-L8i 8-port PCIe, 2 mini-SAS connectors
(these two are the same, one's just a low-profile version)

The SuperMicro boards are actually UIO cards, and not true PCIe cards.  They
work in PCIe slots, but the bracket is reversed, so they need custom backplates
to mount properly in a standard case.  They can be used without the backplates, though.

There's also the option of going with 8-port 3Ware cards, as they are in the
$400-$600 range, instead of over $1000 for the 12- and 16-port versions.

3Ware 9550SXU-8LP  8-port PCI-X, 8 SATA connectors (not ideal, ML is preferred)
3Ware 9650SE-8ML   8-port PCIe,  2 mini-SAS connectors

If LSI does decide to drop the 3Ware line of cards (really, really hope not),
the next best-supported RAID controllers for Linux/FreeBSD would probably be
Areca. These appear to be about 1/2 the price of the 3Ware controllers.

Areca ARC-1300ix-16  16-port PCIe, 4 mini-SAS connectors
(this is a non-RAID controller, and 8 of the ports are external, so should be treated as an 8-port card)

Areca ARC-1120        8-port PCI-X, 8 SATA connectors (not ideal)

Areca ARC-1130ML     12-port PCI-X, 3 ML-SATA connectors (same as 3Ware 9550)
Areca ARC-1160ML     16-port PCI-X, 4 ML-SATA connectors (same as 3Ware 9550)

Areca ARC-1222        8-port PCIe,  2 mini-SAS connectors
Areca ARC-1231ML     12-port PCIe,  3 mini-SAS connectors
Freddie


Postby User23 » 04 Jan 2010, 11:50

My configuration under testing:

Mainboard: http://www.supermicro.com/Aplus/motherboard/Opteron2000/MCP55/H8DME-2.cfm
Raid Controller: 3Ware (now LSI) 9690SA SAS Contr.
Chassis: http://www.supermicro.com/products/chassis/4U/846/SC846E2-R900.cfm
NIC: Intel PCI-X dual-port server adapter

Raid1 with 2x WD2502ABYS 250GB
Raid6 with 10x WD1002FBYS 1 TB

"Performance" Setting for the Controller is set to "Balanced", means the cache on the disks itself should
not be used so the BBU can make his job saving the data on the controller cache in case of power failure.

The 9690SA-8I is compatible with the SAS expanders on the backplane of the chassis.
The SAS-846EL2 backplane has two expanders which allow effective failover and
recovery.

Minor problems:

The 3dm (3dm2) port under FreeBSD seems to be too old for this controller/firmware version; it displayed the RAID configuration incorrectly.
The command-line tool tw_cli works (see the example below).
I had to upgrade the controller firmware because of the SAS-846EL2 backplane.
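
For reference, a rough example of checking and changing those settings from tw_cli (the controller and unit numbers are assumptions; check the tw_cli documentation for your firmware):

Code:
# Show the controller, its units and ports (c0/u0 are examples):
tw_cli /c0 show

# The storsave policy corresponds to the Protect/Balanced/Performance setting:
tw_cli /c0/u0 set storsave=balance

# Unit write cache on or off:
tw_cli /c0/u0 set cache=on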

Major problems:

I had to disable the onboard NICs because they just went away sometimes ... I have had the same problem with other onboard NICs on nvidia chipsets too.
Well, I don't really care; I just added a dual-port Intel NIC, which will do a great job.

ZFS:

Code:
pool: home
state: ONLINE
scrub: none requested
config:

   NAME        STATE     READ WRITE CKSUM
   home        ONLINE       0     0     0
     da1       ONLINE       0     0     0



Code:
NAME   SIZE   USED  AVAIL    CAP  HEALTH  ALTROOT
home  7.25T   479G  6.78T     6%  ONLINE  -


dmesg:

Code:
Copyright (c) 1992-2009 The FreeBSD Project.
Copyright (c) 1979, 1980, 1983, 1986, 1988, 1989, 1991, 1992, 1993, 1994
   The Regents of the University of California. All rights reserved.
FreeBSD is a registered trademark of The FreeBSD Foundation.
FreeBSD 8.0-RELEASE-p1 #1: Wed Dec  9 12:17:00 CET 2009
    root@host.domain.net:/usr/obj/usr/src/sys/FS
WARNING: WITNESS option enabled, expect reduced performance.
WARNING: DIAGNOSTIC option enabled, expect reduced performance.
Timecounter "i8254" frequency 1193182 Hz quality 0
CPU: Dual-Core AMD Opteron(tm) Processor 2214 (2211.35-MHz K8-class CPU)
  Origin = "AuthenticAMD"  Id = 0x40f12  Stepping = 2
  Features=0x178bfbff<FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CLFLUSH,MMX,FXSR,SSE,SSE2,HTT>
  Features2=0x2001<SSE3,CX16>
  AMD Features=0xea500800<SYSCALL,NX,MMX+,FFXSR,RDTSCP,LM,3DNow!+,3DNow!>
  AMD Features2=0x1f<LAHF,CMP,SVM,ExtAPIC,CR8>
real memory  = 34359738368 (32768 MB)
avail memory = 33164169216 (31627 MB)
ACPI APIC Table: <S M C  OEMAPIC >
FreeBSD/SMP: Multiprocessor System Detected: 4 CPUs
FreeBSD/SMP: 2 package(s) x 2 core(s)
 cpu0 (BSP): APIC ID:  0
 cpu1 (AP): APIC ID:  1
 cpu2 (AP): APIC ID:  2
 cpu3 (AP): APIC ID:  3
ioapic0 <Version 1.1> irqs 0-23 on motherboard
kbd1 at kbdmux0
acpi0: <S M C OEMXSDT> on motherboard
acpi0: [ITHREAD]
acpi0: Power Button (fixed)
acpi0: reservation of fec00000, 1000 (3) failed
acpi0: reservation of fee00000, 1000 (3) failed
acpi0: reservation of 0, a0000 (3) failed
acpi0: reservation of 100000, dff00000 (3) failed
Timecounter "ACPI-fast" frequency 3579545 Hz quality 1000
acpi_timer0: <24-bit timer at 3.579545MHz> port 0x2008-0x200b on acpi0
pcib0: <ACPI Host-PCI bridge> port 0xcf8-0xcff on acpi0
pci0: <ACPI PCI bus> on pcib0
pci0: <memory, RAM> at device 0.0 (no driver attached)
isab0: <PCI-ISA bridge> at device 1.0 on pci0
isa0: <ISA bus> on isab0
ichsmb0: <SMBus controller> port 0xbc00-0xbc3f,0x2d00-0x2d3f,0x2e00-0x2e3f irq 21 at device 1.1 on pci0
ichsmb0: [ITHREAD]
smbus0: <System Management Bus> on ichsmb0
smb0: <SMBus generic I/O> on smbus0
ohci0: <OHCI (generic) USB controller> mem 0xfe8bf000-0xfe8bffff irq 22 at device 2.0 on pci0
ohci0: [ITHREAD]
usbus0: <OHCI (generic) USB controller> on ohci0
ehci0: <EHCI (generic) USB 2.0 controller> mem 0xfe8bec00-0xfe8becff irq 23 at device 2.1 on pci0
ehci0: [ITHREAD]
usbus1: EHCI version 1.0
usbus1: <EHCI (generic) USB 2.0 controller> on ehci0
pcib1: <ACPI PCI-PCI bridge> at device 6.0 on pci0
pci1: <ACPI PCI bus> on pcib1
vgapci0: <VGA-compatible display> port 0xc000-0xc0ff mem 0xf0000000-0xf7ffffff,0xfe9f0000-0xfe9fffff irq 16 at device 5.0 on pci1
pcib2: <ACPI PCI-PCI bridge> at device 10.0 on pci0
pci2: <ACPI PCI bus> on pcib2
pcib3: <ACPI PCI-PCI bridge> at device 0.0 on pci2
pci3: <ACPI PCI bus> on pcib3
pcib4: <ACPI PCI-PCI bridge> at device 0.1 on pci2
pci4: <ACPI PCI bus> on pcib4
pcib5: <ACPI PCI-PCI bridge> at device 13.0 on pci0
pci5: <ACPI PCI bus> on pcib5
pcib6: <ACPI PCI-PCI bridge> at device 14.0 on pci0
pci6: <ACPI PCI bus> on pcib6
3ware device driver for 9000 series storage controllers, version: 3.70.05.001
twa0: <3ware 9000 series Storage Controller> port 0xd800-0xd8ff mem 0xfc000000-0xfdffffff,0xfeaff000-0xfeafffff irq 17 at device 0.0 on pci6
twa0: [ITHREAD]
twa0: INFO: (0x04: 0x0053): Battery capacity test is overdue:
twa0: INFO: (0x15: 0x1300): Controller details:: Model 9690SA-8I, 128 ports, Firmware FH9X 4.10.00.007, BIOS BE9X 4.08.00.002
pcib7: <ACPI PCI-PCI bridge> at device 15.0 on pci0
pci7: <ACPI PCI bus> on pcib7
em0: <Intel(R) PRO/1000 Network Connection 6.9.14> port 0xec00-0xec1f mem 0xfebe0000-0xfebfffff,0xfebc0000-0xfebdffff irq 18 at device 0.0 on pci7
em0: Using MSI interrupt
em0: [FILTER]
em0: Ethernet address: 00:15:17:d2:df:60
em1: <Intel(R) PRO/1000 Network Connection 6.9.14> port 0xe880-0xe89f mem 0xfeb80000-0xfeb9ffff,0xfeb60000-0xfeb7ffff irq 17 at device 0.1 on pci7
em1: Using MSI interrupt
em1: [FILTER]
em1: Ethernet address: 00:15:17:d2:df:61
amdtemp0: <AMD K8 Thermal Sensors> on hostb3
amdtemp1: <AMD K8 Thermal Sensors> on hostb7
acpi_button0: <Power Button> on acpi0
uart0: <16550 or compatible> port 0x3f8-0x3ff irq 4 flags 0x10 on acpi0
uart0: [FILTER]
uart1: <16550 or compatible> port 0x2f8-0x2ff irq 3 on acpi0
uart1: [FILTER]
atrtc0: <AT realtime clock> port 0x70-0x71 irq 8 on acpi0
cpu0: <ACPI CPU> on acpi0
powernow0: <PowerNow! K8> on cpu0
cpu1: <ACPI CPU> on acpi0
powernow1: <PowerNow! K8> on cpu1
cpu2: <ACPI CPU> on acpi0
powernow2: <PowerNow! K8> on cpu2
cpu3: <ACPI CPU> on acpi0
powernow3: <PowerNow! K8> on cpu3
orm0: <ISA Option ROMs> at iomem 0xc0000-0xcafff,0xcb000-0xccfff on isa0
sc0: <System console> at flags 0x100 on isa0
sc0: VGA <16 virtual consoles, flags=0x300>
vga0: <Generic ISA VGA> at port 0x3c0-0x3df iomem 0xa0000-0xbffff on isa0
atkbdc0: <Keyboard controller (i8042)> at port 0x60,0x64 on isa0
atkbd0: <AT Keyboard> irq 1 on atkbdc0
kbd0 at atkbd0
atkbd0: [GIANT-LOCKED]
atkbd0: [ITHREAD]
Timecounters tick every 1.000 msec
usbus0: 12Mbps Full Speed USB v1.0
usbus1: 480Mbps High Speed USB v2.0
ugen0.1: <nVidia> at usbus0
uhub0: <nVidia OHCI root HUB, class 9/0, rev 1.00/1.00, addr 1> on usbus0
ugen1.1: <nVidia> at usbus1
uhub1: <nVidia EHCI root HUB, class 9/0, rev 2.00/1.00, addr 1> on usbus1
uhub0: 10 ports with 10 removable, self powered
...
da0 at twa0 bus 0 target 0 lun 0
da0: <AMCC 9690SA-8I  DISK 4.10> Fixed Direct Access SCSI-5 device
da0: 100.000MB/s transfers
da0: 238408MB (488259584 512 byte sectors: 255H 63S/T 30392C)
SMP: AP CPU #1 Launched!
SMP: AP CPU #2 Launched!
SMP: AP CPU #3 Launched!
WARNING: WITNESS option enabled, expect reduced performance.
WARNING: DIAGNOSTIC option enabled, expect reduced performance.
da1 at twa0 bus 0 target 1 lun 0
da1: <AMCC 9690SA-8I  DISK 4.10> Fixed Direct Access SCSI-5 device
da1: 100.000MB/s transfers
da1: 7629312MB (15624830976 512 byte sectors: 255H 63S/T 972600C)
Root mount waiting for: usbus1
uhub1: 10 ports with 10 removable, self powered
Root mount waiting for: usbus1
ugen1.2: <Peppercon AG> at usbus1
ums0: <Peppercon AG Multidevice, class 0/0, rev 2.00/0.01, addr 2> on usbus1
ums0: 3 buttons and [Z] coordinates ID=0
ukbd0: <Peppercon AG Multidevice, class 0/0, rev 2.00/0.01, addr 2> on usbus1
kbd2 at ukbd0
Trying to mount root from ufs:/dev/da0s1a
ZFS filesystem version 13
ZFS storage pool version 13

Postby User23 » 04 Jan 2010, 11:52

iozone results, 31 x 1 GB simultaneous:

Code:
Record Size 8 KB
        File size set to 1048576 KB
Output is in Kbytes/sec
        Time Resolution = 0.000002 seconds.
        Processor cache size set to 1024 Kbytes.
        Processor cache line size set to 32 bytes.
        File stride size set to 17 * record size.
        Min process = 1
        Max process = 64
        Throughput test with 1 process
        Each process writes a 1048576 Kbyte file in 8 Kbyte records


Children see throughput for 31 initial writers  =  201265.78 KB/sec
        Parent sees throughput for 31 initial writers   =  150463.73 KB/sec
        Min throughput per process                      =    5986.81 KB/sec
        Max throughput per process                      =    7588.98 KB/sec
        Avg throughput per process                      =    6492.44 KB/sec
        Min xfer                                        =  827280.00 KB

        Children see throughput for 31 rewriters        =  118843.87 KB/sec
        Parent sees throughput for 31 rewriters         =  115923.02 KB/sec
        Min throughput per process                      =    3547.24 KB/sec
        Max throughput per process                      =    4006.70 KB/sec
        Avg throughput per process                      =    3833.67 KB/sec
        Min xfer                                        =  928512.00 KB

        Children see throughput for 31 readers          =  345760.32 KB/sec
        Parent sees throughput for 31 readers           =  340252.11 KB/sec
        Min throughput per process                      =    9941.60 KB/sec
        Max throughput per process                      =   11947.37 KB/sec
        Avg throughput per process                      =   11153.56 KB/sec
        Min xfer                                        =  876416.00 KB

        Children see throughput for 31 re-readers       =  332146.74 KB/sec
        Parent sees throughput for 31 re-readers        =  327672.41 KB/sec
        Min throughput per process                      =    4804.59 KB/sec
        Max throughput per process                      =   20229.93 KB/sec
        Avg throughput per process                      =   10714.41 KB/sec
        Min xfer                                        =  252032.00 KB

        Children see throughput for 31 reverse readers  =  348791.73 KB/sec
        Parent sees throughput for 31 reverse readers   =  340139.45 KB/sec
        Min throughput per process                      =    1201.33 KB/sec
        Max throughput per process                      =   14732.41 KB/sec
        Avg throughput per process                      =   11251.35 KB/sec
        Min xfer                                        =   87552.00 KB

        Children see throughput for 31 stride readers   =   23292.33 KB/sec
        Parent sees throughput for 31 stride readers    =   23276.54 KB/sec
        Min throughput per process                      =     595.24 KB/sec
        Max throughput per process                      =     855.33 KB/sec
        Avg throughput per process                      =     751.37 KB/sec
        Min xfer                                        =  730592.00 KB

        Children see throughput for 31 random readers   =    8350.64 KB/sec
        Parent sees throughput for 31 random readers    =    8350.49 KB/sec
        Min throughput per process                      =     267.10 KB/sec
        Max throughput per process                      =     271.54 KB/sec
        Avg throughput per process                      =     269.38 KB/sec
        Min xfer                                        = 1031432.00 KB

        Children see throughput for 31 mixed workload   =    5781.48 KB/sec
        Parent sees throughput for 31 mixed workload    =    5369.59 KB/sec
        Min throughput per process                      =     174.16 KB/sec
        Max throughput per process                      =     198.01 KB/sec
        Avg throughput per process                      =     186.50 KB/sec
        Min xfer                                        =  922288.00 KB

        Children see throughput for 31 random writers   =    3412.19 KB/sec
        Parent sees throughput for 31 random writers    =    3372.15 KB/sec
        Min throughput per process                      =     108.75 KB/sec
        Max throughput per process                      =     111.33 KB/sec
        Avg throughput per process                      =     110.07 KB/sec
        Min xfer                                        = 1024200.00 KB

        Children see throughput for 31 pwrite writers   =  169670.93 KB/sec
        Parent sees throughput for 31 pwrite writers    =  134318.70 KB/sec
        Min throughput per process                      =    4719.29 KB/sec
        Max throughput per process                      =    6281.94 KB/sec
        Avg throughput per process                      =    5473.26 KB/sec
        Min xfer                                        =  787928.00 KB

        Children see throughput for 31 pread readers    =  360126.34 KB/sec
        Parent sees throughput for 31 pread readers     =  353519.89 KB/sec
        Min throughput per process                      =   10226.21 KB/sec
        Max throughput per process                      =   12556.96 KB/sec
        Avg throughput per process                      =   11616.98 KB/sec
        Min xfer                                        =  876416.00 KB

Postby mix_room » 05 Jan 2010, 09:00

User23 wrote:...
Raid1 with 2x WD2502ABYS 250GB
Raid6 with 10x WD1002FBYS 1 TB
...
Code:
   NAME        STATE     READ WRITE CKSUM
   home        ONLINE       0     0     0
     da1       ONLINE       0     0     0


...
Code:
NAME   SIZE   USED  AVAIL    CAP  HEALTH  ALTROOT
home  7.25T   479G  6.78T     6%  ONLINE  -

...


Why are you using the RAID controller's RAID functionality if you are putting ZFS on top? Any special reasoning? Why not give the disks straight to ZFS?

Postby User23 » 05 Jan 2010, 12:33

Well, ZFS raidz1/2 does a good job, but it needs much more CPU power.
I would need 8 or 12 cores to get the same performance.

I had to use this controller because of the backplane, and it is the only one I have used for a long time without problems, so I know what to expect. And if you have to use a real hardware RAID controller anyway, why would you want to spend CPU time on software RAID on top of it?

Why ZFS on top of the RAID6?
To use all the other features ZFS brings with it and to be more flexible in the future.
And of course UFS2 can't handle such a big array :-)

I hope the NFS performance of ZFS is not as low as some reports say. Let's see ...

---

For example:

I have had the following configuration in use for one year now. At first it was only one raidz1 with 6x 1TB; later I added a second controller and 6x 2TB in a second raidz1 to the pool (sketched below).
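
For illustration, growing the pool that way is a single command; the device names below are assumptions:

Code:
# Add a second 6-disk raidz1 vdev to the existing pool (device names assumed):
zpool add backup1 raidz1 da6 da7 da8 da9 da10 da11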

Under heavy disk I/O the CPU is the bottleneck. The performance is great anyway ... no question :)

Intel Q6600 (4x 2,4GHz, 2x 4MB cache)
8GB RAM
2x 3ware 9550 8x SATA (as a stable ATA controller; the 2 free SATA ports on each controller could be used later for ZIL devices)

Code:
zpool status
  pool: backup1
 state: ONLINE
 scrub: none requested
config:

   NAME        STATE     READ WRITE CKSUM
   backup1     ONLINE       0     0     0
     raidz1    ONLINE       0     0     0
       da0     ONLINE       0     0     0
       da1     ONLINE       0     0     0
       da2     ONLINE       0     0     0
       da3     ONLINE       0     0     0
       da4     ONLINE       0     0     0
       da5     ONLINE       0     0     0
     raidz1    ONLINE       0     0     0
       da6     ONLINE       0     0     0
       da7     ONLINE       0     0     0
       da8     ONLINE       0     0     0
       da9     ONLINE       0     0     0
       da10    ONLINE       0     0     0
       da11    ONLINE       0     0     0

Postby phoenix » 05 Jan 2010, 18:54

User23 wrote: The controller's "Performance" setting is set to "Balanced", meaning the on-disk caches should
not be used, so the BBU can do its job of preserving the data in the controller cache in case of a power failure.


If you have a BBU installed, then set this to Performance. Since you have a BBU, you can enable the drive caches, and use the controller cache as well.

If you don't have a BBU, but you do have a good, working UPS configured to do an ordered shutdown, you can also set this to Performance and enable the drive caches.

If you don't have a BBU, but plan on using ZFS, then you can also set this to Performance and enable the drive caches. Also, if using ZFS, don't use the hardware RAID features. Just create "Single Drive" arrays for each disk attached to the controller. Then use the individual drives to create the zpool.
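
A rough sketch of that approach with tw_cli (port and unit numbers are assumptions; some controllers/firmware also offer a JBOD/export mode instead):

Code:
# Create a "Single Drive" unit for each port on the controller (ports 0-7 shown):
tw_cli /c0 add type=single disk=0
tw_cli /c0 add type=single disk=1
# ...repeat for the remaining ports...

# Each unit then shows up as its own da device, and the zpool is built from those.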

The 3dm (3dm2) port under FreeBSD seems to be too old for this controller/firmware version; it displayed the RAID configuration incorrectly.


Uninstall the port version, and install the FreeBSD package off the CD. It installs into /usr and /etc instead of /usr/local, but it works. There's a whole slew of extra, fancy new features (like Power Consumption and drive Temperature data).

I had to upgrade the controller firmware because of the SAS-846EL2 backplane.


That's a good idea, regardless of the backplane or drives in use. :)

I had to disable the onboard NICs because they just went away sometimes ... I have had the same problem with other onboard NICs on nvidia chipsets too.
Well, I don't really care; I just added a dual-port Intel NIC, which will do a great job.


nvidia NIC and hard drive chipsets are hit-and-miss in FreeBSD. Disabling all the onboard stuff that you don't use is best.

ZFS:


See above. Using ZFS with a single hardware RAID array like this eliminates 90% of the usefulness of ZFS. It can't do any redundancy checking or self-healing.
Freddie


Postby phoenix » 05 Jan 2010, 18:55

User23 wrote: iozone results, 31 x 1 GB simultaneous


Do the same tests after removing the hardware RAID array, using the individual drives to create the zpool with multiple raidz (or raidz2) vdevs.
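
For example, something along these lines (a sketch only; splitting the 10 disks into two vdevs is just one option):

Code:
# After removing the hardware RAID unit, rebuild the pool from the individual
# drives, e.g. as two raidz2 vdevs:
zpool create home \
    raidz2 da1 da2 da3 da4 da5 \
    raidz2 da6 da7 da8 da9 da10
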
Freddie


Postby phoenix » 05 Jan 2010, 19:04

User23 wrote: Well, ZFS raidz1/2 does a good job, but it needs much more CPU power.
I would need 8 or 12 cores to get the same performance.


Have you actually done any testing to prove that? ;)

I had to use this controller because of the backplane, and it is the only one I have used for a long time without problems, so I know what to expect. And if you have to use a real hardware RAID controller anyway, why would you want to spend CPU time on software RAID on top of it?


When you get so much more than just "software RAID", heck yes. :)

Without access to multiple disks, ZFS is pretty much useless. It can't do any self-healing, it can't do any behind-the-scenes data corruption checking/fixing, it can't fix any corrupted files, it can't stripe data across multiple drives for improved performance.

In fact, the only thing you get by using ZFS on a single device is quick-and-easy FS creation, and snapshots. That's less than 10% of the features of ZFS.

This can quickly lead to a corrupted pool, with unrecoverable files.
Freddie


Postby User23 » 06 Jan 2010, 10:34

phoenix wrote:If you have a BBU installed, then set this to Performance. Since you have a BBU, you can enable the drive caches, and use the controller cache as well.


The BBU can't save the on-disk cache. In the worst case you will lose all the data that is currently sitting in the disk cache waiting to be written. Unless ZFS does not use the disk cache for writing ... I did not know that.

phoenix wrote:If you don't have a BBU, but you do have a good, working UPS configured to do an ordered shutdown, you can also set this to Performance and enable the drive caches.


It is more of a religious decision :). I have seen UPSes fail in the past with my own eyes.

Postby User23 » 06 Jan 2010, 10:54

phoenix wrote:Have you actually done any testing to prove that? ;)


It is in progress ;P

Code:
  pool: home
 state: ONLINE
 scrub: none requested
config:

   NAME        STATE     READ WRITE CKSUM
   home        ONLINE       0     0     0
     raidz2    ONLINE       0     0     0
       da1     ONLINE       0     0     0
       da2     ONLINE       0     0     0
       da3     ONLINE       0     0     0
       da4     ONLINE       0     0     0
       da5     ONLINE       0     0     0
       da6     ONLINE       0     0     0
       da7     ONLINE       0     0     0
       da8     ONLINE       0     0     0
       da9     ONLINE       0     0     0
       da10    ONLINE       0     0     0


First results: Load 9.x+ while writing

Code:
        Children see throughput for 31 initial writers  =  161468.34 KB/sec
        Parent sees throughput for 31 initial writers   =  114066.57 KB/sec
        Min throughput per process                      =    4521.95 KB/sec
        Max throughput per process                      =    6521.20 KB/sec
        Avg throughput per process                      =    5208.66 KB/sec
        Min xfer                                        =  727216.00 KB


I'll post the full report after it's done.

When you get so much more than just "software RAID", heck yes. :)


Yes, it consumes CPU time like hell...

Without access to multiple disks, ZFS is pretty much useless. It can't do any self-healing, it can't do any behind-the-scenes data corruption checking/fixing, it can't fix any corrupted files, it can't stripe data across multiple drives for improved performance.

In fact, the only thing you get by using ZFS on a single device is quick-and-easy FS creation, and snapshots. That's less than 10% of the features of ZFS.


It is the same problem as the one described here: http://hub.opensolaris.org/bin/view/Community+Group+zfs/faq#HDoesZFSworkwithSANattacheddevices

This can quickly lead to a corrupted pool, with unrecoverable files.


This sounds like ZFS is self-destructing.

Postby Matty » 06 Jan 2010, 13:02

User23 wrote:It is in progress ;P

Code:
  pool: home
 state: ONLINE
 scrub: none requested
config:

   NAME        STATE     READ WRITE CKSUM
   home        ONLINE       0     0     0
     raidz2    ONLINE       0     0     0
       da1     ONLINE       0     0     0
       da2     ONLINE       0     0     0
       da3     ONLINE       0     0     0
       da4     ONLINE       0     0     0
       da5     ONLINE       0     0     0
       da6     ONLINE       0     0     0
       da7     ONLINE       0     0     0
       da8     ONLINE       0     0     0
       da9     ONLINE       0     0     0
       da10    ONLINE       0     0     0

Wouldn't 2x raidz1 be faster, and isn't that what Sun recommends?
"The recommended number of disks per group is between 3 and 9. If you have more disks, use multiple groups."

Postby User23 » 06 Jan 2010, 13:36

Matty wrote:
User23 wrote:It is in progress ;P
Wouldn't 2x raidz1 be faster, and isn't that what Sun recommends?
"The recommended number of disks per group is between 3 and 9. If you have more disks, use multiple groups."


True. I'll test a 2x raidz1 config with 5 disks each and a raidz2 config with 9 disks ... as recommended.

Postby phoenix » 06 Jan 2010, 17:17

So, ZFS (in a single 9-drive raidz2) takes a 20% performance hit ... but gives you so much more in return.

Will be interesting to see how the multiple raidz vdevs setups perform in comparison. With 9 drives, you can use the following setups:
  • 1x 9-drive raidz2 vdev
  • 2x 4-drive raidz2 vdevs
  • 3x 3-drive raidz1 vdevs
Freddie


Postby dejvnull » 07 Jan 2010, 01:40

Hi again!
We've had a lot of holidays here in Sweden so I haven't followed this thread until today... :-D

Well, we went in a bit of a different direction:
Motherboard: Supermicro H8DI3+-F
CPU: AMD 6-core Istanbul 2.2 GHz
RAM: 8x4GB DDR2 RAM REG ECC
Disk controller: 2 x Adaptec RAID 51245 (12-port) with BBU (the BBU is supposed to give better performance on the card; we will not use the cards' RAID features)
OS boot: 2 x 150GB WD VelociRaptor (internal disks)
ZFS ZIL: 2 x Intel X25-E SSD, 32GB SATA II (one for reads, one for writes)
Hard drives: 22 x WD RE4 Green 2TB
Extra NIC: Intel Gigabit ET quad-port server adapter
Chassis: SuperMicro 4U storage server chassis
OS: FreeBSD 8

We're planning on getting two of these so we can replicate volumes between the two storage systems. We're just not sure how this is done yet with FreeBSD/ZFS, compared to, for example, NetApp with its replication license.

And once again, thanks for the input! I'll be back with lots more questions in the future.

/Dave

Postby phoenix » 07 Jan 2010, 03:55

You can use ZFS send/recv to transfer snapshots between the two servers. Depending on how much data is being written/changed, you can auto-create snapshots every minute, every 10 minutes, every hour, etc., and send them to the other server. Quite a few Solaris sites do this to "replicate" the data on two servers.
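
A minimal sketch of that kind of snapshot shipping (pool, dataset, snapshot and host names are all made up):

Code:
# Take a snapshot and send the increment since the previous one to the other box:
zfs snapshot tank/data@2010-01-07
zfs send -i tank/data@2010-01-06 tank/data@2010-01-07 | \
    ssh otherserver zfs recv -d tank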

Rsync works as well (that's what we went with, since ZFS recv in ZFSv6 on FreeBSD couldn't do more than a couple dozen KBps).
Freddie


Postby dejvnull » 07 Jan 2010, 08:48

phoenix wrote: You can use ZFS send/recv to transfer snapshots between the two servers. Depending on how much data is being written/changed, you can auto-create snapshots every minute, every 10 minutes, every hour, etc., and send them to the other server. Quite a few Solaris sites do this to "replicate" the data on two servers.

Rsync works as well (that's what we went with, since ZFS recv in ZFSv6 on FreeBSD couldn't do more than a couple dozen KBps).


Thanks! Will look into this. We have mostly talked about using rsync but we're not sure this would work either.

We are initially going to use the storage for VMware servers over NFS on ZFS (this is what we have planned). We have 2 identical servers and want to be able to fail over between the two if needed.
I'm just a bit unsure how we will be able to "snapshot" the VMware servers' VMDK files in an easy way while the servers are live.
I know that with NetApp you can either create a pretty complex script routine or buy software that does all the bits for you and then some. We hope we will be able to do something similar here; otherwise we'll have to go with more traditional methods.
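
As a starting point, even a simple cron-driven snapshot of the NFS dataset works; the names below are invented, and without quiescing the guests you only get crash-consistent images:

Code:
#!/bin/sh
# Hypothetical hourly cron job: snapshot the dataset that is exported to VMware.
# The VMDK files inside the snapshot are crash-consistent only.
zfs snapshot tank/vmware@$(date +%Y%m%d-%H%M)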

/Dave

Postby User23 » 07 Jan 2010, 14:51

phoenix wrote:So, ZFS (in a single 9-drive raidz2) takes a 20% performance hit ... but gives you so much more in return.

Will be interesting to see how the multiple raidz vdevs setups perform in comparison. With 9 drives, you can use the following setups:
  • 1x 9-drive raidz2 vdev
  • 2x 4-drive raidz2 vdevs
  • 3x 3-drive raidz1 vdevs


Yes, I am interested too. It will take some days to test that.

It looks like ZFS could perform better than the RAID controller if configured as recommended :). Except when writing, the load is always under 2.

Code:
                                                     ZFS on single HW RAID6      ZFS 1x raidz2               ZFS 1x raidz2
                                                     10 disks (not recom.)       10 disks (not recom.)       9 disks
                                                     7,25TB                      7,08TB                      6,2TB

Children see throughput for 31 initial writers    =  201265.78 KB/sec            161468.34 KB/sec            194090.46 KB/sec
Parent sees throughput for 31 initial writers     =  150463.73 KB/sec            114066.57 KB/sec             96803.54 KB/sec
Min throughput per process                        =    5986.81 KB/sec              4521.95 KB/sec              5342.48 KB/sec
Max throughput per process                        =    7588.98 KB/sec              6521.20 KB/sec             11659.86 KB/sec
Avg throughput per process                        =    6492.44 KB/sec              5208.66 KB/sec              6260.98 KB/sec
Min xfer                                          =  827280.00 KB                727216.00 KB                519184.00 KB

Postby User23 » 08 Jan 2010, 15:07

Code:
                      ZFS on single hw raid6 device   ZFS 1x raidz2      ZFS 1x raidz2      ZFS 2x raidz2
                     10 disks (not recom.)         10 disks (not recom.)     9 disks        2x5 disks
                        7,25TB              7,08TB        6,2TB             5,28TB

        Children see throughput for 31 initial writers  =  201265.78 KB/sec      161468.34 KB/sec   194090.46 KB/sec   118971.07 KB/sec
        Parent sees throughput for 31 initial writers   =  150463.73 KB/sec      114066.57 KB/sec    96803.54 KB/sec    71035.95 KB/sec
        Min throughput per process                      =    5986.81 KB/sec         4521.95 KB/sec     5342.48 KB/sec     3111.14 KB/sec
        Max throughput per process                      =    7588.98 KB/sec        6521.20 KB/sec    11659.86 KB/sec     6005.53 KB/sec
        Avg throughput per process                      =    6492.44 KB/sec        5208.66 KB/sec     6260.98 KB/sec     3837.78 KB/sec
        Min xfer                                        =  827280.00 KB         727216.00 KB      519184.00 KB      543256.00 KB

        Children see throughput for 31 rewriters        =  118843.87 KB/sec       88296.12 KB/sec    94851.35 KB/sec    87543.58 KB/sec
        Parent sees throughput for 31 rewriters         =  115923.02 KB/sec       86382.02 KB/sec    92564.80 KB/sec    85627.08 KB/sec
        Min throughput per process                      =    3547.24 KB/sec         2628.55 KB/sec     2859.38 KB/sec     2586.37 KB/sec
        Max throughput per process                      =    4006.70 KB/sec        3024.69 KB/sec     3225.33 KB/sec     3072.57 KB/sec
        Avg throughput per process                      =    3833.67 KB/sec        2848.26 KB/sec     3059.72 KB/sec     2823.99 KB/sec
        Min xfer                                        =  928512.00 KB         911360.00 KB      929664.00 KB      882688.00 KB

        Children see throughput for 31 readers          =  345760.32 KB/sec      355439.88 KB/sec   310864.48 KB/sec   280303.74 KB/sec
        Parent sees throughput for 31 readers           =  340252.11 KB/sec      347992.79 KB/sec   302392.68 KB/sec   274431.44 KB/sec
        Min throughput per process                      =    9941.60 KB/sec         2929.78 KB/sec     5941.07 KB/sec     4764.28 KB/sec
        Max throughput per process                      =   11947.37 KB/sec              13671.29 KB/sec         12443.30 KB/sec    10151.20 KB/sec
        Avg throughput per process                      =   11153.56 KB/sec       11465.80 KB/sec    10027.89 KB/sec     9042.06 KB/sec
        Min xfer                                        =  876416.00 KB         228480.00 KB      515968.00 KB      507776.00 KB

        Children see throughput for 31 re-readers       =  332146.74 KB/sec      365836.72 KB/sec   319717.53 KB/sec   285893.18 KB/sec
        Parent sees throughput for 31 re-readers        =  327672.41 KB/sec      356374.37 KB/sec   309249.20 KB/sec   278385.91 KB/sec
        Min throughput per process                      =    4804.59 KB/sec         4327.40 KB/sec     4874.84 KB/sec     5189.78 KB/sec
        Max throughput per process                      =   20229.93 KB/sec       14428.39 KB/sec    18179.50 KB/sec    11341.52 KB/sec
        Avg throughput per process                      =   10714.41 KB/sec       11801.18 KB/sec    10313.47 KB/sec     9222.36 KB/sec
        Min xfer                                        =  252032.00 KB         316288.00 KB      286592.00 KB      483200.00 KB

        Children see throughput for 31 reverse readers  =  348791.73 KB/sec      378207.58 KB/sec   297222.73 KB/sec   294168.65 KB/sec
        Parent sees throughput for 31 reverse readers   =  340139.45 KB/sec      367847.12 KB/sec   285330.94 KB/sec   283241.22 KB/sec
        Min throughput per process                      =    1201.33 KB/sec         8882.71 KB/sec       50.86 KB/sec     7071.22 KB/sec
        Max throughput per process                      =   14732.41 KB/sec       15323.43 KB/sec    18723.45 KB/sec    11208.89 KB/sec
        Avg throughput per process                      =   11251.35 KB/sec       12200.24 KB/sec     9587.83 KB/sec     9489.31 KB/sec
        Min xfer                                        =   87552.00 KB         614272.00 KB        2944.00 KB      679808.00 KB

        Children see throughput for 31 stride readers   =   23292.33 KB/sec       19494.66 KB/sec    19303.88 KB/sec    17640.89 KB/sec
        Parent sees throughput for 31 stride readers    =   23276.54 KB/sec       19478.48 KB/sec    19293.39 KB/sec    17627.54 KB/sec
        Min throughput per process                      =     595.24 KB/sec          440.73 KB/sec      544.24 KB/sec      544.73 KB/sec
        Max throughput per process                      =     855.33 KB/sec         940.35 KB/sec      729.74 KB/sec      578.91 KB/sec
        Avg throughput per process                      =     751.37 KB/sec         628.86 KB/sec      622.71 KB/sec      569.06 KB/sec
        Min xfer                                        =  730592.00 KB         491480.00 KB      782112.00 KB      986928.00 KB

        Children see throughput for 31 random readers   =    8350.64 KB/sec        3139.91 KB/sec     3185.64 KB/sec     6372.45 KB/sec
        Parent sees throughput for 31 random readers    =    8350.49 KB/sec        3139.90 KB/sec     3185.62 KB/sec     6372.39 KB/sec
        Min throughput per process                      =     267.10 KB/sec           99.68 KB/sec      100.98 KB/sec      202.37 KB/sec
        Max throughput per process                      =     271.54 KB/sec         105.75 KB/sec      104.69 KB/sec      208.62 KB/sec
        Avg throughput per process                      =     269.38 KB/sec         101.29 KB/sec      102.76 KB/sec      205.56 KB/sec
        Min xfer                                        = 1031432.00 KB         988416.00 KB          1011392.00 KB          1017176.00 KB

        Children see throughput for 31 mixed workload   =    5781.48 KB/sec        2543.61 KB/sec     2476.89 KB/sec     4219.44 KB/sec
        Parent sees throughput for 31 mixed workload    =    5369.59 KB/sec        2444.68 KB/sec     2401.83 KB/sec     4052.52 KB/sec
        Min throughput per process                      =     174.16 KB/sec          78.96 KB/sec       77.61 KB/sec           131.44 KB/sec
        Max throughput per process                      =     198.01 KB/sec          85.42 KB/sec       82.78 KB/sec      141.04 KB/sec
        Avg throughput per process                      =     186.50 KB/sec          82.05 KB/sec       79.90 KB/sec      136.11 KB/sec
        Min xfer                                        =  922288.00 KB         969232.00 KB      983040.00 KB      977208.00 KB

        Children see throughput for 31 random writers   =    3412.19 KB/sec        1957.64 KB/sec     1941.00 KB/sec     2526.20 KB/sec
        Parent sees throughput for 31 random writers    =    3372.15 KB/sec        1937.07 KB/sec     1928.09 KB/sec     2500.80 KB/sec
        Min throughput per process                      =     108.75 KB/sec           62.68 KB/sec       62.19 KB/sec       80.77 KB/sec
        Max throughput per process                      =     111.33 KB/sec          63.80 KB/sec       63.01 KB/sec       82.33 KB/sec
        Avg throughput per process                      =     110.07 KB/sec          63.15 KB/sec       62.61 KB/sec       81.49 KB/sec
        Min xfer                                        = 1024200.00 KB             1030208.00 KB          1034928.00 KB          1028768.00 KB

        Children see throughput for 31 pwrite writers   =  169670.93 KB/sec      127665.01 KB/sec   187936.90 KB/sec   107282.57 KB/sec
        Parent sees throughput for 31 pwrite writers    =  134318.70 KB/sec       89630.82 KB/sec   152674.50 KB/sec    63974.87 KB/sec
        Min throughput per process                      =    4719.29 KB/sec         3410.34 KB/sec     5262.72 KB/sec     2586.47 KB/sec
        Max throughput per process                      =    6281.94 KB/sec        5516.22 KB/sec     6980.33 KB/sec     5281.52 KB/sec
        Avg throughput per process                      =    5473.26 KB/sec        4118.23 KB/sec          6062.48 KB/sec     3460.73 KB/sec
        Min xfer                                        =  787928.00 KB         646424.00 KB      795136.00 KB      511616.00 KB

        Children see throughput for 31 pread readers    =  360126.34 KB/sec      338863.82 KB/sec   297671.57 KB/sec   282012.07 KB/sec
        Parent sees throughput for 31 pread readers     =  353519.89 KB/sec      330358.98 KB/sec   289853.82 KB/sec        273999.37 KB/sec
        Min throughput per process                      =   10226.21 KB/sec         8784.75 KB/sec          6710.06 KB/sec     6983.21 KB/sec
        Max throughput per process                      =   12556.96 KB/sec       12120.25 KB/sec    12898.82 KB/sec    11158.35 KB/sec
        Avg throughput per process                      =   11616.98 KB/sec       10931.09 KB/sec          9602.31 KB/sec     9097.16 KB/sec
        Min xfer                                        =  876416.00 KB         786432.00 KB      548992.00 KB      679808.00 KB


raidz1 and mirror results will follow

Postby User23 » 19 Jan 2010, 12:18

raidz1 with 9 drives 7.11TB

Code:
Excel chart generation enabled
   Record Size 8 KB
   File size set to 1048576 KB
   Command line used: iozone -R -l 31 -u 31 -r 8k -s 1024m -F /home/f0 /home/f1 /home/f2 /home/f3 /home/f4 /home/f5 /home/f6 /home/f7
/home/f8 /home/f9 /home/f10 /home/f11 /home/f12 /home/f13 /home/f14 /home/f15 /home/f16 /home/f17 /home/f18 /home/f19 /home/f20 /home/f21
/home/f22 /home/f23 /home/f24 /home/f25 /home/f26 /home/f27 /home/f28 /home/f29 /home/f30 /home/f31 /home/f32 /home/f33 /home/f34 /home/f35
/home/f36 /home/f37 /home/f38 /home/f39 /home/f40 /home/f41 /home/f42 /home/f43 /home/f44 /home/f45 /home/f46 /home/f47 /home/f48 /home/f49
/home/f50 /home/f51 /home/f52 /home/f53 /home/f54 /home/f55 /home/f56 /home/f57 /home/f58 /home/f59 /home/f60 /home/f61 /home/f62 /home/f63

   Output is in Kbytes/sec
   Time Resolution = 0.000002 seconds.
   Processor cache size set to 1024 Kbytes.
   Processor cache line size set to 32 bytes.
   File stride size set to 17 * record size.
   Min process = 31
   Max process = 31
   Throughput test with 31 processes
   Each process writes a 1048576 Kbyte file in 8 Kbyte records

   Children see throughput for 31 initial writers    =  208172.21 KB/sec
   Parent sees throughput for 31 initial writers    =  172595.51 KB/sec
   Min throughput per process          =    5782.47 KB/sec
   Max throughput per process          =    7616.98 KB/sec
   Avg throughput per process          =    6715.23 KB/sec
   Min xfer                =  796976.00 KB

   Children see throughput for 31 rewriters    =  102606.86 KB/sec
   Parent sees throughput for 31 rewriters    =  101171.05 KB/sec
   Min throughput per process          =    3014.62 KB/sec
   Max throughput per process          =    3511.05 KB/sec
   Avg throughput per process          =    3309.90 KB/sec
   Min xfer                =  900352.00 KB

   Children see throughput for 31 readers       =  387198.20 KB/sec
   Parent sees throughput for 31 readers       =  379311.31 KB/sec
   Min throughput per process          =    5154.57 KB/sec
   Max throughput per process          =   15272.38 KB/sec
   Avg throughput per process          =   12490.26 KB/sec
   Min xfer                =  355456.00 KB

   Children see throughput for 31 re-readers    =  320033.11 KB/sec
   Parent sees throughput for 31 re-readers    =  310994.17 KB/sec
   Min throughput per process          =    5401.99 KB/sec
   Max throughput per process          =   25386.70 KB/sec
   Avg throughput per process          =   10323.65 KB/sec
   Min xfer                =  226304.00 KB

   Children see throughput for 31 reverse readers    =  305630.74 KB/sec
   Parent sees throughput for 31 reverse readers    =  293534.26 KB/sec
   Min throughput per process          =    1712.60 KB/sec
   Max throughput per process          =   20117.15 KB/sec
   Avg throughput per process          =    9859.06 KB/sec
   Min xfer                =   90752.00 KB

   Children see throughput for 31 stride readers    =   22283.38 KB/sec
   Parent sees throughput for 31 stride readers    =   22273.16 KB/sec
   Min throughput per process          =     479.02 KB/sec
   Max throughput per process          =     871.18 KB/sec
   Avg throughput per process          =     718.82 KB/sec
   Min xfer                =  576664.00 KB

   Children see throughput for 31 random readers    =    2874.69 KB/sec
   Parent sees throughput for 31 random readers    =    2874.68 KB/sec
   Min throughput per process          =      90.81 KB/sec
   Max throughput per process          =      95.65 KB/sec
   Avg throughput per process          =      92.73 KB/sec
   Min xfer                =  995608.00 KB

   Children see throughput for 31 mixed workload    =    2278.92 KB/sec
   Parent sees throughput for 31 mixed workload    =    2271.76 KB/sec
   Min throughput per process          =      72.73 KB/sec
   Max throughput per process          =      74.09 KB/sec
   Avg throughput per process          =      73.51 KB/sec
   Min xfer                = 1029408.00 KB

   Children see throughput for 31 random writers    =    1965.92 KB/sec
   Parent sees throughput for 31 random writers    =    1951.16 KB/sec
   Min throughput per process          =      63.14 KB/sec
   Max throughput per process          =      63.88 KB/sec
   Avg throughput per process          =      63.42 KB/sec
   Min xfer                = 1036416.00 KB

   Children see throughput for 31 pwrite writers    =  159855.55 KB/sec
   Parent sees throughput for 31 pwrite writers    =  127850.95 KB/sec
   Min throughput per process          =    4240.99 KB/sec
   Max throughput per process          =    6337.74 KB/sec
   Avg throughput per process          =    5156.63 KB/sec
   Min xfer                =  701960.00 KB

   Children see throughput for 31 pread readers    =  368381.51 KB/sec
   Parent sees throughput for 31 pread readers    =  357053.00 KB/sec
   Min throughput per process          =    8721.74 KB/sec
   Max throughput per process          =   14701.04 KB/sec
   Avg throughput per process          =   11883.27 KB/sec
   Min xfer                =  638976.00 KB



2x raidz1 with 2x5 disks 7.13TB

Code:
Record Size 8 KB
   File size set to 1048576 KB
   Command line used: iozone -R -l 31 -u 31 -r 8k -s 1024m -F /home/f0 /home/f1 /home/f2 /home/f3 /home/f4 /home/f5 /home/f6 /home/f7
/home/f8 /home/f9 /home/f10 /home/f11 /home/f12 /home/f13 /home/f14 /home/f15 /home/f16 /home/f17 /home/f18 /home/f19 /home/f20 /home/f21
/home/f22 /home/f23 /home/f24 /home/f25 /home/f26 /home/f27 /home/f28 /home/f29 /home/f30 /home/f31 /home/f32 /home/f33 /home/f34 /home/f35
/home/f36 /home/f37 /home/f38 /home/f39 /home/f40 /home/f41 /home/f42 /home/f43 /home/f44 /home/f45 /home/f46 /home/f47 /home/f48 /home/f49
/home/f50 /home/f51 /home/f52 /home/f53 /home/f54 /home/f55 /home/f56 /home/f57 /home/f58 /home/f59 /home/f60 /home/f61 /home/f62 /home/f63

   Output is in Kbytes/sec
   Time Resolution = 0.000002 seconds.
   Processor cache size set to 1024 Kbytes.
   Processor cache line size set to 32 bytes.
   File stride size set to 17 * record size.
   Min process = 31
   Max process = 31
   Throughput test with 31 processes
   Each process writes a 1048576 Kbyte file in 8 Kbyte records

   Children see throughput for 31 initial writers    =  240188.71 KB/sec
   Parent sees throughput for 31 initial writers    =  181504.13 KB/sec
   Min throughput per process          =    7062.25 KB/sec
   Max throughput per process          =    8601.47 KB/sec
   Avg throughput per process          =    7748.02 KB/sec
   Min xfer                =  861184.00 KB

   Children see throughput for 31 rewriters    =  113480.85 KB/sec
   Parent sees throughput for 31 rewriters    =  111265.15 KB/sec
   Min throughput per process          =    3483.71 KB/sec
   Max throughput per process          =    3888.38 KB/sec
   Avg throughput per process          =    3660.67 KB/sec
   Min xfer                =  939520.00 KB

   Children see throughput for 31 readers       =  307583.62 KB/sec
   Parent sees throughput for 31 readers       =  299550.20 KB/sec
   Min throughput per process          =    4702.65 KB/sec
   Max throughput per process          =   18115.81 KB/sec
   Avg throughput per process          =    9922.05 KB/sec
   Min xfer                =  278912.00 KB

   Children see throughput for 31 re-readers    =  374780.61 KB/sec
   Parent sees throughput for 31 re-readers    =  364328.27 KB/sec
   Min throughput per process          =    4937.61 KB/sec
   Max throughput per process          =   16304.53 KB/sec
   Avg throughput per process          =   12089.70 KB/sec
   Min xfer                =  325504.00 KB

   Children see throughput for 31 reverse readers    =  384149.35 KB/sec
   Parent sees throughput for 31 reverse readers    =  374955.97 KB/sec
   Min throughput per process          =    5885.06 KB/sec
   Max throughput per process          =   15980.62 KB/sec
   Avg throughput per process          =   12391.91 KB/sec
   Min xfer                =  390912.00 KB

   Children see throughput for 31 stride readers    =   23573.38 KB/sec
   Parent sees throughput for 31 stride readers    =   23556.19 KB/sec
   Min throughput per process          =     692.92 KB/sec
   Max throughput per process          =     803.08 KB/sec
   Avg throughput per process          =     760.43 KB/sec
   Min xfer                =  904880.00 KB

   Children see throughput for 31 random readers    =    5180.75 KB/sec
   Parent sees throughput for 31 random readers    =    5180.72 KB/sec
   Min throughput per process          =     164.68 KB/sec
   Max throughput per process          =     168.72 KB/sec
   Avg throughput per process          =     167.12 KB/sec
   Min xfer                = 1023472.00 KB

   Children see throughput for 31 mixed workload    =    4229.80 KB/sec
   Parent sees throughput for 31 mixed workload    =    4049.69 KB/sec
   Min throughput per process          =     131.69 KB/sec
   Max throughput per process          =     142.48 KB/sec
   Avg throughput per process          =     136.45 KB/sec
   Min xfer                =  969168.00 KB

   Children see throughput for 31 random writers    =    3364.70 KB/sec
   Parent sees throughput for 31 random writers    =    3322.79 KB/sec
   Min throughput per process          =     107.54 KB/sec
   Max throughput per process          =     109.82 KB/sec
   Avg throughput per process          =     108.54 KB/sec
   Min xfer                = 1026848.00 KB

   Children see throughput for 31 pwrite writers    =  179130.74 KB/sec
   Parent sees throughput for 31 pwrite writers    =   80142.70 KB/sec
   Min throughput per process          =    4533.41 KB/sec
   Max throughput per process          =   11170.48 KB/sec
   Avg throughput per process          =    5778.41 KB/sec
   Min xfer                =  425600.00 KB

   Children see throughput for 31 pread readers    =  365544.69 KB/sec
   Parent sees throughput for 31 pread readers    =  359612.10 KB/sec
   Min throughput per process          =      42.71 KB/sec
   Max throughput per process          =   13506.08 KB/sec
   Avg throughput per process          =   11791.76 KB/sec
   Min xfer                =    3328.00 KB

Postby User23 » 19 Jan 2010, 12:24

3x raidz1 3x3 disks 5.35T

Code:
Record Size 8 KB
   File size set to 1048576 KB
   Command line used: iozone -R -l 31 -u 31 -r 8k -s 1024m -F /home/f0 /home/f1 /home/f2 /home/f3 /home/f4 /home/f5 /home/f6 /home/f7
/home/f8 /home/f9 /home/f10 /home/f11 /home/f12 /home/f13 /home/f14 /home/f15 /home/f16 /home/f17 /home/f18 /home/f19 /home/f20 /home/f21
/home/f22 /home/f23 /home/f24 /home/f25 /home/f26 /home/f27 /home/f28 /home/f29 /home/f30 /home/f31 /home/f32 /home/f33 /home/f34 /home/f35
/home/f36 /home/f37 /home/f38 /home/f39 /home/f40 /home/f41 /home/f42 /home/f43 /home/f44 /home/f45 /home/f46 /home/f47 /home/f48 /home/f49
/home/f50 /home/f51 /home/f52 /home/f53 /home/f54 /home/f55 /home/f56 /home/f57 /home/f58 /home/f59 /home/f60 /home/f61 /home/f62 /home/f63

   Output is in Kbytes/sec
   Time Resolution = 0.000002 seconds.
   Processor cache size set to 1024 Kbytes.
   Processor cache line size set to 32 bytes.
   File stride size set to 17 * record size.
   Min process = 31
   Max process = 31
   Throughput test with 31 processes
   Each process writes a 1048576 Kbyte file in 8 Kbyte records

   Children see throughput for 31 initial writers    =  194977.18 KB/sec
   Parent sees throughput for 31 initial writers    =  145238.19 KB/sec
   Min throughput per process          =    5577.41 KB/sec
   Max throughput per process          =    7678.55 KB/sec
   Avg throughput per process          =    6289.59 KB/sec
   Min xfer                =  761736.00 KB

   Children see throughput for 31 rewriters    =  107264.31 KB/sec
   Parent sees throughput for 31 rewriters    =  105691.77 KB/sec
   Min throughput per process          =    3106.30 KB/sec
   Max throughput per process          =    3759.90 KB/sec
   Avg throughput per process          =    3460.14 KB/sec
   Min xfer                =  866304.00 KB

   Children see throughput for 31 readers       =  329907.42 KB/sec
   Parent sees throughput for 31 readers       =  325495.85 KB/sec
   Min throughput per process          =    9172.33 KB/sec
   Max throughput per process          =   11446.83 KB/sec
   Avg throughput per process          =   10642.17 KB/sec
   Min xfer                =  843648.00 KB

   Children see throughput for 31 re-readers    =  333976.38 KB/sec
   Parent sees throughput for 31 re-readers    =  323647.98 KB/sec
   Min throughput per process          =    7145.07 KB/sec
   Max throughput per process          =   14048.41 KB/sec
   Avg throughput per process          =   10773.43 KB/sec
   Min xfer                =  548736.00 KB

   Children see throughput for 31 reverse readers    =  348859.71 KB/sec
   Parent sees throughput for 31 reverse readers    =  339545.68 KB/sec
   Min throughput per process          =     153.09 KB/sec
   Max throughput per process          =   14841.71 KB/sec
   Avg throughput per process          =   11253.54 KB/sec
   Min xfer                =   10880.00 KB

   Children see throughput for 31 stride readers    =   21995.84 KB/sec
   Parent sees throughput for 31 stride readers    =   21976.20 KB/sec
   Min throughput per process          =     618.73 KB/sec
   Max throughput per process          =     818.59 KB/sec
   Avg throughput per process          =     709.54 KB/sec
   Min xfer                =  792664.00 KB

   Children see throughput for 31 random readers    =    7795.43 KB/sec
   Parent sees throughput for 31 random readers    =    7795.35 KB/sec
   Min throughput per process          =     249.06 KB/sec
   Max throughput per process          =     254.20 KB/sec
   Avg throughput per process          =     251.47 KB/sec
   Min xfer                = 1027400.00 KB

   Children see throughput for 31 mixed workload    =    4849.54 KB/sec
   Parent sees throughput for 31 mixed workload    =    4673.46 KB/sec
   Min throughput per process          =     151.49 KB/sec
   Max throughput per process          =     161.06 KB/sec
   Avg throughput per process          =     156.44 KB/sec
   Min xfer                =  986280.00 KB

   Children see throughput for 31 random writers    =    2961.60 KB/sec
   Parent sees throughput for 31 random writers    =    2934.99 KB/sec
   Min throughput per process          =      94.59 KB/sec
   Max throughput per process          =      96.31 KB/sec
   Avg throughput per process          =      95.54 KB/sec
   Min xfer                = 1029824.00 KB

   Children see throughput for 31 pwrite writers    =  161924.70 KB/sec
   Parent sees throughput for 31 pwrite writers    =  116085.19 KB/sec
   Min throughput per process          =    4450.95 KB/sec
   Max throughput per process          =    6703.67 KB/sec
   Avg throughput per process          =    5223.38 KB/sec
   Min xfer                =  693496.00 KB

   Children see throughput for 31 pread readers    =  306150.73 KB/sec
   Parent sees throughput for 31 pread readers    =  300004.69 KB/sec
   Min throughput per process          =    7810.73 KB/sec
   Max throughput per process          =   12104.51 KB/sec
   Avg throughput per process          =    9875.83 KB/sec
   Min xfer                =  679936.00 KB


5x mirror vdevs, 2 disks each (10 disks total), 4.46T
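
Again only as a rough sketch, a pool of this shape could be created like this (pool name and device names are placeholders only):

Code: Select all
# five two-way mirror vdevs -- device names are examples only
zpool create tank \
    mirror da0 da1 \
    mirror da2 da3 \
    mirror da4 da5 \
    mirror da6 da7 \
    mirror da8 da9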

Code: Select all
Record Size 8 KB
   File size set to 1048576 KB
   Command line used: iozone -R -l 31 -u 31 -r 8k -s 1024m -F /home/f0 /home/f1 /home/f2 /home/f3 /home/f4 /home/f5 /home/f6 /home/f7
/home/f8 /home/f9 /home/f10 /home/f11 /home/f12 /home/f13 /home/f14 /home/f15 /home/f16 /home/f17 /home/f18 /home/f19 /home/f20 /home/f21
/home/f22 /home/f23 /home/f24 /home/f25 /home/f26 /home/f27 /home/f28 /home/f29 /home/f30 /home/f31 /home/f32 /home/f33 /home/f34 /home/f35
/home/f36 /home/f37 /home/f38 /home/f39 /home/f40 /home/f41 /home/f42 /home/f43 /home/f44 /home/f45 /home/f46 /home/f47 /home/f48 /home/f49
/home/f50 /home/f51 /home/f52 /home/f53 /home/f54 /home/f55 /home/f56 /home/f57 /home/f58 /home/f59 /home/f60 /home/f61 /home/f62 /home/f63

   Output is in Kbytes/sec
   Time Resolution = 0.000002 seconds.
   Processor cache size set to 1024 Kbytes.
   Processor cache line size set to 32 bytes.
   File stride size set to 17 * record size.
   Min process = 31
   Max process = 31
   Throughput test with 31 processes
   Each process writes a 1048576 Kbyte file in 8 Kbyte records

   Children see throughput for 31 initial writers    =  165517.31 KB/sec
   Parent sees throughput for 31 initial writers    =  113314.35 KB/sec
   Min throughput per process          =    4822.53 KB/sec
   Max throughput per process          =    7166.15 KB/sec
   Avg throughput per process          =    5339.27 KB/sec
   Min xfer                =  705792.00 KB

   Children see throughput for 31 rewriters    =  110261.11 KB/sec
   Parent sees throughput for 31 rewriters    =  107713.00 KB/sec
   Min throughput per process          =    3149.57 KB/sec
   Max throughput per process          =    4134.87 KB/sec
   Avg throughput per process          =    3556.81 KB/sec
   Min xfer                =  798720.00 KB

   Children see throughput for 31 readers       =  298561.78 KB/sec
   Parent sees throughput for 31 readers       =  289834.54 KB/sec
   Min throughput per process          =    9011.21 KB/sec
   Max throughput per process          =   10670.23 KB/sec
   Avg throughput per process          =    9631.03 KB/sec
   Min xfer                =  885760.00 KB

   Children see throughput for 31 re-readers    =  300249.48 KB/sec
   Parent sees throughput for 31 re-readers    =  293567.68 KB/sec
   Min throughput per process          =    8516.27 KB/sec
   Max throughput per process          =   11470.81 KB/sec
   Avg throughput per process          =    9685.47 KB/sec
   Min xfer                =  781568.00 KB

   Children see throughput for 31 reverse readers    =  319173.17 KB/sec
   Parent sees throughput for 31 reverse readers    =  309450.49 KB/sec
   Min throughput per process          =     153.64 KB/sec
   Max throughput per process          =   13751.18 KB/sec
   Avg throughput per process          =   10295.91 KB/sec
   Min xfer                =   11776.00 KB

   Children see throughput for 31 stride readers    =   20836.42 KB/sec
   Parent sees throughput for 31 stride readers    =   20820.73 KB/sec
   Min throughput per process          =     650.30 KB/sec
   Max throughput per process          =     690.90 KB/sec
   Avg throughput per process          =     672.14 KB/sec
   Min xfer                =  987072.00 KB

   Children see throughput for 31 random readers    =   11431.42 KB/sec
   Parent sees throughput for 31 random readers    =   11431.29 KB/sec
   Min throughput per process          =     365.59 KB/sec
   Max throughput per process          =     372.43 KB/sec
   Avg throughput per process          =     368.76 KB/sec
   Min xfer                = 1029344.00 KB

   Children see throughput for 31 mixed workload    =    8038.54 KB/sec
   Parent sees throughput for 31 mixed workload    =    7542.01 KB/sec
   Min throughput per process          =     245.69 KB/sec
   Max throughput per process          =     273.78 KB/sec
   Avg throughput per process          =     259.31 KB/sec
   Min xfer                =  940976.00 KB

   Children see throughput for 31 random writers    =    5404.38 KB/sec
   Parent sees throughput for 31 random writers    =    5326.68 KB/sec
   Min throughput per process          =     172.30 KB/sec
   Max throughput per process          =     176.74 KB/sec
   Avg throughput per process          =     174.33 KB/sec
   Min xfer                = 1022264.00 KB

   Children see throughput for 31 pwrite writers    =  110918.72 KB/sec
   Parent sees throughput for 31 pwrite writers    =   41819.30 KB/sec
   Min throughput per process          =    2468.24 KB/sec
   Max throughput per process          =    9939.37 KB/sec
   Avg throughput per process          =    3578.02 KB/sec
   Min xfer                =  259600.00 KB

   Children see throughput for 31 pread readers    =  300026.61 KB/sec
   Parent sees throughput for 31 pread readers    =  291201.70 KB/sec
   Min throughput per process          =    8681.60 KB/sec
   Max throughput per process          =   11120.08 KB/sec
   Avg throughput per process          =    9678.28 KB/sec
   Min xfer                =  843648.00 KB
User avatar
User23
Member
 
Posts: 336
Joined: 17 Nov 2008, 14:25
Location: Germany near Berlin

Postby vermaden » 19 Jan 2010, 12:27

dejvnull wrote: Harddrives: 16 x WD RE4 Green 2TB


There is no such disk as a WD RE4 Green; it's either the WD RE4 (which you linked to) or the WD Green: http://westerndigital.com/en/products/Products.asp?DriveID=773

EDIT:

@dejvnull

I must apologize, I did not know that the RE-GP version exists, my bad :/
Religions, worst damnation of mankind.
"FreeBSD has always been the operating system that GNU/Linux should have been." Frank Pohlmann, IBM
http://vermaden.blogspot.com
User avatar
vermaden
Giant Locked
 
Posts: 2316
Joined: 16 Nov 2008, 19:37
Location: pl_PL.lodz

