Optimal ZFS configuration for this PowerEdge?

PowerEdge R720xd
Dell PERC H710P (LSI 2208, 1 GB NV cache) - no passthrough or JBOD option.
2x OS drives (back)
12x data drives (front)

The 2x OS drives in the back are in a hardware RAID1; the OS is FreeBSD 9.1-RC3 on UFS. I have no issue with this.

I'm just wondering about the data drives. I wish to use ZFS - not so much for data reliability, but mostly for the ACL support with Samba (the machine will serve as a file server for Windows systems, managed by Active Directory, and ZFS/Samba seems to work great with Windows permissions).
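
For reference, the Samba piece I mean is the vfs_zfsacl module; a minimal sketch of a share, assuming a made-up pool path and share name:

Code:
[data]
    path = /tank/data
    read only = no
    # map Windows ACLs onto the NFSv4 ACLs that ZFS stores
    vfs objects = zfsacl
    nfs4:mode = special
    nfs4:acedup = merge
    nfs4:chown = yes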

Since there is no JBOD or passthrough option on the RAID card, I have each data drive configured as its own single-drive RAID 0 volume. The controller's default for any RAID volume has "Write Back" cache enabled.

- Should I disable *all* caching? (If so, I assume something like the MegaCli sketch after these questions.)

- Each drive is configured as a single-drive RAID 0 volume. Is this a *bad* idea? Does this "break" the checksumming / scrub features of ZFS?
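
If I do disable it, I assume it comes down to something like this with MegaCli (adapter number is a placeholder; untested on this box):

Code:
# switch all virtual disks on adapter 0 to Write Through
MegaCli -LDSetProp WT -LAll -a0
# turn off controller read-ahead
MegaCli -LDSetProp NORA -LAll -a0
# and, if "all" really means all, the drives' own write caches too
MegaCli -LDSetProp -DisDskCache -LAll -a0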

On drive failure/replacement, I have to do the following (a concrete sketch follows the list):

1) bring the drive online with "MegaCli"
2) bring the pool back online with "zpool online"
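
What I assume that looks like concretely (the enclosure:slot [32:5] and the device/pool names are placeholders):

Code:
# mark the replacement disk usable and re-create the single-drive RAID 0
MegaCli -PDMakeGood -PhysDrv [32:5] -a0
MegaCli -CfgLdAdd -r0 [32:5] -a0
# same disk going back into the pool:
zpool online tank da5
# or, for a brand-new disk:
zpool replace tank da5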

Another question is the layout of the pool. For testing and ease of use, I have it as a single vdev: 1x12 drives in raidz3. I've read (in many places) that this is bad for *performance* reasons.

- What are the "real" downsides of doing 1x12 in raidz3? Is it just read/write performance? Less reliability? Issues with resilvering or scrubbing?

Should I switch to something like a 2x6 raidz2 or 4x3 raidz1?

These are 3TB disks. I'm looking for reliable space, not so much performance; usable space is just (drives per vdev - parity) x 3TB x number of vdevs. Some configuration options:

Code:
1x12 raidz3, 3 parity = 27TB
     pool
      raidz3
          drive1
          drive2
          drive3
          drive4
          drive5
          drive6
          drive7
          drive8
          drive9
          drive10
          drive11
          drive12

2x6 raidz2, 4 parity = 24TB
     pool
      raidz2
          drive1
          drive2
          drive3
          drive4
          drive5
          drive6
      raidz2
          drive7
          drive8
          drive9
          drive10
          drive11
          drive12

3x4 raidz1, 3 parity = 27TB
     pool
      raidz1
          drive1
          drive2
          drive3
          drive4
      raidz1
          drive5
          drive6
          drive7
          drive8
      raidz1
          drive9
          drive10
          drive11
          drive12

4x3 raidz1, 4 parity = 24TB
     pool
      raidz1
          drive1
          drive2
          drive3
      raidz1
          drive4
          drive5
          drive6
      raidz1
          drive7
          drive8
          drive9
      raidz1
          drive10
          drive11
          drive12
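
In case the answer depends on it, I'd create, e.g., the 2x6 raidz2 variant roughly like this (da0-da11 stand in for whatever device names the single-drive RAID 0 volumes get):

Code:
zpool create tank \
    raidz2 da0 da1 da2 da3 da4 da5 \
    raidz2 da6 da7 da8 da9 da10 da11
zpool status tank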
 
I guess the second part of the post would probably fit better in the Storage forum. I can't figure out how to edit the post.
 