Solved: ZFS on gstripe?

Is it possible or advisable to create a ZFS pool on top of a gstripe? Say I have 3 disks of equal size and speed is more important than redundancy. Instead of creating a raidz1 and getting only 2/3 of the space (or whatever), can I create a gstripe with 3 disks' worth of space and then create a zpool on top of it, to combine the advantages of gstripe (3 disks' worth of space) and ZFS (compression, snapshots, etc.)?
 
Is it possible or advisable to create a ZFS pool on top of a gstripe?
Possible, yes. Advisable, no. For the same reason it's not advisable to run ZFS on top of a hardware RAID. Gstripe is software RAID 0. You're better off using ZFS's own striped sets. There's zero advantage in using gstripe(8) and running ZFS on top of it, only a lot of disadvantages.
 
You may as well create a ZFS stripe. If speed is the concern rather than space, I would be intrigued to see how much faster a 3-disk stripe actually is than a 3-disk RAID-Z.
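If you want to put numbers on it, a crude sequential test would be something like the sketch below (assuming the pool is named data and mounted at the default /data; turn compression off first so a stream of zeros isn't just compressed away, and bear in mind the ARC will cache the read back):

Code:
# zfs set compression=off data
# dd if=/dev/zero of=/data/testfile bs=1m count=10240
# dd if=/data/testfile of=/dev/null bs=1m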

Note that while you may not be worried about redundancy, you are roughly tripling the risk of failure: any one of the three disks dying takes the whole pool with it.
 
Losing 1/3 of the disk space to parity kind of sucks. Hence gstripe on 3 disks with no space lost, plus plain ZFS on the whole resulting "disk" for its compression and snapshot advantages. Am I missing something?
 
Hence gstripe on 3 disks with no space lost, plus ZFS on the whole resulting "disk" for its compression and snapshot advantages. Am I missing something?
Forget gstripe(8) and just use a ZFS striped set. It's the same thing.
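To make that concrete, here's a sketch of the two setups with the device names from your example:

Code:
# gstripe label -v st0 da1 da2 da3
# zpool create data /dev/stripe/st0

versus simply:

Code:
# zpool create data da1 da2 da3

The second form gives ZFS direct access to each disk, so its checksums can at least tell you which disk returned bad data; behind gstripe it only sees one opaque device.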
 
I'm a little bit confused here, because I remember that once when I created a zpool with zpool create data da1 da2 da3, zpool status showed:
Code:
        data        ONLINE       0     0     0
          raidz1-0  ONLINE       0     0     0
            da1     ONLINE       0     0     0
            da2     ONLINE       0     0     0
            da3     ONLINE       0     0     0

Note that I didn't specify raidz, and the pool size shown by zpool list is close to 3x450GB:
mypool 1.30T

I don't think it will withstand one disk failing, will it? Then why such a misleading name?
 
That's not correct. The command you showed creates a striped set from da1, da2 and da3. A RAID-Z is created with zpool create data raidz da1 da2 da3.

A RAID-Z can recover from a single disk failure; a striped set can't recover from anything, so one disk failing fails the entire pool.
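For reference, the RAID-Z version and the resulting layout would look like this (same disks, status output trimmed to the config section):

Code:
# zpool create data raidz da1 da2 da3
# zpool status data
        NAME        STATE     READ WRITE CKSUM
        data        ONLINE       0     0     0
          raidz1-0  ONLINE       0     0     0
            da1     ONLINE       0     0     0
            da2     ONLINE       0     0     0
            da3     ONLINE       0     0     0

The raidz1-0 grouping is what distinguishes it from the flat list of disks in a striped set.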
 
Are you absolutely sure you didn't specify raidz? The default has been a stripe since day one; you have to explicitly ask for raidz or mirror, and I can't see how it could do otherwise.

Code:
# zpool create test md0 md1 md2
# zpool status test
  pool: test
state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        test        ONLINE       0     0     0
          md0       ONLINE       0     0     0
          md1       ONLINE       0     0     0
          md2       ONLINE       0     0     0

errors: No known data errors
 
Thanks, but why does zpool list show the space as 1.30T? That's 3x450GB of disks; shouldn't it be a little under 1TB due to redundancy (parity)? Especially considering that zpool status displays the disks under raidz (see above).
 
Note that I didn't specify raidz, and the pool size shown by zpool list is close to 3x450GB:
mypool 1.30T

This is a bit of a confusing part about ZFS.

The zpool command deals only with the physical pool. In this case, you physically have a pool of 3x450GB, which can store 1,350GB (ignoring any losses to metadata, unit conversion, etc.).

However, when the ZFS layer puts data on this pool, it writes data and parity together. With raidz1 on 3 disks, every two data blocks carry one parity block, so a write of 10MB takes 15MB of pool space.

I'm personally not a fan of the way zpool shows RAID-Z space, especially as with mirrors it shows the post-mirroring availability. The takeaway: use zfs list to see how much user data you have and how much space is actually available.
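Side by side it looks roughly like this for 3x450GB in raidz1 (the zfs list figure is approximate; metadata and reservations shave a bit off the raw two-thirds):

Code:
# zpool list -o name,size data
NAME   SIZE
data  1.30T

# zfs list -o name,avail data
NAME  AVAIL
data   860G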
 
Actually, zpool list is invaluable when you want to see how much space is really in use when deduplication is turned on - df doesn't see through dedup at all, and simply grows the total disk size past its physical limits, or shrinks it back, from time to time :) But now I see that zpool list doesn't always fit the bill. Not with raidz, as in this case.
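For the dedup case you can also query the pool property directly; dedupratio is a standard read-only pool property (pool name taken from the earlier example):

Code:
# zpool get dedupratio data
NAME  PROPERTY    VALUE  SOURCE
data  dedupratio  1.00x  -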
 
I don't think the manpages make it obvious that a plain zpool create foo da0 da1 da2 will stripe writes across all 3 disks. At least it wasn't obvious to me :) But now I see that it is implied:

zpool(8)
A pool can have any number of virtual devices at the top of the
configuration (known as "root" vdevs). Data is dynamically distributed
across all top-level devices to balance data among devices. As new
virtual devices are added, ZFS automatically places data on the newly
available devices.
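In practice that also means a striped pool can be grown later by adding another top-level vdev (da4 here is a hypothetical new disk); new writes are spread across all vdevs, though existing data is not rebalanced:

Code:
# zpool add data da4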

Thanks to everyone!
 
Code:
     Example 3 Creating a ZFS Storage Pool by Using Partitions

       The following command creates an unmirrored pool using two GPT
       partitions.

         # zpool create tank da0p3 da1p3
 