ZFS: Expanding a zpool with another mirror

I don't know if this is even possible, excuse me for that. Anyway,
I have a pool made of two 2 TB disks in a mirror.

root@freedom:~ # zpool status tank
Code:
  pool: tank
 state: ONLINE
  scan: scrub repaired 0 in 2h27m with 0 errors on Fri Sep 18 12:23:03 2015
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            ada2p1  ONLINE       0     0     0
            ada3p1  ONLINE       0     0     0

errors: No known data errors

It's running low on space; not full yet, but it soon will be:
root@freedom:~ # df -h
Code:
Filesystem            Size    Used   Avail Capacity  Mounted on
tank                  1.8T    1.3T    499G    72%    /tank

I have a bad past (and bad luck) with RAID5 (under Linux at least), so I want to play it safe.
I want to add another mirror to the pool. Is something like that possible, and how?

Let's say I get another two disks, 2 x 4 TB, and make them a mirror.

Can I add that second mirror to the same pool, so that the pool named "tank" would be ~6 TB?
 
Partition the new disks so that you have (assuming the new disks are ada4 and ada5) ada4p1 and ada5p1 partitions. Make sure you do 4k alignment properly on the new disks since they are very likely 4k sector disks.

https://wiki.freebsd.org/RootOnZFS/GPTZFSBoot/9.0-RELEASE

(Ignore the root on ZFS part but pay attention to the note at item 7. about vfs.zfs.min_auto_ashift sysctl(8) on FreeBSD 10.1 and later)
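
For example, something along these lines (a rough sketch; ada4/ada5 and the disk4/disk5 labels are placeholders for your actual devices):

Code:
# make sure ZFS uses ashift=12 (4k sectors) for newly created vdevs (FreeBSD 10.1+)
sysctl vfs.zfs.min_auto_ashift=12

# GPT scheme plus one 4k-aligned freebsd-zfs partition per disk
gpart create -s gpt ada4
gpart add -t freebsd-zfs -a 4k -l disk4 ada4
gpart create -s gpt ada5
gpart add -t freebsd-zfs -a 4k -l disk5 ada5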

Then issue:

# zpool add tank mirror ada4p1 ada5p1

That will give you a RAID 1+0 style pool, a stripe of two mirror vdevs.

Note that there is no way to remove a vdev once it has been added. Make sure you get it right, or you'll have to recreate the pool from scratch from a backup if you mess something up.
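
To answer the ~6 TB question: each mirror vdev contributes roughly the capacity of one of its disks, so the pool would grow to about 2 TB + 4 TB = ~6 TB (a bit less in practice once you account for TB vs. TiB and metadata). After the add, the config section of zpool status tank should look roughly like this (a sketch, not actual output):

Code:
        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            ada2p1  ONLINE       0     0     0
            ada3p1  ONLINE       0     0     0
          mirror-1  ONLINE       0     0     0
            ada4p1  ONLINE       0     0     0
            ada5p1  ONLINE       0     0     0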
 
If you're using FreeBSD 10.1 or later, there is no need to partition disks for a ZFS pool that is not used for booting; you can just set the vfs.zfs.min_auto_ashift sysctl(8) and use the raw disks:

zpool add tank mirror ada4 ada5
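
If you want that setting to stick across reboots, put it in /etc/sysctl.conf as well (12 meaning 2^12 = 4096-byte sectors):

Code:
# takes effect immediately
sysctl vfs.zfs.min_auto_ashift=12

# and persist it for future boots
echo 'vfs.zfs.min_auto_ashift=12' >> /etc/sysctl.conf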
 
If you're using FreeBSD 10.1 or later, there is no need to partition disks for a ZFS pool that is not used for booting; you can just set the vfs.zfs.min_auto_ashift sysctl(8) and use the raw disks:

zpool add tank mirror ada4 ada5

Yeah, when I first created the tank pool I did exactly that, but I was afraid I had done something wrong, so I made partitions afterwards and recreated the pool. :)

That will give you a RAID 1+0 style pool, a stripe of two mirror vdevs.

Note that there is no way to remove a vdev once it has been added. Make sure you get it right, or you'll have to recreate the pool from scratch from a backup if you mess something up.

Is that the limit (just wondering, I've never done this before, don't laugh)? I mean, can I have two raidz vdevs of 3 disks each, for example, and that's its limit? Or can it grow more (for example three raidz vdevs in the same tank)? Or, in case I get a PCI SATA card for example, can I add multiple raidz/mirror vdevs to the same tank?

For example, having three raidz vdevs of 3 disks each in the future, is that doable?
(Maybe a stupid question, but I've never touched ZFS in my life before and I just started liking it.)
 
You can always add more VDEVs; ZFS is flexible in that regard. Once you've added the second mirror, you could expand the pool with yet another mirror, making it a stripe of three mirror VDEVs, without any problems. You can mix different types of VDEVs, but as far as I know it's not a recommended practice.
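
For example, adding a third mirror later is just another zpool add (ada6/ada7 being whatever your next pair of disks ends up named):

# zpool add tank mirror ada6 ada7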
 
Is that the limit (just wondering, I've never done this before, don't laugh)? I mean, can I have two raidz vdevs of 3 disks each, for example, and that's its limit? Or can it grow more (for example three raidz vdevs in the same tank)?
Or, in case I get a PCI SATA card for example, can I add multiple raidz/mirror vdevs to the same tank?

You can have as many vdevs in a pool as you have drive bays in the system, and you can add them at any time.

You can also mix vdev types in a pool (raidz1 + raidz2 + raidz3 + mirror, etc). However, that is NOT recommended. It's much better to keep the vdev types the same within a pool.
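
zpool(8) actually tries to protect you from this: adding, say, a raidz vdev to a pool that only contains mirrors is refused with a "mismatched replication level" error unless you force it with -f (ada6, ada7 and ada8 here are hypothetical new disks):

# zpool add -f tank raidz ada6 ada7 ada8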

Here's our largest storage box:
Code:
$ zpool status storage
  pool: storage
 state: ONLINE
  scan: scrub in progress since Sat Aug 15 16:04:30 2015
        35.1T scanned out of 149T at 11.0M/s, (scan is slow, no estimated time)
        0 repaired, 23.49% done
config:

    NAME             STATE     READ WRITE CKSUM
    storage          ONLINE       0     0     0
     raidz2-0       ONLINE       0     0     0
       gpt/disk-a1  ONLINE       0     0     0
       gpt/disk-a2  ONLINE       0     0     0
       gpt/disk-a3  ONLINE       0     0     0
       gpt/disk-a4  ONLINE       0     0     0
       gpt/disk-a5  ONLINE       0     0     0
       gpt/disk-a6  ONLINE       0     0     0
     raidz2-1       ONLINE       0     0     0
       gpt/disk-b1  ONLINE       0     0     0
       gpt/disk-b2  ONLINE       0     0     0
       gpt/disk-b3  ONLINE       0     0     0
       gpt/disk-b4  ONLINE       0     0     0
       gpt/disk-b5  ONLINE       0     0     0
       gpt/disk-b6  ONLINE       0     0     0
     raidz2-2       ONLINE       0     0     0
       gpt/disk-c1  ONLINE       0     0     0
       gpt/disk-c2  ONLINE       0     0     0
       gpt/disk-c3  ONLINE       0     0     0
       gpt/disk-c4  ONLINE       0     0     0
       gpt/disk-c5  ONLINE       0     0     0
       gpt/disk-c6  ONLINE       0     0     0
     raidz2-3       ONLINE       0     0     0
       gpt/disk-d1  ONLINE       0     0     0
       gpt/disk-d2  ONLINE       0     0     0
       gpt/disk-d3  ONLINE       0     0     0
       gpt/disk-d4  ONLINE       0     0     0
       gpt/disk-d5  ONLINE       0     0     0
       gpt/disk-d6  ONLINE       0     0     0
     raidz2-4       ONLINE       0     0     0
       gpt/disk-e1  ONLINE       0     0     0
       gpt/disk-e2  ONLINE       0     0     0
       gpt/disk-e3  ONLINE       0     0     0
       gpt/disk-e4  ONLINE       0     0     0
       gpt/disk-e5  ONLINE       0     0     0
       gpt/disk-e6  ONLINE       0     0     0
     raidz2-5       ONLINE       0     0     0
       gpt/disk-f1  ONLINE       0     0     0
       gpt/disk-f2  ONLINE       0     0     0
       gpt/disk-f3  ONLINE       0     0     0
       gpt/disk-f4  ONLINE       0     0     0
       gpt/disk-f5  ONLINE       0     0     0
       gpt/disk-f6  ONLINE       0     0     0
     raidz2-6       ONLINE       0     0     0
       gpt/disk-g1  ONLINE       0     0     0
       gpt/disk-g2  ONLINE       0     0     0
       gpt/disk-g3  ONLINE       0     0     0
       gpt/disk-g4  ONLINE       0     0     0
       gpt/disk-g5  ONLINE       0     0     0
       gpt/disk-g6  ONLINE       0     0     0
     raidz2-8       ONLINE       0     0     0
       gpt/disk-i1  ONLINE       0     0     0
       gpt/disk-i2  ONLINE       0     0     0
       gpt/disk-i3  ONLINE       0     0     0
       gpt/disk-i4  ONLINE       0     0     0
       gpt/disk-i5  ONLINE       0     0     0
       gpt/disk-i6  ONLINE       0     0     0
     raidz2-9       ONLINE       0     0     0
       gpt/disk-j1  ONLINE       0     0     0
       gpt/disk-j2  ONLINE       0     0     0
       gpt/disk-j3  ONLINE       0     0     0
       gpt/disk-j4  ONLINE       0     0     0
       gpt/disk-j5  ONLINE       0     0     0
       gpt/disk-j6  ONLINE       0     0     0
     raidz2-10      ONLINE       0     0     0
       gpt/disk-k1  ONLINE       0     0     0
       gpt/disk-k2  ONLINE       0     0     0
       gpt/disk-k3  ONLINE       0     0     0
       gpt/disk-k4  ONLINE       0     0     0
       gpt/disk-k5  ONLINE       0     0     0
       gpt/disk-k6  ONLINE       0     0     0
     raidz2-11      ONLINE       0     0     0
       gpt/disk-l1  ONLINE       0     0     0
       gpt/disk-l2  ONLINE       0     0     0
       gpt/disk-l3  ONLINE       0     0     0
       gpt/disk-l4  ONLINE       0     0     0
       gpt/disk-l5  ONLINE       0     0     0
       gpt/disk-l6  ONLINE       0     0     0
     raidz2-12      ONLINE       0     0     0
       gpt/disk-m1  ONLINE       0     0     0
       gpt/disk-m2  ONLINE       0     0     0
       gpt/disk-m3  ONLINE       0     0     0
       gpt/disk-m4  ONLINE       0     0     0
       gpt/disk-m5  ONLINE       0     0     0
       gpt/disk-m6  ONLINE       0     0     0
     raidz2-13      ONLINE       0     0     0
       gpt/disk-n1  ONLINE       0     0     0
       gpt/disk-n2  ONLINE       0     0     0
       gpt/disk-n3  ONLINE       0     0     0
       gpt/disk-n4  ONLINE       0     0     0
       gpt/disk-n5  ONLINE       0     0     0
       gpt/disk-n6  ONLINE       0     0     0
     raidz2-14      ONLINE       0     0     0
       gpt/disk-o1  ONLINE       0     0     0
       gpt/disk-o2  ONLINE       0     0     0
       gpt/disk-o3  ONLINE       0     0     0
       gpt/disk-o4  ONLINE       0     0     0
       gpt/disk-o5  ONLINE       0     0     0
       gpt/disk-o6  ONLINE       0     0     0
     raidz2-15      ONLINE       0     0     0
       gpt/disk-p4  ONLINE       0     0     0
       gpt/disk-p5  ONLINE       0     0     0
       gpt/disk-p6  ONLINE       0     0     0
       gpt/disk-h4  ONLINE       0     0     0
       gpt/disk-h5  ONLINE       0     0     0
       gpt/disk-h6  ONLINE       0     0     0
    logs
     mirror-7       ONLINE       0     0     0
       gpt/log0     ONLINE       0     0     0
       gpt/log2     ONLINE       0     0     0
    cache
     gpt/cache1     ONLINE       0     0     0
     gpt/cache3     ONLINE       0     0     0

errors: No known data errors

That's 90 SATA disks, split into multiple 6-disk raidz2 vdevs, with some SSDs thrown in for LOG/L2ARC devices. :)

And the box is set up so that it can handle another 90 disks directly (4 drive pods directly connected to SATA controllers), and another 180 disks if we daisy-chain the drive pods. :)
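
For completeness, log and cache devices like the ones above are added with the same zpool add mechanism, just using the log and cache keywords (the gpt/log* and gpt/cache* labels are from our layout; yours would differ):

Code:
# zpool add storage log mirror gpt/log0 gpt/log2
# zpool add storage cache gpt/cache1 gpt/cache3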
 