How do I expand a ZFS pool?

So here's my spec:

Code:
$ gpart show 
=>        40  3907029088  ada1  GPT  (1.8T)
          40        1024     1  freebsd-boot  (512K)
        1064         984        - free -  (492K)
        2048     4194304     2  freebsd-swap  (2.0G)
     4196352  3902832640     3  freebsd-zfs  (1.8T)
  3907028992         136        - free -  (68K)

=>        40  3907029088  ada3  GPT  (1.8T)
          40        1024     1  freebsd-boot  (512K)
        1064         984        - free -  (492K)
        2048     4194304     2  freebsd-swap  (2.0G)
     4196352  3902832640     3  freebsd-zfs  (1.8T)
  3907028992         136        - free -  (68K)

=>        40  3907029088  ada2  GPT  (1.8T)
          40        1024     1  freebsd-boot  (512K)
        1064         984        - free -  (492K)
        2048     4194304     2  freebsd-swap  (2.0G)
     4196352  3902832640     3  freebsd-zfs  (1.8T)
  3907028992         136        - free -  (68K)

=>       40  160836400  ada0  GPT  (77G)
         40       1024     1  freebsd-boot  (512K)
       1064        984        - free -  (492K)
       2048    4194304     2  freebsd-swap  (2.0G)
    4196352  156639232     3  freebsd-zfs  (75G)
  160835584        856        - free -  (428K)
And here's my pool status:
Code:
$ zpool list
NAME    SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
zroot   298G  3.86G   294G        -         -     0%     1%  1.00x    ONLINE  -

$ zpool status
  pool: zroot
 state: ONLINE
config:

    NAME        STATE     READ WRITE CKSUM
    zroot       ONLINE       0     0     0
     raidz1-0  ONLINE       0     0     0
       ada0p3  ONLINE       0     0     0
       ada1p3  ONLINE       0     0     0
       ada2p3  ONLINE       0     0     0
       ada3p3  ONLINE       0     0     0

errors: No known data errors
$ zpool get autoexpand zroot
NAME   PROPERTY    VALUE   SOURCE
zroot  autoexpand  on      local

I expect to have at least 1.8 T of space on my machine given that I want to use RAID5, but it only provides ~80G per drive, which is the space I have on my smallest drive. Could you please tell me how I'm supposed to fix that?

Thank you.
 
Working as intended: ZFS can only guarantee the safety of your data by basing the raidz capacity on its smallest member. Your two options here are to replace that 80GB disk with a larger one, or to remake your pool without the 80GB drive and use only the 2 TB drives.
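To put numbers on it, here's a back-of-the-envelope sketch from your gpart output:

Code:
# raidz1 sizes every member to the smallest one:
#   smallest freebsd-zfs partition (ada0p3): 156639232 sectors * 512 B ~= 75G
#   pool SIZE = 4 members * ~75G ~= 300G   <- matches your zpool list (298G)
#   usable    = (4 - 1) * ~75G  ~= 225G    <- one member's worth goes to parity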
 
Replacing the 80G would be my option. Since ZFS knows about "only" 80G, the resilver process should be relatively quick and you won't lose any data.
Remaking the pool without the 80G device means you need to have solid backups (which one should have anyway).
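One note on the replace route: you already have autoexpand=on, so once the resilver onto a bigger disk finishes, the pool should grow by itself. If it doesn't, something like this should nudge it (a sketch; ada4p3 stands in for whatever the replacement partition ends up being called):

Code:
$ zpool online -e zroot ada4p3   # -e expands the device to use all available space
$ zpool list zroot               # SIZE should now reflect the larger members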
 
Thanks, I gave it a try by offlining the device I should remove, and here's what I have:
Code:
  pool: zroot
 state: DEGRADED
status: One or more devices are faulted in response to persistent errors.
    Sufficient replicas exist for the pool to continue functioning in a
    degraded state.
action: Replace the faulted device, or use 'zpool clear' to mark the device
    repaired.
config:

    NAME        STATE     READ WRITE CKSUM
    zroot       DEGRADED     0     0     0
     raidz1-0  DEGRADED     0     0     0
       ada0p3  FAULTED      0     0     0  external device fault
       ada1p3  ONLINE       0     0     0
       ada2p3  ONLINE       0     0     0
       ada3p3  ONLINE       0     0     0

errors: No known data errors

Can I just remove the hard drive with some command, or do I have to physically remove it? Or, even worse, do I have to reinstall FreeBSD?
 
My understanding of raidz1 is that since you initially created it with 4 drives, you cannot remove a drive (the 80G) and still survive losing another one.
The ideal solution would be to have a good backup, and then just zfs send | zfs recv into a mirror of more than 80G. Then you should be able to recreate the raidz1 with the three 1.8 TB drives.
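For the send/recv route, a minimal sketch (assuming a scratch pool named backup with enough room; the snapshot name is arbitrary):

Code:
$ zfs snapshot -r zroot@migrate                          # recursive snapshot of the whole pool
$ zfs send -R zroot@migrate | zfs recv -F backup/zroot   # replicate datasets and properties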
 
You created the zpool with 4 devices, so you need to have 4 devices in the pool. So yes, you need to physically replace the 80G device with another 2T device partitioned the same way; if you lose another device while the pool is degraded, I believe you lose all your data.

If you have the physical space and a new drive, I would probably do a zpool clear to bring the 80G back in, let it resilver, shut down and power off the box, physically add the new drive, power back up into single user mode, partition the new disk (let's call it ada4), and then you should be able to do something like

Code:
$ zpool replace zroot ada0p3 ada4p3

to replace the old 80G partition with the new 1.8T one. Let the zpool finish resilvering, then shut down and physically remove the 80G device.
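Roughly, the whole sequence could look like this (a sketch, untested on your exact setup; it assumes the new disk shows up as ada4 and clones the GPT layout from ada1):

Code:
$ zpool clear zroot ada0p3                     # un-fault the 80G, let it resilver
# ...shutdown, install the new disk, boot to single user...
$ gpart backup ada1 | gpart restore -F ada4    # copy ada1's partition table to ada4
$ gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada4   # write boot code
$ zpool replace zroot ada0p3 ada4p3            # start the actual replacement
$ zpool status zroot                           # watch until the resilver completes
# ...then shutdown and pull the 80G drive...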
 