ZFS: When, if ever, does a functional drive removed from a RAIDZ2 array go stale?

Scenario:

There are five vdevs that are SATA hard disks. The array is fully operational and online, and the file systems are mounted, but there is little activity besides the typical programs expected on a FreeBSD machine.

I remove one disk. Is there ever a point where the disk becomes so out of sync that it cannot be reintroduced back into the array?
 
Is there ever a point where the disk becomes so out of sync that it cannot be reintroduced back into the array?
It's out of sync the very second you remove it from the pool. And no, it never gets so far out of sync that it cannot be added again. You can completely wipe the drive and re-add it. The sync is going to happen regardless of whether it's been 10 seconds or 10 years; it will obviously finish quicker when it's only been 10 seconds.
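As a sketch of what this looks like in practice (illustrative only — the pool name "tank" and the device name ada8 are assumptions, and these commands need a real pool, so don't run them verbatim):

```shell
# Take the disk offline, simulating its removal from the array:
zpool offline tank ada8

# ...time passes; writes accumulate on the remaining disks...

# Reintroduce the same disk; ZFS resilvers only the transactions
# the disk missed while it was gone:
zpool online tank ada8

# If the disk was wiped in the meantime, replace it in place
# instead, which forces a full resilver:
zpool replace tank ada8

# Either way, watch resilver progress with:
zpool status tank
```

The difference between the two paths is exactly the "delta" mentioned below: `zpool online` can resume from what ZFS knows the disk missed, while a wiped disk has to be rebuilt from scratch.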
 
As SirDice pointed out, you can always re-add a drive to a pool - the delta just gets bigger, and eventually the resilver will be essentially a full rebuild of the data on the drive. Depending on how large the providers are, there might still be a small benefit from starting the resilver at some old point in time rather than doing a full resilver, which is horribly slow for RAIDZ...

But what I'm more curious about: why does a system with "little activity" need a pool of 5 RAIDZ2 vdevs? Considering the 'rule of 2' for RAIDZ, and hence a minimum vdev size of 4 drives (which would be completely pointless for RAIDZ2, as mirrors would be vastly more efficient), that would be a minimum of 20 drives for a mostly idling system...
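To illustrate the mirrors-vs-RAIDZ2 point (a sketch only — the pool name "tank" and device names da0..da3 are made up; both layouts give the same usable capacity with four drives):

```shell
# Four drives as a single RAIDZ2 vdev: two drives' worth of usable
# space, any two drives can fail, but resilvers are slow:
zpool create tank raidz2 da0 da1 da2 da3

# The same four drives as two mirror vdevs: also two drives' worth
# of usable space, much faster resilvers and better IOPS, but only
# one failure is survivable per mirror:
zpool create tank mirror da0 da1 mirror da2 da3
```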
 
As SirDice pointed out, you can always re-add a drive to a pool - the delta just gets bigger, and eventually the resilver will be essentially a full rebuild of the data on the drive. Depending on how large the providers are, there might still be a small benefit from starting the resilver at some old point in time rather than doing a full resilver, which is horribly slow for RAIDZ...

But what I'm more curious about: why does a system with "little activity" need a pool of 5 RAIDZ2 vdevs? Considering the 'rule of 2' for RAIDZ, and hence a minimum vdev size of 4 drives (which would be completely pointless for RAIDZ2, as mirrors would be vastly more efficient), that would be a minimum of 20 drives for a mostly idling system...
When I first set up my system, I didn't plan out how to build the array to maximize utilization or redundancy. All I understood was "RAIDZ can take a loss of 1 disk, and RAIDZ2 can take a loss of two disks."

I describe the system as under a light load with "little activity" as I'm the only one using it. Its primary role is storage, though it runs a couple of bhyve VMs for fun and experimentation, and presents that storage through both Samba and a Nextcloud instance on the machine.

I only have five drives in use, and they're all in a RAIDZ2 array. I suppose this stems from my lack of understanding of what a vdev really is.

Here's the pool config, for example:

Code:
        NAME                                   STATE     READ WRITE CKSUM
        pool                                   ONLINE       0     0     0
          raidz2-0                             ONLINE       0     0     0
            gpt/RAIDZ2_DSK4_20230902[DRIVESN]  ONLINE       0     0     0
            gpt/RAIDZ2_DSK3_20230902[DRIVESN]  ONLINE       0     0     0
            gpt/RAIDZ2_DSK2_20230902[DRIVESN]  ONLINE       0     0     0
            gpt/RAIDZ2_DSK1_20230902[DRIVESN]  ONLINE       0     0     0
            ada8                               ONLINE       0     0     0

I've redacted the serial numbers by replacing them with [DRIVESN], but that's the current state of the pool.
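For what it's worth, a pool like the one shown would have been created with a single command along these lines (a sketch, not the actual command used — the `[DRIVESN]` placeholders stand in for the redacted label suffixes):

```shell
# One pool containing one RAIDZ2 vdev -- the "raidz2-0" line in the
# status output is that single vdev, and the five providers listed
# under it are its members, not vdevs themselves:
zpool create pool raidz2 \
    gpt/RAIDZ2_DSK4_20230902[DRIVESN] gpt/RAIDZ2_DSK3_20230902[DRIVESN] \
    gpt/RAIDZ2_DSK2_20230902[DRIVESN] gpt/RAIDZ2_DSK1_20230902[DRIVESN] \
    ada8
```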
 