Long story short, this is what I got from a v14.0 machine (maybe v14.1, I'm not sure):
The disks are all 960GB Kingston enterprise SATA SSDs and are supposed to be resilient to faults. But life is strange and things happen.
How should I go about replacing the disk? What is the safest procedure?
Code:
ZFS Pool Status:
  pool: tank
 state: DEGRADED
status: One or more devices could not be used because the label is missing or
        invalid. Sufficient replicas exist for the pool to continue
        functioning in a degraded state.
action: Replace the device using 'zpool replace'.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-4J
config:

        NAME                      STATE     READ WRITE CKSUM
        tank                      DEGRADED     0     0     0
          raidz3-0                DEGRADED     0     0     0
            ada0                  ONLINE       0     0     0
            ada1                  ONLINE       0     0     0
            ada2                  ONLINE       0     0     0
            ada3                  ONLINE       0     0     0
            15046335961317935290  FAULTED      0     0     0  was /dev/ada4
            ada4                  ONLINE       0     0     0
            ada5                  ONLINE       0     0     0

errors: No known data errors
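From what I've gathered so far, the generic flow suggested by the `action:` line would look something like the sketch below. I'm referring to the faulted member by its GUID since its device path is gone; the new device node (`ada4`) is only my guess from the "was /dev/ada4" line, so please correct me if any of this is off:

```shell
# Sketch of what I *think* the replacement procedure is.
# 1. Offline the faulted member, addressed by its GUID since the
#    original /dev/ada4 path no longer points at it:
zpool offline tank 15046335961317935290

# 2. Physically swap the failed disk (power down first unless the
#    backplane supports hot-swap).

# 3. Resilver onto the new disk. The target device node here (ada4)
#    is an assumption -- confirm the new disk's name first, e.g. with
#    `camcontrol devlist` on FreeBSD:
zpool replace tank 15046335961317935290 ada4

# 4. Watch the resilver until it completes:
zpool status -v tank
```

Is that roughly the safe order of operations, or am I missing a step?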