Hello.
I have a home server running FreeBSD since forever. After I upgraded to FreeBSD 10.1 (by reinstalling everything on mostly new hardware) 1.5 years ago and reshuffled my storage volumes, I got an unrecoverable error in one file (I had storage pool running without redundancy for a while by then). I removed the file (it wasn't valuable for me) and moved to a new raidz1 setup on 3 disks. Half a year later I upgraded to 10.2 and didn't upgrade any further yet.
Recently I had a power glitch (I think) and ZFS began complaining that the drive that used to have that deleted file is FAULTED and the file suddenly became available again in '--head--' snapshot. I've deleted the snapshot, scrubbed all data and now everything is fine except the fact that this drive is in FAULTED state. I know that ZFS is trying to warn me about possible future damage this drive can make to my files, but I'm sure it was my fault that I didn't do everything right during migration to new system back in 2015. So I want to clear FAULTED state from the disk and see my zpool ONLINE, not DEGRADED.
I tried
zpool clear POOL DRIVE
which zeroed the READ and WRITE columns in zpool status, but the disk is still in the FAULTED state. What can I do to trick ZFS into believing that this disk is OK?
Relevant zpool status output:
Code:
  pool: stuff
 state: DEGRADED
status: One or more devices are faulted in response to persistent errors.
        Sufficient replicas exist for the pool to continue functioning in a
        degraded state.
action: Replace the faulted device, or use 'zpool clear' to mark the device
        repaired.
  scan: scrub repaired 0 in 1h27m with 0 errors on Fri Aug 19 13:44:22 2016
config:

        NAME                                            STATE     READ WRITE CKSUM
        stuff                                           DEGRADED     0     0     0
          raidz1-0                                      DEGRADED     0     0     0
            gptid/54e55c16-5275-11e5-bf1a-10c37b9dc3be  ONLINE       0     0     0
            ada2p3                                      FAULTED      0     0     0  too many errors
            ada1p8                                      ONLINE       0     0     0
        logs
          gptid/92809934-5276-11e5-bf1a-10c37b9dc3be    ONLINE       0     0     0
          ada2p2                                        FAULTED      0     0     0  too many errors
          ada1p7                                        ONLINE       0     0     0