ZFS zpool SUSPENDED. Can't zpool clear or zpool destroy

I have a 3TB external hard drive mounted as ZFS. The drive's power accidentally got unplugged, and now it says:
zpool: pool I/O is currently suspended

I tried
zpool clear -nF external
and
zpool clear external

Either one just sits there blankly for hours.

The data was not irreplaceable, so I gave up and just tried
zpool destroy -f external

but nothing happens.

Rebooting does nothing as well.

The drive has some jails and data on it managed with iocage. I tried turning off iocage and rebooting, but I still can't do anything with the drive.

Ideas?

 
To help people understand what happened, please post your FreeBSD version and the output of zpool status.

Did you mean "unplugged" instead of "plugged"?
 
Sorry guys, I meant it is an external drive formatted with ZFS and it got unplugged by mistake.

I went to SSH into the machine and it was inaccessible.
I let zpool clear -nF external run for 24 hours just to see if it would do anything, and it seemed to have crashed the machine.

I had to manually restart with the power button, and when it came back online the drive was restored. I don't really understand what happened.

Code:
root@stitch:~ # zpool status
  pool: external
 state: ONLINE
config:

        NAME        STATE     READ WRITE CKSUM
        external    ONLINE       0     0     0
          da0       ONLINE       0     0     0

errors: No known data errors

Is there a log or zfs command I can run to see exactly what the result of the zpool status was?
 
zpool history -i external will show some detail that might interest you, although it's nothing like a history of status.

… please post your FreeBSD version …

incorporeal please do.

(From your ultimately having to force the computer off, I guess it's not 13⋯ or greater.)

… external drive formatted in ZFS and it got unplugged by mistake. …

Easily done.

… cat … occasionally tread on the massive power button of the dock … zpool status consequently reports catastrophic failure of the pool that's on a mobile hard disk drive connected via USB. Numerous reported catastrophes but never a truly permanent error. Thank you, OpenZFS. …

A little more detail.

Where (in your current case) you now have No known data errors, in a future case you might find permanent errors.

Depending on how the pool is used at the time of disconnection of its single disk, the errors might be in (a) metadata and/or (b) more alarmingly, data. Either way, a successful scrub will probably lead to no known data errors – in other words, the errors were not truly permanent.

Hint: avoid writing to the pool until after completion of the scrub.
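Putting that hint into a scriptable shape — this is just my own sketch (the `wait_for_scrub` helper name and the 60-second polling interval are illustrative, not standard tools), built around the stock `zpool status` output:

```shell
#!/bin/sh
# wait_for_scrub POOL -- poll `zpool status` until it no longer
# reports a scrub in progress, then return. The helper name and
# polling interval are my own invention for illustration.
wait_for_scrub() {
    pool="$1"
    while zpool status "$pool" | grep -q 'scrub in progress'; do
        sleep 60
    done
}

# Intended use after reconnecting the disk:
#   zpool clear external      # clear the fault
#   zpool scrub external      # verify/repair checksummed data
#   wait_for_scrub external   # hold off writes until this returns
```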

… I went to SSH into the machine and it was inaccessible.
I let the zpool clear -nF external run for 24 hours just to see if it would do anything and it seemed like it crashed the machine. …

Some systems cope better than others with accidental disconnections of single-disk pools. (USB in your case? I guess so.) Reconnecting the disk will not improve things and in this situation, it's likely that any zpool ⋯ command will fail – the apparently endless run is a symptom, not a cause.
 
… (From your ultimately forcing off the computer I guess, it's not 13⋯ or greater.) …

… 13.0-RELEASE-p2 …

Thanks. FreeBSD 14.0-CURRENT here, I'm vaguely aware of things being more tolerant with 'recent' FreeBSD, but I can't say exactly when it started.

That said, I probably did have to force off the computer, just once, a few months ago, following a ZFS-related incident.
 
I experienced a similar situation a few times in the past (as this is a USB drive) and a reboot seemed to be necessary, but I'm wondering if there is a way to unsuspend the device without rebooting.
# zpool status -v hdbackup
  pool: hdbackup
 state: SUSPENDED
status: One or more devices are faulted in response to IO failures.
action: Make sure the affected devices are connected, then run 'zpool clear'.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-HC
  scan: scrub repaired 0B in 04:56:48 with 0 errors on Tue Nov 28 23:02:33 2023
config:

        NAME              STATE     READ WRITE CKSUM
        hdbackup          UNAVAIL      0     0     0  insufficient replicas
          gpt/hdbackup    REMOVED      0     0     0

errors: List of errors unavailable: pool I/O is currently suspended
# zpool history -i hdbackup
History for 'hdbackup':
cannot get history for 'hdbackup': pool I/O is currently suspended
Exit 1
The device is visible again:
# ls -al /dev/gpt/hdbackup
crw-r----- 1 root operator 0xf6 Dec 23 16:43 /dev/gpt/hdbackup
Edit: zpool clear hangs in the same way as for the OP, even though the drive is back. After searching more, it looks like this might be related: https://github.com/openzfs/zfs/issues/5242
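One way to avoid the indefinite hang while experimenting (purely my own sketch; `clear_when_present` is not a zpool feature, and per the issue above a reboot may still end up being required) is to only attempt the clear once the device node has actually reappeared:

```shell
#!/bin/sh
# clear_when_present POOL DEVNODE -- attempt `zpool clear` only when
# the backing device node exists again; otherwise report and bail
# out instead of hanging. The helper name is illustrative only.
clear_when_present() {
    pool="$1"
    dev="$2"
    if [ -e "$dev" ]; then
        zpool clear "$pool"
    else
        echo "$dev not present yet" >&2
        return 1
    fi
}

# Intended use:
#   clear_when_present hdbackup /dev/gpt/hdbackup
```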
 