ZFS: how to recover a pool that can no longer be mounted

I unfortunately dropped a piece of metal in the wrong spot on my motherboard, which caused the computer to shut down instantly.

Upon restarting the machine (TrueNAS Core 14), the kernel would panic when it came to mounting my pool (12 disks, 2 raidz2 vdevs).

Booting the latest FreeBSD 15 or Linux: no panic, but an I/O error when running `zpool import`.

I tried `zpool import -F` to roll back a little; same error.

I then tried `zpool import -T` with a txg I had found that matched the time at which the server turned off. After 4 days of intense disk activity, I got the same dreadful error:

Code:
# zpool import -N -o readonly=on -f -R /mnt -F -T 43683300 pool
cannot import 'pool': one or more devices is currently unavailable
Now, I can see all my datasets in there using zdb. So something is there.

So, back on the FreeBSD 15 live disk:

Code:
root@:~ # zdb -ed pool
Dataset mos [META], ID 0, cr_txg 4, 1.26G, 1139 objects
Dataset pool/home/angela [ZPL], ID 108, cr_txg 177, 264K, 14 objects
Dataset pool/home/davenard [ZPL], ID 128890, cr_txg 14309410, 296K, 20 objects
Dataset pool/home/jyavenard [ZPL], ID 102, cr_txg 164, 272G, 417952 objects
Dataset pool/home [ZPL], ID 96, cr_txg 132, 272K, 19 objects
Dataset pool/.system/syslog-a5d713b37bdf437fb541f59b157cd837 [ZPL], ID 3716, cr_txg 28193766, 192K, 7 objects
Dataset pool/.system/cores [ZPL], ID 3462, cr_txg 28193761, 145M, 8 objects
Dataset pool/.system/webui [ZPL], ID 3588, cr_txg 28193772, 192K, 7 objects
Dataset pool/.system/samba4@update--2025-01-12-04-41--13.0-U6.2 [ZPL], ID 55293, cr_txg 37082320, 871K, 110 objects
Dataset pool/.system/samba4@update--2024-01-22-11-22--13.0-U5.3 [ZPL], ID 105724, cr_txg 30991476, 919K, 182 objects
Dataset pool/.system/samba4@update--2025-09-01-01-21--13.0-U6.7 [ZPL], ID 385, cr_txg 41060008, 887K, 98 objects
Dataset pool/.system/samba4@update--2024-07-07-23-59--13.0-U6.1 [ZPL], ID 66894, cr_txg 33864198, 887K, 151 objects
Dataset pool/.system/samba4@update--2025-03-16-11-12--13.0-U6.4 [ZPL], ID 10681, cr_txg 38168608, 935K, 104 objects
Dataset pool/.system/samba4 [ZPL], ID 392, cr_txg 28193764, 983K, 94 objects
Dataset pool/.system/services [ZPL], ID 3205, cr_txg 28193774, 192K, 7 objects
Dataset pool/.system/configs-a5d713b37bdf437fb541f59b157cd837 [ZPL], ID 2822, cr_txg 28193770, 315M, 2511 objects
Dataset pool/.system/rrd-a5d713b37bdf437fb541f59b157cd837 [ZPL], ID 2437, cr_txg 28193768, 132M, 2085 objects
Dataset pool/.system [ZPL], ID 657, cr_txg 28193759, 14.9M, 53 objects
Dataset pool/data/web [ZPL], ID 8386, cr_txg 27924358, 253G, 130139 objects
Dataset pool/data/music [ZPL], ID 126, cr_txg 393, 176K, 7 objects
Dataset pool/data/photos [ZPL], ID 162, cr_txg 456, 176K, 7 objects
Dataset pool/data/images [ZPL], ID 120, cr_txg 384, 390M, 1560 objects
Dataset pool/data/videos/movies [ZPL], ID 144, cr_txg 422, 103G, 537 objects
Dataset pool/data/videos/trailers [ZPL], ID 156, cr_txg 445, 176K, 7 objects
Dataset pool/data/videos/TV [ZPL], ID 138, cr_txg 413, 85.5G, 437 objects
Dataset pool/data/videos/recordings [ZPL], ID 150, cr_txg 433, 6.66T, 10635 objects
Dataset pool/data/videos [ZPL], ID 132, cr_txg 404, 264K, 16 objects
Dataset pool/data [ZPL], ID 114, cr_txg 376, 256K, 12 objects
Dataset pool/downloads [ZPL], ID 180, cr_txg 520, 91.5G, 3246 objects
Dataset pool/backup/www.avenard.org [ZPL], ID 629, cr_txg 2558121, 262G, 496406 objects
Dataset pool/backup/DominiquesiPro [ZPL], ID 128837, cr_txg 14309316, 192K, 7 objects
Dataset pool/backup/jya7980xe [ZPL], ID 697, cr_txg 5820092, 2.97T, 73579 objects
Dataset pool/backup/macbookair13/backup [ZPL], ID 1208, cr_txg 24973734, 125G, 3963 objects
Dataset pool/backup/macbookair13/jyavenard [ZPL], ID 86854, cr_txg 43590857, 192K, 7 objects
Dataset pool/backup/macbookair13 [ZPL], ID 790, cr_txg 4369423, 328K, 17 objects
Dataset pool/backup/hass [ZPL], ID 847, cr_txg 8743534, 189G, 5100 objects
Dataset pool/backup/lenovo13 [ZPL], ID 90349, cr_txg 20217124, 10.6G, 117550 objects
Dataset pool/backup/mediaserver [ZPL], ID 174, cr_txg 496, 57.5G, 1215429 objects
Dataset pool/backup/mythtv [ZPL], ID 186, cr_txg 532, 1.38G, 33 objects
Dataset pool/backup/macbookpro15 [ZPL], ID 1099, cr_txg 11690440, 881G, 80362 objects
Dataset pool/backup/mba13m2/backup [ZPL], ID 2831, cr_txg 30992041, 200K, 8 objects
Dataset pool/backup/mba13m2 [ZPL], ID 108692, cr_txg 30991189, 200K, 9 objects
Dataset pool/backup/mbp14m1/backup [ZPL], ID 108948, cr_txg 43611379, 32.3G, 37 objects
Dataset pool/backup/mbp14m1 [ZPL], ID 109455, cr_txg 43611231, 192K, 8 objects
Dataset pool/backup [ZPL], ID 168, cr_txg 488, 240K, 18 objects
Dataset pool/guest [ZPL], ID 91261, cr_txg 28039850, 979M, 9 objects
Dataset pool/vms/jira-w2jhhc_jira_clone0 [ZVOL], ID 774, cr_txg 2732864, 2.94G, 2 objects
Dataset pool/vms/hass-radar@clone_radar [ZVOL], ID 28845, cr_txg 28123669, 13.8G, 2 objects
Dataset pool/vms/hass-radar [ZVOL], ID 31234, cr_txg 28123615, 7.42G, 2 objects
Dataset pool/vms/hass-xgsk1@hass-2023-08-06_23-26 [ZVOL], ID 20705, cr_txg 28116748, 12.7G, 2 objects
Dataset pool/vms/hass-xgsk1@clone_radar [ZVOL], ID 28487, cr_txg 28123606, 13.8G, 2 objects
Dataset pool/vms/hass-xgsk1 [ZVOL], ID 52894, cr_txg 27893757, 7.01G, 2 objects
Dataset pool/vms/ubuntu-n8n5qq [ZVOL], ID 1086, cr_txg 27910227, 112K, 2 objects
Dataset pool/vms/mediaserver-evrl33@mediaserverl-2023-08-06_23-27 [ZVOL], ID 23681, cr_txg 28116752, 184G, 2 objects
Dataset pool/vms/mediaserver-evrl33 [ZVOL], ID 2323, cr_txg 28088242, 129G, 2 objects
Dataset pool/vms/jira-w2jhhc@jira_clone0 [ZVOL], ID 769, cr_txg 2732863, 2.02G, 2 objects
Dataset pool/vms/jira-w2jhhc [ZVOL], ID 766, cr_txg 2730812, 275G, 2 objects
Dataset pool/vms [ZPL], ID 728, cr_txg 2720017, 208K, 13 objects
Dataset pool/jails [ZPL], ID 584, cr_txg 318366, 184K, 10 objects
Dataset pool/iocage/download/13.2-RELEASE [ZPL], ID 8573, cr_txg 27924283, 256M, 10 objects
Dataset pool/iocage/download/11.2-RELEASE [ZPL], ID 590, cr_txg 322397, 272M, 12 objects
Dataset pool/iocage/download/11.3-RELEASE [ZPL], ID 700, cr_txg 8477876, 289M, 12 objects
Dataset pool/iocage/download/13.1-RELEASE [ZPL], ID 9278, cr_txg 27924952, 251M, 10 objects
Dataset pool/iocage/download/12.1-RELEASE [ZPL], ID 985, cr_txg 12596611, 371M, 11 objects
Dataset pool/iocage/download [ZPL], ID 541, cr_txg 258516, 192K, 12 objects
Dataset pool/iocage/releases/12.1-RELEASE/root [ZPL], ID 1028, cr_txg 12596623, 1.95G, 103912 objects
Dataset pool/iocage/releases/12.1-RELEASE [ZPL], ID 1019, cr_txg 12596622, 192K, 8 objects
Dataset pool/iocage/releases/13.1-RELEASE/root [ZPL], ID 9287, cr_txg 27924958, 892M, 17195 objects
Dataset pool/iocage/releases/13.1-RELEASE [ZPL], ID 9352, cr_txg 27924957, 192K, 8 objects
Dataset pool/iocage/releases/11.3-RELEASE/root [ZPL], ID 761, cr_txg 8477899, 1.51G, 98901 objects
Dataset pool/iocage/releases/11.3-RELEASE [ZPL], ID 755, cr_txg 8477898, 176K, 8 objects
Dataset pool/iocage/releases/11.2-RELEASE/root@transmission [ZPL], ID 605, cr_txg 322723, 1.43G, 95620 objects
Dataset pool/iocage/releases/11.2-RELEASE/root@shell [ZPL], ID 606, cr_txg 2573567, 1.43G, 95620 objects
Dataset pool/iocage/releases/11.2-RELEASE/root@sendmail [ZPL], ID 703, cr_txg 2575463, 1.43G, 95620 objects
Dataset pool/iocage/releases/11.2-RELEASE/root@couchpotato [ZPL], ID 646, cr_txg 475106, 1.43G, 95620 objects
Dataset pool/iocage/releases/11.2-RELEASE/root@teslamate [ZPL], ID 731, cr_txg 3937568, 1.43G, 95620 objects
Dataset pool/iocage/releases/11.2-RELEASE/root [ZPL], ID 602, cr_txg 322588, 1.50G, 97717 objects
Dataset pool/iocage/releases/11.2-RELEASE [ZPL], ID 596, cr_txg 322587, 176K, 8 objects
Dataset pool/iocage/releases/13.2-RELEASE/root@web [ZPL], ID 852, cr_txg 27943081, 777M, 17100 objects
Dataset pool/iocage/releases/13.2-RELEASE/root [ZPL], ID 8663, cr_txg 27924289, 777M, 17100 objects
Dataset pool/iocage/releases/13.2-RELEASE [ZPL], ID 8656, cr_txg 27924288, 192K, 8 objects
Dataset pool/iocage/releases [ZPL], ID 565, cr_txg 258524, 192K, 12 objects
Dataset pool/iocage/templates [ZPL], ID 571, cr_txg 258526, 176K, 7 objects
Dataset pool/iocage/jails/web/root@jail-2023-08-06_23-26 [ZPL], ID 20831, cr_txg 28116744, 4.97G, 329930 objects
Dataset pool/iocage/jails/web/root [ZPL], ID 924, cr_txg 27943083, 7.18G, 400180 objects
Dataset pool/iocage/jails/web@jail-2023-08-06_23-26 [ZPL], ID 20829, cr_txg 28116744, 208K, 10 objects
Dataset pool/iocage/jails/web [ZPL], ID 314, cr_txg 27943082, 224K, 10 objects
Dataset pool/iocage/jails/shell/root@ioc_update_11.3-RELEASE-p14_2023-08-04_20-15-49 [ZPL], ID 21977, cr_txg 28085309, 5.19G, 400595 objects
Dataset pool/iocage/jails/shell/root@ioc_update_11.2-RELEASE-p15_2020-07-12_13-07-09 [ZPL], ID 668, cr_txg 8494174, 2.79G, 151462 objects
Dataset pool/iocage/jails/shell/root@ioc_update_11.2-RELEASE-p9 [ZPL], ID 685, cr_txg 4560311, 2.55G, 140608 objects
Dataset pool/iocage/jails/shell/root@ioc_update_11.2-RELEASE-p15_2020-07-12_13-07-54 [ZPL], ID 674, cr_txg 8494183, 2.79G, 151469 objects
Dataset pool/iocage/jails/shell/root@ioc_update_11.3-RELEASE-p11_2023-07-26_09-08-55 [ZPL], ID 7931, cr_txg 27924398, 3.33G, 169462 objects
Dataset pool/iocage/jails/shell/root@jail-2023-08-06_23-26 [ZPL], ID 20835, cr_txg 28116744, 7.76G, 557372 objects
Dataset pool/iocage/jails/shell/root [ZPL], ID 642, cr_txg 2573569, 8.18G, 572968 objects
Dataset pool/iocage/jails/shell@ioc_update_11.3-RELEASE-p14_2023-08-04_20-15-49 [ZPL], ID 21975, cr_txg 28085309, 232K, 11 objects
Dataset pool/iocage/jails/shell@ioc_update_11.2-RELEASE-p15_2020-07-12_13-07-09 [ZPL], ID 525, cr_txg 8494174, 216K, 11 objects
Dataset pool/iocage/jails/shell@ioc_update_11.2-RELEASE-p9 [ZPL], ID 623, cr_txg 4560311, 192K, 10 objects
Dataset pool/iocage/jails/shell@ioc_update_11.3-RELEASE-p11_2023-07-26_09-08-55 [ZPL], ID 7929, cr_txg 27924398, 216K, 11 objects
Dataset pool/iocage/jails/shell@jail-2023-08-06_23-26 [ZPL], ID 20833, cr_txg 28116744, 216K, 11 objects
Dataset pool/iocage/jails/shell@ioc_update_11.2-RELEASE-p15_2020-07-12_13-07-54 [ZPL], ID 672, cr_txg 8494183, 216K, 11 objects
Dataset pool/iocage/jails/shell [ZPL], ID 635, cr_txg 2573568, 216K, 11 objects
Dataset pool/iocage/jails/sendmail/root@ioc_update_12.2-RELEASE-p15_2022-08-18_16-10-51 [ZPL], ID 60164, cr_txg 21674773, 28.4G, 600674 objects
Dataset pool/iocage/jails/sendmail/root@ioc_update_12.2-RELEASE-p15_2022-08-18_16-12-58 [ZPL], ID 59945, cr_txg 21674799, 28.4G, 600674 objects
Dataset pool/iocage/jails/sendmail/root@ioc_update_12.2-RELEASE-p15_2022-08-18_16-12-36 [ZPL], ID 60050, cr_txg 21674794, 28.4G, 600674 objects
Dataset pool/iocage/jails/sendmail/root@ioc_update_12.2-RELEASE-p15_2023-07-26_08-52-27 [ZPL], ID 8537, cr_txg 27924186, 30.8G, 601340 objects
Dataset pool/iocage/jails/sendmail/root@jail-2023-08-06_23-26 [ZPL], ID 20839, cr_txg 28116744, 30.8G, 601339 objects
Dataset pool/iocage/jails/sendmail/root@ioc_update_12.2-RELEASE-p4_2022-08-18_16-09-07 [ZPL], ID 59519, cr_txg 21674752, 28.4G, 600379 objects
Dataset pool/iocage/jails/sendmail/root@ioc_update_12.2-RELEASE-p4_2021-03-08_02-17-22 [ZPL], ID 824, cr_txg 12596552, 25.4G, 586619 objects
Dataset pool/iocage/jails/sendmail/root@ioc_update_11.2-RELEASE-p9_2020-07-12_13-32-40 [ZPL], ID 236, cr_txg 8494512, 2.98G, 139433 objects
Dataset pool/iocage/jails/sendmail/root@ioc_update_11.3-RELEASE-p11_2021-03-07_23-21-30 [ZPL], ID 414, cr_txg 12594445, 22.6G, 452470 objects
Dataset pool/iocage/jails/sendmail/root@ioc_update_12.2-RELEASE-p15_2022-08-18_16-13-40 [ZPL], ID 60172, cr_txg 21674808, 28.4G, 600611 objects
Dataset pool/iocage/jails/sendmail/root [ZPL], ID 714, cr_txg 2575465, 35.3G, 602246 objects
Dataset pool/iocage/jails/sendmail@ioc_update_12.2-RELEASE-p15_2022-08-18_16-10-51 [ZPL], ID 60041, cr_txg 21674773, 224K, 11 objects
Dataset pool/iocage/jails/sendmail@jail-2023-08-06_23-26 [ZPL], ID 20837, cr_txg 28116744, 208K, 11 objects
Dataset pool/iocage/jails/sendmail@ioc_update_12.2-RELEASE-p15_2023-07-26_08-52-27 [ZPL], ID 8535, cr_txg 27924186, 208K, 11 objects
Dataset pool/iocage/jails/sendmail@ioc_update_12.2-RELEASE-p15_2022-08-18_16-12-36 [ZPL], ID 59935, cr_txg 21674794, 224K, 11 objects
Dataset pool/iocage/jails/sendmail@ioc_update_12.2-RELEASE-p15_2022-08-18_16-12-58 [ZPL], ID 59943, cr_txg 21674799, 224K, 11 objects
Dataset pool/iocage/jails/sendmail@ioc_update_11.2-RELEASE-p9_2020-07-12_13-32-40 [ZPL], ID 193, cr_txg 8494512, 208K, 11 objects
Dataset pool/iocage/jails/sendmail@ioc_update_12.2-RELEASE-p15_2022-08-18_16-13-40 [ZPL], ID 59951, cr_txg 21674808, 224K, 11 objects
Dataset pool/iocage/jails/sendmail@ioc_update_12.2-RELEASE-p4_2021-03-08_02-17-22 [ZPL], ID 822, cr_txg 12596552, 224K, 11 objects
Dataset pool/iocage/jails/sendmail@ioc_update_11.3-RELEASE-p11_2021-03-07_23-21-30 [ZPL], ID 796, cr_txg 12594445, 208K, 11 objects
Dataset pool/iocage/jails/sendmail@ioc_update_12.2-RELEASE-p4_2022-08-18_16-09-07 [ZPL], ID 60034, cr_txg 21674752, 224K, 11 objects
Dataset pool/iocage/jails/sendmail [ZPL], ID 708, cr_txg 2575464, 208K, 11 objects
Dataset pool/iocage/jails@jail-2023-08-06_23-26 [ZPL], ID 20827, cr_txg 28116744, 192K, 10 objects
Dataset pool/iocage/jails [ZPL], ID 553, cr_txg 258520, 192K, 10 objects
Dataset pool/iocage/log [ZPL], ID 559, cr_txg 258522, 384K, 11 objects
Dataset pool/iocage/images [ZPL], ID 547, cr_txg 258518, 176K, 7 objects
Dataset pool/iocage [ZPL], ID 535, cr_txg 258514, 10.6M, 483 objects
Dataset pool [ZPL], ID 21, cr_txg 1, 240K, 15 objects
MOS object 753 (DSL dir clones) leaked
Verified large_blocks feature refcount of 0 is correct
Verified large_dnode feature refcount of 0 is correct
Verified sha512 feature refcount of 0 is correct
Verified skein feature refcount of 0 is correct
Verified userobj_accounting feature refcount of 100 is correct
Verified encryption feature refcount of 0 is correct
Verified project_quota feature refcount of 100 is correct
Verified redaction_bookmarks feature refcount of 0 is correct
Verified redacted_datasets feature refcount of 0 is correct
Verified bookmark_written feature refcount of 0 is correct
Verified livelist feature refcount of 0 is correct
Verified zstd_compress feature refcount of 0 is correct
And to retrieve the MOS configuration:

Code:
root@:~ # zdb -eC pool

MOS Configuration:
        version: 5000
        name: 'pool'
        state: 0
        txg: 43203194
        pool_guid: 9742808535407341325
        errata: 0
        hostid: 623965209
        hostname: 'supernas.local'
        com.delphix:has_per_vdev_zaps
        vdev_children: 2
        vdev_tree:
            type: 'root'
            id: 0
            guid: 9742808535407341325
            create_txg: 4
            children[0]:
Does this last one indicate that the last txg is 43203194?

When I ran a command I found in this post: https://forums.freebsd.org/threads/zfs-pool-got-corrupted-kernel-panic-after-import.76485/

Code:
zdb -ul /dev/da0 > /tmp/uberblocks.txt
it gave me a much later txg:
failed to unpack label 0
failed to unpack label 1
failed to unpack label 2
------------------------------------
LABEL 2 (Bad label cksum)
------------------------------------
    Uberblock[64]
        magic = 0000000000bab10c
        version = 5000
        txg = 43683328
        guid_sum = 9645404203117630058
        timestamp = 1769858073 UTC = Sat Jan 31 11:14:33 2026
        bp = DVA[0]=<0:148d2c4f5000:3000> DVA[1]=<1:113301fac000:3000> DVA[2]=<0:ffba8ced000:3000> [L0 DMU objset] fletcher4 uncompressed unencrypted LE contiguous unique triple size=800L/800P birth=43683328L/43683328P fill=1138 cksum=00000002a033deb1:000004e0ab7140fd:00048aa393ecc229:02d37c89285a7f87
        mmp_magic = 00000000a11cea11
        mmp_delay = 0
        mmp_valid = 0
        checkpoint_txg = 0
        raidz_reflow state=0 off=0
        labels = 2 3
I'd like to retry mounting the pool with a rollback as described at https://www.perforce.com/blog/pdx/openzfs-pool-import-recovery, but they don't indicate how they determined the latest "good" txg (in their example it was 50).

Right now I'm running the command `zpool import -o readonly=on -f -R /mnt -T 43683330 pool`, but I don't know how long that's going to take this time.

Help will be greatly appreciated.
TIA
 
WARNING 1: Seeing that you have received no response in more than 72 hours, and with nothing but good intentions in my heart, especially helping you, I outsourced your case to a non-subject-expert-in-training; below the horizontal line is what it had to say.

WARNING 2: Use it all only as inspiration or a starting investigative point, and check everything against official sources. That it sounds certain doesn't mean it is; these non-subject-experts are trained to first and foremost sound certain.



Hazard notice (read this first)
Data-loss risk. zpool import -F, -X, -T, and --rewind-to-checkpoint can discard transactions; discarded data is unrecoverable. Per zpool-import(8), -T implies -F and -X and is documented as “extremely hazardous” and a “last resort.”

1) Interpret the txg values already seen
- The txg: 43203194 shown under “MOS Configuration” in zdb -eC pool is the txg of the configuration/uberblock selected for that view. It is not guaranteed to be the latest uberblock present on every disk.
- zdb -ul /dev/da0 showing txg = 43683328 means that disk still has a later uberblock recorded in its labels. Import must find a txg that is usable across enough devices in each top-level vdev (for 2× RAIDZ2: at least N-2 devices per RAIDZ2 vdev). See zpool-import(8) and zdb(8).

Timestamp conversion for the posted uberblock:
Code:
# date -r 1769858073 -u
# date -r 1769858073
(That is 2026-01-31 11:14:33 UTC.)
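For reference, GNU date on a Linux live system spells the same conversion differently: there `-r` means --reference=FILE (a file's mtime), so the epoch goes to `-d` with an `@` prefix instead.

```shell
# Same timestamp conversion with GNU date (Linux live media):
date -u -d @1769858073
# -> Sat Jan 31 11:14:33 UTC 2026 (C locale)
```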

2) Avoid -T as the first tool (it triggers the long “rewind scan” behavior)
Using -T commonly forces a very long scan because it implies -F and extreme rewind (-X). This matches the “days of intense disk activity” symptom.

Start with a dry-run recovery:
Code:
# zpool import -d /dev -o readonly=on -N -R /mnt -f -F -n pool
Notes (see zpool-import(8)):
- -F tries to make a damaged pool importable by discarding recent transactions.
- -n with -F performs analysis but does not do the recovery.
- -o readonly=on enforces a read-only import (property documented in zpoolprops(7)).
- -R /mnt sets altroot and also sets cachefile=none to avoid writing a cachefile.

If this dry-run still ends with “one or more devices is currently unavailable,” treat the primary issue as device availability/label readability, not txg selection.

3) Identify which devices are “unavailable” (before chasing txgs)
Ensure the import scan uses the same naming scheme TrueNAS used (often /dev/gptid/*):
Code:
# ls -l /dev/gptid 2>/dev/null | sed -n '1,120p'
# zpool import -d /dev
# zpool import -d /dev/gptid

Dump on-disk configuration directly:
Code:
# zdb -eC -p /dev pool | less
# zdb -eC -p /dev/gptid pool | less

Look specifically for:
- A separate log device (SLOG). If a log vdev is missing, import may require -m (“missing log device”). See zpool-import(8).
- Any special/dedup vdevs. Missing special vdevs are typically fatal for normal import; recovery becomes “copy out what can be read.”
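A quick way to check for those vdev classes without re-running zdb against the disks: save the configuration dump once, then scan it. This is a sketch (the helper name is mine); `is_log: 1` marks a log vdev in zdb -eC output, while special/dedup vdevs carry an allocation-bias entry whose exact key name varies by version, hence the loose match on "bias".

```shell
# find_aux_vdevs: scan a saved MOS configuration dump (zdb -eC output,
# read on stdin) for log/special/dedup vdev markers.
find_aux_vdevs() {
  grep -Ei 'is_log: 1|bias' || echo "no log/special/dedup vdevs found"
}
```

Usage: `zdb -eC -p /dev pool > /tmp/mos-config.txt; find_aux_vdevs < /tmp/mos-config.txt`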

4) Determine a realistic “latest good txg” from uberblocks (fast, offline)
4.1 Scan uberblock txgs across all disks

On the FreeBSD 15 live system:
Code:
# cat > /tmp/txg-scan.sh <<'SH'
#!/bin/sh
set -eu

for d in /dev/da*; do
  [ -c "$d" ] || continue
  # Collect all txg values zdb can read from this disk's labels;
  # keep the maximum (NA if nothing was readable).
  max_txg=$(
    zdb -ul "$d" 2>/dev/null |
      awk '/txg =/ {print $3}' |
      sort -n |
      tail -1
  )
  printf '%s\t%s\n' "$d" "${max_txg:-NA}"
done | sort -k2,2n
SH
# sh /tmp/txg-scan.sh | tee /tmp/txg-scan.out
This uses zdb -u (uberblock display) in label mode. See zdb(8).

Interpretation:
- Disks showing NA (or very low maxima vs. others) often have unreadable labels and may be causing “unavailable.”
- A candidate recovery txg must be <= the lowest max-txg among the disks that must participate, and must exist on enough disks per RAIDZ2 vdev.
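To turn the scan output into a single starting candidate, one could take the minimum of the per-disk maxima, skipping disks that reported NA (those need separate investigation anyway). A sketch, with a helper name of my own:

```shell
# pick_candidate_txg: read the "device<TAB>max_txg" lines produced by
# /tmp/txg-scan.sh on stdin and print the lowest max-txg among disks
# that reported one -- a conservative upper bound for a recovery txg.
pick_candidate_txg() {
  awk -F'\t' '$2 != "NA" && $2 != "" {
    v = $2 + 0
    if (!seen || v < min) { min = v; seen = 1 }
  } END { if (seen) printf "%d\n", min }'
}
```

Usage: `pick_candidate_txg < /tmp/txg-scan.out`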

4.2 Use zdb to validate a candidate txg and capture import debug
Use a plausible txg (often the minimum of the “good” disks), then:
Code:
# zdb -e -p /dev -d -G -t 43683328 pool | head -200
Notes (see zdb(8)):
- -t sets the highest txg used while searching for uberblocks.
- -G dumps the zfs_dbgmsg buffer (import/load debug messages).

If zdb can enumerate datasets cleanly at/below that txg and debug output indicates it loaded successfully, that txg is a strong candidate for a read-only import attempt.
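To enumerate the txgs worth feeding into that -t validation loop, newest first, one could extract every distinct txg from the saved label dumps (helper name is mine):

```shell
# list_candidate_txgs: read saved `zdb -ul` output on stdin (dumps from
# one or more disks, concatenated) and print each distinct txg, newest
# first -- the order in which to try `zdb -e -t <txg>` validation.
list_candidate_txgs() {
  awk '$1 == "txg" && $2 == "=" { print $3 }' | sort -run
}
```

Usage: `list_candidate_txgs < /tmp/uberblocks.txt`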

5) Import attempts in the safest order
5.1 Preferred: let -F choose (read-only, no mounts), after a dry-run succeeds

Code:
# zpool import -d /dev -o readonly=on -N -R /mnt -f -F pool
See zpool-import(8).

5.2 If a missing log device is present in the topology
Code:
# zpool import -d /dev -o readonly=on -N -R /mnt -f -m -F pool
-m permits import with a missing log device; recent transactions may be lost. See zpool-import(8).

5.3 Last resort: exact txg (-T) only after the above steps yield a specific txg
Code:
# zpool import -d /dev -o readonly=on -N -R /mnt -f -T 43683328 pool
-T implies -F and -X and is hazardous by definition. See zpool-import(8).

6) If import still fails: copy-out without importing (practical fallback)
If zdb -ed pool can list datasets, metadata is at least partially readable. zdb can generate a backup stream for a numeric objset ID via -B, documented as useful in recovery scenarios when normal import is not possible. See zdb(8).

Example pattern (requires a separate destination pool with space):
Code:
# zdb -B -e -p /dev pool/102 > /mnt/recovery/pool_home_jyavenard.send
# zfs receive -uF newpool/recovered/home_jyavenard < /mnt/recovery/pool_home_jyavenard.send
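Doing that by hand for dozens of datasets is tedious. A sketch that turns a saved `zdb -ed pool` listing into one copy-out command per filesystem (the helper name and destination paths are mine; it only prints commands, so review the list before running any of them):

```shell
# gen_copyout_cmds: read a saved `zdb -ed pool` listing on stdin and
# print one `zdb -B` command per ZPL filesystem.  Snapshots (names with
# "@"), ZVOLs, and the MOS are skipped.
gen_copyout_cmds() {
  awk '$1 == "Dataset" && $3 == "[ZPL]," && $2 !~ /@/ {
    name = $2
    id = $5; sub(/,$/, "", id)     # field after "ID" is "<n>," -> strip comma
    gsub(/\//, "_", name)          # flatten the dataset path for a file name
    printf "zdb -B -e -p /dev pool/%s > /mnt/recovery/%s.send\n", id, name
  }'
}
```

Usage: `zdb -ed pool > /tmp/datasets.txt; gen_copyout_cmds < /tmp/datasets.txt`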

7) Verification once the pool imports (even read-only)
Code:
# zpool status pool
# zfs list -r pool | head -200

Keep the pool read-only until data is copied elsewhere; readonly=on is the import-time control for that (see zpoolprops(7)).
 
The command above failed with an I/O error, so I re-ran it with the txg 43683330 that showed up with the command
$ zdb -u -G -e pool

Uberblock:
magic = 0000000000bab10c
version = 5000
txg = 43683330
guid_sum = 9645404203117630058
timestamp = 1769858083 UTC = Sat Jan 31 11:14:43 2026
bp = DVA[0]=<0:148d31382000:3000> DVA[1]=<1:1133021fe000:3000> DVA[2]=<0:ffba8cf9000:3000> [L0 DMU objset] fletcher4 uncompressed unencrypted LE contiguous unique triple size=800L/800P birth=43683330L/43683330P fill=1139 cksum=00000003e17053e0:0000072b9b92aa59:0006a3fdca7e1abb:041c286f44997990
mmp_magic = 00000000a11cea11
mmp_delay = 0
mmp_valid = 0
checkpoint_txg = 0
raidz_reflow state=0 off=0

zpool import -o readonly=on -f -R /mnt -T 43683330 pool
It's been 5 days and it's still going, with all disks showing 100% activity (as shown by gstat -a).

Is there a way to check the progress of that command?
I'm starting to wonder if this is ever going to finish.
 
6) If import still fails: copy-out without importing (practical fallback)
If zdb -ed pool can list datasets, metadata is at least partially readable. zdb can generate a backup stream for a numeric objset ID via -B, documented as useful in recovery scenarios when normal import is not possible. See zdb(8).

I'm only seeing this message now, after I had already tried with a later txg :(

I found this post, which gave me hope too: https://forums.freebsd.org/threads/...-out-of-the-corrupted-zpool.94480/post-670906 ; but I only saw it after I had started the pool import. Ctrl-C did nothing, and I was also scared that interrupting the operation could leave everything in a broken state. And while nothing is working today re: import, at least I can see all my datasets.

This is in line with your comment, and that looks quite hopeful.

Unfortunately, I didn't read that "Data-loss risk" note before it was too late. When it failed earlier, it didn't seem to have impacted anything that I could read with the zdb command, so fingers crossed.

The txg date is 2 minutes after the original PC shutdown (it is from after I power-cycled the server to make it restart).
 
It's now been close to 3 weeks, and the zpool import still hasn't completed. I'm wondering if it ever will.

AlfredoLlaquet, if I were to reboot the machine now, could it be any worse than it was earlier?

Thanks
 
IMO it may be time to cut your losses and restore from backup.

I had a situation similar to yours. It was my testbed machine. It was backed up, and the data was duplicated from my primary build server, which also acts as the primary NFS server for my network. Recovering data was relatively simple.

What happened was a stick of RAM had gone bad, likely improperly installed. That caused memory corruption that corrupted the UFS filesystems and the ZFS pool. There was no recovery except for restore from backups.

For those of you who would say ZFS creates checksums from which to recover data: those checksums assume RAM is 100% healthy. A bad RAM stick is "upstream" of any ZFS checksum. Checksumming bad data results in checksums of bad data. In other words, it checksummed what it saw. Corruption.

Dropping some metal thing on the motherboard means "all bets are off." If whatever is in RAM gets corrupted, whether by a bad or improperly installed RAM stick or a short on the MB, expect serious and unrecoverable corruption. Sad to say, you may have to cut your losses and rebuild from backups.

If you don't have backups, lesson learned: back up your systems. I back mine up here at home once a month. In addition, the servers downstairs use GEOM mirrors for UFS and ZFS mirrors for zpools. That takes care of hardware errors. ZFS snapshots take care of my fat-fingering something. And backups handle the rest.

Laptops are not mirrored, but their data is also rsynced to one of the servers downstairs. So after a recovery from backup, anything important, like MH mail directories, git repos, and other current work, is rsynced to a server in my basement. Though recovery stings a little, it's not catastrophic. Just painful.
 
This uses zdb -u (uberblock display) in label mode. See zdb(8).

Interpretation:
- Disks showing NA (or very low maxima vs. others) often have unreadable labels and may be causing “unavailable.”
- A candidate recovery txg must be <= the lowest max-txg among the disks that must participate, and must exist on enough disks per RAIDZ2 vdev.

ok so I rebooted the machine.
/dev/da0p1 NA
/dev/da10p1 NA
/dev/da11p1 NA
/dev/da12 NA
/dev/da12s1 NA
/dev/da12s2 NA
/dev/da12s2a NA
/dev/da1p1 NA
/dev/da2p1 NA
/dev/da3p1 NA
/dev/da4p1 NA
/dev/da5p1 NA
/dev/da6p1 NA
/dev/da7p1 NA
/dev/da8p1 NA
/dev/da9p1 NA
/dev/da0 43683330
/dev/da0p2 43683330
/dev/da1 43683330
/dev/da10 43683330
/dev/da10p2 43683330
/dev/da11 43683330
/dev/da11p2 43683330
/dev/da1p2 43683330
/dev/da2 43683330
/dev/da2p2 43683330
/dev/da3 43683330
/dev/da3p2 43683330
/dev/da4 43683330
/dev/da4p2 43683330
/dev/da5 43683330
/dev/da5p2 43683330
/dev/da6 43683330
/dev/da6p2 43683330
/dev/da7 43683330
/dev/da7p2 43683330
/dev/da8 43683330
/dev/da8p2 43683330
/dev/da9 43683330
/dev/da9p2 43683330
So at least I know that 43683330 wasn't a bad txg to use, and all my disks are there.

I wasn't using a log device.

# zpool import -d /dev -o readonly=on -N -R /mnt -f -m -F pool
cannot import 'pool': I/O error
Destroy and re-create the pool from
a backup source.

And the last zdb command:
root@:~ # zdb -e -p /dev -d -G -t 43683330 pool | head -200
Dataset mos [META], ID 0, cr_txg 4, 1.26G, 1139 objects
Dataset pool/home/angela [ZPL], ID 108, cr_txg 177, 264K, 14 objects
Dataset pool/home/davenard [ZPL], ID 128890, cr_txg 14309410, 296K, 20 objects
Dataset pool/home/jyavenard [ZPL], ID 102, cr_txg 164, 272G, 417952 objects
Dataset pool/home [ZPL], ID 96, cr_txg 132, 272K, 19 objects
Dataset pool/.system/syslog-a5d713b37bdf437fb541f59b157cd837 [ZPL], ID 3716, cr_txg 28193766, 192K, 7 objects
Dataset pool/.system/cores [ZPL], ID 3462, cr_txg 28193761, 145M, 8 objects
Dataset pool/.system/webui [ZPL], ID 3588, cr_txg 28193772, 192K, 7 objects
Dataset pool/.system/samba4@update--2025-01-12-04-41--13.0-U6.2 [ZPL], ID 55293, cr_txg 37082320, 871K, 110 objects
Dataset pool/.system/samba4@update--2024-01-22-11-22--13.0-U5.3 [ZPL], ID 105724, cr_txg 30991476, 919K, 182 objects
Dataset pool/.system/samba4@update--2025-09-01-01-21--13.0-U6.7 [ZPL], ID 385, cr_txg 41060008, 887K, 98 objects
Dataset pool/.system/samba4@update--2024-07-07-23-59--13.0-U6.1 [ZPL], ID 66894, cr_txg 33864198, 887K, 151 objects
Dataset pool/.system/samba4@update--2025-03-16-11-12--13.0-U6.4 [ZPL], ID 10681, cr_txg 38168608, 935K, 104 objects
Dataset pool/.system/samba4 [ZPL], ID 392, cr_txg 28193764, 983K, 94 objects
Dataset pool/.system/services [ZPL], ID 3205, cr_txg 28193774, 192K, 7 objects
Dataset pool/.system/configs-a5d713b37bdf437fb541f59b157cd837 [ZPL], ID 2822, cr_txg 28193770, 315M, 2511 objects
Dataset pool/.system/rrd-a5d713b37bdf437fb541f59b157cd837 [ZPL], ID 2437, cr_txg 28193768, 132M, 2085 objects
Dataset pool/.system [ZPL], ID 657, cr_txg 28193759, 14.9M, 53 objects
Dataset pool/data/web [ZPL], ID 8386, cr_txg 27924358, 253G, 130139 objects
Dataset pool/data/music [ZPL], ID 126, cr_txg 393, 176K, 7 objects
Dataset pool/data/photos [ZPL], ID 162, cr_txg 456, 176K, 7 objects
Dataset pool/data/images [ZPL], ID 120, cr_txg 384, 390M, 1560 objects
Dataset pool/data/videos/movies [ZPL], ID 144, cr_txg 422, 103G, 537 objects
Dataset pool/data/videos/trailers [ZPL], ID 156, cr_txg 445, 176K, 7 objects
Dataset pool/data/videos/TV [ZPL], ID 138, cr_txg 413, 85.5G, 437 objects
Dataset pool/data/videos/recordings [ZPL], ID 150, cr_txg 433, 6.66T, 10635 objects
Dataset pool/data/videos [ZPL], ID 132, cr_txg 404, 264K, 16 objects
Dataset pool/data [ZPL], ID 114, cr_txg 376, 256K, 12 objects
Dataset pool/downloads [ZPL], ID 180, cr_txg 520, 91.5G, 3246 objects
Dataset pool/backup/www.avenard.org [ZPL], ID 629, cr_txg 2558121, 262G, 496406 objects
Dataset pool/backup/DominiquesiPro [ZPL], ID 128837, cr_txg 14309316, 192K, 7 objects
Dataset pool/backup/jya7980xe [ZPL], ID 697, cr_txg 5820092, 2.97T, 73579 objects
Dataset pool/backup/macbookair13/backup [ZPL], ID 1208, cr_txg 24973734, 125G, 3963 objects
Dataset pool/backup/macbookair13/jyavenard [ZPL], ID 86854, cr_txg 43590857, 192K, 7 objects
Dataset pool/backup/macbookair13 [ZPL], ID 790, cr_txg 4369423, 328K, 17 objects
Dataset pool/backup/hass [ZPL], ID 847, cr_txg 8743534, 189G, 5100 objects
Dataset pool/backup/lenovo13 [ZPL], ID 90349, cr_txg 20217124, 10.6G, 117550 objects
Dataset pool/backup/mediaserver [ZPL], ID 174, cr_txg 496, 57.5G, 1215429 objects
Dataset pool/backup/mythtv [ZPL], ID 186, cr_txg 532, 1.38G, 33 objects
Dataset pool/backup/macbookpro15 [ZPL], ID 1099, cr_txg 11690440, 881G, 80362 objects
Dataset pool/backup/mba13m2/backup [ZPL], ID 2831, cr_txg 30992041, 200K, 8 objects
Dataset pool/backup/mba13m2 [ZPL], ID 108692, cr_txg 30991189, 200K, 9 objects
Dataset pool/backup/mbp14m1/backup [ZPL], ID 108948, cr_txg 43611379, 32.3G, 37 objects
Dataset pool/backup/mbp14m1 [ZPL], ID 109455, cr_txg 43611231, 192K, 8 objects
Dataset pool/backup [ZPL], ID 168, cr_txg 488, 240K, 18 objects
Dataset pool/guest [ZPL], ID 91261, cr_txg 28039850, 979M, 9 objects
Dataset pool/vms/jira-w2jhhc_jira_clone0 [ZVOL], ID 774, cr_txg 2732864, 2.94G, 2 objects
Dataset pool/vms/hass-radar@clone_radar [ZVOL], ID 28845, cr_txg 28123669, 13.8G, 2 objects
Dataset pool/vms/hass-radar [ZVOL], ID 31234, cr_txg 28123615, 7.42G, 2 objects
Dataset pool/vms/hass-xgsk1@hass-2023-08-06_23-26 [ZVOL], ID 20705, cr_txg 28116748, 12.7G, 2 objects
Dataset pool/vms/hass-xgsk1@clone_radar [ZVOL], ID 28487, cr_txg 28123606, 13.8G, 2 objects
Dataset pool/vms/hass-xgsk1 [ZVOL], ID 52894, cr_txg 27893757, 7.01G, 2 objects
Dataset pool/vms/ubuntu-n8n5qq [ZVOL], ID 1086, cr_txg 27910227, 112K, 2 objects
Dataset pool/vms/mediaserver-evrl33@mediaserverl-2023-08-06_23-27 [ZVOL], ID 23681, cr_txg 28116752, 184G, 2 objects
Dataset pool/vms/mediaserver-evrl33 [ZVOL], ID 2323, cr_txg 28088242, 129G, 2 objects
Dataset pool/vms/jira-w2jhhc@jira_clone0 [ZVOL], ID 769, cr_txg 2732863, 2.02G, 2 objects
Dataset pool/vms/jira-w2jhhc [ZVOL], ID 766, cr_txg 2730812, 275G, 2 objects
Dataset pool/vms [ZPL], ID 728, cr_txg 2720017, 208K, 13 objects
Dataset pool/jails [ZPL], ID 584, cr_txg 318366, 184K, 10 objects
Dataset pool/iocage/download/13.2-RELEASE [ZPL], ID 8573, cr_txg 27924283, 256M, 10 objects
Dataset pool/iocage/download/11.2-RELEASE [ZPL], ID 590, cr_txg 322397, 272M, 12 objects
Dataset pool/iocage/download/11.3-RELEASE [ZPL], ID 700, cr_txg 8477876, 289M, 12 objects
Dataset pool/iocage/download/13.1-RELEASE [ZPL], ID 9278, cr_txg 27924952, 251M, 10 objects
Dataset pool/iocage/download/12.1-RELEASE [ZPL], ID 985, cr_txg 12596611, 371M, 11 objects
Dataset pool/iocage/download [ZPL], ID 541, cr_txg 258516, 192K, 12 objects
Dataset pool/iocage/releases/12.1-RELEASE/root [ZPL], ID 1028, cr_txg 12596623, 1.95G, 103912 objects
Dataset pool/iocage/releases/12.1-RELEASE [ZPL], ID 1019, cr_txg 12596622, 192K, 8 objects
Dataset pool/iocage/releases/13.1-RELEASE/root [ZPL], ID 9287, cr_txg 27924958, 892M, 17195 objects
Dataset pool/iocage/releases/13.1-RELEASE [ZPL], ID 9352, cr_txg 27924957, 192K, 8 objects
Dataset pool/iocage/releases/11.3-RELEASE/root [ZPL], ID 761, cr_txg 8477899, 1.51G, 98901 objects
Dataset pool/iocage/releases/11.3-RELEASE [ZPL], ID 755, cr_txg 8477898, 176K, 8 objects
Dataset pool/iocage/releases/11.2-RELEASE/root@transmission [ZPL], ID 605, cr_txg 322723, 1.43G, 95620 objects
Dataset pool/iocage/releases/11.2-RELEASE/root@shell [ZPL], ID 606, cr_txg 2573567, 1.43G, 95620 objects
Dataset pool/iocage/releases/11.2-RELEASE/root@sendmail [ZPL], ID 703, cr_txg 2575463, 1.43G, 95620 objects
Dataset pool/iocage/releases/11.2-RELEASE/root@couchpotato [ZPL], ID 646, cr_txg 475106, 1.43G, 95620 objects
Dataset pool/iocage/releases/11.2-RELEASE/root@teslamate [ZPL], ID 731, cr_txg 3937568, 1.43G, 95620 objects
Dataset pool/iocage/releases/11.2-RELEASE/root [ZPL], ID 602, cr_txg 322588, 1.50G, 97717 objects
Dataset pool/iocage/releases/11.2-RELEASE [ZPL], ID 596, cr_txg 322587, 176K, 8 objects
Dataset pool/iocage/releases/13.2-RELEASE/root@web [ZPL], ID 852, cr_txg 27943081, 777M, 17100 objects
Dataset pool/iocage/releases/13.2-RELEASE/root [ZPL], ID 8663, cr_txg 27924289, 777M, 17100 objects
Dataset pool/iocage/releases/13.2-RELEASE [ZPL], ID 8656, cr_txg 27924288, 192K, 8 objects
Dataset pool/iocage/releases [ZPL], ID 565, cr_txg 258524, 192K, 12 objects
Dataset pool/iocage/templates [ZPL], ID 571, cr_txg 258526, 176K, 7 objects
Dataset pool/iocage/jails/web/root@jail-2023-08-06_23-26 [ZPL], ID 20831, cr_txg 28116744, 4.97G, 329930 objects
Dataset pool/iocage/jails/web/root [ZPL], ID 924, cr_txg 27943083, 7.18G, 400180 objects
Dataset pool/iocage/jails/web@jail-2023-08-06_23-26 [ZPL], ID 20829, cr_txg 28116744, 208K, 10 objects
Dataset pool/iocage/jails/web [ZPL], ID 314, cr_txg 27943082, 224K, 10 objects
Dataset pool/iocage/jails/shell/root@ioc_update_11.3-RELEASE-p14_2023-08-04_20-15-49 [ZPL], ID 21977, cr_txg 28085309, 5.19G, 400595 objects
Dataset pool/iocage/jails/shell/root@ioc_update_11.2-RELEASE-p15_2020-07-12_13-07-09 [ZPL], ID 668, cr_txg 8494174, 2.79G, 151462 objects
Dataset pool/iocage/jails/shell/root@ioc_update_11.2-RELEASE-p9 [ZPL], ID 685, cr_txg 4560311, 2.55G, 140608 objects
Dataset pool/iocage/jails/shell/root@ioc_update_11.2-RELEASE-p15_2020-07-12_13-07-54 [ZPL], ID 674, cr_txg 8494183, 2.79G, 151469 objects
Dataset pool/iocage/jails/shell/root@ioc_update_11.3-RELEASE-p11_2023-07-26_09-08-55 [ZPL], ID 7931, cr_txg 27924398, 3.33G, 169462 objects
Dataset pool/iocage/jails/shell/root@jail-2023-08-06_23-26 [ZPL], ID 20835, cr_txg 28116744, 7.76G, 557372 objects
Dataset pool/iocage/jails/shell/root [ZPL], ID 642, cr_txg 2573569, 8.18G, 572968 objects
Dataset pool/iocage/jails/shell@ioc_update_11.3-RELEASE-p14_2023-08-04_20-15-49 [ZPL], ID 21975, cr_txg 28085309, 232K, 11 objects
Dataset pool/iocage/jails/shell@ioc_update_11.2-RELEASE-p15_2020-07-12_13-07-09 [ZPL], ID 525, cr_txg 8494174, 216K, 11 objects
Dataset pool/iocage/jails/shell@ioc_update_11.2-RELEASE-p9 [ZPL], ID 623, cr_txg 4560311, 192K, 10 objects
Dataset pool/iocage/jails/shell@ioc_update_11.3-RELEASE-p11_2023-07-26_09-08-55 [ZPL], ID 7929, cr_txg 27924398, 216K, 11 objects
Dataset pool/iocage/jails/shell@jail-2023-08-06_23-26 [ZPL], ID 20833, cr_txg 28116744, 216K, 11 objects
Dataset pool/iocage/jails/shell@ioc_update_11.2-RELEASE-p15_2020-07-12_13-07-54 [ZPL], ID 672, cr_txg 8494183, 216K, 11 objects
Dataset pool/iocage/jails/shell [ZPL], ID 635, cr_txg 2573568, 216K, 11 objects
Dataset pool/iocage/jails/sendmail/root@ioc_update_12.2-RELEASE-p15_2022-08-18_16-10-51 [ZPL], ID 60164, cr_txg 21674773, 28.4G, 600674 objects
Dataset pool/iocage/jails/sendmail/root@ioc_update_12.2-RELEASE-p15_2022-08-18_16-12-58 [ZPL], ID 59945, cr_txg 21674799, 28.4G, 600674 objects
Dataset pool/iocage/jails/sendmail/root@ioc_update_12.2-RELEASE-p15_2022-08-18_16-12-36 [ZPL], ID 60050, cr_txg 21674794, 28.4G, 600674 objects
Dataset pool/iocage/jails/sendmail/root@ioc_update_12.2-RELEASE-p15_2023-07-26_08-52-27 [ZPL], ID 8537, cr_txg 27924186, 30.8G, 601340 objects
Dataset pool/iocage/jails/sendmail/root@jail-2023-08-06_23-26 [ZPL], ID 20839, cr_txg 28116744, 30.8G, 601339 objects
Dataset pool/iocage/jails/sendmail/root@ioc_update_12.2-RELEASE-p4_2022-08-18_16-09-07 [ZPL], ID 59519, cr_txg 21674752, 28.4G, 600379 objects
Dataset pool/iocage/jails/sendmail/root@ioc_update_12.2-RELEASE-p4_2021-03-08_02-17-22 [ZPL], ID 824, cr_txg 12596552, 25.4G, 586619 objects
Dataset pool/iocage/jails/sendmail/root@ioc_update_11.2-RELEASE-p9_2020-07-12_13-32-40 [ZPL], ID 236, cr_txg 8494512, 2.98G, 139433 objects
Dataset pool/iocage/jails/sendmail/root@ioc_update_11.3-RELEASE-p11_2021-03-07_23-21-30 [ZPL], ID 414, cr_txg 12594445, 22.6G, 452470 objects
Dataset pool/iocage/jails/sendmail/root@ioc_update_12.2-RELEASE-p15_2022-08-18_16-13-40 [ZPL], ID 60172, cr_txg 21674808, 28.4G, 600611 objects
Dataset pool/iocage/jails/sendmail/root [ZPL], ID 714, cr_txg 2575465, 35.3G, 602246 objects
Dataset pool/iocage/jails/sendmail@ioc_update_12.2-RELEASE-p15_2022-08-18_16-10-51 [ZPL], ID 60041, cr_txg 21674773, 224K, 11 objects
Dataset pool/iocage/jails/sendmail@jail-2023-08-06_23-26 [ZPL], ID 20837, cr_txg 28116744, 208K, 11 objects
Dataset pool/iocage/jails/sendmail@ioc_update_12.2-RELEASE-p15_2023-07-26_08-52-27 [ZPL], ID 8535, cr_txg 27924186, 208K, 11 objects
Dataset pool/iocage/jails/sendmail@ioc_update_12.2-RELEASE-p15_2022-08-18_16-12-36 [ZPL], ID 59935, cr_txg 21674794, 224K, 11 objects
Dataset pool/iocage/jails/sendmail@ioc_update_12.2-RELEASE-p15_2022-08-18_16-12-58 [ZPL], ID 59943, cr_txg 21674799, 224K, 11 objects
Dataset pool/iocage/jails/sendmail@ioc_update_11.2-RELEASE-p9_2020-07-12_13-32-40 [ZPL], ID 193, cr_txg 8494512, 208K, 11 objects
Dataset pool/iocage/jails/sendmail@ioc_update_12.2-RELEASE-p15_2022-08-18_16-13-40 [ZPL], ID 59951, cr_txg 21674808, 224K, 11 objects
Dataset pool/iocage/jails/sendmail@ioc_update_12.2-RELEASE-p4_2021-03-08_02-17-22 [ZPL], ID 822, cr_txg 12596552, 224K, 11 objects
Dataset pool/iocage/jails/sendmail@ioc_update_11.3-RELEASE-p11_2021-03-07_23-21-30 [ZPL], ID 796, cr_txg 12594445, 208K, 11 objects
Dataset pool/iocage/jails/sendmail@ioc_update_12.2-RELEASE-p4_2022-08-18_16-09-07 [ZPL], ID 60034, cr_txg 21674752, 224K, 11 objects
Dataset pool/iocage/jails/sendmail [ZPL], ID 708, cr_txg 2575464, 208K, 11 objects
Dataset pool/iocage/jails@jail-2023-08-06_23-26 [ZPL], ID 20827, cr_txg 28116744, 192K, 10 objects

ZFS_DBGMSG(zdb) START:
metaslab.c:1819:spa_set_allocator(): spa allocator: dynamic
spa.c:6988:spa_import(): spa_import: importing pool
spa_misc.c:432:spa_load_note(): spa_load(pool, config trusted): LOADING
vdev.c:185:vdev_dbgmsg(): disk vdev '/dev/da2p2': probe done, cant_read=0 cant_write=1
vdev.c:185:vdev_dbgmsg(): disk vdev '/dev/da3p2': probe done, cant_read=0 cant_write=1
vdev.c:185:vdev_dbgmsg(): disk vdev '/dev/da0p2': probe done, cant_read=0 cant_write=1
vdev.c:185:vdev_dbgmsg(): disk vdev '/dev/da11p2': probe done, cant_read=0 cant_write=1
vdev.c:185:vdev_dbgmsg(): disk vdev '/dev/da6p2': probe done, cant_read=0 cant_write=1
vdev.c:185:vdev_dbgmsg(): disk vdev '/dev/da8p2': probe done, cant_read=0 cant_write=1
vdev.c:185:vdev_dbgmsg(): disk vdev '/dev/da1p2': probe done, cant_read=0 cant_write=1
vdev.c:185:vdev_dbgmsg(): disk vdev '/dev/da7p2': probe done, cant_read=0 cant_write=1
vdev.c:185:vdev_dbgmsg(): disk vdev '/dev/da5p2': probe done, cant_read=0 cant_write=1
vdev.c:185:vdev_dbgmsg(): disk vdev '/dev/da10p2': probe done, cant_read=0 cant_write=1
vdev.c:185:vdev_dbgmsg(): disk vdev '/dev/da9p2': probe done, cant_read=0 cant_write=1
vdev.c:185:vdev_dbgmsg(): disk vdev '/dev/da4p2': probe done, cant_read=0 cant_write=1
vdev.c:185:vdev_dbgmsg(): disk vdev '/dev/da8p2': best uberblock found for spa pool. txg 43683330
spa_misc.c:432:spa_load_note(): spa_load(pool, config untrusted): using uberblock with txg=43683330
vdev.c:2654:vdev_update_path(): vdev_copy_path: vdev 11369669688009571944: vdev_path changed from '/dev/gptid/ea047ece-4566-11ee-b214-3cecef479fec' to '/dev/da8p2'
vdev.c:2654:vdev_update_path(): vdev_copy_path: vdev 1198292726028928119: vdev_path changed from '/dev/gptid/c6f7100b-a3c5-11ef-8af1-3cecef479fec' to '/dev/da6p2'
vdev.c:2654:vdev_update_path(): vdev_copy_path: vdev 15592414607002456058: vdev_path changed from '/dev/gptid/b7a58c52-a60c-11ef-8af1-3cecef479fec' to '/dev/da10p2'
vdev.c:2654:vdev_update_path(): vdev_copy_path: vdev 1250835966808176117: vdev_path changed from '/dev/gptid/79c8635f-a6cd-11ef-8af1-3cecef479fec' to '/dev/da11p2'
vdev.c:2654:vdev_update_path(): vdev_copy_path: vdev 8753532412643394705: vdev_path changed from '/dev/gptid/815ae4fa-592f-11ee-9b0b-3cecef479fec' to '/dev/da0p2'
vdev.c:2654:vdev_update_path(): vdev_copy_path: vdev 15830379629328594911: vdev_path changed from '/dev/gptid/e0298a14-b9d3-11eb-b10e-002590875a70' to '/dev/da1p2'
vdev.c:2654:vdev_update_path(): vdev_copy_path: vdev 8308003226539821342: vdev_path changed from '/dev/gptid/1f040af3-a6cb-11ef-8af1-3cecef479fec' to '/dev/da2p2'
vdev.c:2654:vdev_update_path(): vdev_copy_path: vdev 4714444845032097250: vdev_path changed from '/dev/gptid/8db0b67c-a608-11ef-8af1-3cecef479fec' to '/dev/da3p2'
vdev.c:2654:vdev_update_path(): vdev_copy_path: vdev 12841370224329237326: vdev_path changed from '/dev/gptid/9cd8fe8c-a3ba-11ef-8af1-3cecef479fec' to '/dev/da4p2'
vdev.c:2654:vdev_update_path(): vdev_copy_path: vdev 7020140988808757239: vdev_path changed from '/dev/gptid/089d4978-a7e6-11ef-8af1-3cecef479fec' to '/dev/da5p2'
vdev.c:2654:vdev_update_path(): vdev_copy_path: vdev 4673164318213954266: vdev_path changed from '/dev/gptid/d9978713-4a52-11ee-9b0b-3cecef479fec' to '/dev/da9p2'
vdev.c:2654:vdev_update_path(): vdev_copy_path: vdev 16979738115652578185: vdev_path changed from '/dev/gptid/5d020481-a72c-11ef-8af1-3cecef479fec' to '/dev/da7p2'
vdev.c:185:vdev_dbgmsg(): disk vdev '/dev/da3p2': probe done, cant_read=0 cant_write=1
vdev.c:185:vdev_dbgmsg(): disk vdev '/dev/da2p2': probe done, cant_read=0 cant_write=1
vdev.c:185:vdev_dbgmsg(): disk vdev '/dev/da9p2': probe done, cant_read=0 cant_write=1
vdev.c:185:vdev_dbgmsg(): disk vdev '/dev/da10p2': probe done, cant_read=0 cant_write=1
vdev.c:185:vdev_dbgmsg(): disk vdev '/dev/da11p2': probe done, cant_read=0 cant_write=1
vdev.c:185:vdev_dbgmsg(): disk vdev '/dev/da6p2': probe done, cant_read=0 cant_write=1
vdev.c:185:vdev_dbgmsg(): disk vdev '/dev/da1p2': probe done, cant_read=0 cant_write=1
vdev.c:185:vdev_dbgmsg(): disk vdev '/dev/da5p2': probe done, cant_read=0 cant_write=1
vdev.c:185:vdev_dbgmsg(): disk vdev '/dev/da0p2': probe done, cant_read=0 cant_write=1
vdev.c:185:vdev_dbgmsg(): disk vdev '/dev/da4p2': probe done, cant_read=0 cant_write=1
vdev.c:185:vdev_dbgmsg(): disk vdev '/dev/da7p2': probe done, cant_read=0 cant_write=1
vdev.c:185:vdev_dbgmsg(): disk vdev '/dev/da8p2': probe done, cant_read=0 cant_write=1
spa_misc.c:2470:spa_import_progress_set_notes_impl(): 'pool' Loading checkpoint txg
spa_misc.c:2470:spa_import_progress_set_notes_impl(): 'pool' Loading indirect vdev metadata
spa_misc.c:2470:spa_import_progress_set_notes_impl(): 'pool' Checking feature flags
spa_misc.c:2470:spa_import_progress_set_notes_impl(): 'pool' Loading special MOS directories
spa_misc.c:2470:spa_import_progress_set_notes_impl(): 'pool' Loading properties
spa_misc.c:2470:spa_import_progress_set_notes_impl(): 'pool' Loading AUX vdevs
spa_misc.c:2470:spa_import_progress_set_notes_impl(): 'pool' Loading vdev metadata
spa_misc.c:2470:spa_import_progress_set_notes_impl(): 'pool' Loading dedup tables
spa_misc.c:2470:spa_import_progress_set_notes_impl(): 'pool' Loading BRT
spa_misc.c:2470:spa_import_progress_set_notes_impl(): 'pool' Verifying Log Devices
spa_misc.c:2470:spa_import_progress_set_notes_impl(): 'pool' Verifying pool data
spa_misc.c:2470:spa_import_progress_set_notes_impl(): 'pool' Calculating deflated space
spa_misc.c:2470:spa_import_progress_set_notes_impl(): 'pool' Starting import
spa.c:9549:spa_async_request(): spa=pool async request task=2048
spa_misc.c:432:spa_load_note(): spa_load(pool, config trusted): LOADED
spa.c:9549:spa_async_request(): spa=pool async request task=32
ZFS_DBGMSG(zdb) END
Dataset pool/iocage/jails [ZPL], ID 553, cr_txg 258520, 192K, 10 objects
Dataset pool/iocage/log [ZPL], ID 559, cr_txg 258522, 384K, 11 objects
Dataset pool/iocage/images [ZPL], ID 547, cr_txg 258518, 176K, 7 objects
Dataset pool/iocage [ZPL], ID 535, cr_txg 258514, 10.6M, 483 objects
Dataset pool [ZPL], ID 21, cr_txg 1, 240K, 15 objects
MOS object 753 (DSL dir clones) leaked
Verified large_blocks feature refcount of 0 is correct
Verified large_dnode feature refcount of 0 is correct
Verified sha512 feature refcount of 0 is correct
Verified skein feature refcount of 0 is correct
Verified userobj_accounting feature refcount of 100 is correct
Verified encryption feature refcount of 0 is correct
Verified project_quota feature refcount of 100 is correct
Verified redaction_bookmarks feature refcount of 0 is correct
Verified redacted_datasets feature refcount of 0 is correct
Verified bookmark_written feature refcount of 0 is correct
Verified livelist feature refcount of 0 is correct
Verified zstd_compress feature refcount of 0 is correct

No obvious errors here, so I'll try zdb send | zfs receive next.
 
Code:
root@:~ # zdb -B -e -p /dev pool/home > /dev/null
dump_backup: dmu_send_obj: No such file or directory
I may have to resign myself to the fact that this is seriously f***ed ...
Sigh. No, it just means I passed the wrong arguments.
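For reference (my reading of the zdb(8) man page, so treat it as an assumption): zdb -B wants pool/objset-ID rather than the dataset name, and the IDs are right there in the zdb -d listing above. A small helper to pull one out, here named zdb_id for illustration:

```shell
#!/bin/sh
# Extract the objset ID for a given dataset from `zdb -d`-style output.
# Sample line (taken from the listing earlier in the thread):
#   Dataset pool/home [ZPL], ID 96, cr_txg 132, 272K, 19 objects
zdb_id() {
    # $1 = dataset name, stdin = zdb -d output
    awk -v ds="$1" '$1 == "Dataset" && $2 == ds { sub(/,/, "", $5); print $5 }'
}

echo 'Dataset pool/home [ZPL], ID 96, cr_txg 132, 272K, 19 objects' | zdb_id pool/home
# -> 96
```

With the ID in hand, the send would then look like `zdb -B -e -p /dev pool/96 > stream.zfs` (against the real pool, not something I can show here).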
 
jyavenard, take a look at the following blog post and forum posts, they might be helpful:



 
Thanks.

I'll have a try with the sysctl options that bypass the metadata checks.

I'm happy to report that I've successfully managed to send all but two of my datasets to another machine.
One failed with:
Code:
ASSERT at /usr/src/sys/contrib/openzfs/module/zfs/arc.c:5871:arc_read()
!embedded_bp || BPE_GET_ETYPE(bp) == BP_EMBEDDED_TYPE_DATA
PID: 5502 COMM: zdb
TID: 135897 NAME: send_traverse_thread
time: command terminated abnormally
Abort trap
The other was the 6 TB of mythtv recordings, which I don't care about one bit, and I didn't have enough capacity for them on my other ZFS-based machine anyway.

For each dataset, I manually ran:
on the receiver:
Code:
nc -l 9999 | zfs recv pool/jya_recovery/pool/downloads
on the sender:
Code:
zdb -B -e -p /dev pool/180 | nc -w 10 192.168.11.205 9999
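Doing that by hand for dozens of datasets gets tedious. A sketch of a loop that generates the sender commands from a list of objset IDs (the IDs, host, and port here are placeholders, not my real values):

```shell
#!/bin/sh
# Print one zdb|nc sender command per objset ID (illustrative only;
# the IDs and the receiver address are placeholders).
HOST=192.168.11.205
PORT=9999
for id in 96 102 108; do
    echo "zdb -B -e -p /dev pool/${id} | nc -w 10 ${HOST} ${PORT}"
done
```

Each printed line can then be reviewed and run once the matching `nc -l | zfs recv` is listening on the other end.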
 
Code:
sysctl vfs.zfs.spa.load_verify_metadata=0
sysctl vfs.zfs.spa.load_verify_data=0
That also seems to have done the trick.
I can see all my files except for the one dataset I couldn't send earlier.

Edit: I ran a pool scrub shortly after; that killed everything. Now I can't even use zdb on it.
 