Also, as I had suspected, that pool is also named zroot. If you run zpool import you should see the pool.
Code:
sudo zpool import
no pools available to import
Not sure I feel comfortable deleting and starting from scratch, given that some of the data is corrupted on the SSD I'll be backing up from.
I'm only guessing now, but I think because it was a clone it's messing up the status. If you don't need the data on that da0 disk (the backup disk to be) I'd rather delete it and start from scratch.
If data is corrupted you're done, nothing will help you. I would personally prefer a filesystem backup at this point rather than zfs. But in the end it's your choice as sysadmin; zfs send does provide you with this option.
I was hoping to use zfs so file permissions etc. stay the same, and it's possibly easier. If data is corrupted (as it seems), maybe zfs is a better option than rsync?
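For reference, a minimal zfs send/receive sketch for that option; the snapshot name and the backup pool name are only illustrative, and it assumes a separate backup pool already exists on another disk:
Code:
# take a recursive snapshot of the source pool
zfs snapshot -r zroot@backup1
# replicate everything to a pool named "backup"; -u keeps the received datasets unmounted
zfs send -R zroot@backup1 | zfs receive -u -d backup
Note that zfs send --raw only matters for ZFS-native encryption; zroot here sits on a GELI provider (ada0p3.eli), so a plain send should be fine.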
rsync -avxH ..
is enough. Note the -x not to cross filesystems, maybe something you actually want. zfs send --raw
to send it as is.
I wonder if his zpool from the 468G freebsd-zfs partition can be copied that way to a 7G USB flash drive. It will have to be a different disk.
Question is: do you need the data on that da0 disk? The one you want to put your data on. If not, I'm suggesting to delete it and create a new backup pool where you push your data. Don't name it the same as your zroot, so you can import it on the current machine.
Also, if you don't care about the data on da0, you could even do dd if=/dev/ada0 of=/dev/da0 and let it do a 1:1 backup.
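One possible way to carry out that suggestion, assuming the stale clone pool sits directly on da0 and its data really is expendable (device and pool names are assumptions, so double-check them first):
Code:
# wipe the old ZFS labels so the stale "zroot" clone is no longer detected
zpool labelclear -f /dev/da0
# create a fresh backup pool with a name that does not clash with the running zroot
zpool create backup /dev/da0
The new pool can then receive snapshots from the sketch above and be imported on this machine without a name conflict.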
path:
which doesn't correspond to the provider queried in zdb.
I tried this - didn't work.
You can try this zdb option. Would do it after a backup.
--all-reconstruction
zdb --all-reconstruction
zdb: illegal option -- -
Usage: zdb [-AbcdDFGhikLMPsvXy] [-e [-V] [-p <path> ...]] [-I <inflight I/Os>]
[-o <var>=<value>]... [-.....
I did rsync - with different options - and it seemed like it did copy the files, on the face of it.
Of course rsync does preserve ownership/permissions, even file flags and ACLs. For a basic setup
rsync -avxH ..
is enough. Note the -x not to cross filesystems, maybe something you actually want.
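A concrete form of that command, with a source and destination that are just placeholders for this thread:
Code:
# -a archive mode (permissions, ownership, times), -v verbose,
# -x stay on one filesystem, -H preserve hard links
rsync -avxH /usr/home/ /mnt/backup/home/
The trailing slash on the source copies the contents of /usr/home rather than the directory itself.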
Yes, until I can fix the SSD for good - the da0/hdd is an older version which went on to become the SSD/corrupted one - so it has some version history of the data on the corrupted drive.
Question is: do you need the data on that da0 disk?
A few commands:
Code:
zpool import
zpool list -v
zpool status -x
zfs mount -a
zfs list
kenv | egrep "currdev|mountfrom|kernel_path|kernelname"
sudo zpool import
Password:
no pools available to import
zpool list -v
NAME SIZE ALLOC FREE CKPOINT EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
zroot 456G 289G 167G - - 48% 63% 1.00x ONLINE -
ada0p3.eli 456G 289G 167G - - 48% 63.3% - ONLINE
sudo zpool status -x
pool: zroot
state: ONLINE
status: One or more devices has experienced an error resulting in data
corruption. Applications may be affected.
action: Restore the file in question if possible. Otherwise restore the
entire pool from backup.
see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-8A
scan: scrub repaired 0B in 00:27:07 with 2 errors on Wed Dec 21 04:23:43 2022
config:
NAME STATE READ WRITE CKSUM
zroot ONLINE 0 0 0
ada0p3.eli ONLINE 0 0 0
errors: 1 data errors, use '-v' for a list
zfs mount -a
zfs list
NAME USED AVAIL REFER MOUNTPOINT
zroot 289G 153G 88K /zroot
zroot/ROOT 116G 153G 88K none
zroot/ROOT/12.3-RELEASE-p1_2022-03-18_164224 8K 153G 17.5G /
zroot/ROOT/12.3-RELEASE-p3_2022-03-23_175807 8K 153G 17.4G /
zroot/ROOT/12.3-RELEASE-p4_2022-04-06_232036 8K 153G 17.9G /
zroot/ROOT/12.3-RELEASE-p5_2022-08-10_011525 8K 153G 27.0G /
zroot/ROOT/12.3-RELEASE-p6_2022-09-03_171127 8K 153G 29.3G /
zroot/ROOT/12.3-RELEASE-p7_2022-09-10_230907 8K 153G 29.7G /
zroot/ROOT/12.3-to-13.1 8K 153G 29.6G /
zroot/ROOT/13.0-RELEASE-p11_2022-07-01_213226 8K 153G 23.7G /
zroot/ROOT/13.1-RELEASE-p2_2022-09-10_231247 700K 153G 29.8G /
zroot/ROOT/13.1-RELEASE-p2_2022-09-10_232433 644M 153G 29.0G /
zroot/ROOT/13.1-RELEASE-p2_2022-09-11_220401 716K 153G 29.1G /
zroot/ROOT/13.1-RELEASE-p2_2022-11-06_014830 856K 153G 26.7G /
zroot/ROOT/13.1-RELEASE-p3_2022-11-16_133737 123M 153G 26.8G /
zroot/ROOT/13.1-RELEASE-p4_2022-12-03_162103 114G 153G 29.0G /
zroot/ROOT/13.1-RELEASE-p4_2022-12-10_051051 527M 153G 29.0G /
zroot/ROOT/13.1-RELEASE-p4_2022-12-10_061442 574M 153G 29.0G /
zroot/ROOT/13.1-RELEASE-p4_2022-12-10_180529 8K 153G 28.8G /
zroot/ROOT/13.1-p2-after-destroying 65.6M 153G 23.0G /
zroot/tmp 62.5M 153G 62.5M none
zroot/usr 172G 153G 88K /usr
zroot/usr/home 170G 153G 170G /usr/home
zroot/usr/ports 1.50G 153G 1.50G /usr/ports
zroot/usr/src 771M 153G 771M /usr/src
zroot/var 269M 153G 88K /var
zroot/var/audit 88K 153G 88K /var/audit
zroot/var/crash 267M 153G 267M /var/crash
zroot/var/log 1.91M 153G 1.91M /var/log
zroot/var/mail 112K 153G 112K /var/mail
zroot/var/tmp 88K 153G 88K /var/tmp
kenv | egrep "currdev|mountfrom|kernel_path|kernelname"
currdev="zfs:zroot/ROOT/13.1-RELEASE-p4_2022-12-03_162103:"
kernel_path="/boot/kernel"
kernelname="/boot/kernel/kernel"
vfs.root.mountfrom="zfs:zroot/ROOT/13.1-RELEASE-p4_2022-12-03_162103"
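To see which file the single error in the zpool status output above refers to (as the status message itself suggests), the verbose form lists the damaged path:
Code:
zpool status -v zroot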
If I understood correctly, the "-Y" option does the same, and the zpool needs to be specified.
You can try this zdb option. Would do it after a backup.
--all-reconstruction
sudo zdb -Y zroot
and it terminated like this - not sure if there are any zfs/zdb experts:
Dataset zroot/ROOT/13.1-RELEASE-p4_2022-12-03_162103 [ZPL], ID 590, cr_txg 6764419, 29.0G, 904448 objects
ZIL header: claim_txg 0, claim_blk_seq 0, claim_lr_seq 0 replay_seq 0, flags 0x0
TX_CREATE len 120, txg 6857403, seq 53
TX_WRITE len 4288, txg 6857403, seq 54
TX_SETATTR len 184, txg 6857403, seq 55
TX_CREATE len 120, txg 6857535, seq 56
TX_WRITE len 4288, txg 6857535, seq 57
TX_SETATTR len 184, txg 6857535, seq 58
Total 6
TX_CREATE 2
TX_WRITE 2
TX_SETATTR 2
Object lvl iblk dblk dsize dnsize lsize %full type
0 6 128K 16K 350M 512 895M 49.33 DMU dnode
-1 1 128K 1.50K 8K 512 1.50K 100.00 ZFS user/group/project used
-2 1 128K 2K 8K 512 2K 100.00 ZFS user/group/project used
Dnode slots:
Total used: 0
Max used: 0
Percent empty: nan
dmu_object_next() = 97
Abort trap
sudo zdb -c zroot
and that too didn't seem to run properly:
Traversing all blocks to verify metadata checksums and verify nothing leaked ...
loading concrete vdev 0, metaslab 227 of 228 ...
47.4G completed (1101MB/s) estimated time remaining: 0hr 03min 44sec
zdb_blkptr_cb: Got error 97 reading <94, 3, 0, f> -- skipping
288G completed (1672MB/s) estimated time remaining: 0hr 00min 00sec
Error counts:
errno count
97 1
No leaks (block sum matches space maps exactly)
bp count: 7355397
ganged count: 28006
bp logical: 421186207744 avg: 57262
bp physical: 300170215936 avg: 40809 compression: 1.40
bp allocated: 310002585600 avg: 42146 compression: 1.36
bp deduped: 0 ref>1: 0 deduplication: 1.00
Normal class: 310002225152 used: 63.59%
Embedded log class 360448 used: 0.02%
additional, non-pointer bps of type 0: 377741
Dittoed blocks on same vdev: 640087
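For what it's worth, errno 97 on FreeBSD is EINTEGRITY, i.e. an integrity/checksum failure, which matches the single data error zpool status reported. It can be checked on the machine itself:
Code:
# should print a line like: #define EINTEGRITY 97
grep -w EINTEGRITY /usr/include/sys/errno.h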
Ended up reinstalling the OS onto the SSD/corrupted.
Code:
zpool status -v