System panic

I'm only guessing now, but I think because it was a clone it's messing up the status. If you don't need the data on that da0 disk (the backup-disk-to-be) I'd rather delete it and start from scratch.
 
I'm only guessing now, but I think because it was a clone it's messing up the status. If you don't need the data on that da0 disk (the backup-disk-to-be) I'd rather delete it and start from scratch.
Not sure I feel comfortable deleting and starting from scratch given that some of the data is corrupted on the SSD I'll be backing up from.

Yes the HDD is an older version - maybe I shouldn't import it as you said - but then how else can I mount it?
 
I was hoping to use zfs so that file permissions etc. stay the same, and it's possibly easier. If the data is corrupted (as it seems), maybe zfs is a better option than rsync?
If the data is corrupted you're done; nothing will help you. I would personally prefer a file-level backup at this point rather than zfs. But in the end it's your choice as sysadmin. zfs send does give you that option.

Of course rsync does preserve ownership/permissions, even file flags and ACLs. For a basic setup rsync -avxH .. is enough. Note the -x, so it doesn't cross filesystem boundaries - maybe something you actually want.
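For example, something along these lines (paths here are just placeholders, assuming the backup pool ends up mounted under /mnt/backup):
Code:
# -a = archive (ownership, permissions, times, symlinks), -x = stay on one
# filesystem, -H = preserve hard links; trailing slashes matter to rsync
rsync -avxH /usr/home/ /mnt/backup/home/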

Question is: do you need the data on that da0 disk? The one you want to put your data on. If not, I'm suggesting you delete it and create a new backup pool where you push your data. Don't name it the same as your zroot, so you can import it on the current machine.
Also, if you don't care about the data on da0 you could even do dd if=/dev/ada0 of=/dev/da0 and let it do a 1:1 backup.
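For what it's worth, a rough sketch of what that could look like (the pool name zbackup and the snapshot name are just placeholders, and this assumes da0 can be wiped):
Code:
# create a fresh backup pool on da0 under a different name
zpool create -f zbackup da0
# snapshot the running pool recursively and push it over; -u on receive
# keeps the received datasets from being mounted over the live system
zfs snapshot -r zroot@backup-2022-12
zfs send -R zroot@backup-2022-12 | zfs receive -uF zbackup/zroot-backup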
 
Question is: do you need the data on that da0 disk? The one you want to put your data on. If not, I'm suggesting you delete it and create a new backup pool where you push your data. Don't name it the same as your zroot, so you can import it on the current machine.
Also, if you don't care about the data on da0 you could even do dd if=/dev/ada0 of=/dev/da0 and let it do a 1:1 backup.
I wonder if his zpool from a 468G freebsd-zfs partition can be copied that way to a 7G USB flash drive. It will have to be a different disk.
 
When you look at his zdb output you can spot a problem - the pool on the backup is an exact copy of the running pool. The red flag in that output is the path: field, which doesn't correspond to the provider queried with zdb.
The disk that is supposed to be the backup disk is a clone (of sorts) of the currently running disk, hence this problem. In a private conversation I suggested he boot a live FreeBSD medium and change the name (zpool import ..) and its GUID (zpool reguid).
Then once that's done, boot back into the system and do the zdb comparison again. ZFS will not import the pool if there's a problem, but zdb is a good way to check.
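Roughly, from the live environment, something like this (zbackup is only an example name, and <numeric-id> stands for whatever id zpool import lists for the pool on da0):
Code:
# list importable pools and note the numeric id of the one on da0
# (both pools may show up under the same name here)
zpool import
# import that pool by its numeric id under a new name, without mounting datasets
zpool import -f -N <numeric-id> zbackup
# give it a new GUID so it no longer matches the running pool, then export it
zpool reguid zbackup
zpool export zbackup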

This is all done to a) avoid buying a new backup disk, b) not touch the older data on the old disk, and c) try to back up the current data.
 
Yes, ZFS naming must be paid attention to first and foremost. With 2 pools named the same (I had just started using ZFS back then) I ended up with an unresponsive system. And there was no way out of it except to disconnect the offending drive.
 
You can try this zdb option. Would do it after a backup.

--all-reconstruction
I tried this - didn't work
Code:
zdb --all-reconstruction
zdb: illegal option -- -
Usage:    zdb [-AbcdDFGhikLMPsvXy] [-e [-V] [-p <path> ...]] [-I <inflight I/Os>]
        [-o <var>=<value>]... [-.....
Of course rsync does preserve ownership/permissions, even file flags and ACLs. For a basic setup rsync -avxH .. is enough. Note the -x, so it doesn't cross filesystem boundaries - maybe something you actually want.
I did rsync - with different options - and on the face of it it seemed to copy the files.
Question is: do you need the data on that da0 disk?
Yes, until I can fix the SSD for good - the da0/HDD is an older version which went on to become the SSD/corrupted one - so it has some version history of the data on the corrupted drive.
 
This zfs corruption is really messed up - I can't even see the home folder under previous snapshots when I mount them!
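(For reference, snapshots can also be browsed read-only through the hidden .zfs directory without mounting them separately; this assumes home lives in zroot/usr/home as in the zfs list output further down:)
Code:
# list snapshots of the home dataset, then browse one read-only
zfs list -t snapshot -r zroot/usr/home
ls /usr/home/.zfs/snapshot/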

Also it doesn't show me programs like Firefox when I try to run them from the launcher in xmonad - although the terminal correctly recognizes that there's a program called "firefox" on the system.
 
A few commands:
Code:
zpool import
zpool list -v
zpool status -x
zfs mount -a
zfs list
kenv | egrep "currdev|mountfrom|kernel_path|kernelname"
 
A few commands:
Code:
zpool import
zpool list -v
zpool status -x
zfs mount -a
zfs list
kenv | egrep "currdev|mountfrom|kernel_path|kernelname"
Code:
sudo zpool import
Password:
no pools available to import
Code:
zpool list -v
NAME           SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
zroot          456G   289G   167G        -         -    48%    63%  1.00x    ONLINE  -
  ada0p3.eli   456G   289G   167G        -         -    48%  63.3%      -    ONLINE
Code:
sudo zpool status -x
  pool: zroot
 state: ONLINE
status: One or more devices has experienced an error resulting in data
    corruption.  Applications may be affected.
action: Restore the file in question if possible.  Otherwise restore the
    entire pool from backup.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-8A
  scan: scrub repaired 0B in 00:27:07 with 2 errors on Wed Dec 21 04:23:43 2022
config:

    NAME          STATE     READ WRITE CKSUM
    zroot         ONLINE       0     0     0
      ada0p3.eli  ONLINE       0     0     0

errors: 1 data errors, use '-v' for a list
zfs mount -a
Code:
zfs list
NAME                                            USED  AVAIL     REFER  MOUNTPOINT
zroot                                           289G   153G       88K  /zroot
zroot/ROOT                                      116G   153G       88K  none
zroot/ROOT/12.3-RELEASE-p1_2022-03-18_164224      8K   153G     17.5G  /
zroot/ROOT/12.3-RELEASE-p3_2022-03-23_175807      8K   153G     17.4G  /
zroot/ROOT/12.3-RELEASE-p4_2022-04-06_232036      8K   153G     17.9G  /
zroot/ROOT/12.3-RELEASE-p5_2022-08-10_011525      8K   153G     27.0G  /
zroot/ROOT/12.3-RELEASE-p6_2022-09-03_171127      8K   153G     29.3G  /
zroot/ROOT/12.3-RELEASE-p7_2022-09-10_230907      8K   153G     29.7G  /
zroot/ROOT/12.3-to-13.1                           8K   153G     29.6G  /
zroot/ROOT/13.0-RELEASE-p11_2022-07-01_213226     8K   153G     23.7G  /
zroot/ROOT/13.1-RELEASE-p2_2022-09-10_231247    700K   153G     29.8G  /
zroot/ROOT/13.1-RELEASE-p2_2022-09-10_232433    644M   153G     29.0G  /
zroot/ROOT/13.1-RELEASE-p2_2022-09-11_220401    716K   153G     29.1G  /
zroot/ROOT/13.1-RELEASE-p2_2022-11-06_014830    856K   153G     26.7G  /
zroot/ROOT/13.1-RELEASE-p3_2022-11-16_133737    123M   153G     26.8G  /
zroot/ROOT/13.1-RELEASE-p4_2022-12-03_162103    114G   153G     29.0G  /
zroot/ROOT/13.1-RELEASE-p4_2022-12-10_051051    527M   153G     29.0G  /
zroot/ROOT/13.1-RELEASE-p4_2022-12-10_061442    574M   153G     29.0G  /
zroot/ROOT/13.1-RELEASE-p4_2022-12-10_180529      8K   153G     28.8G  /
zroot/ROOT/13.1-p2-after-destroying            65.6M   153G     23.0G  /
zroot/tmp                                      62.5M   153G     62.5M  none
zroot/usr                                       172G   153G       88K  /usr
zroot/usr/home                                  170G   153G      170G  /usr/home
zroot/usr/ports                                1.50G   153G     1.50G  /usr/ports
zroot/usr/src                                   771M   153G      771M  /usr/src
zroot/var                                       269M   153G       88K  /var
zroot/var/audit                                  88K   153G       88K  /var/audit
zroot/var/crash                                 267M   153G      267M  /var/crash
zroot/var/log                                  1.91M   153G     1.91M  /var/log
zroot/var/mail                                  112K   153G      112K  /var/mail
zroot/var/tmp                                    88K   153G       88K  /var/tmp
Code:
kenv | egrep "currdev|mountfrom|kernel_path|kernelname"
currdev="zfs:zroot/ROOT/13.1-RELEASE-p4_2022-12-03_162103:"
kernel_path="/boot/kernel"
kernelname="/boot/kernel/kernel"
vfs.root.mountfrom="zfs:zroot/ROOT/13.1-RELEASE-p4_2022-12-03_162103"
 
Does any of this help?

Think I am going to proceed with reinstalling the OS (with zfs) onto the corrupted disk/SSD in the next couple of hours. So far no resolution seems to have come about, nor does the bug report show any progress.
 
You can try this zdb option. Would do it after a backup.

--all-reconstruction
If I understood correctly, the "-Y" option does the same and the zpool needs to be specified

so I ran sudo zdb -Y zroot and it terminated like this - not sure if there are any zfs/zdb experts here:
Code:
Dataset zroot/ROOT/13.1-RELEASE-p4_2022-12-03_162103 [ZPL], ID 590, cr_txg 6764419, 29.0G, 904448 objects

    ZIL header: claim_txg 0, claim_blk_seq 0, claim_lr_seq 0 replay_seq 0, flags 0x0

        TX_CREATE           len    120, txg 6857403, seq 53
        TX_WRITE            len   4288, txg 6857403, seq 54
        TX_SETATTR          len    184, txg 6857403, seq 55
        TX_CREATE           len    120, txg 6857535, seq 56
        TX_WRITE            len   4288, txg 6857535, seq 57
        TX_SETATTR          len    184, txg 6857535, seq 58
        Total               6
        TX_CREATE           2
        TX_WRITE            2
        TX_SETATTR          2


    Object  lvl   iblk   dblk  dsize  dnsize  lsize   %full  type
         0    6   128K    16K   350M     512   895M   49.33  DMU dnode
        -1    1   128K  1.50K     8K     512  1.50K  100.00  ZFS user/group/project used
        -2    1   128K     2K     8K     512     2K  100.00  ZFS user/group/project used

    Dnode slots:
    Total used:             0
    Max used:               0
    Percent empty:        nan

dmu_object_next() = 97
Abort trap

There was also this command I ran, sudo zdb -c zroot, and that too didn't seem to run properly

Code:
Traversing all blocks to verify metadata checksums and verify nothing leaked ...

loading concrete vdev 0, metaslab 227 of 228 ...
47.4G completed (1101MB/s) estimated time remaining: 0hr 03min 44sec        zdb_blkptr_cb: Got error 97 reading <94, 3, 0, f>  -- skipping
 288G completed (1672MB/s) estimated time remaining: 0hr 00min 00sec        
Error counts:

    errno  count
       97  1

    No leaks (block sum matches space maps exactly)

    bp count:               7355397
    ganged count:             28006
    bp logical:        421186207744      avg:  57262
    bp physical:       300170215936      avg:  40809     compression:   1.40
    bp allocated:      310002585600      avg:  42146     compression:   1.36
    bp deduped:                   0    ref>1:      0   deduplication:   1.00
    Normal class:      310002225152     used: 63.59%
    Embedded log class         360448     used:  0.02%

    additional, non-pointer bps of type 0:     377741
    Dittoed blocks on same vdev: 640087

So something is definitely up with that 97 / errno 97 in zfs - any experts who know what it is and whether it's fixable? 👀
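If it helps to narrow it down, the error number can be looked up in the system headers (the path below is the usual FreeBSD location):
Code:
# look up which errno constant 97 corresponds to
grep -w 97 /usr/include/sys/errno.h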
 