ZFS panic: solaris assert

Hi there,
I have a small home server with a ZFS pool created on FreeBSD 11.1 and upgraded to 12.0 and then 12.1. A few days ago, the server rebooted with a panic:
Code:
panic: solaris assert: size <= (1ULL << 24) (0x1401000 <= 0x1000000), file: /usr/src/sys/cddl/contrib/opensolaris/uts/common/fs/zfs/abd.c, line: 296
cpuid = 3
time = 1574981747
KDB: stack backtrace:
#0 0xffffffff80c1d297 at kdb_backtrace+0x67
#1 0xffffffff80bd05cd at vpanic+0x19d
#2 0xffffffff80bd0423 at panic+0x43
#3 0xffffffff82a6e22c at assfail3+0x2c
#4 0xffffffff8284a8f7 at abd_alloc+0x67
#5 0xffffffff82850319 at arc_hdr_alloc_pabd+0x99
#6 0xffffffff8284d554 at arc_hdr_alloc+0x124
#7 0xffffffff8284ef13 at arc_read+0x243
#8 0xffffffff8287942d at traverse_prefetch_metadata+0xbd
#9 0xffffffff828788cc at traverse_visitbp+0x3dc
#10 0xffffffff82878930 at traverse_visitbp+0x440
#11 0xffffffff82878930 at traverse_visitbp+0x440
#12 0xffffffff82878930 at traverse_visitbp+0x440
#13 0xffffffff82878930 at traverse_visitbp+0x440
#14 0xffffffff82879513 at traverse_dnode+0xd3
#15 0xffffffff82878c30 at traverse_visitbp+0x740
#16 0xffffffff828780a7 at traverse_impl+0x317
#17 0xffffffff8287837c at traverse_pool+0x14c
Uptime: 1m0s
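For reference, the assert compares a requested ARC buffer size against the 16 MiB SPA_MAXBLOCKSIZE cap (1 << 24), and the failing request is about 20 MiB, which looks more like a damaged block pointer than failing hardware. The arithmetic, as a quick sanity check:
Code:
# The two values from the assert, in decimal (plain sh arithmetic)
printf '%d\n' 0x1401000     # 20975616 bytes, ~20 MiB requested
printf '%d\n' $((1 << 24))  # 16777216 bytes, the 16 MiB cap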
I tested the memory and the drives and they look OK. Then I installed FreeBSD 12.1 on a USB drive and ran:
zpool import

Code:
   pool: zroot
     id: 5722521002676846505
  state: ONLINE
 status: Some supported features are not enabled on the pool.
 action: The pool can be imported using its name or numeric identifier, though
        some features will not be available without an explicit 'zpool upgrade'.
 config:

        zroot                 ONLINE
          raidz1-0            ONLINE
            gpt/D0-33GTM0VGS  ONLINE
            gpt/D1-Z1E3Q7KP   ONLINE
            gpt/D2-Z4ZARG1V   ONLINE

I tried to import the pool read-only but got the same panic:
zpool import -o readonly=on -f -F -N -R /pool zroot
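One knob that might be worth setting before retrying is vfs.zfs.recover, which exists as a read-write tunable on 12.x. Note that it only relaxes the zfs_panic_recover() checks, so a hard VERIFY like the one in abd.c may still fire; a hedged sketch:
Code:
# Tell ZFS to try to work around some on-disk inconsistencies instead of
# panicking (does not cover hard VERIFY asserts); then retry the import.
sysctl vfs.zfs.recover=1        # or set vfs.zfs.recover=1 in /boot/loader.conf
zpool import -o readonly=on -f -F -N -R /pool zroot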

zdb -ue zroot
Code:
Uberblock:
    magic = 0000000000bab10c
    version = 5000
    txg = 11751485
    guid_sum = 14638353410936556308
    timestamp = 1574776136 UTC = Tue Nov 26 16:48:56 2019
    checkpoint_txg = 0
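Each label also keeps a ring of recent uberblocks, which are the possible rewind targets for -F/-X. To list them from a member device:
Code:
# -u adds the uberblock ring to the -l label dump; grep narrows the output
zdb -ul /dev/gpt/D0-33GTM0VGS | grep -E 'txg|timestamp'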
zdb -l /dev/gpt/D0-33GTM0VGS
Code:
------------------------------------
LABEL 0
------------------------------------
    version: 5000
    name: 'zroot'
    state: 0
    txg: 11750848
    pool_guid: 5722521002676846505
    hostid: 1550328424
    hostname: ''
    top_guid: 10071190156355008053
    guid: 4016597551985842896
    vdev_children: 1
    vdev_tree:
        type: 'raidz'
        id: 0
        guid: 10071190156355008053
        nparity: 1
        metaslab_array: 39
        metaslab_shift: 35
        ashift: 12
        asize: 5997325713408
        is_log: 0
        create_txg: 4
        children[0]:
            type: 'disk'
            id: 0
            guid: 4016597551985842896
            path: '/dev/gpt/D0-33GTM0VGS'
            whole_disk: 1
            DTL: 70406
            create_txg: 4
        children[1]:
            type: 'disk'
            id: 1
            guid: 18015782253695591313
            path: '/dev/gpt/D1-Z1E3Q7KP'
            whole_disk: 1
            DTL: 70405
            create_txg: 4
        children[2]:
            type: 'disk'
            id: 2
            guid: 13705750593642370773
            path: '/dev/gpt/D2-Z4ZARG1V'
            whole_disk: 1
            DTL: 70323
            create_txg: 4
    features_for_read:
        com.delphix:hole_birth
        com.delphix:embedded_data
------------------------------------
LABEL 1 .. LABEL 3
------------------------------------
    (identical to LABEL 0)
I found a thread describing a similar failure and tried zdb with its rewind options:
zdb -AAA -F -X zroot
Code:
...
                            capacity   operations   bandwidth  ---- errors ----
description                used avail  read write  read write  read write cksum
zroot                     1.47T 3.97T    82     0  351K     0     0     0    30
  raidz1                  1.47T 3.97T    82     0  351K     0     0     0   121
    /dev/gpt/D0-33GTM0VGS                27     0  146K     0     0     0     0
    /dev/gpt/D1-Z1E3Q7KP                 27     0  146K     0     0     0     0
    /dev/gpt/D2-Z4ZARG1V                 27     0  146K     0     0     0     0
    ...
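Since zdb runs in userland and, with -AAA, skips the asserts, it can apparently traverse the pool. The kernel-side equivalent of that extreme rewind, restricted to read-only, might therefore be worth a try; with -n it is a dry run that only reports whether the rewind would succeed:
Code:
# Dry run: would an extreme rewind (-X) make the pool importable?
zpool import -o readonly=on -fFXn zroot
# If that looks sane, do it for real, still read-only
zpool import -o readonly=on -fFX -R /pool zroot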
Is it possible to somehow recover the pool, or at least the data?
 
A: Figuring out the scope of this assert (I would call it a bug) is probably a job for the developer mailing lists.
B: To read the pool, the easiest option might be to do it from a 12.0 installation, because according to what you wrote, that's the last version it worked under. I don't know how hard it would be for you to set the rest of the OS install aside and go back to 12.0; it depends on what else is on the machine.
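A minimal sketch of option B, assuming a FreeBSD 12.0 live/USB environment; the /recover and /backup paths are placeholders for your own mount points:
Code:
# Import read-only under an alternate root, without mounting anything yet
zpool import -o readonly=on -N -R /recover zroot
# Mount the datasets; the pool is read-only, so this cannot make things worse
zfs mount -a
# Copy the data off (net/rsync from packages, or tar/cp from base)
rsync -a /recover/ /backup/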
 
ivans807 The thread you found was about a general-protection fault; here you have an assertion failure. In other words, a developer decided that the asserted condition is so bad it is better to panic immediately. You'll find more answers if you open a PR and/or contact people on the mailing lists.
 
So, what was the result?
Code:
panic: solaris assert: nvlist_lookup_uint64(configs, ZPOOL_CONFIG_POOL_TXG, &txg) == 0, file: /usr/src/sys/cddl/contrib/opensolaris/uts/common/fs/zfs/spa.c, line: 5222
cpuid = 11
time = 36
KDB: stack backtrace: ...
I have the same problem on my FreeBSD 12.1.
Is it possible to recover?
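From what I can tell, that assert fires when the config assembled for the (root) pool import contains no pool txg, which points at a stale or unreadable label on one of the devices. Dumping the labels of a member partition shows whether a txg is recorded at all; ada0p3 below is a placeholder for your own device:
Code:
# Replace ada0p3 with one of the pool's member partitions
zdb -l /dev/ada0p3 | grep -E 'name|state|txg'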
 
Was a PR ever opened for this? I have the same issue, trying to install 11.3 with an encrypted ZFS mirror as zroot. Unencrypted works fine.

And yes, I have to use 11.3, as this is for a customer that is still in the process of evaluating 13.1.
 
The file in question has since disappeared from FreeBSD entirely (the old ZFS code was replaced by OpenZFS), and I can't find a matching assert in the other files. I think the chances of getting help for the old version are slim.
 
I have to use 11.3, as this is for a customer that is still in the process of evaluating 13.1
They'd better hurry up. FreeBSD 13.1 will be end-of-life in a few months (three months after the release of 13.2). So by the time they're done evaluating, it'll be end-of-life and they'll have to start over.

 