zpool import causes core dump and boot loop

Hi all,

My FreeBSD 13.0-STABLE machine is rebooting itself as soon as I run zpool import poolname.
This is a pool that was running perfectly fine on a 12.2-STABLE system that was then upgraded from source to 13.0-STABLE, after which zpool upgrade was run on the pool.

Any help appreciated.

Many thanks,

Gary Hayers
 
I've imported pools regularly and haven't run into an issue, so I'm wondering what's different. Maybe the pool name is the same as the one you booted from? Can you post the output of zpool status and zpool import before actually importing the pool?
 
Thanks for the reply,

I can import the pool read-only, so it's not an issue with the naming of the pool; the boot pool is zroot and the pool I'm trying to import is called home0.

Code:
zpool status
  pool: home0
 state: ONLINE
  scan: scrub repaired 0B in 00:01:03 with 0 errors on Wed Nov 11 13:49:09 2020
config:

        NAME        STATE     READ WRITE CKSUM
        home0       ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            ada1    ONLINE       0     0     0
            ada2    ONLINE       0     0     0

errors: No known data errors

  pool: zroot
 state: ONLINE
config:

        NAME        STATE     READ WRITE CKSUM
        zroot       ONLINE       0     0     0
          ada0p3    ONLINE       0     0     0

errors: No known data errors
If I export the pool and then run zpool import home0, I get the following:
Code:
panic: VERIFY3(sa.sa_magic == SA_MAGIC) failed (3100794 == 3100762)

cpuid = 2
time = 1617913173
KDB: stack backtrace:
#0 0xffffffff80c57b15 at kdb_backtrace+0x65
#1 0xffffffff80c0a4a1 at vpanic+0x181
#2 0xffffffff8213c11a at spl_panic+0x3a
#3 0xffffffff822da30f at zpl_get_file_info+0x1cf
#4 0xffffffff821a5359 at dmu_objset_userquota_get_ids+0x319
#5 0xffffffff821b94c4 at dnode_setdirty+0x34
#6 0xffffffff8219069a at dbuf_dirty+0x8ea
#7 0xffffffff821a6f00 at dmu_objset_space_upgrade+0x40
#8 0xffffffff821a59a1 at dmu_objset_id_quota_upgrade_cb+0x151
#9 0xffffffff821a6def at dmu_objset_upgrade_task_cb+0x7f
#10 0xffffffff8213db8f at taskq_run+0x1f
#11 0xffffffff80c6b7b1 at taskqueue_run_locked+0x181
#12 0xffffffff80c6cacc at taskqueue_thread_loop+0xac
#13 0xffffffff80bc865e at fork_exit+0x7e
#14 0xffffffff8106377e at fork_trampoline+0xe
Uptime: 4s
Dumping 968 out of 16202 MB:..2%..12%..22%..32%..42%..52%..62%..72%..81%..91%
Many thanks,

Gary Hayers
 
If I export the pool and then run zpool import home0, I get the following:
You can also try importing without mounting, and/or read-only.
-N Import the pool without mounting any file systems.
and
readonly=on | off
If set to on, the pool will be imported in read-only mode with the
following restrictions:

• Synchronous data in the intent log will not be accessible

• Properties of the pool can not be changed

• Datasets of this pool can only be mounted read-only

• To write to a read-only pool, an export and import of the pool is required.
I'm not saying this will solve the problem, but it may help to diagnose it.
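For example, using your pool name (just a sketch, combining the options quoted above):
Code:
# import without mounting any file systems
zpool import -N home0
# or import read-only
zpool import -o readonly=on home0
# or both at once
zpool import -N -o readonly=on home0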

EDIT: Seems that you have created the mirror on unpartitioned disks. If so, this may be the cause.
 
If you can only import it r/o then I'd start by checking the status of your hardware. See, the moment you import a ZFS pool the system will immediately start writing (system) data, data which is important for the pool, and if those writes fail you get a bit of a cascading effect (depending on the seriousness).

That makes me suspect you might have a disk problem, so I'd start there.
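If you have sysutils/smartmontools installed, a quick first look at each disk might be something like this (device names taken from your zpool status output):
Code:
# dump SMART data and look for reallocated/pending/uncorrectable sectors
smartctl -a /dev/ada1 | egrep -i 'reallocated|pending|uncorrect'
smartctl -a /dev/ada2 | egrep -i 'reallocated|pending|uncorrect'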
 
Seems that you have created the mirror on unpartitioned disks. If so, this may be the cause.
This shouldn't be a problem for data disks. You need partitions on a system pool (the one you're booting from) because you need a freebsd-boot and/or efi partition to boot from.

Code:
root@hosaka:~ # zpool status stor10k
  pool: stor10k
 state: ONLINE
status: Some supported and requested features are not enabled on the pool.
        The pool can still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
        the pool may no longer be accessible by software that does not support
        the features. See zpool-features(5) for details.
  scan: resilvered 91.2G in 01:54:39 with 0 errors on Sat Mar 23 16:05:46 2019
config:

        NAME        STATE     READ WRITE CKSUM
        stor10k     ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            da3     ONLINE       0     0     0
            da0     ONLINE       0     0     0
          mirror-1  ONLINE       0     0     0
            da2     ONLINE       0     0     0
            da1     ONLINE       0     0     0

errors: No known data errors
(Yes, I still need to upgrade the pool, haven't done that yet)
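When I get around to it, it's a one-liner:
Code:
zpool upgrade -v        # list the features this system supports
zpool upgrade stor10k   # enable them all on the pool (irreversible)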
 
My FreeBSD 13.0-STABLE machine is rebooting itself as soon as I run zpool import poolname.
This is a pool that was running perfectly fine on a 12.2-STABLE system that was then upgraded from source to 13.0-STABLE, after which zpool upgrade was run on the pool.
So if I read this right, this pool comes from MachineA running 13.0-STABLE and you're importing it on MachineB that's also running 13.0-STABLE?

Are you sure those 13.0-STABLE versions are the same? The VERIFY3(sa.sa_magic == SA_MAGIC) failed (3100794 == 3100762) seems to imply feature differences, i.e. you're trying to import a newer feature set than is supported by your 13.0-STABLE. In other words, the 13.0-STABLE you're importing on is older than the 13.0-STABLE the pool came from.
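You can check for a mismatch without a writable import; a sketch:
Code:
# feature flags supported by this system's ZFS
zpool upgrade -v
# scan for importable pools; unsupported features are reported here
zpool import
# on a pool that is imported (read-only is fine), list enabled features
zpool get all home0 | grep feature@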
 
This shouldn't be a problem for data disks. You need partitions on a system pool (the one you're booting from) because you need a freebsd-boot and/or efi partition to boot from.
What happens when the whole disk is used, unpartitioned, and later one disk loses some blocks and the size decreases? Just asking; this is just one hypothesis.

But being able to import read-only is also an indication of a feature mismatch. I have personal experience with an upgraded OpenZFS pool that became non-importable under 12.2. Importing r/o was OK; the new features affected only writes. So I created a new empty pool on a separate disk under 12.2, imported the upgraded pool r/o, and used zfs send to move the data to the freshly created empty pool.
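Roughly what that looked like, as a sketch (the spare disk da4 and the snapshot name @last are placeholders; note you can't create new snapshots on a read-only pool, so an existing snapshot is needed):
Code:
zpool create rescue da4               # new empty pool on a spare disk
zpool import -o readonly=on home0     # bring the upgraded pool in read-only
# replicate everything from an existing snapshot to the new pool
zfs send -R home0@last | zfs recv -dF rescue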

In this case I would also try physically removing one drive from that mirror and importing from each drive separately. It may also be some sort of exotic drive error.
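To test each half of the mirror on its own you can restrict the device scan, something like:
Code:
# only look at the given device for pool labels; the pool should
# import DEGRADED from a single healthy half of the mirror
zpool import -d /dev/ada1 -o readonly=on home0
# then export and try the other disk
zpool import -d /dev/ada2 -o readonly=on home0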
 
Make a backup.
Destroy your current pool and create a new one, then restore the data. It will be faster than fixing the current one.

Thanks, you are right; pretty much what I did was back up the data on the dataset, destroy it and the pool, and re-create the pool and dataset, roughly as sketched below.
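For anyone finding this later (snapshot name and backup target are placeholders, and this assumes the data was still readable):
Code:
zfs snapshot -r home0@backup                    # snapshot everything
zfs send -R home0@backup > /backup/home0.zsend  # stream to a file elsewhere
zpool destroy home0
zpool create home0 mirror ada1 ada2             # re-create the mirror
zfs recv -dF home0 < /backup/home0.zsend        # restore the data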
FYI: the pool was imported from FreeBSD 12.2-STABLE to 13.0-STABLE

Thanks all.
 
What happens when the whole disk is used, unpartitioned, and later one disk loses some blocks and the size decreases? Just asking; this is just one hypothesis.
That's a different situation: on a mirrored system pool you need the boot partitions on every disk, yes. But as this is a data pool, which is not used to boot the system, those partitions don't need to exist.
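That said, if you prefer partitions on data disks anyway (GPT labels keep device renumbering harmless), a sketch, with made-up label names:
Code:
gpart create -s gpt ada1
gpart add -t freebsd-zfs -l home0-d1 ada1
gpart create -s gpt ada2
gpart add -t freebsd-zfs -l home0-d2 ada2
zpool create home0 mirror gpt/home0-d1 gpt/home0-d2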
 