13.1 RC-3 Problem

I posted this earlier on the freebsd-stable list, but got no interest, so I'm posting here:

I'm running a 13.1-RC3 server that has a ZFS problem that didn't exist under 13.0-RELEASE.

First, here is the configuration of the server. The operating system is on an NVMe drive (nvd0) with all of its partitions UFS. There are also 8 UFS-formatted SAS drives. All of these show up when rebooting. Finally, I have 2 drives in a ZFS mirror that holds the home directories and the data for a MySQL database. None of the ZFS datasets mount when rebooting, but if I do a "zpool import" afterwards, all of the ZFS datasets mount.
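For reference, the manual recovery looks roughly like this; the pool name "tank" is a placeholder here, since the actual pool name isn't given in the post:

# list pools that are visible to the system but not yet imported
zpool import
# import the mirror by name (or use `zpool import -a` for every pool found)
zpool import tank
# mount any datasets that did not mount automatically on import
zfs mount -a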

Looking at dmesg after rebooting, it shows the following lines after the nvd0 drive shows up:

ZFS filesystem version: 5
ZFS storage pool version: features support (5000)
pid 48 (zpool), jid 0, uid 0: exited on signal 6
pid 49 (zpool), jid 0, uid 0: exited on signal 6

Further on in dmesg, the other drives show up: the 8 SAS drives and the 2 ZFS drives. It appears ZFS is trying to configure itself before it can know about its drives?

Do I have something misconfigured in 13.1? It has worked flawlessly in 13.0 for almost a year.

Rick
 
Probably not related, but this on Twitter (Colin Percival):

Filed under "weird bugs which only seem to show up when we're about to do a release": FreeBSD's encrypted disk support was broken in 13.1 RCs because a kernel module was 128k+8 bytes long and ended with 8 bytes of zeroes. Many thanks to Kyle Evans for debugging and fixing this!
 
Reproducible with 13.1-RC4? If so, please make a bug report.
Yes, it is the same in 13.1-RC4. I should also mention that each of the SAS drives and each of the SATA drives used for ZFS has a gpart label, and fstab uses those labels. However, since nvd0 is the only such "drive" in the box, it does not have a label.
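In case it helps, that setup looks roughly like the following; the device, label ("disk0") and mount point ("/data0") are made up for illustration, not taken from this machine:

# show the GPT labels on one of the drives instead of the partition types
gpart show -l da0

# example /etc/fstab line mounting a UFS partition by its label
/dev/gpt/disk0    /data0    ufs    rw    2    2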
 
ZFS filesystem version: 5
ZFS storage pool version: features support (5000)
pid 48 (zpool), jid 0, uid 0: exited on signal 6
pid 49 (zpool), jid 0, uid 0: exited on signal 6

Do you get the same SIGABRT when running the zpool command after startup?

If so, you can run it in gdb to get a backtrace.
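Something along these lines should work; note that gdb is not in the FreeBSD base system, so this assumes it was installed from packages (e.g. `pkg install gdb`):

gdb --args /sbin/zpool import
(gdb) run
# once it stops on the SIGABRT, print the backtrace
(gdb) bt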
 
… 2 drives in a ZFS mirror …

… each of the sata drives for zfs has a gpart label and fstab uses those labels. …

Try not using fstab(5) for mounts of ZFS file systems.

<https://serverfault.com/a/943079/91969> (Allan Jude), and so on. As far as I know, it's proper (or commonly preferred) to allow rc(8) to perform the mounts.
(Recent <https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=262468#c3> helped me to think about order.)
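A minimal sketch of what the rc(8)-based approach usually looks like; the pool and dataset names ("tank", "tank/home") are placeholders, not taken from the thread:

# /etc/rc.conf -- have rc(8) import the pools and run `zfs mount -a` at boot
zfs_enable="YES"

# give the dataset its own mount point instead of an fstab entry
zfs set mountpoint=/home tank/home

With that in place the datasets mount according to their mountpoint properties, and /etc/fstab only needs to list the UFS file systems.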
 
After upgrading to 13.1-RC6, I also see a "pid 12218 (zpool), jid 0, uid 0: exited on signal 6" message in my logs, but the ZFS datasets mount correctly.
 
Our hyperactive "cross-reference" fetishist may want to note that he neglected to mention that 13.1-RELEASE has been out since May 16, 2022.
 
I experienced RC problems
... then it makes just a little more sense to get the final release, now that it is out, if you still want to report PRs.

Release candidates are obsolete by definition when the final release is available.

It makes me wonder why that even needs to be said.
 