[Solved] Broke my GPT UEFI-only boot because I experimented with edonr just once

My symptoms are typical and my error not uncommon: I failed to update my bootloader after an update and before rebooting. There's plenty of advice out there, but before I hose something worse, I'm hoping someone can verify the correct steps to fix this, and then how to avoid future hassles when an update requires the bootloader incantations these modern UEFI thangs demand.

On reboot, I got what appears (from searching around) to be a fairly standard unhappy screen:

1712060154591.png

(note: lightly cleaned up from the raw OCR, which swapped 0/8 and y/g among other substitutions; digits inside the GUIDs and sizes may still be wrong)
Consoles: EFI console
ZFS: unsupported feature: org.illumos:edonr
ZFS: pool zroot is not supported
Reading loader env vars from /efi/freebsd/loader.env
Setting currdev to disk0p1:
FreeBSD/amd64 EFI loader, Revision 1.1
Command line arguments: loader.efi
Image base: 0x77456000
EFI version: 2.40
EFI Firmware: HP (rev 3.7600)
Console: efi (0)
Load Path: \efi\freebsd\loader.efi
Load Device: PciRoot(0x0)/Pci(0x1,0x0)/Pci(0x0,0x0)/Scsi(0x10,0x0)/HD(1,GPT,88451DF9-0313-11EE-8858-1898E8229188,0x28,0x82000)
BootCurrent: 0017
BootOrder: 0017[*] 0000 0008 0002 0001 0003 0004 0005 0006 0007 000a 0009 000b 000c 000d 000e 000f 0010 0011 0012 0013 0014 0015 0016
BootInfo Path: HD(1,GPT,88451DF9-0313-11EE-8858-1898E8229188,0x28,0x82000)\efi\freebsd\loader.efi
Ignoring Boot0017: Only one DP found
Trying ESP: PciRoot(0x0)/Pci(0x1,0x0)/Pci(0x0,0x0)/Scsi(0x10,0x0)/HD(1,GPT,88451DF9-0313-11EE-8858-1898E8229188,0x28,0x82000)
Setting currdev to disk0p1:
Trying: PciRoot(0x0)/Pci(0x1,0x0)/Pci(0x0,0x0)/Scsi(0x10,0x0)/HD(2,GPT,884B7BF2-031B-11EE-8858-1898E8229188,0x82000,0x400000)
Setting currdev to disk0p2:
Trying: PciRoot(0x0)/Pci(0x1,0x0)/Pci(0x0,0x0)/Scsi(0x10,0x0)/HD(3,GPT,88534778-031B-11EE-8858-1898E82291B8,0x482000,0x5CDE8000)
Setting currdev to
Failed to find bootable partition
Press any key to interrupt reboot in 4 seconds

After downloading FreeBSD-14.0-RELEASE-amd64-bootonly.iso and booting it via iLO, I could proceed with gpart show to enumerate my disk structure: GPT, an EFI partition, and no freebsd-boot partition.

1712060479880.png


I reviewed what appears to be the most current canonical how-to, Emrion's thread: https://forums.freebsd.org/threads/update-of-the-bootcodes-for-a-gpt-scheme-x64-architecture.80163/ and, if I read it right, the incantations (repeated for all 8 drives) should be something like:
mount -t msdosfs /dev/da0p1 /mnt (adjusting the thread's "ad0p1" naming to my drives' "da0p1")
Then, if I'm reading correctly, the corrective commands should be (and the advice is to do both, at least on FreeBSD-only systems):
cp /boot/loader.efi /mnt/efi/boot/bootx64.efi
and
cp /boot/loader.efi /mnt/efi/freebsd/loader.efi

Easy enough so far, but here's where I'm having a moment of doubt and insecurity - before copying I checked the checksums with cksum and:
1712061118262.png

The checksums are all already identical. Copying would do no harm, but what would it accomplish? I'm paralyzed by checksum-induced uncertainty, as is so often the case these days.
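For the record, the comparison above can be done in one pass over all the ESPs; a sketch, assuming (as on my box) the ESP is partition 1 on each of da0 through da7 and the installed loader is at /boot/loader.efi:

```shell
# Compare the installed loader's checksum against what each ESP carries.
# Device names da0..da7 and the partition index are assumptions from my
# setup; adjust to match `gpart show`. A differing cksum means a stale loader.
cksum /boot/loader.efi
for d in 0 1 2 3 4 5 6 7; do
  mount -t msdosfs /dev/da${d}p1 /mnt
  cksum /mnt/efi/freebsd/loader.efi /mnt/efi/boot/bootx64.efi
  umount /mnt
done
```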

I see SirDice's comment in this post about regenerating the EFI boot manager entries with efibootmgr, which sounds promising but is a different procedure than the one recommended in the most-referenced "how to unbork your update mistake" thread. When booting from removable media, would that command be something like:
mount -t msdosfs /dev/da0p1 /mnt
efibootmgr -a -c -l /mnt/efi/freebsd/loader.efi -L FreeBSD-14
my system not being dual/multi-boot with other OSes on it, also:
efibootmgr -a -c -l /mnt/efi/boot/bootx64.efi -L FreeBSD-14?
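If the efibootmgr route is right, the per-disk version might look like this (a sketch, assuming da0 through da7 with the ESP as partition 1, and labels of my own invention):

```shell
# Recreate one boot-manager entry per disk while booted from removable media.
# -c creates the entry, -l names the loader it points at, -L labels it,
# -a marks it active. Device names and labels are assumptions.
for d in 0 1 2 3 4 5 6 7; do
  mount -t msdosfs /dev/da${d}p1 /mnt
  efibootmgr -a -c -l /mnt/efi/freebsd/loader.efi -L "FreeBSD-14-da${d}"
  umount /mnt
done
```

As I understand it, \efi\boot\bootx64.efi is the firmware's removable-media fallback path and gets tried automatically, so it shouldn't need its own entry; but that's my reading, not gospel.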


And being sure, from now on, to update the bootloader after any OS or ZFS update of a GPT-UEFI FreeBSD 14 system, before rebooting, with:
cp /boot/loader.efi /boot/efi/efi/freebsd/loader.efi
my system not being dual/multi-boot with other OSes, also:
cp /boot/loader.efi /boot/efi/efi/boot/bootx64.efi
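For what it's worth, those paths work without mounting anything because recent installers keep the ESP permanently mounted at /boot/efi; a sketch of the /etc/fstab line that does it, assuming (my assumption) the first ESP is da0p1:

```
/dev/da0p1    /boot/efi    msdosfs    rw    2    2
```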
 
I'm a bit lost in your tale. The message you get at boot suggests a problem like this: you upgraded your zpool but not the EFI loader. But you already had the latest version of that loader, which isn't possible unless you did not upgrade your FreeBSD version, or you had already updated the loader.

Which upgrade is concerned (e.g. 13.2-RELEASE to 14.0-RELEASE)?

Did you install sysutils/openzfs?
 
Hi, I started with 14-RELEASE on new hardware. It has been a few late nights and I'm not 100% sure of the exact sequence, but I tried freebsd-update (I think that worked) and might have tried zpool upgrade -a.

I was experimenting with encryption and benchmarking edonr vs. blake3 and zstd vs. lz4 vs. gzip-9. I then rebooted and hung at the screenshot above. Since posting, I've downloaded and booted FreeBSD-14.0-STABLE-amd64-20240328-77205dbc1397-267062-disc1.iso; the loader.efi on that has a different checksum (1579878826). I tried copying it to both locations on the mounted EFI partitions (loader.efi and bootx64.efi) and that didn't help or change the boot message. So I booted to live mode on the CD, brought up ssh, and ran
gpart bootcode -p /boot/boot1.efi -i1 da0
(through da7) and rebooted, and now there are no attached UEFI boot devices. I mean, that seems easy enough to clean up, and if need be I can rebuild from scratch, but I'd much prefer a reliable way out of this jam, 'cause I'm sure it'll happen again someday.
 
You have too many EFI entries. I would clean them up first.
EFI gets auto-installed on every device in an array. This isn't a choice I made; the installer CD did it as part of the install process, and it seems to be recommended. I did originally set the machine up GELI-encrypted and then removed that encryption, because ZFS encryption seems more flexible. I did the exact same process on another box, which reboots fine (though that other box got neither the freebsd-update to a later incremental release of 14 than the "stable" installer, nor the zpool upgrade -a, which here returned "all feature flags already installed").
 
And being sure, from now on, to update the bootloader after any OS or ZFS update of a GPT-UEFI FreeBSD 14 system, before rebooting, with:
cp /boot/loader.efi /boot/efi/efi/freebsd/loader.efi
my system not being dual/multi-boot with other OSes, also:
cp /boot/loader.efi /boot/efi/efi/boot/bootx64.efi
So I booted to live mode on the CD and brought up ssh and ran
gpart bootcode -p /boot/boot1.efi -i1 da0
(through da7) and rebooted, and now there are no attached UEFI boot devices.

The issue here is not the EFI boot loader but an unsupported zpool feature. You need to recreate the pool (reinstall the system).

I was experimenting with encryption and benchmarking edonr vs blake3
ZFS: unsupported feature: org.illumos:edonr
ZFS: pool zroot is not supported
zpool-features(7)
Rich (BB code):
     edonr
             GUID                  org.illumos:edonr
             DEPENDENCIES          extensible_dataset
             READ-ONLY COMPATIBLE  no

             This feature enables the use of the Edon-R hash algorithm for
             checksum, including for nopwrite (if compression is also enabled,
             an overwrite of a block whose checksum matches the data being
             written will be ignored).  In an abundance of caution, Edon-R
             requires verification when used with dedup: zfs set
             dedup=edonr,verify (see zfs-set(8)).

             Edon-R is a very high-performance hash algorithm that was part of
             the NIST SHA-3 competition.  It provides extremely high hash
             performance (over 350% faster than SHA-256), but was not selected
             because of its unsuitability as a general purpose secure hash
             algorithm.  This implementation utilizes the new salted
             checksumming functionality in ZFS, which means that the checksum
             is pre-seeded with a secret 256-bit random key (stored on the
             pool) before being fed the data block to be checksummed.  Thus
             the produced checksums are unique to a given pool, preventing
             hash collision attacks on systems with dedup.

             When the edonr feature is set to enabled, the administrator can
             turn on the edonr checksum on any dataset using zfs set
             checksum=edonr dset (see zfs-set(8)).  This feature becomes
             active once a checksum property has been set to edonr, and will
             return to being enabled once all filesystems that have ever had
             their checksum set to edonr are destroyed.
 
Is this correct?
...This feature becomes
active once a checksum property has been set to edonr, and will
return to being enabled once all filesystems that have ever had
their checksum set to edonr are destroyed.

(not disabled)

I did, indeed, set a dataset to edonr, then set it to blake3 before rebooting, not because I thought leaving it edonr would cause such a catastrophe, but because I was experimenting.
Code:
NAME                                        USED  ENCRYPTION   COMPRESS        RATIO  CHECKSUM   MOUNTPOINT    SYNC
zroot/crypt                                 869M  aes-256-gcm  on              5.74x  on         /zroot/crypt  standard
write: IOPS=15.4k, BW=60.1 MiB/s
zroot/crypt                                 869M  aes-256-gcm  on              5.74x  edonr      /zroot/crypt  standard
write: IOPS=12.6k, BW=49.4MiB/s
zroot/crypt                                 664M  aes-256-gcm  zstd-19         9.08x  edonr      /zroot/crypt  standard
write: IOPS=1292, BW=5.17MiB/s
zroot/crypt                                 589M  aes-256-gcm  zstd-2          10.76x  edonr      /zroot/crypt  standard
write: IOPS=7692, BW=30.0MiB/s
zroot/crypt                                 589M  aes-256-gcm  gzip-9          8.33x  edonr      /zroot/crypt  standard
write: IOPS=3730, BW=14.6MiB/s
zroot/crypt                                 869M  aes-256-gcm  lz4             6.23x  blake3     /zroot/crypt  standard
write: IOPS=13.2k, BW=51.5MiB/s

The /crypt dataset is non-booting, just a test dataset, but I gather (now) that's not the problem: you can't use edonr at all on any dataset that is part of a boot zpool, even if that dataset isn't a boot dataset.

If you can't fix the edonr breakage by disabling it, it's kinda like cd / && sudo rm -r *: a major bummer. I'm gonna try mounting zroot and destroying the test dataset.
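Before destroying anything, it seems worth confirming what actually activated the feature; a sketch, using the pool name zroot from this thread:

```shell
# Pool-level feature state: "active" means some dataset has ever used
# edonr; "enabled" means it is merely available.
zpool get feature@edonr zroot

# Which command set it, and on which dataset?
zpool history zroot | grep edonr

# Which datasets carry a non-default checksum right now?
zfs get -r checksum zroot
```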

Also, don't do the gpart bootcode thing on 14 or later. It's getting easier and easier to wipe out a file system! In any event, blake3 should be sufficient, and it's faster.

If I can fix it by destroying the dataset, I'll have time to report back. If I don't, I'm busy rebuilding my file system and patching the configs.
 
W00t, unhosed. A bit of an adventure.
TL;DR:
1) Destroy the dataset that once had edonr enabled on it and was thus forever tainted
2) Un-screw-up the boot partitions I hosed with the gpart bootcode -p... command.

In detail:
Booted off the latest FreeBSD 14 CD into live mode using the iLO remote media capability
- copy the CD to a non-https local webserver on the LAN
- mount the CD by URL
- boot from it
1712105663664.png

1712105764012.png




I then followed this fine guide to get SSH working. I poked around a bit first, which is awfully tedious on a remote terminal, but hardly needed for the few commands actually necessary for recovery.

Code:
dhclient bge0
mkdir /tmp/etc
mount_unionfs /tmp/etc /etc
passwd root
echo "PermitRootLogin yes" >> /etc/ssh/sshd_config
service sshd onestart

After verifying the directory structure and mounting the ZFS set, mounting an NFS share, and backing up my tuned configs (better late than never), this is what gave me access to do the excision:
zpool import -f -R /tmp/zroot zroot
to mount the borked ZFS pool.
Code:
# zfs list -o name,used,encryption,compression,compressratio,checksum,mountpoint,sync
NAME                                        USED  ENCRYPTION   COMPRESS        RATIO  CHECKSUM   MOUNTPOINT              SYNC
zroot                                       526G  off          on              1.03x  on         /tmp/zroot/zroot        standard
zroot/ROOT                                  517G  off          on              1.00x  on         none                    standard
zroot/ROOT/14.0-RELEASE_2024-03-24_081700  18.3K  off          on              1.00x  on         /tmp/zroot              standard
zroot/ROOT/default                          517G  off          on              1.00x  on         /tmp/zroot              standard
zroot/crypt                                 869M  aes-256-gcm  lz4             6.22x  blake3     /tmp/zroot/zroot/crypt  standard
zroot/home                                  881M  off          on              5.66x  on         /tmp/zroot/home         standard
zroot/tmp                                   881M  off          on              5.32x  on         /tmp/zroot/tmp          disabled
zroot/usr                                  6.15G  off          lz4             1.29x  on         /tmp/zroot/usr          standard
zroot/usr/ports                            3.10G  off          lz4             1.21x  on         /tmp/zroot/usr/ports    standard
zroot/usr/src                              3.04G  off          lz4             1.34x  on         /tmp/zroot/usr/src      standard
zroot/var                                  2.15M  off          on              3.31x  on         /tmp/zroot/var          standard
zroot/var/audit                             219K  off          on              1.00x  on         /tmp/zroot/var/audit    standard
zroot/var/crash                             219K  off          on              1.01x  on         /tmp/zroot/var/crash    standard
zroot/var/log                               996K  off          on              4.69x  on         /tmp/zroot/var/log      disabled
zroot/var/mail                              329K  off          on              2.97x  on         /tmp/zroot/var/mail     standard
zroot/var/tmp                               219K  off          on              1.00x  on         /tmp/zroot/var/tmp      disabled
There's the edonr-tainted dataset right there at zroot/crypt. Yes, it says "blake3", but it was once edonr and so is forever unclean; just nuke that thing:
zfs destroy zroot/crypt
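If my reading of zpool-features(7) is right, a quick sanity check after the destroy is that the feature drops from "active" back to "enabled" (a sketch, pool name zroot):

```shell
# After destroying every dataset that ever had checksum=edonr set,
# the pool feature should read "enabled" again, not "active".
zpool get feature@edonr zroot
```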

One problem solved. If I hadn't screwed up my EFI partitions I'd be home free, but I did, so gotta fix that. No problem, just follow those copy directions for GPT-UEFI-14, starting with:
Code:
# mount -t msdosfs /dev/da0p1 /mnt
mount_msdosfs: /dev/da0p1: Invalid argument
DOH! Hosed. But fortunately fairly easy to fix; just follow this handy guide and:
Code:
# loop over each drive that should carry a boot partition (da0 through da7 for me)
for d in 0 1 2 3 4 5 6 7; do
    newfs_msdos -F 32 -c 1 /dev/da${d}p1
    mount_msdosfs /dev/da${d}p1 /mnt
    mkdir -p /mnt/efi/boot
    mkdir -p /mnt/efi/freebsd
    cp /boot/loader.efi /mnt/efi/boot/bootx64.efi
    cp /boot/loader.efi /mnt/efi/freebsd/loader.efi
    umount /mnt
done
reboot
All good - need more pools.
"Killed my best friend - edonr, not even once"
 
the /crypt dataset is non booting, just a test dataset, but I gather (now) that's not the problem, you can't use ednor at all on any dataset that is part of a boot zpool, even if that dataset isn't a boot dataset.
I cannot confirm this.

On a one-disk 14.0-RELEASE-p6 VM test system, an 'edonr'-enabled child dataset doesn't break booting the system.

Was there maybe a boot-critical dataset on your system with 'edonr' enabled? I had to set the property explicitly on the 'zroot' parent (which got inherited by the child datasets) to break booting.

What does zpool-history(8) show?

Rich (BB code):
$ freebsd-version -ru
14.0-RELEASE-p6
14.0-RELEASE-p6
$
$
$ zfs get -r checksum zroot
NAME                PROPERTY  VALUE      SOURCE
zroot               checksum  on         default
zroot/ROOT          checksum  on         default
zroot/ROOT/default  checksum  on         default
zroot/crypt         checksum  edonr      local
zroot/home          checksum  on         default
zroot/tmp           checksum  on         default
zroot/usr           checksum  on         default
zroot/usr/ports     checksum  on         default
zroot/usr/src       checksum  on         default
zroot/var           checksum  on         default
zroot/var/audit     checksum  on         default
zroot/var/crash     checksum  on         default
zroot/var/log       checksum  on         default
zroot/var/mail      checksum  on         default
zroot/var/tmp       checksum  on         default
$
$
$ zpool history | grep edonr
2024-04-03.05:23:11 zfs set checksum=edonr zroot/crypt

ednor-efi-boot.png

Note there is no loader message about unsupported features.
 
Huh, stranger and stranger. It does seem odd that you can set the checksum and it works fine (write data, read data, check performance), but reboot? No! No reboot for you!

Code:
#freebsd-version -ru
14.0-RELEASE-p5
14.0-RELEASE-p5


# zpool history | grep crypt
2024-03-28.16:22:11 zfs create -o encryption=on -o keylocation=prompt -o keyformat=passphrase zroot/crypt
2024-03-28.16:23:45 zfs unload-key -r zroot/crypt
2024-03-28.16:24:24 zfs load-key -r zroot/crypt
2024-03-29.17:29:32 zfs set checksum=edonr zroot/crypt
2024-03-29.17:40:16 zfs set compression=zstd-19 zroot/crypt
2024-03-29.17:54:25 zfs set compression=gzip-9 zroot/crypt
2024-03-29.17:58:22 zfs set compression=lz4 zroot/crypt
2024-03-30.18:29:05 zfs set checksum=blake3 zroot/crypt
2024-04-02.17:01:04 zfs destroy zroot/crypt

Lemme back up all my configs before I risk destroying this again. I'll recreate the dataset and try rebooting again. Then, if that fails, test -p6. Maybe -p6 removes an overzealous check? The only data on zroot/crypt was fio test blobs; I was just experimenting. This is to be a jail host, and I believe using ZFS encryption per jail dataset will be more convenient than GELI on the disk for remote admin, since I should be able to get SSH on boot and not have to rely on remote KVM (iLO plus Avocent) to enter keys.
 
BTW, the command I used to create the problem dataset was:

zfs create -o encryption=on -o checksum=edonr -o keylocation=prompt -o keyformat=passphrase zroot/crypt


and for interest, the fio bandwidth testing was done with


fio --name=random-write --ioengine=posixaio --rw=randwrite --bs=4k --size=4g --numjobs=1 --runtime=60 --time_based --end_fsync=1
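Since single fio runs turn out to be noisy, the checksum comparison can be run back-to-back; a sketch cycling the settings on the test dataset (dataset name and fio job taken from above, repeat counts my own suggestion):

```shell
# Re-run the same fio job under each checksum algorithm in one sitting,
# so run-to-run variance is easier to spot. Several repeats per setting
# would be even better.
for ck in on edonr blake3; do
  zfs set checksum=${ck} zroot/crypt
  fio --name=random-write --ioengine=posixaio --rw=randwrite \
      --bs=4k --size=4g --numjobs=1 --runtime=60 --time_based --end_fsync=1
done
```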


Code:
zfs create -o encryption=on -o keylocation=prompt -o keyformat=passphrase zroot/crypt
(snip)
cd /zroot/crypt
fio --name=random-write --ioengine=posixaio --rw=randwrite --bs=4k --size=4g --numjobs=1 --runtime=60 --time_based --end_fsync=1
(snip)
  write: IOPS=15.9k, BW=61.9MiB/s
(snip)
reboot...

No boot errors (no special checksum)

Code:
zfs set checksum=edonr zroot/crypt
zfs load-key -r zroot/crypt
cd /zroot/crypt
fio --name=random-write --ioengine=posixaio --rw=randwrite --bs=4k --size=4g --numjobs=1 --runtime=60 --time_based --end_fsync=1
  write: IOPS=19.8k, BW=77.5MiB/s
reboot...
Booted - np.
Gonna try blake3 (which is what it was, so recreating the process).
Code:
zfs set checksum=blake3 zroot/crypt
zfs load-key -r zroot/crypt
cd /zroot/crypt
fio --name=random-write --ioengine=posixaio --rw=randwrite --bs=4k --size=4g --numjobs=1 --runtime=60 --time_based --end_fsync=1
  write: IOPS=14.1k, BW=55.3MiB/s
reboot...

Booted, no problem. Weird... but good. So edonr seems fastest; gonna switch back and repeat the reboot test one last time and see if it stays fastest.

Repeating the edonr settings...
Code:
zfs set checksum=edonr zroot/crypt
zfs load-key -r zroot/crypt
cd /zroot/crypt
fio --name=random-write --ioengine=posixaio --rw=randwrite --bs=4k --size=4g --numjobs=1 --runtime=60 --time_based --end_fsync=1
  write: IOPS=13.5k, BW=52.9MiB/s
huh, not great... switch back to blake3...

Code:
zfs set checksum=blake3 zroot/crypt
fio --name=random-write --ioengine=posixaio --rw=randwrite --bs=4k --size=4g --numjobs=1 --runtime=60 --time_based --end_fsync=1
  write: IOPS=17.5k, BW=68.3MiB/s
reboot

Rebooted just fine and no warnings. Very odd. It seems like test variability exceeds the reliable performance difference between checksums, so if I were to advise, I'd go with whatever sounds coolest. On that basis I favor blake3 for encrypted volumes, and it does seem to boot fine... whatever messed me up before, this does not seem to be it.

blake3 after reboot: write: IOPS=14.1k, BW=55.2MiB/s

Thanks T-Daemon, I have no answers now, but I do have a working system, so I got that going for me!
 
So it sounds like now there is no problem booting even with edonr set?
I wonder if somehow it got set on the top level zpool instead of a dataset and the bootloader doesn't understand it.
It may be interesting for you to look at the output of
zpool history zroot

It tells you pretty much everything ever done to a zpool starting at creation.
 
Yah, I booted with edonr and then, after rebooting successfully, again with blake3 on the same dataset whose destruction seemed to fix things (zroot/crypt). This is all one zpool, with boot on ZFS. It's live and happy now.

I don't see anything obvious myself. I inserted the last reboot history into the timeline. The zpool replace actions were de-gelifying the system in anticipation of migrating to ZFS encryption (I started setup following my old, pre-ZFS 2.2 process and then got excited about new features).


Code:
History for 'zroot':
2024-02-24.05:51:16 zpool create -o altroot=/mnt -O compress=lz4 -O atime=off -m none -f zroot raidz2 da0p3.eli da1p3.eli da2p3.eli da3p3.eli da4p3.eli da5p3.eli da6p3.eli da7p3.eli da8p3.eli da9p3.eli
2024-02-24.05:51:16 zfs create -o mountpoint=none zroot/ROOT
2024-02-24.05:51:16 zfs create -o mountpoint=/ zroot/ROOT/default
2024-02-24.05:51:16 zfs create -o mountpoint=/home zroot/home
2024-02-24.05:51:16 zfs create -o mountpoint=/tmp -o exec=on -o setuid=off zroot/tmp
2024-02-24.05:51:16 zfs create -o mountpoint=/usr -o canmount=off zroot/usr
2024-02-24.05:51:16 zfs create -o setuid=off zroot/usr/ports
2024-02-24.05:51:16 zfs create zroot/usr/src
2024-02-24.05:51:16 zfs create -o mountpoint=/var -o canmount=off zroot/var
2024-02-24.05:51:17 zfs create -o exec=off -o setuid=off zroot/var/audit
2024-02-24.05:51:17 zfs create -o exec=off -o setuid=off zroot/var/crash
2024-02-24.05:51:17 zfs create -o exec=off -o setuid=off zroot/var/log
2024-02-24.05:51:17 zfs create -o atime=on zroot/var/mail
2024-02-24.05:51:17 zfs create -o setuid=off zroot/var/tmp
2024-02-24.05:51:17 zfs set mountpoint=/zroot zroot
2024-02-24.05:51:17 zpool set bootfs=zroot/ROOT/default zroot
2024-02-24.05:51:17 zpool set cachefile=/mnt/boot/zfs/zpool.cache zroot
2024-02-24.05:51:17 zfs set canmount=noauto zroot/ROOT/default
boot time                                  Sat Feb 24 14:27
shutdown time                              Sun Feb 25 15:10
boot time                                  Sat Mar  2 10:41
boot time                                  Mon Mar  4 11:39
shutdown time                              Sat Mar  9 05:51
boot time                                  Sat Mar  9 08:33
2024-03-09.08:35:59 zfs set compress=on zroot
shutdown time                              Sun Mar 24 08:22
boot time                                  Sun Mar 24 08:26
shutdown time                              Tue Mar 26 17:44
boot time                                  Tue Mar 26 17:48
shutdown time                              Wed Mar 27 06:37
boot time                                  Wed Mar 27 06:56
shutdown time                              Wed Mar 27 11:29
boot time                                  Wed Mar 27 12:23
shutdown time                              Wed Mar 27 17:22
boot time                                  Wed Mar 27 17:31
2024-03-28.16:20:53 zfs snapshot zroot@20240328
2024-03-28.16:22:11 zfs create -o encryption=on -o keylocation=prompt -o keyformat=passphrase zroot/crypt
2024-03-28.16:23:45 zfs unload-key -r zroot/crypt
2024-03-28.16:24:24 zfs load-key -r zroot/crypt
2024-03-29.10:10:45 zfs set sync=disabled zroot/var/log
2024-03-29.10:10:49 zfs set sync=disabled zroot/var/tmp
2024-03-29.10:11:02 zfs set sync=disabled zroot/tmp
2024-03-29.17:29:32 zfs set checksum=edonr zroot/crypt
2024-03-29.17:40:16 zfs set compression=zstd-19 zroot/crypt
2024-03-29.17:46:50 zfs set compression=zstd zroot/usr
2024-03-29.17:50:23 zfs set compression=on zroot/usr
2024-03-29.17:52:12 zfs set compression=lz4 zroot/usr
2024-03-29.17:54:25 zfs set compression=gzip-9 zroot/crypt
2024-03-29.17:58:22 zfs set compression=lz4 zroot/crypt
2024-03-30.12:59:20 zpool set autotrim=on zroot
2024-03-30.13:00:39 zpool replace zroot da0p3.eli da0p3
2024-03-30.13:05:15 zpool replace zroot da1p3.eli da1p3
2024-03-30.13:19:24 zpool replace zroot da2p3.eli da2p3
2024-03-30.14:39:00 zpool replace zroot da3p3.eli da3p3
2024-03-30.14:45:12 zpool replace zroot da4p3.eli da4p3
2024-03-30.16:27:24 zpool replace zroot da5p3.eli da5p3
2024-03-30.16:44:59 zpool replace zroot da6p3.eli da6p3
2024-03-30.16:48:26 zpool replace zroot da7p3.eli da7p3
2024-03-30.16:54:32 zpool replace zroot da8p3.eli da8p3
2024-03-30.16:57:30 zpool replace zroot da9p3.eli da9p3
2024-03-30.18:29:05 zfs set checksum=blake3 zroot/crypt
shutdown time                              Mon Apr  1 17:39  (this was the failure)
2024-04-02.16:44:29 zpool import -f -R /tmp/zroot zroot
2024-04-02.17:01:04 zfs destroy zroot/crypt
boot time                                  Tue Apr  2 17:50 (this is the fix)
2024-04-03.02:26:27 zpool scrub zroot
2024-04-03.06:57:48 zfs create -o encryption=on -o keylocation=prompt -o keyformat=passphrase zroot/crypt
2024-04-03.07:06:53 zfs set checksum=edonr zroot/crypt
2024-04-03.07:07:09 zfs load-key -r zroot/crypt
2024-04-03.07:18:26 zfs set checksum=blake3 zroot/crypt
2024-04-03.07:18:42 zfs load-key -r zroot/crypt
2024-04-03.07:25:49 zfs set checksum=edonr zroot/crypt
2024-04-03.07:26:00 zfs load-key -r zroot/crypt
2024-04-03.07:28:15 zfs set checksum=blake3 zroot/crypt
2024-04-03.07:42:56 zfs load-key -r zroot/crypt
 