Upgrading to 11.1, zfs root

This weekend I will upgrade FreeBSD from 11.0-RELEASE-p12 to 11.1-RELEASE, and I am a bit worried about the regression that makes the system unbootable after the upgrade. Has this issue been fixed? Is there anything I should be aware of?
That issue seems to be specific to MMC disks in a very specific situation.

And there are further problems people have with booting within this forum.
Yes, this is a support forum after all. People who don't have problems generally don't post on a support forum.
Hm, this "backup handy" is a bit of a problem, I don't have enough storage around to do a full backup. The thing I am most worried about is updating the boot record (gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ...), this should be needed on the boot disk only, I presume (boot record?!).
As far as I know there's no need to update the boot record for 11.0 -> 11.1, so it's best not to touch it. It may be needed if you're upgrading from older FreeBSD versions (like 9.x or 10.x) that have an older ZFS version.
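If the boot code ever does need refreshing (say, on a jump from 9.x or 10.x as mentioned above), the command quoted earlier would be completed along these lines; the disk name ada0 and partition index 1 are assumptions and must match your own layout:

```sh
# Check the partition layout first to find the freebsd-boot partition index:
gpart show

# Write the protective MBR and the ZFS-aware GPT boot code.
# Assumes the boot disk is ada0 and freebsd-boot is partition index 1.
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada0
```

Repeat the bootcode command for every disk you can boot from (e.g. both halves of a mirrored pool).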
You are already doing something very strange by using ZFS on i386.

Naah, that was just an example :) But I have noticed that 11.1 had some issues with ZFS root and I would really like to avoid creating a bootable USB / mounting a DVD drive ;)
Boot issues are usually easily resolved. So I wouldn't worry about it too much. There's a far greater risk of botching the upgrade and ending up with a completely hosed system.
If you have a root-on-ZFS setup, then it should be configured for Boot Environments. Before you upgrade, create a new boot environment. Boot into that new BE. Do the upgrade there. If anything fails, you can switch to the old BE at the loader menu. Then delete the broken BE, create a new one, and try the upgrade again.

It's pretty much the whole reason for BEs to begin with. And one of the main benefits to having root on a ZFS pool. :)

And, if your root-on-ZFS isn't set up for BEs yet, that should be a top priority before worrying about upgrading the OS. :D
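As a rough sketch of that workflow, assuming beadm (as used elsewhere in this thread) and freebsd-update; the BE name is arbitrary:

```sh
# Snapshot the current root into a new boot environment and boot into it.
beadm create 11.1-upgrade
beadm activate 11.1-upgrade
shutdown -r now

# Inside the new BE, run the upgrade as usual.
freebsd-update -r 11.1-RELEASE upgrade
freebsd-update install
shutdown -r now
freebsd-update install    # second pass after the kernel reboot

# If anything breaks, select the old BE from the loader menu, then:
beadm destroy 11.1-upgrade
```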
By using beadm + chroot the whole upgrade can be done safely inside a new BE with only one reboot after the upgrade:
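A minimal sketch of that idea, using freebsd-update's -b flag to operate on the mounted BE rather than the live system (the BE name and mountpoint are arbitrary, and the exact number of install passes may vary):

```sh
# Create a new BE from the running system and mount it.
beadm create 11.1-upgrade
beadm mount 11.1-upgrade /mnt

# Fetch and install the upgrade inside the mounted BE.
freebsd-update -b /mnt -r 11.1-RELEASE upgrade
freebsd-update -b /mnt install

# Unmount, activate, and reboot into the upgraded environment.
beadm umount 11.1-upgrade
beadm activate 11.1-upgrade
shutdown -r now
```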

Jails in combination with beadm can also be leveraged to safely perform upgrades of a system. This is explained in the EXAMPLES section of beadm(1), which even includes a link to the "FreeBSD ZFS Madness" thread here in the forums (section 6.2 explains the beadm+jails upgrade).
     •   Perform a system upgrade in a jail(8)

         Create a new boot environment called jailed:

               beadm create -e default jailed

         Set mountpoint for new jail to /usr/jails/jailed:

               beadm mount jailed /usr/jails/jailed

         The currently active boot environment is now replicated into the
         jailed system and ready for upgrade.  Startup the jail, login and
         perform the normal upgrade process.  Once this is done, stop the jail
         and disable it in /etc/rc.conf.

         Now activate the boot environment for the next boot

               beadm activate jailed

         Reboot into the new environment


     A HOWTO guide is posted at the FreeBSD forums:

     •   http://forums.freebsd.org/showthread.php?t=31662

One caveat for the beadm approach: datasets under /usr are not part of BEs, so your /usr/src is not available in the jail and has to be upgraded manually after booting into the new BE. This is especially crucial if you run a custom kernel that has to be rebuilt after the upgrade.
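For the custom-kernel case, that manual step after booting the new BE would look roughly like this; the releng/11.1 source URL reflects the Subversion layout of that era, and MYKERNEL is a placeholder for your own config name:

```sh
# Update the (non-BE) /usr/src dataset to the matching release sources.
svnlite checkout https://svn.freebsd.org/base/releng/11.1 /usr/src

# Rebuild and install the custom kernel.
cd /usr/src
make buildkernel KERNCONF=MYKERNEL
make installkernel KERNCONF=MYKERNEL
shutdown -r now
```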
And as usual, I find something disturbing when I have already started the upgrade process =/


I had noticed the first problem, but I completely missed the second one =/
  • [2017-07-25] A late issue was discovered with FreeBSD/arm64 and "root on ZFS" installations where the root ZFS pool would fail to be located.
    There currently is no workaround.
I hope it applies to installations only and not to upgrades, or I will have to revert the snapshot =/
Why's that?
For the record: I don't fully agree with Oko on this point but I do think he raises a fair concern...

The problem is that ZFS disables prefetching whenever the maximum amount of memory is below 4 GB. This can be overruled by setting vfs.zfs.prefetch_disable to 0 in /boot/loader.conf, but even so... ZFS is very memory intensive, and on a 32-bit system 4 GB is the theoretical maximum amount of memory you can address. In reality this number lies much lower: add 4 GB of memory to a 32-bit system and you most likely only get 2 to 2.5 GB at your disposal.
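For reference, the tunable in question is a loader setting, so it goes in /boot/loader.conf (0 means prefetch enabled; whether forcing it on a low-memory 32-bit box is wise is exactly the point under discussion):

```
# /boot/loader.conf -- force-enable ZFS prefetch despite < 4 GB of RAM
vfs.zfs.prefetch_disable="0"
```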

So if you then allow a memory-intensive process to gobble up resources on a system that is by nature already low on resources, you run the risk that weird stuff could happen, and it certainly won't be optimal for performance.

Of course it doesn't have to be bad either. I've run 32-bit systems using ZFS myself without any issues, and it's fair to note that when Sun introduced ZFS, many systems (looking at Solaris 10) were still 32-bit. And no issues at all.
on a 32-bit system 4 GB is the theoretical maximum amount of memory you can address. In reality this number lies much lower: add 4 GB of memory to a 32-bit system and you most likely only get 2 to 2.5 GB at your disposal.

This is only true for Redmond's toy OSes, which don't have a proper PAE implementation and instead have an artificial cap on the maximum usable memory:

With "proper" operating systems the limitation is mostly the hardware/firmware (BIOS); on the OS side the limit is usually set to 64 or 128 GB, which far exceeds the limits of nearly all i386 platforms. Another frequent limiting factor is drivers that address memory directly and don't use PAE. Some of these drivers can behave very unpredictably and wreak havoc (some old Adaptec RAID controllers come to mind...)
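On FreeBSD/i386 a PAE kernel was available via the stock PAE configuration in /sys/i386/conf; a custom config could enable it along these lines (a sketch only, and note that the stock PAE config also strips out drivers known to be unsafe with PAE):

```
# Minimal custom i386 kernel config with PAE (MYPAE is a placeholder name)
include GENERIC
ident   MYPAE
options PAE
```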

IIRC the 32-bit ZFS codebase doesn't get as much attention as the 64-bit one, so there may be unresolved issues. Considering that most (all?) 32-bit hardware is pretty old nowadays and bound to fail in the near future, there are very few (if any) legitimate reasons to use such hardware with a filesystem that aims to be extremely reliable.