ZFS What OpenZFS version does FreeBSD 13.0-RELEASE use?

… path changes seem to have made the desktop behave strangely; …

I can't imagine OpenZFS having this effect. If you'd like to post details to a separate topic, someone can take a look. Thanks.

Well, if you install ZFS from packages or ports, # pkg info will tell you which version of the source tarball is in use.

Sorry, I can't see it:

Code:
root@mowa219-gjp4-8570p-freebsd:~ # pkg info openzfs
openzfs-2021121500_1
Name           : openzfs
Version        : 2021121500_1
Installed on   : Fri Jan  7 17:52:58 2022 GMT
Origin         : sysutils/openzfs
Architecture   : FreeBSD:14:amd64
Prefix         : /usr/local
Categories     : sysutils
Licenses       : CDDL
Maintainer     : freqlabs@FreeBSD.org
WWW            : https://github.com/zfsonfreebsd/ZoF
Comment        : OpenZFS userland for FreeBSD
Options        :
        DEBUG          : off
        PYTHON         : off
        TESTS          : off
Shared Libs required:
        libintl.so.8
Shared Libs provided:
        libzpool.so.5
        libnvpair.so.3
        libzfs_core.so.3
        libzfsbootenv.so.1
        libuutil.so.3
        libzfs.so.4
Annotations    :
        FreeBSD_version: 1400046
        cpe            : cpe:2.3:a:openzfs:openzfs:2021121500:::::freebsd14:x64:1
        repo_type      : binary
        repository     : FreeBSD
Flat size      : 17.4MiB
Description    :
Port of the OpenZFS project to FreeBSD

WWW: https://github.com/zfsonfreebsd/ZoF
root@mowa219-gjp4-8570p-freebsd:~ #
 
grahamperrin : # pkg info sysutils/openzfs | grep Version... Or, just look at the very first line of your output. 🤣 As SirDice points out, the ports version uses the development versioning scheme, rather than the official release versioning.
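For what it's worth, given the listing above, that one-liner should print just the Version line – something like this (output reconstructed from the pkg info listing already quoted, not a fresh run):

Code:
root@mowa219-gjp4-8570p-freebsd:~ # pkg info sysutils/openzfs | grep Version
Version        : 2021121500_1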
 
I have to agree: with separate versioning schemes tied to different usage scenarios – OpenZFS 2.0 or 2.1 in the base of 13-RELEASE vs. sysutils/openzfs at whatever version the port supplies (in this case openzfs-2021121500_1) – that can create confusion. 😑
 
I can't imagine OpenZFS having this effect. If you'd like to post details to a separate topic, someone can take a look. Thanks.
Yes, I should - I'm embarrassed I can't figure it out for myself.
I don't think it's a bug - it's something I did in changing the paths, and / or mounting external pools. Will do so here when I've investigated further.
Thanks for the dig in the ribs.
 
… can create confusion. …

OpenZFS in ports takes an ideal approach to versioning for ports – the combination of date + hour is unambiguous and sorts naturally.
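As a quick illustration (the two strings below are simply the port versions that appear elsewhere in this thread), plain numeric sorting already puts such snapshots in chronological order:

Code:
% printf '%s\n' 2022021400 2021121500 | sort -n
2021121500
2022021400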

OpenZFS zfs version behaves as expected.

What puzzles me more than anything (on page 1) is the use of hashes that are in forks but not in master (OpenZFS) or main (FreeBSD).

(Maybe I'm yet again forgetting that non-default branches are not indexed, by GitHub, for search purposes. Something like that.)
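If GitHub's web search is the limiting factor, a local clone can at least confirm whether a given hash exists in the upstream repository at all. A rough sketch (the clone directory name is arbitrary, and the hash is one of the kmod hashes quoted in this thread):

Code:
% git clone --bare https://github.com/openzfs/zfs.git zfs-upstream.git
% cd zfs-upstream.git
% git cat-file -t f291fa658 || echo "not in openzfs/zfs – probably fork-only"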

Re: <https://www.freebsd.org/cgi/man.cgi?query=zfs&sektion=8&manpath=FreeBSD+12.3-RELEASE> for zfs(8) in FreeBSD 12.3-RELEASE, I guess it's normal to not have an equivalent to the version subcommand.

Code:
grahamperrin@freebsd:~ % uname -KU

1203000 1203000
grahamperrin@freebsd:~ % /usr/local/sbin/zfs version
zfs-2.1.99-1
zfs-kmod-v2021121500-zfs_f291fa658
grahamperrin@freebsd:~ % pkg info -x openzfs
openzfs-2021121500_1
openzfs-kmod-2021121500
grahamperrin@freebsd:~ % /sbin/zfs version
unrecognized command 'version'
usage: zfs command args ...
where 'command' is one of the following:

        create [-pu] [-o property=value] ... <filesystem>
        create [-ps] [-b blocksize] [-o property=value] ... -V <size> <volume>
        destroy [-fnpRrv] <filesystem|volume>
        destroy [-dnpRrv] <filesystem|volume>@<snap>[%<snap>][,...]
        destroy <filesystem|volume>#<bookmark>

        snapshot|snap [-r] [-o property=value] ... <filesystem|volume>@<snap> ...
        rollback [-rRf] <snapshot>
        clone [-p] [-o property=value] ... <snapshot> <filesystem|volume>
        promote <clone-filesystem>
        rename [-f] <filesystem|volume|snapshot> <filesystem|volume|snapshot>
        rename [-f] -p <filesystem|volume> <filesystem|volume>
        rename -r <snapshot> <snapshot>
        rename <bookmark> <bookmark>
        rename -u [-p] <filesystem> <filesystem>
        bookmark <snapshot> <bookmark>
        program [-jn] [-t <instruction limit>] [-m <memory limit (b)>] <pool> <program file> [lua args...]

        list [-Hp] [-r|-d max] [-o property[,...]] [-s property]...
            [-S property]... [-t type[,...]] [filesystem|volume|snapshot] ...

        set <property=value> ... <filesystem|volume|snapshot> ...
        get [-rHp] [-d max] [-o "all" | field[,...]]
            [-t type[,...]] [-s source[,...]]
            <"all" | property[,...]> [filesystem|volume|snapshot|bookmark] ...
        inherit [-rS] <property> <filesystem|volume|snapshot> ...
        upgrade [-v]
        upgrade [-r] [-V version] <-a | filesystem ...>
        userspace [-Hinp] [-o field[,...]] [-s field] ...
            [-S field] ... [-t type[,...]] <filesystem|snapshot>
        groupspace [-Hinp] [-o field[,...]] [-s field] ...
            [-S field] ... [-t type[,...]] <filesystem|snapshot>

        mount
        mount [-vO] [-o opts] <-a | filesystem>
        unmount|umount [-f] <-a | filesystem|mountpoint>
        share <-a | filesystem>
        unshare <-a | filesystem|mountpoint>

        send [-DnPpRvLec] [-[iI] snapshot] <snapshot>
        send [-LPcenv] [-i snapshot|bookmark] <filesystem|volume|snapshot>
        send [-nvPe] -t <receive_resume_token>
        receive|recv [-vnsFu] <filesystem|volume|snapshot>
        receive|recv [-vnsFu] [-o origin=<snapshot>] [-d | -e] <filesystem>
        receive|recv -A <filesystem|volume>

        allow <filesystem|volume>
        allow [-ldug] <"everyone"|user|group>[,...] <perm|@setname>[,...]
            <filesystem|volume>
        allow [-ld] -e <perm|@setname>[,...] <filesystem|volume>
        allow -c <perm|@setname>[,...] <filesystem|volume>
        allow -s @setname <perm|@setname>[,...] <filesystem|volume>

        unallow [-rldug] <"everyone"|user|group>[,...]
            [<perm|@setname>[,...]] <filesystem|volume>
        unallow [-rld] -e [<perm|@setname>[,...]] <filesystem|volume>
        unallow [-r] -c [<perm|@setname>[,...]] <filesystem|volume>
        unallow [-r] -s @setname [<perm|@setname>[,...]] <filesystem|volume>

        hold [-r] <tag> <snapshot> ...
        holds [-Hp] [-r|-d depth] <filesystem|volume|snapshot> ...
        release [-r] <tag> <snapshot> ...
        diff [-FHt] <snapshot> [snapshot|filesystem]

        jail <jailid|jailname> <filesystem>
        unjail <jailid|jailname> <filesystem>
        remap <filesystem | volume>

Each dataset is of the form: pool/[dataset/]*dataset[@name]

For the property list, run: zfs set|get

For the delegated permission list, run: zfs allow|unallow
grahamperrin@freebsd:~ %
 
This is probably going to make me look very stupid, however . .
On 13.0-RELEASE-p4 I have openzfs running (I think).
zfs version
zfs-2.0.0-FreeBSD_gf11b09dec
zfs-kmod-v2021121500-zfs_f291fa658
And my loader.conf shows (partial)
opensolaris_load="NO"
fusefs_load="YES"
zfs_load="NO"
openzfs_load="YES"
But my rc.conf shows (partial)
zfs_enable="YES"
When I tried to change this to
openzfs_enable="YES"
or even just remove it, the desktop would not boot.
I have changed it and then applied the boot update - which for me is:
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada1
which seems to report OK.
Maybe it doesn't matter; the desktop runs fine, but I don't like mysteries and I don't want some minor update to brick me.
I am doing this experimenting by using beadm and creating differing boot environments - but I must admit to being uncertain when to apply the gpart boot code (as above) - if at all.
Thanks if you can help.
 
This is probably going to make me look very stupid, …

No :) it's fine to ask questions, this is certainly a source of confusion.

Your /boot/loader.conf above tells FreeBSD to:
  • not load the OpenZFS module zfs that is integral to FreeBSD 13.⋯
  • load the OpenZFS module openzfs that is managed as a port, separate from FreeBSD.
Your /etc/rc.conf tells FreeBSD whether to enable ZFS in multi-user mode
  • zfs here is the traditional, generic term that predates OpenZFS.
Keys to understanding why the two modules are named in this way include:
  • a difference between FreeBSD 12.⋯ and 13.⋯ – the inferior version of the operating system has an integral zfs that is not OpenZFS
  • design for smooth upgrades of the OS – to not require ZFS-related changes to either loader.conf or rc.conf.
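Putting the two files side by side for the ports flavour – nothing new here, just the lines already quoted in this thread, gathered in one place:

Code:
# /boot/loader.conf – use the module from sysutils/openzfs-kmod, not the base module
zfs_load="NO"
openzfs_load="YES"

# /etc/rc.conf – the knob keeps its traditional name either way
zfs_enable="YES"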



HTH
 
<https://forums.freebsd.org/profile-posts/3749/> noted the absence of FreeBSD from search results at <https://distrowatch.com/search.php?pkg=zfs&relation=similar&pkgver=&distrorange=InLatest#pkgsearch>:

[attached screenshot: DistroWatch package search results, with no FreeBSD entry]

I'm advised that DistroWatch tracks the ZFS package from <https://github.com/openzfs/zfs/releases>.

From <https://github.com/openzfs/zfs/releases/tag/zfs-2.1.2> (corresponding to 2.1.2 in the example above):

Supported Platforms
  • Linux: compatible with 3.10 - 5.15 kernels
  • FreeBSD: compatible with releases starting from 12.2-RELEASE

For DistroWatch detection purposes:
  • is there an easy way to parse the version of ZFS in a 2.⋯ format without performing an installation of FreeBSD?
(I suspect not …)
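Thinking aloud, one possibility – an untested sketch, where the path, branch name and printed version are my assumptions – would be to read the vendored OpenZFS META file straight from the release branch in the source browser, with no installation at all:

Code:
% fetch -qo - 'https://cgit.freebsd.org/src/plain/sys/contrib/openzfs/META?h=releng/13.0' | grep '^Version'
Version:       2.0.0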



For reference, with FreeBSD 14.0-CURRENT updated a few hours ago:

Code:
root@mowa219-gjp4-8570p-freebsd:~ # grep zfs_load /boot/loader.conf
zfs_load="YES"
openzfs_load="NO"
root@mowa219-gjp4-8570p-freebsd:~ # zfs version
zfs-2.1.99-FreeBSD_g17b2ae0b2
zfs-kmod-2.1.99-FreeBSD_g17b2ae0b2
root@mowa219-gjp4-8570p-freebsd:~ # uname -aKU
FreeBSD mowa219-gjp4-8570p-freebsd 14.0-CURRENT FreeBSD 14.0-CURRENT #5 main-n253627-25375b1415f-dirty: Sat Mar  5 14:21:40 GMT 2022     root@mowa219-gjp4-8570p-freebsd:/usr/obj/usr/src/amd64.amd64/sys/GENERIC-NODEBUG amd64 1400053 1400053
root@mowa219-gjp4-8570p-freebsd:~ # ee /boot/loader.conf

root@mowa219-gjp4-8570p-freebsd:~ # grep zfs_load /boot/loader.conf
zfs_load="NO"
openzfs_load="YES"
root@mowa219-gjp4-8570p-freebsd:~ # exit
logout
% exit

– I see g17b2ae0b2 at <https://github.com/freebsd/freebsd-...ddc6cd94b963e/sys/modules/zfs/zfs_gitrev.h#L5> but beyond that, I get lost.

Code:
root@mowa219-gjp4-8570p-freebsd:~ # date ; uptime
Sun Mar  6 15:14:55 GMT 2022
 3:14PM  up 3 mins, 5 users, load averages: 3.91, 1.17, 0.44
root@mowa219-gjp4-8570p-freebsd:~ # pkg info -x openzfs
openzfs-2022021400_1
openzfs-kmod-2022021400
root@mowa219-gjp4-8570p-freebsd:~ # zfs version
zfs-2.1.99-FreeBSD_g17b2ae0b2
zfs-kmod-v2022021400-zfs_9f734e81f
root@mowa219-gjp4-8570p-freebsd:~ # ee /boot/loader.conf

root@mowa219-gjp4-8570p-freebsd:~ # grep zfs_load /boot/loader.conf
zfs_load="YES"
openzfs_load="NO"
root@mowa219-gjp4-8570p-freebsd:~ #

– GitHub matches for 9f734e81f include <https://github.com/openzfs/zfs/pull/13092>, but I can't see 9f734e81f anywhere in that PR. Maybe I'm being lazy.
 
a difference between FreeBSD 12.⋯ and 13.⋯ – the inferior version of the operating system has an integral zfs that is not OpenZFS
FreeBSD 13.0 does have OpenZFS integrated in the base. It replaced the previous ZFS import.

You do NOT need to install sysutils/openzfs and/or sysutils/openzfs-kmod. Those are a development version of OpenZFS. Use at your own risk.
 
FreeBSD 13.0 does have OpenZFS integrated in the base. …

True, I didn't suggest otherwise.

I wrote: "… FreeBSD 12.⋯ and 13.⋯ – the inferior version of the operating system has an integral zfs that is not OpenZFS"
  • the inferior version is 12.
 
Dear grahamperrin,
a little bit off-topic:
The bad thing about UK people is that their vocabulary exceeds that of other folks. This can lead to misunderstandings.
The good thing is that people like me can use the FreeBSD forum to improve their English skills, too. That overcompensates for a few possible misunderstandings :).
 
Dear grahamperrin,
a little bit off-topic:
The bad thing about UK people is that their vocabulary exceeds that of other folks. This can lead to misunderstandings.
The good thing is that people like me can use the FreeBSD forum to improve their English skills, too. That overcompensates for a few possible misunderstandings :).
There's a good joke about this: A programmer was asked: How did you learn English so easily? His reply: Oh, come on! Just about all the English words are from C++ anyway!
 
There's a good joke about this: A programmer was asked: How did you learn English so easily? His reply: Oh, come on! Just about all the English words are from C++ anyway!
In that case, learning COBOL would also teach you English grammar, and even more vocabulary. Another joke about that: As we all know, C++ is the object-oriented version of the C programming language. The equivalent in COBOL is the following statement: "ADD 1 TO COBOL GIVING COBOL." Yes, that sentence (with the period at the end!) is syntactically correct COBOL, and a good way to do the same operation that "C++" does in C.
 
Lots of quotes from this thread ...

Reading the Release Notes is always good, in this instance especially so.*
Follow one of these four paths:
  1. FreeBSD home » Download FreeBSD**
    FreeBSD 13.0-RELEASE [...] Documentation [...] Release Notes 👈
  2. FreeBSD home » Get FreeBSD » Release Information
    Most Recent Releases
    Production Release

    Release 13.0 (April 13, 2021) Announcement : Release Notes 👈
  3. FreeBSD home » Production: 13.0***
    FreeBSD 13.0-RELEASE Announcement [...]
    For a complete list of new features and known problems, please see [...] :
    https://www.FreeBSD.org/releases/13.0R/relnotes/ 👈
  4. The Release Notes 👈 that go with the installation itself
From the Release Notes of FreeBSD 13.0-RELEASE:
The ZFS implementation is now provided by OpenZFS. 9e5787d2284e (Sponsored by iXsystems)

___
* although I must admit we wouldn't then have been enlightened about one of the subtle, nuanced differences between UK English and US English as used by grahamperrin
** just to the right of Beastie :)
*** near the tail of Beastie ;)
 
(I don't describe 12 as older than 13 because sometimes, it's newer ... and so on.)
Yeah, that versioning mishap (12.0 being older than 13.0, but 12.2 being newer than 13.0) is not my cup of tea. But, I guess we'll just have to wait for 13.1 for things to shake out by themselves. I don't see THAT as something to lay an egg over. 🥱
 
Multiple maintenance branches isn't really all that unusual in computing. Debian 10.11 is newer than 11.0, but it just continues to keep the exact code of 10.x alive through bug/security fixes. That's the same thing as FreeBSD 12.x and 13.x. (or take a non-OS example: Firefox 91.6 is newer than versions 92-96)
 
Multiple maintenance branches isn't really all that unusual in computing. Debian 10.11 is newer than 11.0, but it just continues to keep the exact code of 10.x alive through bug/security fixes. That's the same thing as FreeBSD 12.x and 13.x. (or take a non-OS example: Firefox 91.6 is newer than versions 92-96)
Yeah, that's true, but it does take some paying attention to avoid being confused. It is an extra detail that is known by some to be unreliable. 😩
 
Versioning should show YYYYMMDD somewhere. It removes all the stupid. It doesn't have to be the primary version. If I released something today: Thinginator v2.2_20220308, or Thinginator v2.2 (20220308) – whatever, just make it apparent.

Code:
$ thinginator -v
Thinginator v2.2 (20220308)

And code names are godawful. Debian / Ubuntu… gi joe 800, hairy hairbrush, etc. make searching for answers so painful. I don't believe FreeBSD suffers from this, though.
* the italicised names are not literal… depressing that it needs saying
 
Apologies once again for my lack of knowledge (and using this ancient thread), but I am still confused about the use of gptzfsboot in upgrading root pools.
This is a simple case, but may help others.
I've upgraded several times to now reach 13.0-RELEASE-p11 and (as I understand it) now have a root partition asking me to
Code:
Enable all features using 'zpool upgrade'. Once this is done,
        the pool may no longer be accessible by software that does not support
        the features. See zpool-features(5) for details.
Code:
zfs version
zfs-2.0.0-FreeBSD_gf11b09dec
zfs-kmod-2.0.0-FreeBSD_gf11b09dec
and gpart show gives (partial)
Code:
=>       40  976773088  nvd1  GPT  (466G)
         40       1024     1  freebsd-boot  (512K)
       1064        984        - free -  (492K)
       2048    4194304     2  freebsd-swap  (2.0G)
    4196352  972576768     3  freebsd-zfs  (464G)
  976773120          8        - free -  (4.0K)
My questions are:
  1. How do I use gptzfsboot after upgrading this zfs root partition (if that's the right way of describing it).
  2. What is the syntax to gptzfsboot in this case?
  3. Do I upgrade the pool, then run gptzfsboot then reboot?
  4. Do I have to remove lines like zfs_enable="YES" from rc.conf?
  5. Are there any other configuration adjustments to be made before I reboot?
  6. Lastly, if the machine doesn't boot - what are my best recovery options?
Many thanks for help - but I cannot find a guide / post that I feel comfortable with - but that's my lack of knowledge, not a complaint.
 
My opinions, my understanding of it all, apologies if it gets long:
about the use of gptzfsboot in upgrading root pools.
You do not use gptzfsboot to upgrade root or any other pools.

gptzfsboot is the "bootloader" or "loader" used when your system boots from the BIOS or in UEFI Compatibility Mode (CSM); based on your gpart show output (no efi partition, a freebsd-boot partition of about 512k), that is how you are booting.
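As an aside – a sketch from memory, so treat it with appropriate caution – a quick way to double-check the boot method on a running system is the machdep.bootmethod sysctl, which on a BIOS/CSM boot should report something like:

Code:
% sysctl machdep.bootmethod
machdep.bootmethod: BIOS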

Sometimes when upgrading a system, the old version of gptzfsboot may be incompatible with newer versions of zfs/zpools. In that case, you may not be able to boot the system.

A lot of people feel it's a "best practice" to update the bootblocks whenever you do an upgrade, at least across versions (say 12 to 13), others will do it every patch release.

zpool upgrade works on a zpool. Why would you ever do that? Because the new version of ZFS has some new feature that you want to use; the previous version of ZFS did not have that feature.
Typically it is safe to do a zpool upgrade. The warning note you see is a heads-up:
by doing zpool upgrade and enabling a new feature, you may not be able to use that zpool on a system that doesn't understand that feature.
Say you have a system running FBSD-12 and you create a new BE to upgrade to FBSD-13. There was a change from "native ZFS" in 12 to OpenZFS in 13, bringing in new features and other things. If you are in FBSD-13, run zpool upgrade, and enable some new OpenZFS feature, you may no longer be able to boot into your FBSD-12 BE.

My experience has been if I've simply used the installer to create the system, root on zfs, never enabled any features just leave the zpools at whatever defaults the system used at the time, it's safe to first update gptzfsboot, then do a zpool upgrade.

My questions are:
  1. How do I use gptzfsboot after upgrading this zfs root partition (if that's the right way of describing it).
gpart bootcode -p /boot/gptzfsboot -i 1 ada0 where you replace ada0 with your device name
  2. What is the syntax to gptzfsboot in this case?
You use gpart to put a new version of gptzfsboot on the device; you do not run gptzfsboot.
  3. Do I upgrade the pool, then run gptzfsboot then reboot?
I prefer to update gptzfsboot using gpart first without doing zpool upgrade, then reboot to make sure the system comes up, then do a zpool upgrade and another reboot. But I'm overly cautious sometimes (see the sketch after this list).
  4. Do I have to remove lines like zfs_enable="YES" from rc.conf?
No. If you are currently booting with root on ZFS, do not remove any zfs_load or zfs_enable from /boot/loader.conf or /etc/rc.conf.
  5. Are there any other configuration adjustments to be made before I reboot?
Should not need any.
  6. Lastly, if the machine doesn't boot - what are my best recovery options?
Come back here, blame me for giving bad advice? Seriously: grab a copy of the latest install image for your version (FBSD-13.1-RELEASE), boot it, drop to a shell, and run the gpart command to update gptzfsboot – that often fixes a lot of issues. Other folks here will have other help based on exactly what the problem is.
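To spell out that cautious order as commands – a sketch only: the device name and partition index are taken from the gpart show output earlier in the thread, and the pool name zroot is an assumption, so substitute your own after checking gpart show and zpool list:

Code:
# 1. refresh the protective MBR and gptzfsboot on the freebsd-boot partition
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 nvd1
# 2. reboot and confirm the system still comes up
shutdown -r now
# 3. only then enable the new pool features, and reboot once more
zpool upgrade zroot
shutdown -r now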
 
My opinions, my understanding of it all … Other folks here will have other help based on exactly what the problem is.
Most grateful. Need to read this several times. Your comment on gptzfsboot is illuminating. Several in the past have warned that not using gptzfsboot is looking for trouble, and I believe some are complaining the subject has been omitted from recent "official texts". Thanks again.
 