It is important to realise the commonalities and differences between the layered storage structures of non-ZFS filesystems like UFS and ZFS filesystems. UFS must have separate slices (i.e., physical, fixed-size "partitions") for the separate filesystems that are to be mapped or mounted into the hierarchical filesystem with / at the top of that hierarchy.
ZFS is a pooled storage system where all the filesystems (within one pool) share many resources: one such resource is space, hence the name pooled. In effect this creates an additional storage layer that isn't present in a traditional filesystem structure like UFS (see for example ZFS Administration, Part X- Creating Filesystems by Aaron Toponce). Datasets within a pool are somewhat like slices ("partitions") in that they represent separate filesystems that can/must be mapped into the (mounting) hierarchy below /.
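To make "pooled" a bit more concrete: every dataset in a pool draws from the same free space. As a minimal illustration (the extra dataset names and the figures here are made up), zfs list reports the same AVAIL value for all datasets of one pool:
Code:
# zfs list -o name,used,avail tank tank/usr tank/var
NAME       USED  AVAIL
tank       412G   488G
tank/usr   301G   488G
tank/var    11G   488G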
The usual default dataset structure within a pool and its accompanying mount mappings currently follow the structure as shown by astyle. The structure of your "tank" pool deviates from that[4]. Because ZFS is a pooled system, all datasets reside inside one pool[1].
Rich (BB code):
(ins)jon:~ zfs list -o name,mounted,mountpoint,canmount
NAME                                       MOUNTED  MOUNTPOINT  CANMOUNT
tank                                       yes      /tank       on
tank/ROOT                                  no       none        on
tank/ROOT/13.0-RELEASE-p11                 no       /           noauto
tank/ROOT/13.0-RELEASE-p8                  no       /           noauto
tank/ROOT/13.0-RELEASE-p8-nvidia-driver    no       /           noauto
tank/ROOT/13.1-STABLE                      no       /           noauto
tank/ROOT/13.1-STABLE-20220507.080956      no       /           noauto
tank/ROOT/13.1-STABLE-20230224.072050      yes      /           noauto
[...] I think I may have inadvertently mounted tank at some point, but I don't have whatever command I ran saved in my shell history.
"tank" is your pool name; at boot time this pool gets (automatically) imported and its datasets get mounted according to their respective
canmount
&
mountpoint
properties[2]. At the top of your
dataset hierarchy is your pool "tank"; any dataset inside "tank" is subordinate to it; it's placed relative to
tank
(yellow highlighted in the first column); in your current view above those are all the datasets
ROOT
and its (subordinate) child datasets, including your BEs.
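If you want to see those two properties, and where their values come from, for any dataset, zfs get also prints a SOURCE column. For example, against one of your BEs (the values shown here are what a typical BE reports; check your own output):
Code:
# zfs get canmount,mountpoint tank/ROOT/13.1-STABLE-20230224.072050
NAME                                   PROPERTY    VALUE   SOURCE
tank/ROOT/13.1-STABLE-20230224.072050  canmount    noauto  local
tank/ROOT/13.1-STABLE-20230224.072050  mountpoint  /       local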
[...] When I ran sudo zfs create tank/bhyve it created the folder /tank/bhyve instead of /bhyve.
zfsconcepts(7):
Rich (BB code):
Mount Points
Creating a ZFS file system is a simple operation, so the number of file
systems per system is likely to be numerous. To cope with this, ZFS au-
tomatically manages mounting and unmounting file systems without the need
to edit the /etc/fstab file. All automatically managed file systems are
mounted by ZFS at boot time.
By default, file systems are mounted under /path, where path is the name
of the file system in the ZFS namespace. Directories are created and de-
stroyed as needed.
A file system can also have a mount point set in the mountpoint property.
This directory is created as needed, and ZFS automatically mounts the
file system when the zfs mount -a command is invoked (without editing
/etc/fstab). The mountpoint property can be inherited, so if pool/home
has a mount point of /export/stuff, then pool/home/user automatically in-
herits a mount point of /export/stuff/user.
A file system mountpoint property of none prevents the file system from
being mounted.
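That inheritance rule is easy to reproduce yourself. A minimal sketch that mirrors the man page example (assuming a scratch pool named pool):
Code:
# zfs create pool/home
# zfs set mountpoint=/export/stuff pool/home    # set explicitly on the parent
# zfs create pool/home/user                     # child inherits the parent's mountpoint
# zfs get mountpoint pool/home/user
NAME            PROPERTY    VALUE               SOURCE
pool/home/user  mountpoint  /export/stuff/user  inherited from pool/home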
In your case, the top of your dataset hierarchy, tank (i.e. your pool name), has /tank as its mountpoint property. It has its canmount property set to on and thus it will appear as /tank in your mounted filesystem hierarchy. This /tank directory will initially be empty and functions as a sort of marker for newly created datasets: those will be placed there when not otherwise specified[3].
zfs-create(8):
Code:
DESCRIPTION
zfs create [-Pnpuv] [-o property=value]... filesystem
Creates a new ZFS file system. The file system is automatically
mounted according to the mountpoint property inherited from the parent,
unless the -u option is used.
With sudo zfs create tank/bhyve you've created a new dataset tank/bhyve in the dataset hierarchy of your pool "tank"; it inherits its mountpoint property from its parent in the dataset hierarchy, tank (i.e. /tank). This creates a new directory /tank/bhyve in your filesystem hierarchy, which is then used as mountpoint, and lastly the dataset is actually mounted there (see also the quotation above of zfsconcepts(7) where the dataset pool/home/user is mapped to mountpoint /export/stuff/user).
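If what you actually wanted was a dataset mounted at /bhyve instead of /tank/bhyve, you can simply override the inherited mountpoint, either at creation time or afterwards; a sketch of both options:
Code:
# zfs create -o mountpoint=/bhyve tank/bhyve    # override the inherited /tank at creation time
# ...or, for the dataset you have already created:
# zfs set mountpoint=/bhyve tank/bhyve          # the dataset is remounted at /bhyve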
If you don’t want /tank, just set canmount=off on tank.
Before creating a new dataset there, with the aid of zfs set canmount=off tank you could get rid of /tank. However, unless overridden, the mountpoint property /tank will still be used at the creation of any new dataset immediately below the top of the dataset hierarchy (tank):
Code:
# zfs set canmount=off tank
# zfs create tank/bhyve
# zfs list -o name,mounted,mountpoint,canmount
NAME               MOUNTED  MOUNTPOINT   CANMOUNT
tank               no       /tank        off
tank/ROOT          no       none         on
tank/ROOT/default  yes      /            on
<snip>
tank/bhyve         yes      /tank/bhyve  on
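You can verify that /tank/bhyve is the result of inheritance rather than a locally set value; the SOURCE column of zfs get makes that explicit:
Code:
# zfs get mountpoint tank/bhyve
NAME        PROPERTY    VALUE        SOURCE
tank/bhyve  mountpoint  /tank/bhyve  inherited from tank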
___
[1] Unless you have multiple pools; in that case each pool must "reside in a different slice": usually that means different (sets of) physical hard disks, because the VDEVs from which one pool is constructed cannot be used for another pool.
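A second pool is therefore built from its own devices; as a sketch (the device names ada2 and ada3 are placeholders):
Code:
# zpool create tank2 mirror ada2 ada3    # second pool on its own VDEV (a two-disk mirror)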
[2] As mentioned in the first quoted paragraph, ZFS manages its mounting outside of /etc/fstab by default; if you set the mountpoint property of a dataset to legacy you must explicitly manage those mounts either through /etc/fstab or manually through mount(8), as mentioned in answers by others.
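As a sketch of that legacy route (the dataset name tank/data is just an example): set the property, add a classic fstab entry and mount it yourself:
Code:
# zfs set mountpoint=legacy tank/data
# mkdir -p /data
# echo 'tank/data  /data  zfs  rw  0  0' >> /etc/fstab
# mount /data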
[3] See for example
ZFS default datasets from ca. 6:44 min.
This is from the following presentation and gives a good view of some of the (advanced) possibilities that BEs offer:
[4] As your dataset structure seems to deviate from the default given your output quoted above, you, for example, do not have separate datasets for tank/usr/home, tank/var/log and tank/var/mail. That means that the files in /usr/home, /var/log and /var/mail belong to the dataset used for your BEs; usually that is not what you want. It adds to the space occupied by the BEs but, more importantly, any change in those directories becomes part of a BE and will revert to an older state when you decide to, or must, roll back to another (older) BE. Please refer to the presentation at [3] where this is explained further; a minimal sketch of a separate-dataset layout follows after the reference list. The following references may also be helpful:
- Managing Boot Environments by Klara Systems - July 15, 2021
- Let’s Talk OpenZFS Snapshots by Klara Systems - July 28, 2021
- Basics of ZFS Snapshot Management by Klara Systems - May 12, 2021
- Advanced ZFS Snapshots by Klara Systems - October 27, 2021
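As for the separate-dataset layout mentioned at [4], a rough sketch of what that could look like on your pool, mirroring the installer's default approach (the exact property choices are up to you; note that ZFS refuses to mount over a non-empty directory, so existing files would first have to be moved aside and then into the new datasets):
Code:
# zfs create -o mountpoint=/usr -o canmount=off tank/usr    # container only; /usr itself stays in the BE
# zfs create tank/usr/home                                  # inherits /usr/home, now outside the BE datasets
# zfs create -o mountpoint=/var -o canmount=off tank/var
# zfs create tank/var/log
# zfs create tank/var/mail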