Solved: Clone a system with ZFS

I use a "test" system which I use as an "image" to clone it to "production" systems. The "test" system uses UFS and I use dump to backup each partition. The datacenter allows me to boot the "production" system in mfsbsd and then I use a script which creates partitions and uses restore to restore each partition from the backup, make changes to /etc/rc.conf, /etc/hosts, etc.

Now I want to create a second "test" system with ZFS and clone it to "production" systems. Which ZFS commands will I need for this?
 
Which ZFS commands will I need for this?
zfs-snapshot(8), zfs-send(8) and zfs-receive(8).

The basic gist is to make a snapshot of the dataset, then use zfs-send(8) to turn that snapshot into a byte stream. The byte stream can be stored as a file, which you can keep as a backup, or piped directly into zfs-receive(8), which turns it back into a ZFS dataset.

The strategy is almost the same as dump(8)/restore(8), with zfs-send(8) taking the role of dump(8) and zfs-receive(8) the role of restore(8).
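
A minimal sketch of that workflow (pool and snapshot names are only examples):
Code:
# take a recursive snapshot of the source pool
zfs snapshot -r zroot@backup

# turn it into a stream and keep it as a file (the dump(8) equivalent) ...
zfs send -R zroot@backup > /backup/zroot.zfs

# ... or pipe it straight into another pool (the restore(8) equivalent)
zfs send -R zroot@backup | zfs receive -Fdu newpool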
 
I installed a new "test" system using a "bsdinstallimage" script provided by the datacenter, which uses "bsdinstall auto" to complete the installation.

I chose to mirror swap and got this:

Code:
gpart show
=>        40  7814037088  ada0  GPT  (3.6T)
          40        1024     1  freebsd-boot  (512K)
        1064         984        - free -  (492K)
        2048    33554432     2  freebsd-swap  (16G)
    33556480  7780478976     3  freebsd-zfs  (3.6T)
  7814035456        1672        - free -  (836K)

=>        40  7814037088  ada1  GPT  (3.6T)
          40        1024     1  freebsd-boot  (512K)
        1064         984        - free -  (492K)
        2048    33554432     2  freebsd-swap  (16G)
    33556480  7780478976     3  freebsd-zfs  (3.6T)
  7814035456        1672        - free -  (836K)

Code:
gmirror status
       Name    Status  Components
mirror/swap  COMPLETE  ada0p2 (ACTIVE)
                       ada1p2 (ACTIVE)

Question 1: Should I use UFS with gmirror for swap, or can I use ZFS for it?

The installer also created these:

Code:
df -h
Filesystem            Size    Used   Avail Capacity  Mounted on
zroot/ROOT/default    3.5T    546M    3.5T     0%    /
devfs                 1.0K    1.0K      0B   100%    /dev
zroot/usr/ports       3.5T     96K    3.5T     0%    /usr/ports
zroot/var/log         3.5T    160K    3.5T     0%    /var/log
zroot/tmp             3.5T    128K    3.5T     0%    /tmp
zroot/var/mail        3.5T     96K    3.5T     0%    /var/mail
zroot/var/audit       3.5T     96K    3.5T     0%    /var/audit
zroot/var/tmp         3.5T     96K    3.5T     0%    /var/tmp
zroot/var/crash       3.5T     96K    3.5T     0%    /var/crash
zroot/usr/home        3.5T    128K    3.5T     0%    /usr/home
zroot                 3.5T     96K    3.5T     0%    /zroot
zroot/usr/src         3.5T     96K    3.5T     0%    /usr/src

Code:
mount
zroot/ROOT/default on / (zfs, local, noatime, nfsv4acls)
devfs on /dev (devfs)
zroot/usr/ports on /usr/ports (zfs, local, noatime, nosuid, nfsv4acls)
zroot/var/log on /var/log (zfs, local, noatime, noexec, nosuid, nfsv4acls)
zroot/tmp on /tmp (zfs, local, noatime, nosuid, nfsv4acls)
zroot/var/mail on /var/mail (zfs, local, nfsv4acls)
zroot/var/audit on /var/audit (zfs, local, noatime, noexec, nosuid, nfsv4acls)
zroot/var/tmp on /var/tmp (zfs, local, noatime, nosuid, nfsv4acls)
zroot/var/crash on /var/crash (zfs, local, noatime, noexec, nosuid, nfsv4acls)
zroot/usr/home on /usr/home (zfs, local, noatime, nfsv4acls)
zroot on /zroot (zfs, local, noatime, nfsv4acls)
zroot/usr/src on /usr/src (zfs, local, noatime, nfsv4acls)

Question 2: With UFS I have a separate /var. Is there any reason to have separate filesystems for the /var subdirectories? Is the only reason for separate filesystems the different mount options?
 
Should I use UFS with gmirror for SWAP
A swap partition doesn't have a filesystem on it, but you can mirror it if you want. If one of the disks dies, the swap partition on that disk becomes unavailable, which can cause problems if it happens while the system is running. Mirroring the swap prevents issues in that case.
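
For reference, a minimal sketch of such a gmirror'ed swap setup (partition names taken from the gpart output above):
Code:
gmirror load                            # or geom_mirror_load="YES" in /boot/loader.conf
gmirror label -v swap ada0p2 ada1p2     # build mirror/swap from both swap partitions
echo "/dev/mirror/swap none swap sw 0 0" >> /etc/fstab
swapon -a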

Is there any reason to have separate filesystems for the /var subdirectories? Is the only reason for separate filesystems the different mount options?
Yes, that's the main reason. And because they're separate datasets you can set different properties. You could set higher compression on /var/log, for example; most of the data there is text, which compresses really nicely. For /var/db/mysql you might want a different recordsize (one that matches MySQL's page size), etc.
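
A quick sketch of what that looks like (dataset names assume the default zroot layout; gzip-9 and the 16K recordsize, matching InnoDB's default page size, are only illustrative values):
Code:
# heavier compression for mostly-text log files
zfs set compression=gzip-9 zroot/var/log

# dedicated dataset for MySQL with a matching recordsize
# (the parent zroot/var/db dataset has to exist first, see the posts further down)
zfs create -o recordsize=16k -o mountpoint=/var/db/mysql zroot/var/db/mysql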
 
Here is the plan I used to relocate my ZFS root. The machine boots via BIOS (not UEFI), so it doesn't support booting from NVMe devices.
Code:
#!/bin/sh

# $Revision: 1.5 $

# I want to re-make the ZFS root file system with a different layout.  The
# root is currently on a pair of 250GB Micron SSDs (with a zroot ZFS mirror
# and a gmirror for swap).  These consumer grade SSDs can lose cached data
# if there is a sudden power loss, and that's unacceptable for a SLOG.
# I need to replace them with "enterprise class" SSDs.
#
# In Stage 1 I have plugged in a pair of Velociraptors which are the same
# size.  This script moves the root from Micron SSDs (ada0, ada1) to
# Velociraptors (ada2, ada3).
#
# In stage 2, I will move the root back to an "enterprise class" SSD mirror.
# I have an Intel DC S3520 240GB and a DC S3610 400GB for the mirror, and
# will create a power loss protected boot "disk" with SLOG and L2ARC.  To
# execute stage 2 (Velociraptors back to Intel SSD), only the root pool name,
# the swap mirror name, and target disks get switched:
#
#   ssd0: Intel DC S3520 240GB (PLP + E2E)
#   ssd1: Intel DC S3610 400GB (PLP + E2E)

# Stage 1 layout is 2x256GB boot disks (mirror) with ZFS SLOG and L2ARC
# (the ZFS SLOG and L2ARC aren't used until we have an SSD target in Stage 2):
#
# --: 40 sectors: MBR legacy boot and reserved sectors
# p1: 512KiB: freebsd-boot: manually constructed duplicates
# p2: 16GiB: gmirror: freebsd-swap
# p3: 86GiB: zfs-mirror: freebsd-zfs (zroot)
# p4: 12GiB: zfs-mirror: freebsd-zfs (SLOG)
# p5: 64GiB: stripe: freebsd-zfs (2 x 64GiB L2ARC)
# p6: 60GiB: unused: freebsd-ufs (2 x 60GiB UFS, empty, trimmed)

# Note, in Stage 2, with the Intel DC 3xx0 SSD (mirror), p6 is a different size:

# Target root pool name
#ZROOTSRC=zroot; ZROOTDST=zroot2        # Stage 1: target Velociraptors
ZROOTSRC=zroot2; ZROOTDST=zroot         # Stage 2: target Intel DC 3xx0 SSDs

# Target swap GEOM mirror name
#SWAP=swap2             # Stage 1
SWAP=swap               # Stage 2

# The target disks for the new root mirror.
DEV0=/dev/ada2          # Stage 1: Velociraptor.  Stage 2: 240GB DC S3520
DEV1=/dev/ada3          # Stage 1: Velociraptor.  Stage 2: 400GB DC S3610

PROG=$(basename $0)
TAB=$(echo | tr '\n' '\t')
PATH="/usr/local/bin:/bin:/usr/bin:/sbin:/usr/sbin:/usr/local/sbin"
export PATH

Say()
{
    echo "$PROG: $*"
}

Barf()
{
    Say $@ 1>&2
    exit 1
}

# BSD "echo -n"
case `echo -n` in
    -*) Echon() { echo ${1:+"$@"}"\c"; };;
    *)  Echon() { echo -n ${1:+"$@"}; };;
esac

IsPosNZint()
{
    echo $* | grep -q "^[1-9][0-9]*$"
}

# Get last "count" (default 15) characters of a disk serial number.
GetDiskSerialNumber()   # device [count]
{
    dev=$1
    count=${2:-15}
    sn=$(camcontrol identify $dev | \
        grep "serial number" | \
        sed -e "s/serial number[ $TAB]*//")
    nsn=$(Echon "$sn" | wc -c)
    nsn=$(echo $nsn)
    start=1
    [ $nsn -gt $count ] && start=$(($nsn-$count))
    echo "$sn" | cut -c $start-$nsn | sed -e 's/^[ -]*//'
}

id | grep "^uid=0" || Barf "You MUST be root"

# Get the serial numbers for the root mirror
SN0=$(GetDiskSerialNumber $DEV0 12)
SN1=$(GetDiskSerialNumber $DEV1 12)

# create the partition tables
gpart destroy -F ${DEV0}
gpart destroy -F ${DEV1}
gpart create -s GPT ${DEV0}
gpart create -s GPT ${DEV1}

# Create a 512 kB boot partition at offset 40 -- which is the size of the
# FAT32 "reserved sectors" (32 or 34 blocks) rounded up to a 4 kB boundary.
# This is the same layout used by the FreeBSD 13 installer.
gpart add -i 1 -b 40 -s 512k -l ${SN0}:p1 -t freebsd-boot ${DEV0}
gpart add -i 1 -b 40 -s 512k -l ${SN1}:p1 -t freebsd-boot ${DEV1}

# Install the first and second stage bootloaders for a ZFS root
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ${DEV0}
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ${DEV1}

# Align all subsequent partitions on a 1 MiB boundary
gpart add -a 1m -i 2 -s 16g -l ${SN0}:p2 -t freebsd-swap ${DEV0}
gpart add -a 1m -i 2 -s 16g -l ${SN1}:p2 -t freebsd-swap ${DEV1}
gpart add -a 1m -i 3 -s 86g -l ${SN0}:p3 -t freebsd-zfs ${DEV0}
gpart add -a 1m -i 3 -s 86g -l ${SN1}:p3 -t freebsd-zfs ${DEV1}
gpart add -a 1m -i 4 -s 12g -l ${SN0}:p4 -t freebsd-zfs ${DEV0}
gpart add -a 1m -i 4 -s 12g -l ${SN1}:p4 -t freebsd-zfs ${DEV1}
gpart add -a 1m -i 5 -s 64g -l ${SN0}:p5 -t freebsd-zfs ${DEV0}
gpart add -a 1m -i 5 -s 64g -l ${SN1}:p5 -t freebsd-zfs ${DEV1}
# The rest is TRIM'd and unused
gpart add -a 1m -i 6 -l ${SN0}:p6 -t freebsd-ufs ${DEV0}
gpart add -a 1m -i 6 -l ${SN1}:p6 -t freebsd-ufs ${DEV1}

gpart list ${DEV0}
gpart list ${DEV1}
ls -la /dev/gpt

# Make the new gmirror swap space on the new root disks.
# The gmirror metadata will get copied to /dev/mirror in the new root.
gmirror label -v -b round-robin $SWAP /dev/gpt/${SN0}:p2
gmirror insert $SWAP /dev/gpt/${SN1}:p2

# For the new root.
zpool create $ZROOTDST mirror /dev/gpt/${SN0}:p3 /dev/gpt/${SN1}:p3

# Save these commands for after we re-boot (paste them to a safe place).
# However we are only going to execute these steps if the new root is on SSDs.
# i.e. don't bother if we are relocating root to the interim Velociraptors.
echo zpool add tank log mirror /dev/gpt/${SN0}:p4 /dev/gpt/${SN1}:p4
echo zpool add tank cache /dev/gpt/${SN0}:p5 /dev/gpt/${SN1}:p5

# Partition 6 is assigned to over-provisioning on DEV0 and DEV1
newfs -E /dev/gpt/${SN0}:p6     # trimmed, unused
newfs -E /dev/gpt/${SN1}:p6     # trimmed, unused

# Copy the old root to the new root.
zfs set compression=lz4 $ZROOTDST
zpool status $ZROOTSRC $ZROOTDST
zfs snapshot -r $ZROOTSRC@replica1
zfs list -r -t snapshot $ZROOTSRC
zfs umount $ZROOTDST    # you must keep it unmounted
size=$(zfs send -nP -R $ZROOTSRC@replica1 | \
    grep "^size" | sed -e 's/size[ $TAB]*//')
IsPosNZint "$size" || Barf "bad send size for $ZROOTSRC@replica1: \"$size\""
zfs send -R $ZROOTSRC@replica1 \
    | pv -s $size -ptebarT | zfs receive -Fdu $ZROOTDST

# The new root is now frozen at the time of the snapshot.
# If this is an issue you need to drop into single user mode
# to execute the snapshot prior to send/receive.

# This is the default bootable dataset for the new root pool.
# It's usually <zroot_pool_name>/ROOT/default.
# But an upgrade using a different boot environment may change that.
# You must get this right, or your system will not boot.
# Run "zpool get bootfs $ZROOTSRC" and switch:
#   stage 1: zroot to zroot2
#   stage 2: zroot2 to zroot
zpool set bootfs=$ZROOTDST/ROOT/13 $ZROOTDST
zpool export $ZROOTDST

# Reboot, but interrupt it to re-configure the BIOS.
# Edit the BIOS boot order to favour new root mirrors, e.g. ada2, ada3.
# Reset, and allow the system to boot SINGLE USER mode.
# We need to stop the old zroot from being imported and mounted.
# https://openzfs.github.io/openzfs-docs/Project%20and%20Community/\
#         FAQ.html#the-etc-zfs-zpool-cache-file
zfs set readonly=off $ZROOTDST
rm -f /boot/zfs/zpool.cache /etc/zfs/zpool.cache
zpool set cachefile=/etc/zfs/zpool.cache $ZROOTDST
# Change fstab to use the new swap partition
# fstab: /dev/mirror/$SWAP none swap sw 0 0
vi /etc/fstab
exit

# Continue on to multi-user mode

# I need to release anything else used on the original root disks.
# My old swap gmirror is no longer used, and needs to be destroyed.
swapinfo
ls -la /dev/mirror
gmirror destroy <swap-name>

# After it's verified working...
zpool destroy $ZROOTSRC
Today, my root looks like this:
Code:
[sherman.138] $ zpool status zroot
  pool: zroot
 state: ONLINE
  scan: scrub repaired 0B in 00:02:52 with 0 errors on Wed Apr 13 14:13:59 2022
config:

    NAME                      STATE     READ WRITE CKSUM
    zroot                     ONLINE       0     0     0
      mirror-0                ONLINE       0     0     0
        gpt/236009L240AGN:p3  ONLINE       0     0     0
        gpt/410008H400VGN:p3  ONLINE       0     0     0

errors: No known data errors
With ".eli" appeded to the swap device, swapon(8) will set up GELI encrypt:
Code:
[sherman.145] $ cat /etc/fstab
# Device        Mountpoint    FStype    Options        Dump    Pass#
#/dev/mirror/swap      none        swap    sw        0    0
/dev/mirror/swap.eli      none        swap    sw        0    0

[sherman.146] $ geom mirror status
       Name    Status  Components
mirror/swap  COMPLETE  ada0p2 (ACTIVE)
                       ada1p2 (ACTIVE)

[sherman.147] $ swapinfo
Device          1K-blocks     Used    Avail Capacity
/dev/mirror/swap.eli  16777212        0 16777212     0%
 
For /var/db/mysql you might want a different recordsize (one that matches MySQL's page size), etc.

# zfs create -o mountpoint=/var/db/mysql zroot/var/db/mysql
cannot create 'zroot/var/db/mysql': parent does not exist
# zfs create -o mountpoint=/var/db zroot/var/db
# zfs create -o mountpoint=/var/db/mysql zroot/var/db/mysql

But then I can't see the files in /var/db, because an empty filesystem is mounted over /var/db.
 
OK, I found how to do it:

# zfs create -o mountpoint=none zroot/var/db
# zfs create -o mountpoint=/var/db/mysql zroot/var/db/mysql
 
If you plan to install such remote systems on a more regular basis, you might also want to take a look at the scripting capabilities of bsdinstall(8), which has become quite versatile and very easy to extend with custom scripts.
Even complex setups can be installed via relatively short bsdinstall scripts (IMHO often easier than doing it manually via the GUI or console), and you can also put together a very minimal mfsBSD image that contains only the bare minimum to kick off bsdinstall and pulls everything else during installation. I've used this approach on bare-metal servers as well as DigitalOcean droplets, back when they only offered outdated and/or non-ZFS FreeBSD images.
Additional packages can be installed after the main installation process and configuration or user data can be pulled e.g. from git repositories. You can pretty much fully set up the servers in an automated way without having to maintain (e.g. update) and copy around full images (or zfs datasets) on/from a "master" system.
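
As a rough illustration, a scripted installation is driven by an installerconfig file along these lines. The ZFSBOOT_* preamble variables come from bsdinstall's zfsboot backend and the exact set varies between releases, so check bsdinstall(8) for your version; all names below are placeholders:
Code:
# preamble: read by bsdinstall(8) before the installation starts
DISTRIBUTIONS="kernel.txz base.txz"
ZFSBOOT_DISKS="ada0 ada1"
ZFSBOOT_VDEV_TYPE=mirror
ZFSBOOT_SWAP_SIZE=16g

#!/bin/sh
# everything below runs chroot'ed into the freshly installed system
sysrc hostname="prod1.example.com"
sysrc sshd_enable="YES"
# additional packages, configuration from git, etc. can be pulled here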
 
Because I have all the software pre-installed and configured using ports (I need custom options), it's faster to use an "image" to install a new system.
 
Now I want to create a second "test" system with ZFS and clone it to "production" systems. Which ZFS commands will I need for this?
If I understand correctly what you want, this is how I do this kind of task:

Given machine X with disk Y (ZFS), back up everything to "something" (an HD, a USB disk, a NAS, whatever), then restore it to machine U with its empty disk V.
This is just a backup plus the restore procedure.

I generally don't use a direct zfs send between two machines (one of which is empty); I only do it between two "full" machines (essentially a replica).


In fact it is not super detailed: I have not covered where to save the copy (e.g. on a USB stick or disk), or the case where you can connect the backup disk directly to the physical machine, etc.
In the example the backup is made to a ZFS-formatted drive, but it is just a .gz file, nothing more.
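
A bare-bones sketch of that (pool names, snapshot names and paths are hypothetical):
Code:
# on machine X: snapshot everything and save the stream as a compressed file
zfs snapshot -r zroot@backup1
zfs send -R zroot@backup1 | gzip > /backup/zroot-backup1.zfs.gz

# on machine U, after creating an empty pool on disk V:
gunzip -c /backup/zroot-backup1.zfs.gz | zfs receive -Fdu zroot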

If you need further help, please ask.
 
Thank you all for the help. I will try to write a script which I will run from mfsbsd to:

1) Destroy the partition tables on ada0 and ada1
2) Create freebsd-boot (512K), freebsd-swap (16GB) and freebsd-zfs (all free space) on ada0
3) Back up the partition table from ada0 with gpart backup and restore it to ada1
4) Create a gmirror for swap
5) Create the ZFS pool and datasets
6) Restore the snapshot backups using zfs-send and zfs-receive
7) Write the ZFS bootcode to both disks
8) Set bootfs on the new zroot
9) Configure /etc/hosts and /etc/rc.conf with the correct IP
10) Reboot
 
I made some progress:

Code:
gpart destroy -F ada0
gpart destroy -F ada1

# ada0

gpart create -s gpt ada0
gpart add -i 1 -b 40 -s 512k -t freebsd-boot ada0
gpart add -i 2 -a 1m -s 16G -t freebsd-swap ada0
gpart add -i 3 -a 1m -t freebsd-zfs ada0

# ada1

gpart backup ada0 | gpart restore -F ada1

# Create gmirror

gmirror load
gmirror label -vb prefer swap /dev/ada0p2 /dev/ada1p2

# Bootcode

gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada0
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada1

# create zpool

zpool create -f -R /var/tmp/temp zroot mirror ada0p3 ada1p3

zfs create -o atime=off -o mountpoint=none zroot/ROOT
zfs create -o atime=off -o mountpoint=/ -o canmount=noauto zroot/ROOT/default
zfs create -o mountpoint=/tmp -o atime=off -o exec=on -o setuid=off zroot/tmp
zfs create -o mountpoint=none zroot/usr
zfs create -o mountpoint=none zroot/var
zfs create -o mountpoint=/home zroot/home

# Set bootfs property

zpool set bootfs=zroot/ROOT/default zroot

I copy the data directly from the "original" server to the "cloned" server using these commands:

Code:
zfs send zroot/ROOT/default@today | ssh zfs.cretaforce.gr zfs recv -F zroot/ROOT/default
zfs send zroot/usr@today | ssh zfs.cretaforce.gr zfs recv -F zroot/usr
zfs send zroot/home@today | ssh zfs.cretaforce.gr zfs recv -F zroot/home

The next step is, instead of using ssh, to zfs-send to a file and use that file with zfs-recv. I will store these files on a web server and fetch them into mfsbsd's tmpfs.
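
One possible variant of that, streaming the fetched file straight into zfs receive so the whole stream doesn't have to fit into tmpfs first (URL, paths and snapshot name are placeholders):
Code:
# on the "test" system: publish the compressed stream on the web server
zfs snapshot -r zroot@today
zfs send -R zroot@today | gzip > /usr/local/www/data/data.zfs.gz

# on the "production" system, booted into mfsbsd, with the new zroot created
fetch -o - https://example.com/data.zfs.gz | gunzip | zfs receive -Fdu zroot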
 
Here is the final script:

Code:
#!/usr/local/bin/bash

# create the partition tables

gpart destroy -F ada0
gpart destroy -F ada1

# ada0

gpart create -s gpt ada0
gpart add -i 1 -b 40 -s 512k -t freebsd-boot ada0
gpart add -i 2 -a 1m -s 16G -t freebsd-swap ada0
gpart add -i 3 -a 1m -t freebsd-zfs ada0

# ada1

gpart backup ada0 | gpart restore -F ada1

# Create gmirror

gmirror label -vb prefer swap /dev/ada0p2 /dev/ada1p2

# Bootcode

gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada0
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada1

# create zpool

zpool create -f -R /var/tmp/temp zroot mirror ada0p3 ada1p3

# Data file created on the "original" server with the command:
# zfs send -R zroot@today > /tmp/data.zfs

fetch -o /tmp/data.zfs https://example.com/data.zfs

# restore data

zfs receive -vF zroot < /tmp/data.zfs

# Set bootfs property

zpool set bootfs=zroot/ROOT/default zroot

# Mount zfs to edit /etc/rc.conf & /etc/hosts

mount -t zfs zroot/ROOT/default /mnt
chroot /mnt
# edit /etc/rc.conf & /etc/hosts inside the chroot, then exit

# Reboot

# shutdown -r now
 
I successfully cloned a "test" server to a "production" server using ZFS filesystems.

I was also able to take UFS dumps and restore them onto ZFS filesystems.

Now I have another question.

I boot the server with mfsbsd and I run:


mount -t tmpfs tmpfs /mnt
zpool import -f -R /mnt zroot


The problem is that when I look into /mnt it doesn't show the files and directories that I expect to be there, for example etc and root.


ls /mnt
home tmp usr var zroot


Also /mnt/usr has these:


ls /mnt/usr
ports src


Here are the datasets:

Code:
zfs list
NAME                 USED  AVAIL     REFER  MOUNTPOINT
zroot               3.92G  3.49T       96K  /mnt/zroot
zroot/ROOT          2.25G  3.49T       96K  none
zroot/ROOT/default  2.25G  3.49T     1.69G  /mnt
zroot/home           176K  3.49T      116K  /mnt/home
zroot/tmp            264K  3.49T      152K  /mnt/tmp
zroot/usr           1.67G  3.49T       96K  /mnt/usr
zroot/usr/ports      948M  3.49T      948M  /mnt/usr/ports
zroot/usr/src        758M  3.49T      758M  /mnt/usr/src
zroot/var           1.93M  3.49T       96K  /mnt/var
zroot/var/audit      160K  3.49T       96K  /mnt/var/audit
zroot/var/crash      160K  3.49T       96K  /mnt/var/crash
zroot/var/log       1.14M  3.49T      776K  /mnt/var/log
zroot/var/mail       168K  3.49T      104K  /mnt/var/mail
zroot/var/tmp        224K  3.49T      160K  /mnt/var/tmp

If I run these commands:


mkdir /mnt2
mount -t zfs zroot/ROOT/default /mnt2


then I see the files:


ls /mnt2
.cshrc .snap bin compat entropy home lib media net rescue sbin tmp usr zroot
.profile COPYRIGHT boot dev etc home2 libexec mnt proc root sys tmpfs var
 
The problem exists with this:


mkdir /zfs
zpool import -f -N -R /zfs zroot
zfs mount -a


The problem doesn't exist with this:


mkdir /zfs
zpool import -f -N -R /zfs zroot
zfs mount zroot/ROOT/default
zfs mount -a


Any idea if zfs mount -a mounts in the wrong order, which may cause this issue?
 
The canmount property of zroot/ROOT/default is usually set to noauto:
Code:
dice@maelcum:~ % zfs get canmount zroot/ROOT/default
NAME                PROPERTY  VALUE     SOURCE
zroot/ROOT/default  canmount  noauto    local

It's set to noauto because you could have multiple boot environments (BEs), all with their mountpoint property set to /. You don't want to mount all of them at the same time.
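
In other words (a small sketch, using the dataset names from this thread), the root dataset has to be picked and mounted explicitly:
Code:
# a BE has mountpoint=/ and canmount=noauto, so "zfs mount -a" never mounts it;
# mount the active one by hand (at boot it is selected via the pool's bootfs property)
zfs mount zroot/ROOT/default
zfs mount -a    # now mount the rest of the datasets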
 
You are right. So it's the expected behaviour and there is nothing wrong with my configuration.

Now I am trying to find the commands to recreate the empty datasets, instead of using the command zfs receive -vF zroot < /tmp/data.zfs

I will first do a fresh install using bsdinstall with the default ZFS datasets and check whether zpool history provides any useful information.

My plan is to dump/restore a UFS filesystem onto ZFS, and it's better for the datasets to be empty instead of restore overwriting existing files. Of course I will have to manually edit /etc/rc.conf, /boot/loader.conf and /etc/fstab to be compatible with ZFS.
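
A rough sketch of that dump/restore step once the empty datasets exist (the dump file names are placeholders):
Code:
# with the new pool created, mount the root dataset and restore the UFS dump into it
mount -t zfs zroot/ROOT/default /mnt
cd /mnt && restore -rf /tmp/root.dump
# repeat for the other datasets (/usr, /var, ...) with their own dump files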
 
OK, I found the commands that were used:

Code:
zpool history
History for 'zroot':
2022-05-06.17:52:26 zpool create -o altroot=/mnt -O compress=lz4 -O atime=off -m none -f zroot mirror ada0p3 ada1p3
2022-05-06.17:52:26 zfs create -o mountpoint=none zroot/ROOT
2022-05-06.17:52:26 zfs create -o mountpoint=/ zroot/ROOT/default
2022-05-06.17:52:26 zfs create -o mountpoint=/tmp -o exec=on -o setuid=off zroot/tmp
2022-05-06.17:52:27 zfs create -o mountpoint=/usr -o canmount=off zroot/usr
2022-05-06.17:52:27 zfs create zroot/usr/home
2022-05-06.17:52:27 zfs create -o setuid=off zroot/usr/ports
2022-05-06.17:52:27 zfs create zroot/usr/src
2022-05-06.17:52:27 zfs create -o mountpoint=/var -o canmount=off zroot/var
2022-05-06.17:52:28 zfs create -o exec=off -o setuid=off zroot/var/audit
2022-05-06.17:52:28 zfs create -o exec=off -o setuid=off zroot/var/crash
2022-05-06.17:52:28 zfs create -o exec=off -o setuid=off zroot/var/log
2022-05-06.17:52:28 zfs create -o atime=on zroot/var/mail
2022-05-06.17:52:29 zfs create -o setuid=off zroot/var/tmp
2022-05-06.17:52:29 zfs set mountpoint=/zroot zroot
2022-05-06.17:52:29 zpool set bootfs=zroot/ROOT/default zroot
2022-05-06.17:52:29 zpool set cachefile=/mnt/boot/zfs/zpool.cache zroot
2022-05-06.17:52:35 zfs set canmount=noauto zroot/ROOT/default
 