HOWTO: FreeBSD ZFS Madness

srivo said:
First, thanks for that how-to! It's really helpful to manage servers.
Welcome ;)

srivo said:
Like if the ZFS jailed volume created by beadm is not mounted.
Indeed. I added these steps to make sure that the newly created Jail dataset is mounted:

3.1. Make new Jail dataset mountable.
Code:
# zfs set canmount=noauto sys/ROOT/jailed

3.2. Mount new Jail dataset.
Code:
# zfs mount sys/ROOT/jailed
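As a quick sanity check, the result of the two steps above can be verified like this (dataset name as above; this assumes a live system with the pool imported):

```shell
# canmount should now report 'noauto' and the dataset should be mounted
zfs get -H -o value canmount sys/ROOT/jailed
zfs list -H -o name,mounted sys/ROOT/jailed
```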
 
Updates to the beadm utility:

- minor fixes and cleanup
- added a -F switch to the destroy option, which skips the confirmation on destroy
- implemented the umount option with a -f switch for umount -f (force)
- implemented the mount option with several variants of usage; examples:

Code:
# beadm
usage:
  beadm subcommand cmd_options

  subcommands:

  beadm activate beName
  beadm create [-e nonActiveBe | -e beName@snapshot] beName
  beadm create beName@snapshot
  beadm destroy [-F] beName | beName@snapshot
  beadm list
  beadm mount
  beadm mount beName [mountpoint]
  beadm umount [-f] beName
  beadm rename origBeName newBeName

# beadm mount
update
  sys/ROOT/update  /

# beadm mount test /test
Mounted successfully on '/test'

# beadm mount default
Mounted successfully on '/tmp/tmp.KhAtHe'

# beadm mount
default
  sys/ROOT/default  /tmp/tmp.KhAtHe

test
  sys/ROOT/test            /test
  sys/ROOT/test/SOMETHING  /test/test

update
  sys/ROOT/update  /

# beadm umount test
Unmounted successfully

# beadm umount -f default
Unmounted successfully

Please report all problems and bugs ;)
 
> cd /usr/ports/sysutils/beadm; make install clean
===> License BSD accepted by the user
===> Extracting for beadm-0.7
=> SHA256 Checksum OK for beadm-0.7.tar.bz2.
===> Patching for beadm-0.7
===> Configuring for beadm-0.7
===> Installing for beadm-0.7
===> Generating temporary packing list
===> Checking if sysutils/beadm already installed
install -o root -g wheel -m 555 /usr/ports/sysutils/beadm/work/beadm-0.7/beadm /usr/local/sbin/beadm
install -o root -g wheel -m 444 /usr/ports/sysutils/beadm/work/beadm-0.7/beadm.1 /usr/local/man/man1/
===> Compressing manual pages for beadm-0.7
===> Registering installation for beadm-0.7
===> Cleaning for beadm-0.7
> beadm list
ERROR: This system is not configured for boot environments

I have FreeBSD 9.0-STABLE and an encrypted ZFS root. Is it possible to use beadm with this configuration?
 
vermaden said:
Post here the output of the gpart show and mount commands.

Code:
> gpart show 
=>       34  976773101  ada0  GPT  (465G)
         34        128     1  freebsd-boot  (64k)
        162       1854        - free -  (927k)
       2016    2097152     2  freebsd-ufs  (1.0G)
    2099168   20447232     3  freebsd-swap  (9.8G)
   22546400  954226735     4  freebsd-zfs  (455G)

Code:
> mount
tank0 on / (zfs, local, noatime, nfsv4acls)
devfs on /dev (devfs, local, multilabel)
/dev/label/boot0 on /boot-mount (ufs, local, noatime)
procfs on /proc (procfs, local)
fdescfs on /dev/fd (fdescfs)
linprocfs on /compat/linux/proc (linprocfs, local)
tank0/home on /home (zfs, local, noatime, nfsv4acls)
tank0/caesar on /home/caesar (zfs, local, noatime, nfsv4acls)
tank0/torrents on /home/caesar/Torrents (zfs, local, noatime, nfsv4acls)
tank0/VBox on /home/caesar/VirtualBox (zfs, local, noatime, nfsv4acls)
tank0/usr on /usr (zfs, local, noatime, nfsv4acls)
tank0/usr/ports on /usr/ports (zfs, local, noatime, nfsv4acls)
tank0/usr/ports/distfiles on /usr/ports/distfiles (zfs, local, noatime, nfsv4acls)
tank0/var on /var (zfs, local, noatime, nfsv4acls)
 
@lisiren

I see that You use a UFS /boot for booting and then ZFS for the rest of the system; this is not supported by beadm. You must use a ZFS-only setup like in this HOWTO (without UFS), with a dataset scheme like this:

Code:
> mount
tank0/ROOT/default on / (zfs, local, noatime, nfsv4acls)
tank0/ROOT/default/usr on /usr (zfs, local, noatime, nfsv4acls)
tank0/ROOT/default/usr/ports on /usr/ports (zfs, local, noatime, nfsv4acls)
tank0/ROOT/default/usr/ports/distfiles on /usr/ports/distfiles (zfs, local, noatime, nfsv4acls)
tank0/ROOT/default/var on /var (zfs, local, noatime, nfsv4acls)
tank0/home on /home (zfs, local, noatime, nfsv4acls)
tank0/home/caesar on /home/caesar (zfs, local, noatime, nfsv4acls)
tank0/home/caesar/Torrents on /home/caesar/Torrents (zfs, local, noatime, nfsv4acls)
tank0/home/caesar/VirtualBox on /home/caesar/VirtualBox (zfs, local, noatime, nfsv4acls)
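For reference, a minimal sketch of how such a layout could be started on an existing pool (dataset names follow the example above; moving the contents of the live / into the new dataset is omitted and would have to be done from a rescue environment):

```shell
# container for boot environments; never mounted itself
zfs create -o mountpoint=none tank0/ROOT
# the first boot environment; mounted via the loader, hence 'legacy'
zfs create -o mountpoint=legacy tank0/ROOT/default
# tell the bootcode which dataset to boot from
zpool set bootfs=tank0/ROOT/default tank0
```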
 
Updates to the beadm utility:

- minor fixes and cleanup
- fixed incorrect MOUNTPOINT gathering in beadm mount
- added an additional check to beadm activate for the case when a BE is mounted by the beadm mount command
 
@vermaden

Thank you very much for this how-to. With the implementation of beadm, I’m thinking of switching back to ZFS. However, I have one question/concern.

Is it possible to use beadm in conjunction with GRUB2? I used OpenSolaris for a short while back in the day and, if I remember correctly, their version of beadm would create a new entry in GRUB2 that would allow you to select your BE upon boot. This process allowed you to select and test your new BE and, in the event of a kernel panic or an /etc/fstab issue for example, you could just revert to your previous BE. This would be easier than recovering your previous BE using an installation CD/DVD as you explained above (reply #29).
 
xeube said:
Thank you very much for this how-to. With the implementation of beadm, I’m thinking of switching back to ZFS. However, I have one question/concern.
Welcome. If You face any issues with it, please report them ;)

xeube said:
Is it possible to use beadm in conjunction with GRUB2?
GRUB2 supports booting from ZFS from version 1.99:
http://ashish.is.lostca.se/2011/12/28/booting-into-zfs-only-freebsd-from-grub2/

Recently GRUB2 version 2.0 has been released.

I think the answer is yes, but it's difficult since the version of GRUB2 in Ports is 1.98 (which does not support ZFS).

You will have to use some Linux to install GRUB 2.0 that supports ZFS.

xeube said:
I used OpenSolaris for a short while back in the day and, if I remember correctly, their version of beadm would create a new entry in GRUB2 that would allow you to select your BE upon boot. This process allowed you to select and test your new BE and, in the event of a kernel panic or an /etc/fstab issue for example, you could just revert to your previous BE. This would be easier than recovering your previous BE using an installation CD/DVD as you explained above (reply #29).
That is the long-term plan for FreeBSD, but with the zfsloader: to have such a menu in 'our' bootcode and also to fail back from a non-working BE to the last working one, something like nextboot -k test to try a new kernel /boot/test/kernel, with the loader falling back to the default /boot/kernel/kernel if it fails.

Some work has been done on the zfsloader so that it will allow selecting which ZFS dataset to boot from. I haven't followed that development through, but it should not be that hard to add a failback/BE layer once it's done.
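The nextboot(8) part of that idea can already be used today for one-shot kernel tests; a sketch (the kernel directory name test is just an example):

```shell
# boot the kernel in /boot/test exactly once
nextboot -k test
shutdown -r now
# if that kernel panics, the following boot uses the default /boot/kernel again
```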
 
beadm 0.8 has just been committed to the Ports tree:

http://freshports.org/sysutils/beadm

Changelog:

Code:
-- Introduce proper space calculation for each boot environment in *beadm list*.
-- Rework the *beadm destroy* command so no orphans are left after destroying a boot environment.
-- Fix the *beadm mount* and *beadm umount* commands error handling.
-- Rework consistency of all error and informational messages.
-- Simplify and cleanup code where possible.
-- Fix *beadm destroy* for 'static' (not cloned) boot environments received by *zfs receive* command.
-- Use mktemp(1) where possible.
-- Implement *beadm list -a* option to list all datasets and snapshots of boot environments.
-- Add proper mountpoint listing to the *beadm list* command.
   % beadm list
   BE      Active Mountpoint       Space Created
   default NR     /                11.0G 2012-07-28 00:01
   test1   -      /tmp/tmp.IUQuFO  41.2M 2012-08-27 21:20
   test2   -      -                56.6M 2012-08-27 21:20

-- Change snapshot format to the one used by original *beadm* command
(%Y-%m-%d-%H:%M:%S).
   % zfs list -t snapshot -o name -r sys/ROOT/default
   NAME
   sys/ROOT/default@2012-08-27-21:20:00
   sys/ROOT/default@2012-08-27-21:20:18

-- Implement *beadm list -D* option to display the space that would be consumed by a single boot environment if all other boot environments were destroyed.
   % beadm list -D
   BE      Active Mountpoint       Space Created
   default NR     /                 9.4G 2012-07-28 00:01
   test1   -      /tmp/tmp.IUQuFO   8.7G 2012-08-27 21:20
   test2   -                        8.7G 2012-08-27 21:20

-- Add an option to the *beadm destroy* command to not destroy manually created snapshots used for a boot environment.

   # beadm destroy test1
   Are you sure you want to destroy 'test1'?
   This action cannot be undone (y/[n]): y
   Boot environment 'test1' was created from existing snapshot
   Destroy 'default@test1' snapshot? (y/[n]): y
   Destroyed successfully

   # beadm destroy test1
   Are you sure you want to destroy 'test1'?
   This action cannot be undone (y/[n]): y
   Boot environment 'test1' was created from existing snapshot
   Destroy 'default@test1' snapshot? (y/[n]): n
   Origin snapshot 'default@test1' will be preserved
   Destroyed successfully
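The snapshot name format from the changelog above can be reproduced with date(1); a small sketch (sys/ROOT/default is an example dataset name, shown only in the comment):

```shell
# beadm's %Y-%m-%d-%H:%M:%S snapshot name format, produced with date(1)
TIMESTAMP=$(date "+%Y-%m-%d-%H:%M:%S")
# on a live system one would then run:
#   zfs snapshot sys/ROOT/default@${TIMESTAMP}
echo "${TIMESTAMP}"
```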
 
I think I'll go mad. I'm trying to get this to work but I can't find where I'm going wrong.

I've created test1 and test2 BEs and then tried to activate test2 with beadm activate test2.

Then my beadm list looks like

Code:
BE      Active Mountpoint   Space Created
default N      /           959.0K 2012-10-01 13:47
test1   -      -             1.0M 2012-10-08 21:17
test2   R      -             3.4G 2012-10-08 21:17

When I reboot, the output is the same and default is still mounted on /

Code:
tank/ROOT/default on / (zfs, local, nfsv4acls)
tank/ROOT/test2/usr on /usr (zfs, local, nfsv4acls)
tank/ROOT/test2/usr/ports on /usr/ports (zfs, local, nosuid, nfsv4acls)
tank/ROOT/test2/usr/ports/distfiles on /usr/ports/distfiles (zfs, local, noexec, nosuid, nfsv4acls)
tank/ROOT/test2/usr/ports/packages on /usr/ports/packages (zfs, local, noexec, nosuid, nfsv4acls)
tank/ROOT/test2/usr/src on /usr/src (zfs, local, noexec, nosuid, nfsv4acls)
tank/ROOT/test2/var on /var (zfs, local, nfsv4acls)
tank/ROOT/test2/var/crash on /var/crash (zfs, local, noexec, nosuid, nfsv4acls)
tank/ROOT/test2/var/db on /var/db (zfs, local, noexec, nosuid, nfsv4acls)
tank/ROOT/test2/var/db/pkg on /var/db/pkg (zfs, local, nosuid, nfsv4acls)
tank/ROOT/test2/var/empty on /var/empty (zfs, local, noexec, nosuid, nfsv4acls)
tank/ROOT/test2/var/log on /var/log (zfs, local, noexec, nosuid, nfsv4acls)
tank/ROOT/test2/var/mail on /var/mail (zfs, local, noexec, nosuid, nfsv4acls)
tank/ROOT/test2/var/run on /var/run (zfs, local, noexec, nosuid, nfsv4acls)
tank/ROOT/test2/var/tmp on /var/tmp (zfs, local, nosuid, nfsv4acls)

I also notice that the vfs.root.mountfrom entry inside /boot/loader.conf stays the same, but the zpool bootfs property is correct, in this case tank/ROOT/test2.

Is there something I missed or misread?

For more detailed info, here is the output of zfs list -o name,canmount,mountpoint

Code:
tank/ROOT                                noauto  none
tank/ROOT/default                        noauto  legacy
tank/ROOT/default/usr                    noauto  /usr
tank/ROOT/default/usr/ports              noauto  /usr/ports
tank/ROOT/default/usr/ports/distfiles    noauto  /usr/ports/distfiles
tank/ROOT/default/usr/ports/packages     noauto  /usr/ports/packages
tank/ROOT/default/usr/src                noauto  /usr/src
tank/ROOT/default/var                    noauto  /var
tank/ROOT/default/var/crash              noauto  /var/crash
tank/ROOT/default/var/db                 noauto  /var/db
tank/ROOT/default/var/db/pkg             noauto  /var/db/pkg
tank/ROOT/default/var/empty              noauto  /var/empty
tank/ROOT/default/var/log                noauto  /var/log
tank/ROOT/default/var/mail               noauto  /var/mail
tank/ROOT/default/var/run                noauto  /var/run
tank/ROOT/default/var/tmp                noauto  /var/tmp
tank/ROOT/test1                          noauto  legacy
tank/ROOT/test1/usr                      noauto  /usr
tank/ROOT/test1/usr/ports                noauto  /usr/ports
tank/ROOT/test1/usr/ports/distfiles      noauto  /usr/ports/distfiles
tank/ROOT/test1/usr/ports/packages       noauto  /usr/ports/packages
tank/ROOT/test1/usr/src                  noauto  /usr/src
tank/ROOT/test1/var                      noauto  /var
tank/ROOT/test1/var/crash                noauto  /var/crash
tank/ROOT/test1/var/db                   noauto  /var/db
tank/ROOT/test1/var/db/pkg               noauto  /var/db/pkg
tank/ROOT/test1/var/empty                noauto  /var/empty
tank/ROOT/test1/var/log                  noauto  /var/log
tank/ROOT/test1/var/mail                 noauto  /var/mail
tank/ROOT/test1/var/run                  noauto  /var/run
tank/ROOT/test1/var/tmp                  noauto  /var/tmp
tank/ROOT/test2                              on  legacy
tank/ROOT/test2/usr                          on  /usr
tank/ROOT/test2/usr/ports                    on  /usr/ports
tank/ROOT/test2/usr/ports/distfiles          on  /usr/ports/distfiles
tank/ROOT/test2/usr/ports/packages           on  /usr/ports/packages
tank/ROOT/test2/usr/src                      on  /usr/src
tank/ROOT/test2/var                          on  /var
tank/ROOT/test2/var/crash                    on  /var/crash
tank/ROOT/test2/var/db                       on  /var/db
tank/ROOT/test2/var/db/pkg                   on  /var/db/pkg
tank/ROOT/test2/var/empty                    on  /var/empty
tank/ROOT/test2/var/log                      on  /var/log
tank/ROOT/test2/var/mail                     on  /var/mail
tank/ROOT/test2/var/run                      on  /var/run
tank/ROOT/test2/var/tmp                      on  /var/tmp
 
urosgruber said:
I also notice that the vfs.root.mountfrom entry inside /boot/loader.conf stays the same, but the zpool bootfs property is correct, in this case tank/ROOT/test2.

Is there something I missed or misread?
Could it be that /boot/loader.conf is read-only by any chance?
 
rawthey said:
Could it be that /boot/loader.conf is read-only by any chance?

No, I've checked that already.
 
@urosgruber
You can run the beadm command in debug mode like this: sh -x $( which beadm ) list instead of plain beadm list.

Post the results of the 'non-working' beadm activate command here: sh -x $( which beadm ) activate test2
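When activation appears to have no effect, it also helps to compare what the pool and the loader each think should be booted; a quick check (the pool name tank is taken from the listing above):

```shell
# dataset the bootcode will boot from
zpool get bootfs tank
# dataset the loader will mount as /
grep vfs.root.mountfrom /boot/loader.conf
```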
 
I finally managed to resolve my issue. It looks like something was left behind when moving data from the older ZFS pool to the new one. The boot process actually started from that older pool, so bootfs on the new pool was never actually used. I removed the old pool, corrected some settings, and now it looks like it works OK. I guess I needed some sleep ;)
 
@urosgruber

Do You think that beadm can be improved to cope with such things? What could we implement in beadm so that this will not happen again?
 
@vermaden

I really don't know a good answer to that. I can give a list of what was wrong in my case.

  • the disks of the new pool didn't have any boot partition
  • because of that, there was no bootcode on them
  • the disks of the old pool had correct partitioning and also bootcode, but because the content was almost the same as on the new pool, I didn't notice that everything was booting from the old pool, or that the bootfs setting was read from the old pool while pointing to the new pool

If you install the whole system by the book there is no problem, but in my case I was doing a conversion from a plain ZFS structure to the beadm structure and also moving data from one ZFS pool to another with totally different storage devices. It's really hard to spot this kind of problem.
 
@urosgruber

OK, thanks for the suggestions, maybe I will at least be able to add some more or less useful warning ;)
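A sketch of what such a warning could look like (hypothetical, not part of beadm; it flags the situation above, where more than one imported pool carries a bootfs setting):

```shell
# warn when more than one imported pool has the 'bootfs' property set,
# as booting may then happen from a different pool than expected
COUNT=0
for POOL in $( zpool list -H -o name ); do
  if [ "$( zpool get -H -o value bootfs ${POOL} )" != "-" ]; then
    COUNT=$(( COUNT + 1 ))
  fi
done
if [ ${COUNT} -gt 1 ]; then
  echo "WARN: more than one imported pool has 'bootfs' set" 1>&2
fi
```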
 
Hi,

@Vermaden: is there a way to have a "more encrypted" setup than the one you described in the "Road Warrior Laptop" chapter?

My problem in particular is that with your setup /etc is not encrypted.

I wanted to do something like :

Code:
kldload zfs aesni geom_eli
gpart destroy -F $PRIMARY_DISK
gpart create -s gpt $PRIMARY_DISK
gpart add -b 40 -s 256 -t freebsd-boot $PRIMARY_DISK
gpart add -b 2048 -s $SWAP_SIZE -t freebsd-swap -l swap0 $PRIMARY_DISK
gpart add -s $BOOT_SIZE -t freebsd-zfs -l boot0 $PRIMARY_DISK
gpart add -t freebsd-zfs -l root0 $PRIMARY_DISK
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 $PRIMARY_DISK

# /
echo $PASSPHRASE | geli init -b -a HMAC/SHA256 -e AES-XTS -l 256 -s 4096 -B none -J - /dev/gpt/root0
echo $PASSPHRASE | geli attach -j - /dev/gpt/root0
dd if=/dev/zero of=/dev/gpt/root0.eli bs=1M
zpool create -o altroot=$DESTDIR -o cachefile=/tmp/zpool.cache -O utf8only=on zroot /dev/gpt/root0.eli
zfs set mountpoint=none zroot
zfs set checksum=fletcher4 zroot
zfs set atime=off zroot
zfs create zroot/ROOT
zfs create -o mountpoint=/ zroot/ROOT/default

# /boot
gnop create -S 4096 /dev/gpt/boot0
dd if=/dev/zero of=/dev/gpt/boot0.nop bs=1M
zpool create -o altroot=$DESTDIR -o cachefile=/tmp/zpool.cache -O utf8only=on zboot /dev/gpt/boot0.nop
cp /tmp/zpool.cache /tmp/zpool.cache.bak
zpool export zboot
gnop destroy /dev/gpt/boot0.nop
mv /tmp/zpool.cache.bak /tmp/zpool.cache
zpool import -o altroot=$DESTDIR -o cachefile=/tmp/zpool.cache zboot
zfs set mountpoint=none zboot
zfs set checksum=fletcher4 zboot
zfs set atime=off zboot
zfs create -o mountpoint=/bootfs zboot/default
zfs set freebsd:boot-environment=1 zboot/default
zfs set bootfs=zboot/default zboot

# /usr/local
zfs create -o mountpoint=/usr/local zroot/local

# /var
zfs create -o mountpoint=/var zroot/var
zfs create -o compression=lzjb -o exec=off -o setuid=off zroot/var/crash
zfs create -o exec=off -o setuid=off zroot/var/db
zfs create -o compression=lzjb -o exec=on -o setuid=off zroot/var/db/pkg
zfs create -o exec=off -o setuid=off zroot/var/empty
zfs create -o compression=lzjb -o exec=off -o setuid=off zroot/var/log
zfs create -o compression=gzip -o exec=off -o setuid=off zroot/var/mail
zfs create -o exec=off -o setuid=off zroot/var/run

# /var/tmp, /tmp
zfs create -o compression=lzjb -o exec=on -o setuid=off zroot/var/tmp
chmod 1777 $DESTDIR/var/tmp
zfs create -o mountpoint=/tmp -o compression=on -o exec=on -o setuid=off zroot/tmp
chmod 1777 $DESTDIR/tmp

# /home
zfs create -o mountpoint=/home zroot/home

# Install OS
for FILE in /usr/freebsd-dist/*.txz; do
  tar --unlink -xpJf "${FILE}" -C $DESTDIR
done
zfs set readonly=on zroot/var/empty

# /boot on zboot
mv $DESTDIR/boot $DESTDIR/bootfs/boot
ln -shf bootfs/boot $DESTDIR/boot
chflags -h schg $DESTDIR/boot
cp /tmp/zpool.cache $DESTDIR/boot/zfs/zpool.cache

# FreeBSD Loader
cat >> $DESTDIR/boot/loader.conf <<EOF
ahci_load="YES"
aesni_load="YES"
geom_eli_load="YES"
kern.geom.eli.visible_passphrase="2"
zfs_load="YES"
vfs.root.mountfrom="zfs:zroot/ROOT/default"
linux_load="YES"
linprocfs_load="YES"
atapicam_load="YES"
snd_hda_load="YES"
kern.maxfiles="25000"
sem_load="YES"
autoboot_delay="2"
vesa_load="YES"
splash_bmp_load="YES"
bitmap_load="YES"
bitmap_name="/boot/splash.bmp"
if_iwn_load="YES"
mmc_load="YES"
mmcsd_load="YES"
sdhci_load="YES"
EOF

# Settings
echo "hostname=\"$HOSTNAME\"" >> $DESTDIR/etc/rc.conf
echo "ifconfig_$NETIF=\"DHCP\"" >> $DESTDIR/etc/rc.conf
cat >> $DESTDIR/etc/rc.conf <<EOF
zfs_enable="YES"
geli_swap_flags="-e AES-XTS -l 256 -s 4096 -d"
#wlans_iwn0="wlan0"
#ifconfig_wlan0="country FR WPA DHCP"
background_dhclient="YES"
background_fsck="YES"
fsck_y_enable="YES"
keymap="fr.iso.acc"
font8x8="iso15-8x8"
font8x14="iso15-8x14"
font8x16="iso15-8x16"
scrnmap="NO"
moused_enable="YES"
sshd_enable="YES"
postfix_enable="YES"
sendmail_enable="NO"
sendmail_submit_enable="NO"
sendmail_outbound_enable="NO"
sendmail_msp_queue_enable="NO"
ntpdate_enable="YES"
#hald_enable="YES"
#dbus_enable="YES"
#gdm_enable="YES"
#gdm_lang="fr_FR.UTF-8"
#gnome_enable="YES"
#linux_enable="YES"
clear_tmp_enable="YES"
EOF
printf 'network={\n   ssid="MYSSID"\n   psk="MYKEY"\n}\n' >> $DESTDIR/etc/wpa_supplicant.conf
echo '/dev/gpt/swap0.eli none swap sw 0 0' >> $DESTDIR/etc/fstab
echo 'WRKDIRPREFIX=/usr/obj' >> $DESTDIR/etc/make.conf
cp $DESTDIR/usr/share/zoneinfo/Europe/Paris $DESTDIR/etc/localtime
cd $DESTDIR/etc/mail
make aliases
freebsd-update -b $DESTDIR fetch
freebsd-update -b $DESTDIR install

zfs umount -a
zfs set mountpoint=/zroot zroot
zfs set mountpoint=/zboot zboot
zfs set mountpoint=/zroot/ROOT zroot/ROOT
zfs set mountpoint=legacy zroot/ROOT/default

beadm would have to be able to snapshot two pools instead of one and manage these pools.

Other question: I don't know which is better, swap inside ZFS or outside?

Thank you !
 
Trois-Six said:
Hi,

@Vermaden: is there a way to have a "more encrypted" setup than the one you described in the "Road Warrior Laptop" chapter?

My problem in particular is that with your setup /etc is not encrypted.

Well, my 'way' for the Road Warrior is a 'hack' already (not having the WHOLE system encrypted, as You noticed).

IMHO the FreeBSD developers should implement/allow booting from ZFS on GELI, which would solve the problem instead of dirty hacks like mine or Yours.




I wanted to do something like :

(...)

beadm would have to be able to snapshot two pools instead of one and manage these pools.

I also experimented with that setup; I even had a beadm version to cope with it. Here it is, maybe You will find it helpful: http://paste2.org/p/2396219 (but it's very old, from the beginning when I started to work on beadm)

beadm is BSD licensed and open source. You can create Your branch from http://github.com/vermaden/beadm and add several quirks to make this possible; in the end it's just a shell script.



Other question: I don't know which is better, swap inside ZFS or outside?
I use SWAP on ZFS because of the flexibility. I can add/remove/increase/decrease the SWAP size as needed; I do not have that flexibility with GPT partitions, so I use ZFS here.
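A sketch of swap on a ZVOL (the dataset name sys/swap and the 4G size are assumptions; org.freebsd:swap=on should let the rc.d/zvol script enable it at boot, but verify that on your release):

```shell
# create a 4 GB zvol for swap; checksums are not needed for swap contents
zfs create -V 4G -o org.freebsd:swap=on -o checksum=off sys/swap
swapon /dev/zvol/sys/swap
# growing it later is a single command:
#   zfs set volsize=8G sys/swap
```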
 
@Trois-Six

Here is the latest version of beadm (0.8.4) with the option to use a separate /boot from a separate pool. I haven't tested this as I no longer have that setup and haven't checked it in VirtualBox, so beware ;)

http://paste2.org/p/2396248
 
I was already using some custom-ish boot environment-like (actually, zfs namespace-like) system, but beadm really makes things a lot easier. Thanks!

Now, one bug and one suggestion:
  • intermediate datasets with mountpoint=none (like usr/) are mounted by beadm mount;
  • why not use mktemp -dt be (${2}, the name of the boot environment, would be more meaningful but could lead to overly long names) instead of the meaningless /tmp/tmp.XXXXXX?
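For reference, a portable sketch of the suggested naming scheme (on FreeBSD, mktemp -dt be gives the same result; the be prefix is just the suggestion above):

```shell
# create a mount point directory with a recognizable 'be' prefix
MNT=$( mktemp -d "${TMPDIR:-/tmp}/be.XXXXXX" ) || exit 1
echo "${MNT}"
```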
 