HOWTO: FreeBSD ZFS Madness

trois-six said:
... redacted content. see link for the original patch ...

trois-six said:
... redacted content. see link for the original auto install script ...

I've taken the installation script and the patch above and made them work better together. I've also considerably simplified the patch, making the changes much clearer. In the process, I fixed "activate" (it was a typo in the original patch), along with a few other issues.

The result is a more fully fleshed-out version of the idea started by @Trois-Six. In particular, the boot pool is now also administered by beadm. It is still mounted at /bootfs (via fstab, since it is a legacy mountpoint in ZFS), and /bootfs/boot is then symlinked to /boot.
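
For reference, the glue for that layout looks roughly like this (a sketch only; the pool and BE names zboot and default come from the example layout later in this thread, not from the patch itself):

Code:
# /etc/fstab entry on the root filesystem: mount the legacy boot dataset
# Device              Mountpoint   FStype   Options   Dump   Pass#
zboot/ROOT/default    /bootfs      zfs      rw        0      0

# /boot then becomes a symlink into the mounted boot pool
# (after the original /boot contents have been moved there)
ln -s /bootfs/boot /boot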

@vermaden, could you please review my changes to beadm? I think the ability to support a separate boot pool would be a very useful feature, and the patch is now cleaner and feels less intrusive.

Finally, here's a link to my clone of @vermaden's beadm repository with @Trois-Six's changes and my modifications applied, in case anyone else is interested: https://bitbucket.org/aasoft/beadm/
 
@AASoft,

Hi, I am not sure all these patches are needed for that, as I currently use two ZFS pools with 'stock' beadm (section 3.3, Road Warrior Laptop, from the HOWTO) and it works flawlessly:

Code:
% zpool list
NAME    SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
local   133G   126G  7.04G    94%  1.00x  ONLINE  -
sys    15.9G  9.03G  6.84G    56%  1.00x  ONLINE  -

% zfs list -r sys
NAME            USED  AVAIL  REFER  MOUNTPOINT
sys            9.03G  6.60G    32K  none
sys/ROOT       9.02G  6.60G    31K  none
sys/ROOT/safe  9.02G  6.60G  9.02G  legacy

% beadm list
BE   Active Mountpoint  Space Created
safe NR     /            9.0G 2013-03-05 13:29
 
Right, I was more thinking of the following layout:

Code:
% beadm list
BE             Active Mountpoint             Space Created
default        N      /                       1.6G 2013-07-16 20:11

% zfs list -r zboot
NAME                        USED  AVAIL  REFER  MOUNTPOINT
zboot                       356M  1.61G   144K  none
zboot/ROOT                  354M  1.61G   144K  none
zboot/ROOT/default          354M  1.61G   354M  legacy

% zfs list -r zroot
NAME                                            USED  AVAIL  REFER  MOUNTPOINT
zroot                                          5.69G   109G   144K  none
zroot/ROOT                                     1.56G   109G   144K  none
zroot/ROOT/default                             1.55G   109G  24.2M  legacy
zroot/ROOT/default/usr                         1.38G   109G   341M  /usr
zroot/ROOT/default/var                          153M   109G   568K  /var
(/usr and /var structures redacted)
zroot/home                                      144K   109G   144K  /home
zroot/swap                                     4.13G   114G    72K  -
zroot/tmp                                       192K   109G   192K  /tmp
zroot/usr                                       296K   109G   144K  none
zroot/usr/jails                                 152K   109G   152K  /usr/jails

with default being the only existing BE at this point. zboot has its own 2 GB partition, and zroot is on a GELI-encrypted partition that takes up the rest of the disk. Executing beadm create testBE at this point would create both zboot/ROOT/testBE and zroot/ROOT/testBE, just as beadm currently creates the new BE on the root pool alone.

I could easily be missing something, but I don't believe stock beadm supports such a configuration.
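
Conceptually, a dual-pool beadm create testBE boils down to something like this (my simplified sketch; child datasets such as zroot/ROOT/default/usr would be snapshotted and cloned the same way, and error handling is omitted):

Code:
# create the new BE on both the boot pool and the root pool
for POOL in zboot zroot; do
  zfs snapshot -r ${POOL}/ROOT/default@testBE
  zfs clone -o canmount=off ${POOL}/ROOT/default@testBE ${POOL}/ROOT/testBE
done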
 
I think these modifications are terrific, and just what I need to be able to use beadm in production. I really hope this gets reviewed and committed!

/Sebulon
 
I need some help fixing a broken FreeBSD install that utilizes beadm.

I followed the two-disk server guide featured in the very first post. Since then, I've been making snapshots of the default BE periodically. After installing some drivers and modifying my install's configuration, it seems that a driver I installed (unrelated to beadm) is causing a kernel panic at boot.

Given a system using the stock beadm (as of July 23, 2013) that is unbootable, what can a newbie do to revert the BE to an older snapshot that was bootable? I imagine that firing up a live CD of FreeBSD and running some beadm commands would be part of the solution.

Please, if you care to respond, speak in newbie/layman's terms.
 
Help with errors

First, thank you, @vermaden. This all looks really cool and I think it will save me many headaches.

I'm new to both ZFS and beadm. I mostly followed your instructions, but since I have only one SSD, I made some adjustments from another script.

I believe I messed something up.

Create
Code:
# beadm create -e default jailed
cannot open 'sys/ROOT/default@install@2013-07-31-18:43:25': invalid dataset name

Start jail
Code:
# jls
   JID  IP Address      Hostname                      Path
# beadm create -e default jailed
ERROR: Boot environment 'jailed' already exists

Activate
Code:
# beadm create upgrade
cannot open 'sys/ROOT/default@install@2013-07-31-18:50:22': invalid dataset name
# beadm activate upgrade
cannot set property for 'sys/ROOT/default@install': this property can not be modified for snapshots

Setup
Code:
root@freebsd:/root # zpool list
NAME   SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
sys   28.8G  1.53G  27.2G     5%  1.00x  ONLINE  -

root@freebsd:/root # zfs mount
sys/ROOT/default                /
sys/ROOT/default/usr            /usr
sys/ROOT/default/usr/home       /usr/home
sys/ROOT/default/usr/ports      /usr/ports
sys/ROOT/default/usr/src        /usr/src
sys/ROOT/default/var            /var
sys/ROOT/default/var/log        /var/log

root@freebsd:/root # beadm list
BE      Active Mountpoint  Space Created
default N      /            1.5G 2013-07-31 16:49
jailed  -      -           79.5K 2013-07-31 18:43
upgrade R      -           69.0K 2013-07-31 23:58

root@freebsd:/root # zfs list -r
NAME                                               USED  AVAIL  REFER  MOUNTPOINT
sys                                               1.52G  26.8G    31K  /sys
sys/ROOT                                          1.52G  26.8G  46.5K  /sys/ROOT
sys/ROOT/default                                  1.52G  26.8G   349M  /
sys/ROOT/default@install                           109K      -   349M  -
sys/ROOT/default@configured                           0      -   349M  -
sys/ROOT/default@configured_with_beadm                0      -   349M  -
sys/ROOT/default@2013-07-31-18:43:25                  0      -   349M  -
sys/ROOT/default@2013-07-31-23:58:12                  0      -   349M  -
sys/ROOT/default@2013-07-31-23:58:32                  0      -   349M  -
sys/ROOT/default/usr                              1.02G  26.8G   579M  /usr
sys/ROOT/default/usr@install                      52.5K      -   290M  -
sys/ROOT/default/usr@configured                     72K      -   579M  -
sys/ROOT/default/usr@configured_with_beadm            0      -   579M  -
sys/ROOT/default/usr@2013-07-31-18:43:25              0      -   579M  -
sys/ROOT/default/usr@2013-07-31-23:58:12              0      -   579M  -
sys/ROOT/default/usr@2013-07-31-23:58:32              0      -   579M  -
sys/ROOT/default/usr/home                         90.5K  26.8G    62K  /usr/home
sys/ROOT/default/usr/home@install                 28.5K      -  46.5K  -
sys/ROOT/default/usr/home@configured                  0      -    62K  -
sys/ROOT/default/usr/home@configured_with_beadm       0      -    62K  -
sys/ROOT/default/usr/home@2013-07-31-18:43:25         0      -    62K  -
sys/ROOT/default/usr/home@2013-07-31-23:58:12         0      -    62K  -
sys/ROOT/default/usr/home@2013-07-31-23:58:32         0      -    62K  -
sys/ROOT/default/usr/ports                         462M  26.8G   462M  /usr/ports
sys/ROOT/default/usr/ports@install                40.5K      -  46.5K  -
sys/ROOT/default/usr/ports@configured                 0      -   462M  -
sys/ROOT/default/usr/ports@configured_with_beadm      0      -   462M  -
sys/ROOT/default/usr/ports@2013-07-31-18:43:25        0      -   462M  -
sys/ROOT/default/usr/ports@2013-07-31-23:58:12        0      -   462M  -
sys/ROOT/default/usr/ports@2013-07-31-23:58:32        0      -   462M  -
sys/ROOT/default/usr/src                          37.5K  26.8G  37.5K  /usr/src
sys/ROOT/default/usr/src@install                      0      -  37.5K  -
sys/ROOT/default/usr/src@configured                   0      -  37.5K  -
sys/ROOT/default/usr/src@configured_with_beadm        0      -  37.5K  -
sys/ROOT/default/usr/src@2013-07-31-18:43:25          0      -  37.5K  -
sys/ROOT/default/usr/src@2013-07-31-23:58:12          0      -  37.5K  -
sys/ROOT/default/usr/src@2013-07-31-23:58:32          0      -  37.5K  -
sys/ROOT/default/var                               169M  26.8G   168M  /var
sys/ROOT/default/var@install                        67K      -   252K  -
sys/ROOT/default/var@configured                       0      -   168M  -
sys/ROOT/default/var@configured_with_beadm            0      -   168M  -
sys/ROOT/default/var@2013-07-31-18:43:25              0      -   168M  -
sys/ROOT/default/var@2013-07-31-23:58:12              0      -   168M  -
sys/ROOT/default/var@2013-07-31-23:58:32              0      -   168M  -
sys/ROOT/default/var/log                           230K  26.8G    94K  /var/log
sys/ROOT/default/var/log@install                  46.5K      -  73.5K  -
sys/ROOT/default/var/log@configured                   0      -  75.5K  -
sys/ROOT/default/var/log@configured_with_beadm        0      -  75.5K  -
sys/ROOT/default/var/log@2013-07-31-18:43:25          0      -  75.5K  -
sys/ROOT/default/var/log@2013-07-31-23:58:12          0      -    93K  -
sys/ROOT/default/var/log@2013-07-31-23:58:32          0      -    93K  -
sys/ROOT/jailed                                   79.5K  26.8G   349M  /usr/jails/jailed
sys/ROOT/upgrade                                    69K  26.8G   349M  legacy

Any help given will be greatly appreciated. Thank you.
 
ejr2122 said:
Given a system using the stock beadm (as of July 23, 2013) that is unbootable, what can a newbie do to revert the BE to an older snapshot that was bootable? I imagine that firing up a live CD of FreeBSD and running some beadm commands would be part of the solution.

Hi, sorry for the late response. You can use the live CD from here: http://mfsbsd.vx.sk/. Then you will have to do something like this: # zpool set bootfs=sys/ROOT/safe sys and set
Code:
vfs.root.mountfrom="zfs:sys/ROOT/safe"
in the /boot/loader.conf file.
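
For reference, the whole recovery from the live CD might look roughly like this (a sketch only; the snapshot name @known-good is made up, so pick a real one from zfs list -t snapshot, and adjust the pool/BE names to your layout):

Code:
# boot the mfsBSD live CD, then import the pool without mounting anything
zpool import -f -N sys

# roll the BE back to a known-good snapshot (this destroys newer snapshots!)
zfs rollback -r sys/ROOT/safe@known-good

# make that BE the one the loader boots
zpool set bootfs=sys/ROOT/safe sys

# the BE dataset is a legacy mountpoint, so mount it by hand to edit loader.conf
mount -t zfs sys/ROOT/safe /mnt
vi /mnt/boot/loader.conf     # set vfs.root.mountfrom="zfs:sys/ROOT/safe"
umount /mnt
zpool export sys
reboot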

Let me know how that works.
 
I'm not sure of the version. I installed using your tutorial (thank you) on the 29th or so, just a few days ago, and I used fetch (as your tutorial instructs). The file is dated Nov 18 2012. Is there a way to check the version?
 
I have a short question:

Is there anything wrong with the following?
  1. Create a BE.
  2. Mess everything up.
  3. Return to the BE and rename it to "default", for example.
This would save one reboot per change.

Thanks and regards.
Markus
 
After I discussed the topic with @vermaden, he confirmed the workflow I asked about:
  1. Create a new BE.
  2. Mess everything up.
  3. Reboot into the new BE.
  4. (Optional) You can rename the BE to "default" again.
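In commands, that might look roughly like this (my sketch; the BE names are made up, and note that the broken default has to be renamed or destroyed before the new BE can take over its name):

Code:
beadm create good        # clone the current, still-working state
# ... mess everything up in the running BE ...
beadm activate good
shutdown -r now
# after rebooting into 'good', optionally reclaim the name 'default'
beadm rename default broken
beadm rename good default
beadm destroy broken     # once you are sure nothing in it is needed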
Regards,

Markus
 
I encountered the same error as @doc1623 (invalid dataset name). I'm using beadm v0.8.5.

First scenario:

Code:
root@testbsd:/root # zfs list
NAME                 USED  AVAIL  REFER  MOUNTPOINT
rpool                631M  18.9G    31K  none
rpool/ROOT           631M  18.9G    31K  none
rpool/ROOT/default   631M  18.9G   631M  legacy
rpool/home          38.5K  18.9G  38.5K  /home
rpool/tmp             31K  18.9G    31K  /tmp
root@testbsd:/root #

root@testbsd:/root # beadm create 9.2
Created successfully
root@testbsd:/root #

root@testbsd:/root # beadm list
BE      Active Mountpoint  Space Created
default NR     /          631.1M 2013-10-04 22:17
9.2     -      -            1.0K 2013-10-04 23:12
root@testbsd:/root #

root@testbsd:/root # beadm destroy 9.2
Are you sure you want to destroy '9.2'?
This action cannot be undone (y/[n]): y
Destroyed successfully
root@testbsd:/root #

Now I like to see the snapshots when I list ZFS datasets, so I did the following:

Code:
root@testbsd:/root # zpool set listsnapshots=on rpool
root@testbsd:/root #

root@testbsd:/root # zfs list
NAME                              USED  AVAIL  REFER  MOUNTPOINT
rpool                             631M  18.9G    31K  none
rpool/ROOT                        631M  18.9G    31K  none
rpool/ROOT/default                631M  18.9G   631M  legacy
rpool/ROOT/default@freshinstall  63.5K      -   631M  -
rpool/home                       38.5K  18.9G  38.5K  /home
rpool/tmp                          31K  18.9G    31K  /tmp
root@testbsd:/root #

root@testbsd:/root # beadm create 9.2
cannot open 'rpool/ROOT/default@freshinstall@2013-10-04-23:15:59': invalid dataset name
root@testbsd:/root #

root@testbsd:/root # zfs list
NAME                                     USED  AVAIL  REFER  MOUNTPOINT
rpool                                    631M  18.9G    31K  none
rpool/ROOT                               631M  18.9G    31K  none
rpool/ROOT/9.2                             1K  18.9G   631M  legacy
rpool/ROOT/default                       631M  18.9G   631M  legacy
rpool/ROOT/default@freshinstall         63.5K      -   631M  -
rpool/ROOT/default@2013-10-04-23:15:59  61.5K      -   631M  -
rpool/home                              38.5K  18.9G  38.5K  /home
rpool/tmp                                 31K  18.9G    31K  /tmp
root@testbsd:/root #
root@testbsd:/root # beadm list
BE      Active Mountpoint  Space Created
default NR     /          631.1M 2013-10-04 22:17
9.2     -      -           62.5K 2013-10-04 23:15
root@testbsd:/root #

I think the problem is here:

Code:
   119    # clone properties of source boot environment
   120    zfs list -H -o name -r ${SOURCE} \
   121      | while read FS
   122        do
Line 120 expects not to find any snapshots in the listing. But since snapshots are now listed, the following line (139) produces the error:

Code:
   139            zfs clone -o canmount=off ${OPTS} ${FS}@${FMT} ${DATASET}
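
A possible fix (my untested sketch, not necessarily the committed change) is to restrict the listing to filesystems, so snapshots never reach the loop regardless of the pool's listsnapshots setting:

Code:
# as line 120, but with '-t filesystem' so that listsnapshots=on
# cannot leak snapshot names into ${FS}
zfs list -H -t filesystem -o name -r ${SOURCE} \
  | while read FS
    do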
 
@matoatlantis,

Thank you for finding that out. I did not know that ZFS allows one to enable 'always display snapshots'; I will fix that and commit ASAP.
 
I am having an issue using beadm on my FreeBSD 9.2 full-ZFS machine. Here is my partition layout:

Code:
# gpart show
=>        34  3907029101  ada0  GPT  (1.8T)
          34           6        - free -  (3.0k)
          40         128     1  freebsd-boot  (64k)
         168    16777216     2  freebsd-swap  (8.0G)
    16777384  3890251744     3  freebsd-zfs  (1.8T)
  3907029128           7        - free -  (3.5k)

=>        34  3907029101  ada1  GPT  (1.8T)
          34           6        - free -  (3.0k)
          40         128     1  freebsd-boot  (64k)
         168    16777216     2  freebsd-swap  (8.0G)
    16777384  3890251744     3  freebsd-zfs  (1.8T)
  3907029128           7        - free -  (3.5k)

=>        34  3907029101  ada2  GPT  (1.8T)
          34           6        - free -  (3.0k)
          40         128     1  freebsd-boot  (64k)
         168    16777216     2  freebsd-swap  (8.0G)
    16777384  3890251744     3  freebsd-zfs  (1.8T)
  3907029128           7        - free -  (3.5k)

Code:
# zfs list
NAME                USED  AVAIL  REFER  MOUNTPOINT
boss-zfs           16.1G  3.53T  40.0K  none
boss-zfs/root      10.9G  3.53T  10.6G  /
boss-zfs/tmp       16.5M  3.53T  16.5M  /tmp
boss-zfs/usr       3.55G  3.53T  2.36G  /usr
boss-zfs/usr/home   151K  3.53T   115K  /usr/home
boss-zfs/var       1.63G  3.53T  1.53G  /var

I noticed that this same error occurs when trying to install to a machine that is not full ZFS.

Any help will be appreciated.
 
Can beadm be used with a system that is set up with encrypted ZFS root and a separate /boot? My server is set up as follows:

https://www.dan.me.uk/blog/2012/05/06/full-disk-encryption-with-zfs-root-for-freebsd-9-x/

But to date I have not had any luck using beadm. I tried again today, but when I ran beadm activate upgrade it said that it couldn't find my zpool.cache file in the /tmp directory.

Can beadm work with encrypted ZFS root and having /boot on a separate USB key?
 
xy16644 said:
Can beadm work with encrypted ZFS root and having /boot on a separate USB key?
Nope.

beadm works if you boot from the root ZFS pool.

The UFS filesystem does not have 'bootable snapshots'.
 
Thanks @vermaden!

I'm not using UFS. Only ZFS is used, even on the USB stick. The USB stick has a ZFS pool with /boot on it.

I assume I still can't use beadm?
 
Do I need to have a separate SLOG device for my ZIL when using a pure SSD pool? I've read contradictory information on that. Some say yes, some say no. :OOO
 
volatilevoid said:
Do I need to have a separate SLOG device for my ZIL when using a pure SSD pool? I've read contradictory information on that. Some say yes, some say no. :OOO

You may and you may not. Earlier I didn't use a separate ZIL with a single SSD; now I am using one, and the stats seem similar to the stats I had when NOT using separate ZIL partitions:

Code:
There are 909324 files.
There are 1003593 blocks and 153621 fragment blocks.
There are 110297 fragmented blocks (71.80%).
There are 43324 contiguous blocks (28.20%).
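
For what it's worth, adding and removing a dedicated log vdev are one-liners, so it is easy to benchmark both ways yourself (the pool and partition names here are made up):

Code:
zpool add local log gpt/zil0      # attach a separate log device
zpool remove local gpt/zil0       # detach it again if it makes no difference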
 