0. This is SPARTA!
Some time ago I found a good, reliable way of using and installing FreeBSD and described it in my Modern FreeBSD Install [1] [2] HOWTO. Now, more than a year later, I come back with my experiences with that setup and a proposal for a newer and probably better way of doing it.
1. Introduction
Same as a year ago, I assume that You want to create a fresh installation of FreeBSD using one or more hard disks, either with (on laptops) or without GELI-based full disk encryption.
This guide was written when FreeBSD 9.0 and 8.3 were available. It definitely works on 9.0, but I did not try all of this on the older 8.3; if You find some issues on 8.3, let me know and I will try to address them in this guide.
Earlier I was not that confident about booting from a ZFS pool, but there is one very neat feature that made me think ZFS boot is now mandatory. If You just smiled, You know that I am thinking about the Boot Environments feature from Illumos/Solaris systems.
In case You are not familiar with the Boot Environments feature, check the Managing Boot Environments with Solaris 11 Express white paper [3]. Illumos/Solaris has the beadm(1M) [4] utility, and while Philipp Wuensche wrote the manageBE script as a replacement [5], it uses the older style from the times when OpenSolaris (and Sun) were still having a great time.
I spent the last couple of days writing an up-to-date, FreeBSD-compatible replacement beadm utility, and with some tweaks from today I have just made it available at SourceForge [6] if You wish to test it. Currently it is about 200 lines long, so it should be pretty simple to take a look at. I tried to make it as compatible as possible with the 'upstream' version, and along with some small improvements it currently supports the basic functions: list, create, destroy and activate.
Code:
# beadm
usage:
beadm activate <beName>
beadm create [-e nonActiveBe | -e beName@snapshot] <beName>
beadm create <beName@snapshot>
beadm destroy [-F] <beName | beName@snapshot>
beadm list [-a] [-s] [-D] [-H]
beadm rename <origBeName> <newBeName>
beadm mount <beName> [mountpoint]
beadm { umount | unmount } [-f] <beName>
There are several subtle differences between my implementation and Philipp's: he defines and then relies upon a ZFS property called freebsd:boot-environment=1 for each boot environment, while I do not set any additional ZFS properties. There is already an org.freebsd:swap property used for swap on FreeBSD, so we may use org.freebsd:be in the future, but that is just a thought; right now it is not used. My version also supports activating boot environments received with the zfs recv command from other systems (it just updates the appropriate /boot/zfs/zpool.cache file).
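For example, the zfs send/recv workflow mentioned above could look like the sketch below; the 10.0.0.2 address and the migrated environment name are assumptions for illustration, not anything required by the tool.
Code:
# on the source machine: snapshot and send the boot environment
zfs snapshot sys/ROOT/default@migrate
zfs send sys/ROOT/default@migrate | ssh 10.0.0.2 zfs recv sys/ROOT/migrated

# on the target machine: activate the received environment
# (this is where the /boot/zfs/zpool.cache file gets updated)
beadm activate migrated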
My implementation is also style-compatible with the current Illumos/Solaris beadm(1M), as the example below shows. The boot environments also live in the same place as on Illumos/Solaris, under pool/ROOT/environment.
Code:
# beadm create -e default upgrade-test
Created successfully
# beadm list
BE           Active Mountpoint Space Policy Created
default      N      /          1.06M static 2012-02-03 15:08
upgrade-test R      -          560M  static 2012-04-24 22:22
new          -      -          8K    static 2012-04-24 23:40
# zfs list -r sys/ROOT
NAME                    USED  AVAIL  REFER  MOUNTPOINT
sys/ROOT                562M  8.15G   144K  none
sys/ROOT/default       1.48M  8.15G   558M  legacy
sys/ROOT/new              8K  8.15G   558M  none
sys/ROOT/upgrade-test   560M  8.15G   558M  none
# beadm activate default
Activated successfully
# beadm list
BE           Active Mountpoint Space Policy Created
default      NR     /          1.06M static 2012-02-03 15:08
upgrade-test -      -          560M  static 2012-04-24 22:22
new          -      -          8K    static 2012-04-24 23:40
2. Now You're Thinking with Portals
The main purpose of the Boot Environments concept is to make all risky tasks harmless, by providing an easy way back from possible trouble. Think about upgrading the system to a newer version, updating 30+ installed packages to their latest versions, or testing software and various solutions before making a final decision, and much more. All these tasks are now harmless thanks to Boot Environments, but this is just the tip of the iceberg.
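As a simple example, before a risky upgrade You may preserve the current state and fall back to it if needed; a minimal sketch using only the subcommands shown earlier (the before-upgrade name is just an example):
Code:
# preserve the current system state as a new boot environment
beadm create before-upgrade

# ... perform the upgrade or package updates as usual ...

# if something breaks, boot back into the preserved environment
beadm activate before-upgrade
reboot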
You can now move a desired boot environment to another machine, physical or virtual, and check how it behaves there, verify hardware support on that other hardware, or make a painless hardware upgrade. You may also clone Your desired boot environment and ... start it as a Jail for some more experiments, or move Your old physical server install into a FreeBSD Jail because it is not that heavily used anymore, but still has to be available.
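A minimal sketch of the Jail idea, assuming the hypothetical names jailtest and /jail, and leaving out networking and other jail parameters for brevity:
Code:
# clone the desired boot environment and mount the clone
beadm create -e default jailtest
beadm mount jailtest /jail

# start the mounted environment as a Jail with a shell inside
jail -c name=jailtest path=/jail host.hostname=jailtest command=/bin/sh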
Another good example is a freshly created server on Your laptop inside a VirtualBox virtual machine. After You finish the creation process and tests, You may move this boot environment to the real server and put it into production, or even move it into a VMware ESX/vSphere virtual machine and use it there.
As You can see, the possibilities with Boot Environments are unlimited.
3. The Install Process
I have created three possible schemes which should cover most demands; choose one and continue to the next step.
3.1. Server with Two Disks
I assume that this server has two disks and we will create a ZFS mirror across them, so if either of them dies the system will still work as usual. I also assume that these disks are ada0 and ada1. If You have SCSI/SAS drives there, they may be named da0 and da1 accordingly. The procedures below will wipe all data on these disks; You have been warned.
Code:
1. Boot from the FreeBSD USB/DVD.
2. Select the 'Live CD' option.
3. login: root
4. # sh
5. # DISKS="ada0 ada1"
6. # for I in ${DISKS}; do
> NUMBER=$( echo ${I} | tr -c -d '0-9' )
> gpart destroy -F ${I}
> gpart create -s GPT ${I}
> gpart add -t freebsd-boot -l bootcode${NUMBER} -s 128k ${I}
> gpart add -t freebsd-zfs -l sys${NUMBER} ${I}
> gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ${I}
> done
7. # zpool create -f -o cachefile=/tmp/zpool.cache sys mirror /dev/gpt/sys*
8. # zfs set mountpoint=none sys
9. # zfs set checksum=fletcher4 sys
10. # zfs set atime=off sys
11. # zfs create sys/ROOT
12. # zfs create -o mountpoint=/mnt sys/ROOT/default
13. # zpool set bootfs=sys/ROOT/default sys
14. # cd /usr/freebsd-dist/
15. # for I in base.txz kernel.txz; do
> tar --unlink -xvpJf ${I} -C /mnt
> done
16. # cp /tmp/zpool.cache /mnt/boot/zfs/
17. # cat << EOF >> /mnt/boot/loader.conf
> zfs_load=YES
> vfs.root.mountfrom="zfs:sys/ROOT/default"
> EOF
18. # cat << EOF >> /mnt/etc/rc.conf
> zfs_enable=YES
> EOF
19. # :> /mnt/etc/fstab
20. # zfs umount -a
21. # zfs set mountpoint=legacy sys/ROOT/default
22. # reboot
After these instructions and a reboot, we have the following GPT partitions available; this example is on a 512 MB disk.
Code:
# gpart show
=>     34  1048509  ada0  GPT  (512M)
       34      256     1  freebsd-boot  (128k)
      290  1048253     2  freebsd-zfs  (511M)

=>     34  1048509  ada1  GPT  (512M)
       34      256     1  freebsd-boot  (128k)
      290  1048253     2  freebsd-zfs  (511M)

# gpart list | grep label
   label: bootcode0
   label: sys0
   label: bootcode1
   label: sys1

# zpool status
  pool: sys
 state: ONLINE
  scan: none requested
config:

        NAME          STATE     READ WRITE CKSUM
        sys           ONLINE       0     0     0
          mirror-0    ONLINE       0     0     0
            gpt/sys0  ONLINE       0     0     0
            gpt/sys1  ONLINE       0     0     0

errors: No known data errors
3.2. Server with One Disk
If Your server configuration has only one disk, let's assume it is ada0, then steps 5. and 7. are different; use these instead of the ones above.
Code:
5. # DISKS="ada0"
7. # zpool create -f -o cachefile=/tmp/zpool.cache sys /dev/gpt/sys*
All other steps are the same.
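After the reboot, the pool should look more or less like this sketch for the single-disk case:
Code:
# zpool status
  pool: sys
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        sys         ONLINE       0     0     0
          gpt/sys0  ONLINE       0     0     0

errors: No known data errors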