HOWTO: FreeBSD ZFS Madness

@avilla@

Thanks ;)

Consider the suggestion as approved, good idea BTW.

I will look into that beadm mount problem and let You know in this thread.
 
I don't understand why you're doing this, by the way:
Code:
MOUNTPOINT="/$( echo "${FS}" | sed s/"${PREFIX}"//g )"
Shouldn't it use the MOUNTPOINT of the dataset instead of its name?
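Something along these lines, just a rough sketch (assuming ${FS} holds the dataset name, as in the script):
Code:
# read the dataset's own mountpoint property instead of deriving it from its name
MOUNTPOINT=$( zfs get -H -o value mountpoint "${FS}" )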
 
vermaden said:
Merged to HEAD; about temporary mount point names:
Code:
[color="Red"]- mktemp -d /tmp/tmp.XXXXXX[/color]
[color="Lime"]+ mktemp -d /tmp/beadm.${BE}.XXXXXX[/color]

Thanks! I think, though, that this will result in too long directory names, which will spoil beadm list output; beadm alone is probably a better template. Also, you shouldn't hardcode /tmp, but let the user set his TMPDIR, so consider using the -t option for mktemp.
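For illustration only, a rough sketch of what I mean (on FreeBSD, mktemp -t builds the template from the given prefix and honours TMPDIR if the user has it set):
Code:
# creates e.g. /tmp/beadm.XXXXXXXX, or a directory under ${TMPDIR} when that is set
MNT=$( mktemp -d -t beadm )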
 
avilla@ said:
Thanks! I think, though, that this will result in too long directory names, which will spoil beadm list output; beadm alone is probably a better template.
Maybe I will think of something shorter.

avilla@ said:
Also, you shouldn't hardcode /tmp, but let the user set his TMPDIR, so consider using the -t option for mktemp.
If the user wants to mount it somewhere else, then the syntax is beadm mount <beName> [mountpoint].
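For example (with a hypothetical BE named upgrade):
# mkdir /mnt/upgrade
# beadm mount upgrade /mnt/upgrade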
 
I can't find a way to boot from a dataset on my zboot pool.

Let me explain: this works:

Code:
#!/usr/bin/env tcsh

set PRIMARY_DISK=`/sbin/sysctl -n kern.disks`
set NETIF=`/sbin/ifconfig -l -u | /usr/bin/sed -e 's/lo0//' -e 's/ //g'`
set SWAP_SIZE=1G
set BOOT_SIZE=1G
set HOSTNAME=freebsd.localdomain
set DESTDIR=/mnt
set PASSPHRASE=mypassword
kldload zfs aesni geom_eli
gpart destroy -F $PRIMARY_DISK
gpart create -s gpt $PRIMARY_DISK
gpart add -b 40 -s 256 -t freebsd-boot $PRIMARY_DISK
gpart add -b 2048 -s $BOOT_SIZE -t freebsd-zfs -l boot0 $PRIMARY_DISK
gpart add -t freebsd-zfs -l root0 $PRIMARY_DISK
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 $PRIMARY_DISK
echo $PASSPHRASE | geli init -b -e AES-XTS -l 256 -s 4096 -B none -J - /dev/gpt/root0
echo $PASSPHRASE | geli attach -j - /dev/gpt/root0
zpool create -o altroot=$DESTDIR -o cachefile=/tmp/zpool.cache -O utf8only=on -m none zroot /dev/gpt/root0.eli
zfs set checksum=fletcher4 zroot
zfs set atime=off zroot
zfs set mountpoint=none zroot
zfs create zroot/ROOT
zfs create -o mountpoint=/ zroot/ROOT/default
gnop create -S 4096 /dev/gpt/boot0
dd if=/dev/zero of=/dev/gpt/boot0.nop bs=1M
zpool create -o altroot=$DESTDIR -o cachefile=/tmp/zpool.cache -O utf8only=on -m /bootfs zboot /dev/gpt/boot0.nop
zpool export zboot
gnop destroy /dev/gpt/boot0.nop
zpool import -o altroot=$DESTDIR -o cachefile=/tmp/zpool.cache zboot
zfs set checksum=fletcher4 zboot
zfs set atime=off zboot
zfs set bootfs=zboot zboot
zfs create -o mountpoint=/usr/local zroot/local
zfs create -o mountpoint=/var zroot/var
zfs create -o compression=lzjb -o exec=off -o setuid=off zroot/var/crash
zfs create -o exec=off -o setuid=off zroot/var/db
zfs create -o compression=lzjb -o exec=on -o setuid=off zroot/var/db/pkg
zfs create -o exec=off -o setuid=off zroot/var/empty
zfs create -o compression=lzjb -o exec=off -o setuid=off zroot/var/log
zfs create -o compression=gzip -o exec=off -o setuid=off zroot/var/mail
zfs create -o exec=off -o setuid=off zroot/var/run
zfs create -o compression=lzjb -o exec=on -o setuid=off zroot/var/tmp
chmod 1777 $DESTDIR/var/tmp
zfs create -o mountpoint=/tmp -o compression=on -o exec=on -o setuid=off zroot/tmp
chmod 1777 $DESTDIR/tmp
zfs create -o mountpoint=/home zroot/home
foreach file (/usr/freebsd-dist/*.txz)
 tar --unlink -xpJf $file -C $DESTDIR
end
zfs set readonly=on zroot/var/empty
mv $DESTDIR/boot $DESTDIR/bootfs/boot
ln -shf bootfs/boot $DESTDIR/boot
chflags -h schg $DESTDIR/boot
cp /tmp/zpool.cache $DESTDIR/boot/zfs/zpool.cache
cat >> $DESTDIR/boot/loader.conf <<__EOF__
ahci_load="YES"
aesni_load="YES"
geom_eli_load="YES"
kern.geom.eli.visible_passphrase="2"
zfs_load="YES"
vfs.root.mountfrom="zfs:zroot/ROOT/default"
__EOF__
echo hostname=\"$HOSTNAME\" >> $DESTDIR/etc/rc.conf
echo ifconfig_$NETIF=\"DHCP\" >> $DESTDIR/etc/rc.conf
cat >> $DESTDIR/etc/rc.conf <<__EOF__
zfs_enable="YES"
__EOF__
cp $DESTDIR/usr/share/zoneinfo/Europe/Paris $DESTDIR/etc/localtime
cd $DESTDIR/etc/mail
setenv SENDMAIL_ALIASES $DESTDIR/etc/mail/aliases
make aliases
cd /
zfs umount -a
zfs set mountpoint=legacy zroot/ROOT/default
zfs set mountpoint=/zroot zroot
zfs set mountpoint=/zroot/ROOT zroot/ROOT
zfs set mountpoint=/home zroot/home
zfs set mountpoint=/tmp zroot/tmp
zfs set mountpoint=/usr/local zroot/local
zfs set mountpoint=/var zroot/var

But this doesn't:

Code:
#!/usr/bin/env tcsh

set PRIMARY_DISK=`/sbin/sysctl -n kern.disks`
set NETIF=`/sbin/ifconfig -l -u | /usr/bin/sed -e 's/lo0//' -e 's/ //g'`
set SWAP_SIZE=1G
set BOOT_SIZE=1G
set HOSTNAME=freebsd.localdomain
set DESTDIR=/mnt
set PASSPHRASE=mypassword
kldload zfs aesni geom_eli
gpart destroy -F $PRIMARY_DISK
gpart create -s gpt $PRIMARY_DISK
gpart add -b 40 -s 256 -t freebsd-boot $PRIMARY_DISK
gpart add -b 2048 -s $BOOT_SIZE -t freebsd-zfs -l boot0 $PRIMARY_DISK
gpart add -t freebsd-zfs -l root0 $PRIMARY_DISK
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 $PRIMARY_DISK
echo $PASSPHRASE | geli init -b -e AES-XTS -l 256 -s 4096 -B none -J - /dev/gpt/root0
echo $PASSPHRASE | geli attach -j - /dev/gpt/root0
zpool create -o altroot=$DESTDIR -o cachefile=/tmp/zpool.cache -O utf8only=on -m none zroot /dev/gpt/root0.eli
zfs set checksum=fletcher4 zroot
zfs set atime=off zroot
zfs set mountpoint=none zroot
zfs create zroot/ROOT
zfs create -o mountpoint=/ zroot/ROOT/default
gnop create -S 4096 /dev/gpt/boot0
dd if=/dev/zero of=/dev/gpt/boot0.nop bs=1M
zpool create -o altroot=$DESTDIR -o cachefile=/tmp/zpool.cache -O utf8only=on -m none zboot /dev/gpt/boot0.nop
zpool export zboot
gnop destroy /dev/gpt/boot0.nop
zpool import -o altroot=$DESTDIR -o cachefile=/tmp/zpool.cache zboot
zfs set checksum=fletcher4 zboot
zfs set atime=off zboot
zfs set mountpoint=none zboot
zfs set bootfs=zboot/default zboot
zfs create -o mountpoint=/bootfs zboot/default
zfs create -o mountpoint=/usr/local zroot/local
zfs create -o mountpoint=/var zroot/var
zfs create -o compression=lzjb -o exec=off -o setuid=off zroot/var/crash
zfs create -o exec=off -o setuid=off zroot/var/db
zfs create -o compression=lzjb -o exec=on -o setuid=off zroot/var/db/pkg
zfs create -o exec=off -o setuid=off zroot/var/empty
zfs create -o compression=lzjb -o exec=off -o setuid=off zroot/var/log
zfs create -o compression=gzip -o exec=off -o setuid=off zroot/var/mail
zfs create -o exec=off -o setuid=off zroot/var/run
zfs create -o compression=lzjb -o exec=on -o setuid=off zroot/var/tmp
chmod 1777 $DESTDIR/var/tmp
zfs create -o mountpoint=/tmp -o compression=on -o exec=on -o setuid=off zroot/tmp
chmod 1777 $DESTDIR/tmp
zfs create -o mountpoint=/home zroot/home
foreach file (/usr/freebsd-dist/*.txz)
 tar --unlink -xpJf $file -C $DESTDIR
end
zfs set readonly=on zroot/var/empty
mv $DESTDIR/boot $DESTDIR/bootfs/boot
ln -shf bootfs/boot $DESTDIR/boot
chflags -h schg $DESTDIR/boot
cp /tmp/zpool.cache $DESTDIR/boot/zfs/zpool.cache
cat >> $DESTDIR/boot/loader.conf <<__EOF__
ahci_load="YES"
aesni_load="YES"
geom_eli_load="YES"
kern.geom.eli.visible_passphrase="2"
zfs_load="YES"
vfs.root.mountfrom="zfs:zroot/ROOT/default"
__EOF__
echo hostname=\"$HOSTNAME\" >> $DESTDIR/etc/rc.conf
echo ifconfig_$NETIF=\"DHCP\" >> $DESTDIR/etc/rc.conf
cat >> $DESTDIR/etc/rc.conf <<__EOF__
zfs_enable="YES"
__EOF__
cp $DESTDIR/usr/share/zoneinfo/Europe/Paris $DESTDIR/etc/localtime
cd $DESTDIR/etc/mail
setenv SENDMAIL_ALIASES $DESTDIR/etc/mail/aliases
make aliases
cd /
zfs umount -a
zfs set mountpoint=legacy zroot/ROOT/default
zfs set mountpoint=/zfspools/zroot zroot
zfs set mountpoint=/zfspools/zroot/ROOT zroot/ROOT
zfs set mountpoint=/home zroot/home
zfs set mountpoint=/tmp zroot/tmp
zfs set mountpoint=/usr/local zroot/local
zfs set mountpoint=/var zroot/var
zfs set mountpoint=/zfspools/zboot zboot
zfs set mountpoint=/bootfs zboot/default

The diff:

Code:
@@ -68,13 +68,15 @@
 # /boot
 gnop create -S 4096 /dev/gpt/boot0
 dd if=/dev/zero of=/dev/gpt/boot0.nop bs=1M
-zpool create -o altroot=$DESTDIR -o cachefile=/tmp/zpool.cache -O utf8only=on -m /bootfs zboot /dev/gpt/boot0.nop
+zpool create -o altroot=$DESTDIR -o cachefile=/tmp/zpool.cache -O utf8only=on -m none zboot /dev/gpt/boot0.nop
 zpool export zboot
 gnop destroy /dev/gpt/boot0.nop
 zpool import -o altroot=$DESTDIR -o cachefile=/tmp/zpool.cache zboot
 zfs set checksum=fletcher4 zboot
 zfs set atime=off zboot
-zfs set bootfs=zboot zboot
+zfs set mountpoint=none zboot
+zfs set bootfs=zboot/default zboot
+zfs create -o mountpoint=/bootfs zboot/default
 
 # /usr/local
 zfs create -o mountpoint=/usr/local zroot/local
@@ -186,10 +188,12 @@
 
 zfs umount -a
 zfs set mountpoint=legacy zroot/ROOT/default
-zfs set mountpoint=/zroot zroot
-zfs set mountpoint=/zroot/ROOT zroot/ROOT
+zfs set mountpoint=/zfspools/zroot zroot
+zfs set mountpoint=/zfspools/zroot/ROOT zroot/ROOT
 zfs set mountpoint=/home zroot/home
 zfs set mountpoint=/tmp zroot/tmp
 zfs set mountpoint=/usr/local zroot/local
 zfs set mountpoint=/var zroot/var
+zfs set mountpoint=/zfspools/zboot zboot
+zfs set mountpoint=/bootfs zboot/default

The FreeBSD loader doesn't find the zfsloader. Is there a way to do it?

Help?
 
I do not see any single point about which I can say 'this one is the problem'.

IMHO start with the setup that works and change one thing at a time. That will take some time (can be scripted though) but will show You where the problem is.
 
Hi,

I finally found a configuration that can boot with two pools, each having datasets.

Howto:

First, boot with the FreeBSD LiveCD, then start SSH:

Code:
mkdir /tmp/etc
mdmfs -s32m -S md /tmp/etc
mount -t unionfs /tmp/etc /etc
echo password | pw usermod root -h 0
rm /etc/resolv.conf
dhclient em0
cat /var/run/resolvconf/interfaces/* > /etc/resolv.conf
echo PermitRootLogin=yes >> /etc/ssh/sshd_config
service sshd onestart

Then you only have to copy the attached script over via scp.

chmod +x it, and execute it.

Ten minutes later, you have a working FreeBSD system.

Config after reboot:

Code:
root@beastie:/root # zfs list
NAME                 USED  AVAIL  REFER  MOUNTPOINT
zboot                387M   597M   144K  /zfspools/zboot
zboot/default        386M   597M   386M  /bootfs
zroot               2.30G  16.3G   152K  /zfspools/zroot
zroot/ROOT          1.26G  16.3G   152K  /zfspools/zroot/ROOT
zroot/ROOT/default  1.26G  16.3G  1.26G  legacy
zroot/home           144K  16.3G   144K  /home
zroot/local          144K  16.3G   144K  /usr/local
zroot/swap          1.03G  17.3G    72K  -
zroot/tmp            184K  16.3G   184K  /tmp
zroot/var           1.93M  16.3G   568K  /var
zroot/var/crash      148K  16.3G   148K  /var/crash
zroot/var/db         388K  16.3G   244K  /var/db
zroot/var/db/pkg     144K  16.3G   144K  /var/db/pkg
zroot/var/empty      144K  16.3G   144K  /var/empty
zroot/var/log        192K  16.3G   192K  /var/log
zroot/var/mail       144K  16.3G   144K  /var/mail
zroot/var/run        240K  16.3G   240K  /var/run
zroot/var/tmp        152K  16.3G   152K  /var/tmp

root@beastie:/root # zpool get bootfs
NAME   PROPERTY  VALUE          SOURCE
zboot  bootfs    zboot/default  local
zroot  bootfs    -              default

I used your modified beadm script:

Code:
root@beastie:/root # beadm create upgrade
Created successfully
root@beastie:/root # beadm list
BE      Active Mountpoint Space Policy Created
default N      /          1.26G static 2012-11-02 21:52
upgrade -      -             8K static 2012-11-02 22:06
root@beastie:/root # zfs list
NAME                 USED  AVAIL  REFER  MOUNTPOINT
zboot                387M   597M   144K  /zfspools/zboot
zboot/default        386M   597M   386M  /bootfs
zroot               2.30G  16.3G   152K  /zfspools/zroot
zroot/ROOT          1.26G  16.3G   152K  /zfspools/zroot/ROOT
zroot/ROOT/default  1.26G  16.3G  1.26G  legacy
zroot/ROOT/upgrade     8K  16.3G  1.26G  /zfspools/zroot/ROOT/upgrade
zroot/home           144K  16.3G   144K  /home
zroot/local          144K  16.3G   144K  /usr/local
zroot/swap          1.03G  17.3G    72K  -
zroot/tmp            184K  16.3G   184K  /tmp
zroot/var           1.93M  16.3G   568K  /var
zroot/var/crash      148K  16.3G   148K  /var/crash
zroot/var/db         388K  16.3G   244K  /var/db
zroot/var/db/pkg     144K  16.3G   144K  /var/db/pkg
zroot/var/empty      144K  16.3G   144K  /var/empty
zroot/var/log        192K  16.3G   192K  /var/log
zroot/var/mail       144K  16.3G   144K  /var/mail
zroot/var/run        240K  16.3G   240K  /var/run
zroot/var/tmp        152K  16.3G   152K  /var/tmp

It didn't create a snapshot of zboot/default as zboot/upgrade?
Is it the configuration you did in the past?

Thanks,

Trois Six
 

Attachments

  • auto_install.txt
    4.4 KB
Trois-Six said:
First, boot with the FreeBSD LiveCD, then start SSH:

Code:
mkdir /tmp/etc
mdmfs -s32m -S md /tmp/etc
mount -t unionfs /tmp/etc /etc
echo password | pw usermod root -h 0
rm /etc/resolv.conf
dhclient em0
cat /var/run/resolvconf/interfaces/* > /etc/resolv.conf
echo PermitRootLogin=yes >> /etc/ssh/sshd_config
service sshd onestart

Then you only have to copy the attached script over via scp.

You can shorten that procedure to:
# dhclient em0
# nc -l 2222 > /root/install.sh

... and on the client ...
% nc -w3 ${SERVER_IP} 2222 < ../path/to/install.sh




Config after reboot:

Code:
root@beastie:/root # zfs list
NAME                 USED  AVAIL  REFER  MOUNTPOINT
zboot                387M   597M   144K  /zfspools/zboot
zboot/default        386M   597M   386M  /bootfs
zroot               2.30G  16.3G   152K  /zfspools/zroot
zroot/ROOT          1.26G  16.3G   152K  /zfspools/zroot/ROOT
zroot/ROOT/default  1.26G  16.3G  1.26G  legacy
zroot/home           144K  16.3G   144K  /home
zroot/local          144K  16.3G   144K  /usr/local
zroot/swap          1.03G  17.3G    72K  -
zroot/tmp            184K  16.3G   184K  /tmp
zroot/var           1.93M  16.3G   568K  /var
zroot/var/crash      148K  16.3G   148K  /var/crash
zroot/var/db         388K  16.3G   244K  /var/db
zroot/var/db/pkg     144K  16.3G   144K  /var/db/pkg
zroot/var/empty      144K  16.3G   144K  /var/empty
zroot/var/log        192K  16.3G   192K  /var/log
zroot/var/mail       144K  16.3G   144K  /var/mail
zroot/var/run        240K  16.3G   240K  /var/run
zroot/var/tmp        152K  16.3G   152K  /var/tmp

Using that schema makes beadm less useful than it could be. By adding /usr/local, /var/db/pkg and /var/db, beadm will also 'snapshot' the installed software, which is great for upgrades/updates. If something fails during an upgrade/update, You can go back to a clean and working system. Besides that, it's ok.

I used your modified beadm script:
So it seems to work correctly with these two pools?

It didn't create a snapshot of zboot/default as zboot/upgrade?
beadm creates snapshots of everything under zroot/ROOT/BENAME and nowhere else. If You want to have something 'supported' by beadm, put it under the zroot/ROOT/BENAME path (for example zroot/ROOT/BENAME/usr/local), roughly as in the sketch below.
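A rough sketch of such a layout (the dataset names below are only an example):
Code:
# everything under zroot/ROOT/default gets snapshotted/cloned together with the BE
zfs create -o mountpoint=/usr/local  zroot/ROOT/default/usr.local
zfs create -o mountpoint=/var/db     zroot/ROOT/default/var.db
zfs create -o mountpoint=/var/db/pkg zroot/ROOT/default/var.db.pkg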

Of course beadm can be modified to also support another pool for boot.

Is it the configuration you did in the past?
Yes, something like that.
 
Hi,

Following my layout, I did a quick and dirty hack of your script.
create, rename, mount, umount and list work;
activate... not really ;)

Another problem is that the snapshotted /bootfs is mounted over the currently mounted /bootfs.

/usr/local, /var/db/pkg and /var/db do not always depend on the running system; but yes, I agree that I can snapshot them too.

Regards,

Trois Six
 

Attachments

  • patch_multiple_pools.diff
    18 KB
Thanks for the extensive patch (I did not try it, just reviewed it).

I am afraid that these changes are quite big and I would like to NOT incorporate them into beadm (because of possible BUGs, future code maintenance and easier implementation of new features), but if You need my assistance with that 'fork' for Your setup, let me know ;)
 
Hi,

I fully agree you should not merge that patch :), it's too invasive and I didn't spend time making activate work for the moment.

Maybe only these changes:

Code:
           if __be_clone ${POOL}/ROOT/${DESTROY}
           then
             # promote clones dependent on snapshots used by destroyed boot environment
-            zfs list -H -t all -o name,origin \
+            zfs list -H -t all -o name,origin -r ${POOL} \
               | while read NAME ORIGIN
                 do
                   if echo "${ORIGIN}" | grep -q -E "${POOL}/ROOT/${DESTROY}(/.*@|@)" 2> /dev/null
@@ -582,7 +750,7 @@
           if __be_clone ${POOL}/ROOT/${DESTROY}
           then
             # promote datasets dependent on origins used by destroyed boot environment
-            ALL_ORIGINS=$( zfs list -H -t all -o name,origin )
+            ALL_ORIGINS=$( zfs list -H -t all -o name,origin -r ${POOL} )
             echo "${ORIGIN_SNAPSHOTS}" \
               | while read S
                 do
@@ -596,7 +764,83 @@
                 done
           fi
           # destroy origins used by destroyed boot environment
-          SNAPSHOTS=$( zfs list -H -t snapshot -o name )
+          SNAPSHOTS=$( zfs list -H -t snapshot -o name -r ${POOL} )

Because your code doesn't specify the pool, and if you have more than one pool, it may not do what it is expected to do.
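To illustrate the difference (the pool name here is just an example):
Code:
# without -r, datasets from every imported pool are listed
zfs list -H -t all -o name,origin
# with -r, only the pool holding the boot environments is considered
zfs list -H -t all -o name,origin -r zroot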
 
Thank you for the guide, it's very comprehensive and serves both use cases that I had.
Unfortunately, I'm unable to boot afterwards - it's like the drive is not being flagged as bootable.

Should I be wiping the disk/MBR prior to setup (using parted/livecd) or should gpart destroy be taking care of this for me? (machine previously had grub/Linux).

I've also tried changing AHCI to Legacy for the SATA disk and flagging the /boot partition bootable in parted via a liveCD. Any suggestions?
 
@sadsfae

First, leave the disk/chipset/controller in AHCI, it's not that.

Second, as You had Linux there before, I would suggest wiping the beginning of the disk with this command:
# dd < /dev/zero > /dev/ada0 bs=8m count=16

Next, follow the instructions in the guide, it should work as expected.

You can also first check that You are doing these instructions properly in a virtual machine under VirtualBox or another virtualization platform.

Try these and let me know what You get.

and flagging the /boot partition bootable in parted via a liveCD. Any suggestions?
Unlike Linux, FreeBSD does not use a separate partition for /boot.
 
vermaden said:
@sadsfae

First, leave the disk/chipset/controller in AHCI, it's not that.

Second, as You had Linux there before, I would suggest wiping the beginning of the disk with this command:
# dd < /dev/zero > /dev/ada0 bs=8m count=16

Next, follow the instructions in the guide, it should work as expected.

You can also first check that You are doing these instructions properly in a virtual machine under VirtualBox or another virtualization platform.

Try these and let me know what You get.

Unlike Linux, FreeBSD does not use a separate partition for /boot.

Thanks for the quick response. I tried it again after the dd and still got the same results.
I did notice an error around this part, but maybe it's because of the LiveCD:

(after geli attach)
# zpool create -f -o cachefile=/tmp/zpool.cache local /dev/gpt/local0.eli

cannot mount '/local': failed to create mountpoint

I'll try later today or tomorrow in a KVM VM or switch out the media, perhaps it's not extracting all the files correctly during the install portions.
 
sadsfae said:
(after geli attach)
# zpool create -f -o cachefile=/tmp/zpool.cache local /dev/gpt/local0.eli

cannot mount '/local': failed to create mountpoint

It's a harmless error; it's because / is mounted read-only on the LiveCD, so the /local mountpoint cannot be created.
 
vermaden said:
It's a harmless error; it's because / is mounted read-only on the LiveCD, so the /local mountpoint cannot be created.

Still no luck, but I think it's on my side - tried with another disk as well. I think the hardware I'm using needs a firmware update (Thinkpad T420S with SATA/AHCI).

I'll work it out in a VM and also try some different bare-metal hardware. Thank you for the assistance thus far.
 
sadsfae said:
Still no luck, but I think it's on my side - tried with another disk as well. I think the hardware I'm using needs a firmware update (Thinkpad T420S with SATA/AHCI).

I'll work it out in a VM and also try some different bare-metal hardware. Thank you for the assistance thus far.

@Vermaden - this is working beautifully now. I think there were issues with the Lenovo Thinkpad T420s and an older version of the BIOS. I'm up and running now on a Thinkpad T510.

Thank you for the wonderful guide and help here.

Just two questions:

1) I'm using ZFS snapshots of / and eventually /home once my userland/apps are perfected... can I simply restore them online in a recovery scenario, or do I need to boot to single-user or recovery and promote/change? (I'm still reading through the ZFS documentation)

2) Beadm - do folks use this to provision a new machine with similar hardware and save the effort of the setup/ports compilation, etc.? Looks like a very powerful tool.
 
sadsfae said:
@Vermaden - this is working beautifully now. I think there were issues with the Lenovo Thinkpad T420s and an older version of the BIOS. I'm up and running now on a Thinkpad T510.

Thank you for the wonderful guide and help here.

Welcome ;)

sadsfae said:
1) I'm using ZFS snapshots of / and eventually /home once my userland/apps are perfected... can I simply restore them online in a recovery scenario, or do I need to boot to single-user or recovery and promote/change? (I'm still reading through the ZFS documentation)

You can set the ZFS property snapdir=visible, so You would have a .zfs directory with the snapshots. You can also mount these snapshots somewhere and then do something with the files stored there.
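A rough example (the dataset and snapshot names are only placeholders here):
Code:
# zfs set snapdir=visible sys/ROOT/default
# ls /.zfs/snapshot/
# cp -p /.zfs/snapshot/clean/etc/rc.conf /etc/rc.conf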

sadsfae said:
2) Beadm - do folks use this to provision a new machine with similar hardware and save the effort of the setup/ports compilation, etc.? Looks like a very powerful tool.
I have done that in the past: do the zfs send sys/ROOT/name | ... | zfs recv ... and then just beadm activate name + reboot ;)
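Roughly like this (the host and BE names are only an example):
Code:
# zfs snapshot sys/ROOT/default@copy
# zfs send sys/ROOT/default@copy | ssh newbox zfs recv -u sys/ROOT/default

... and on the new box ...

# beadm activate default
# shutdown -r now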

I also did a beadm 'backup' before upgrading packages, before upgrading to a newer system snapshot (STABLE), before moving to PKGng and so on.
 
Thanks for your great Howto and the beadm script!

I have already done some test installs, both under VirtualBox and on some real server hardware, and the ZFS-on-root + beadm setup really works ok.

What really makes me nervous are conceivable situations where I have an active boot environment that locks up right after the kernel is loaded, or some other form of broken FreeBSD. How can I go back to a previous, stable boot environment without being able to use the beadm script?

As an experiment I created a new boot environment from default, named "be1". I activated "be1" with beadm and on startup escaped to the loader prompt. There I did:

- unload kernel
- set vfs.root.mountfrom=zfs:sys/ROOT/default (from vfs.root.mountfrom=zfs:sys/ROOT/be1)
- set currdev=zfs:sys/ROOT/default: (from currdev=zfs:sys/ROOT/be1:)
- load kernel
- load zfs
- boot

This leads to the result that - after the kernel has loaded ok - sys/ROOT/default cannot be mounted and the following text is displayed:

mounting from zfs:sys/ROOT/default: failed with error 2

Is this because the property "canmount" on sys/ROOT/default is still set to "noauto" and should be set to "on"?

What can I do if I have an active, but broken, boot environment and want to revert to a previous, stable boot environment?

FYI: I am testing on FreeBSD 9-STABLE from December 4th, 2012.

Regards.
 