HOWTO: Modern FreeBSD Install RELOADED (vermaden way)

I have it set up like this in my /etc/fstab file:
Code:
#BASE
#DEV            #MOUNT      #FS    #OPTS      #PASS/DUMP
/dev/label/root /           ufs    rw,noatime 1 1
/dev/label/usr  /usr        ufs    rw         2 2
storage/var     /var        zfs    rw         0 0
/dev/label/pkg  /var/db/pkg ufs    rw         2 2
/dev/cd0        /mnt/cdrom  cd9660 ro,noauto  0 0

#ADDITIONAL
#DEV                        #MOUNT               #FS #OPTS #PASS/DUMP
storage                     /storage             zfs rw    0 0
storage/home                /home                zfs rw    0 0
storage/usr/obj             /usr/obj             zfs rw    0 0
storage/usr/ports           /usr/ports           zfs rw    0 0
storage/usr/ports/distfiles /usr/ports/distfiles zfs rw    0 0
storage/usr/ports/obj       /usr/ports/obj       zfs rw    0 0
storage/usr/ports/packages  /usr/ports/packages  zfs rw    0 0
storage/usr/src             /usr/src             zfs rw    0 0
 
@zeroseven

Welcome.

There are two ways to have ZFS mounted after reboot.

1. Put zfs_enable=YES in /etc/rc.conf, so all defined datasets will be mounted automatically.
2. Put each needed dataset into /etc/fstab where you want it mounted.
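A rough sketch of both options (the storage/home dataset name is only an example taken from the fstab above):
Code:
# option 1 - /etc/rc.conf, rc.d/zfs mounts every dataset that has a mountpoint set
zfs_enable=YES

# option 2 - /etc/fstab, mount only the datasets you list
#DEV          #MOUNT  #FS  #OPTS  #PASS/DUMP
storage/home  /home   zfs  rw     0 0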
 
Hello, I'm getting this error:

Code:
fdisk: Geom not found: "ad4"
when doing:
# fdisk -f part ad4

I have 3 HDDs (ad4, ad6, ad8) on SATA (the Intel S5000VSA motherboard does not have an AHCI mode for SATA). I just tried running the same command before step 1.4 and it didn't report any errors, or did it simply not do anything?

I have tried several ways of getting software RAID5 working and have failed so far; FreeBSD hates me.
 
fdisk: Geom not found: "ad4"
From what I remember it always complains about that unless you kldload the geom_mbr module, but even without the module it just works. Run the fdisk ad4 command again to check whether your changes have been applied.
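If in doubt, verifying is quick (a sketch, using the ad4 disk from the post above):
Code:
# kldload geom_mbr
# fdisk ad4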
 
Updates for 8.2?

Hi Vermaden,

I've got a system running 8.1 now, but I'm thinking of installing 8.2 from scratch using your method. Will there be any changes to the installation procedure for the new STABLE release? Thanks!
 
Excellent how-to! Two quick questions concerning the 3 disk configuration:

1) If one of the disks fails, would there be corruption of the swap space which might interfere with a running system?
2) If one disk fails, what is the recovery process?
 
copypaiste said:
Cool guide, vermaden! One question here - why did you choose 159 (9f BSD/OS) type for the 2nd partition?
I used 165 (the FreeBSD type) for the UFS partitions and I wanted the ZFS partitions to look different from the UFS ones; as BSD/OS is already dead, I used the BSD/OS (159) type to make them stand out in the fdisk output.
 
Nice tutorial. I have a few questions:
- What is the advantage of having /tmp mounted in swap?
- Mounting a 2G /tmp in 4 GB swap will be ok on a 2G RAM machine? (I think yes, but I want to be sure.) In other words, mounting in swap will not use any memory? Even though mdmfs, according to its man page, mounts an in-memory file system, how much memory will it use?
- If I use your setup with two drives (the same layout as in your example with 3 drives) but with ZFS as RAID1, will it be possible later to add two new drives and have /var and /usr on 4 drives, i.e. a raidz pool made of slice 3 from drive 1, slice 3 from drive 2, drive 3 and drive 4? (Of course the pool would have to be recreated.)
- On your setup is there any advantage of using GPT instead of MBR?
- If I want to encrypt the whole pool with geli, will it work OK? Should I put ZFS on top of geli or the other way around?

Thank you
 
overmind said:
- What is the advantage of having /tmp mounted in swap?
The 'usage pattern' of /tmp is almost the same as that of SWAP: many random, very temporary files and data needed for a short period of time. It can also be on ZFS; it is mostly a matter of preference.

- Mounting a 2G /tmp in 4 GB swap will be ok on a 2G RAM machine?
SWAP is just disk space, so yes, it will be OK.
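For reference, one way to get a swap-backed /tmp is a single mdmfs(8) line in /etc/fstab; the 2 GB size below is only an example:
Code:
#DEV #MOUNT #FS  #OPTS    #PASS/DUMP
md   /tmp   mfs  rw,-s2g  0 0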

- If I use your setup with two drives (the same layout as in your example with 3 drives) but with ZFS as RAID1, will it be possible later to add two new drives and have /var and /usr on 4 drives, i.e. a raidz pool made of slice 3 from drive 1, slice 3 from drive 2, drive 3 and drive 4? (Of course the pool would have to be recreated.)
Yes, you will 'advance' from RAID1 to RAID10 (a stripe of two RAID1 mirrors).
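A rough sketch of that later expansion; adding a second mirror makes the pool stripe across both mirrors (the pool name storage and the new disk names are assumptions):
Code:
# zpool add storage mirror ad8 ad10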

- On your setup is there any advantage of using GPT instead of MBR?
I haven't found any; it should work the same with GPT partitions. I just use MBR partitions because I am used to them and I do not need more than 2-3 primary partitions. I also mostly use bsdlabel partitions inside one primary partition on FreeBSD, and sometimes I need to put Windows XP on the same disk, which will not work with GPT partitions.

- If I want to encrypt the whole pool with geli, will it work OK? Should I put ZFS on top of geli or the other way around?
Yes, just encrypt the drives with geli before creating the pool with zpool (so ZFS sits on top of geli). I have seen several guides on the net for that; just search for zfs geli freebsd ;)
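A bare-bones sketch of the ZFS-on-top-of-geli order (the device names, sector size and mirror layout are assumptions, not part of the guide):
Code:
# geli init -s 4096 /dev/ad4s3
# geli init -s 4096 /dev/ad6s3
# geli attach /dev/ad4s3
# geli attach /dev/ad6s3
# zpool create storage mirror ad4s3.eli ad6s3.eli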
 
No basefs to rescue?

Thank you vermaden for this detailed step by step.

I appear to have no problems until:
[cmd=]/rescue/zpool import -D || /rescue/zpool import -f basefs[/cmd]

My machine tells me there is no basefs to rescue. I proceeded with the other suggestions, but then on reboot the root file system cannot be found.

I am trying this with 8.2 release.
 
Yes, that works. Thank you!
I have already managed to damage and repair the rootfs mirror. The machine mysteriously shut off when I tried to press ctrl-alt-F9 and my finger grazed a 4th button?!

cheers
 
@vermaden
Most awesome tutorial. I never felt the need to try ZFS on my desktop, but I'll give it a try.
One question: would it be possible to modify this tutorial to include that change from 20.10.2010? How do you go about putting / and /usr on UFS and the rest on ZFS with 3 disks in RAID (but with vermaden's flavor, with some of /usr on ZFS)? Also, is that /tmp @ 128 RAM now only mounted in memory, or is it still swap based?
It looks a little daunting...
 
@bbzz

Thanks mate.

Would it be possible to modify this tutorial to include that change from 20.10.2010?
What change exactly?

How do you go about putting / and /usr on UFS and the rest on ZFS with 3 disks in RAID (but with vermaden's flavor, with some of /usr on ZFS)?
I have had a similar setup: / and /usr on an 8 GB CF card on UFS, and /usr/local, /var, /tmp and the rest on a ZFS pool.
 
I was a bit confused by your new setup, but I get it now. I wanted to set up a small home server with 3x500 GB drives, and I wasn't sure if I should just go all ZFS. So I did; I ended up with everything on the ZFS pool, including swap, /tmp, etc. Not sure if that's a good idea.
 
bbzz said:
I was a bit confused by your new setup, but I get it now. I wanted to set up a small home server with 3x500 GB drives, and I wasn't sure if I should just go all ZFS. So I did; I ended up with everything on the ZFS pool, including swap, /tmp, etc. Not sure if that's a good idea.
The only risk I see in that setup is upgrades, for example changes in the boot code that have to be 'set up' again after an upgrade; that would make upgrading a little PITA.

I have a NAS with FreeBSD and 2 x 2 TB drives; the base system (/) is on an 8 GB CF card and all the rest is on a zpool mirror (RAID1 equivalent) on those 2 drives, all the 'classic' MBR way.

... but for a laptop that I use every day I would go for a GPT/ZFS-only setup, one zpool 'to rule them all' ;)
If that laptop fails I do not care; I have a backup (on the NAS ...) and I can reinstall if an upgrade fails.
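For what it is worth, on a GPT/ZFS-only disk putting the boot blocks back after a zpool/zfs version upgrade comes down to one gpart(8) command; the disk name ad4 and the freebsd-boot partition index 1 are assumptions:
Code:
# gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ad4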
 
What do you mean by changes in the boot code? Wouldn't that just be a matter of copying it to the freebsd-zfs slice? Or am I missing something (probably)? :D

Just a couple of quick questions about your setup if you don't mind please :)
1) Why use UFS and not another zfs pool for /, /var/db/pkg, and /usr on another disk? Any special reason why you stick with UFS?
2) Are you keeping your backup/data files all under /home? Since I would be using this as my desktop regularly as well, wouldn't it be better to create another mountpoint (/data), which you can then tune (parts with lots of docs and text with compression, etc) rather than keep all under home directory?
3) Are there any limitations or zfs performance issues with having lots of mountpoints under mountpoints (say, /usr/home/bbzz/data/docs under /usr/home/bbzz/data under /usr/home/ under /usr, etc).
4) Is that /tmp mounted only in memory? I used that before with only 16-32 MB and found that sometimes there are issues when building large ports, so I ended up just mounting it under swap.
 
bbzz said:
What do you mean by changes in the boot code? Wouldn't that just be a matter of copying it to the freebsd-zfs slice? Or am I missing something (probably)? :D

I remember some thread about problems after zpool/zfs upgrade.

1) Why use UFS and not another zfs pool for /, /var/db/pkg, and /usr on another disk? Any special reason why you stick with UFS?
I was not able to boot from a ZFS pool in an MBR partition; that is why I use MBR+UFS+ZFS most of the time. I have tried several howtos on booting from a ZFS root with MBR, but none of them worked, at least for 8.2-RELEASE. So it's either MBR/UFS+ZFS or GPT/ZFS. Using several ZFS pools is totally pointless.

2) Are you keeping your backup/data files all under /home? Since I would be using this as my desktop regularly as well, wouldn't it be better to create another mountpoint (/data), which you can then tune (parts with lots of docs and text with compression, etc) rather than keep all under home directory?
I keep it under /storage or /data or something like that most of the time; /home is just a separate dataset.

3) Are there any limitations or zfs performance issues with having lots of mountpoints under mountpoints (say, /usr/home/bbzz/data/docs under /usr/home/bbzz/data under /usr/home/ under /usr, etc).
I haven't heard of any performance issues related to the number of mount points.

As you are already asking about performance, here are the differences between ZFS on 8.1 and 8.2 in BLOGBENCH:

Code:
8.1 ZFS
Final score for writes:           375
Final score for reads :         42163

8.2 ZFS
Final score for writes:          1273
Final score for reads :        120520

8.2 UFS
Final score for writes:            77
Final score for reads :        119512

4) Is that /tmp mounted only in memory? I used that before with only 16-32 MB and found that sometimes there are issues when building large ports, so I ended up just mounting it under swap.
I also use /tmp as a ZFS dataset now.
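If anyone wants the same, such a dataset can be created roughly like this (the pool name storage and the property choices are only an example):
Code:
# zfs create -o mountpoint=/tmp -o setuid=off -o compression=on storage/tmp
# chmod 1777 /tmp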
 
vermaden said:
@zeroseven

Welcome.

There are two ways to have ZFS mounted after reboot.

1. Put zfs_enable=YES in /etc/rc.conf, so all defined datasets will be mounted automatically.
2. Put each needed dataset into /etc/fstab where you want it mounted.

I reinstalled using your setup, vermaden. Seems most convenient since I can just blow up my usb stick.

When I only put
Code:
zfs_enable=YES
in /etc/rc.conf, what seems to happen is that the entries in /etc/fstab are mounted first, and only then ZFS. The implication is that /var/db/pkg cannot be mounted under /var, which is ZFS, so all its files get installed on the ZFS pool instead.

On the other hand, if I just put the entries in /etc/fstab and don't enable
Code:
zfs_enable=YES
I can't access anything under /usr; that is, only root has permission to access it.

Conclusion: I had to enable both 1. and 2. to make it work, making sure that pool/var is mounted in /etc/fstab before /var/db/pkg.
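In other words, the relevant fstab lines end up ordered like this (a minimal illustration; the dataset and label names are just examples):
Code:
pool/var        /var         zfs  rw  0 0
/dev/label/pkg  /var/db/pkg  ufs  rw  2 2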

Just wanted to mention this in case someone reading this is using this setup and hits the same issue.

BTW if you have any other suggestions, please do tell before I fill up my drives too much. :)
 
bbzz said:
Conclusion: I had to enable both 1. and 2. to make it work, making sure that pool/var is mounted in /etc/fstab before /var/db/pkg.
Strange, I did not have such problems; I only use /etc/fstab for ZFS/UFS mounts.

bbzz said:
BTW if you have any other suggestions, please do tell before I fill up my drives too much. :)
Currently, after my NAS 'update', I again use / on the CF card and /usr, /tmp and /var on the zpool; an alternative would be / and /usr on CF and /usr/local, /tmp and /var on the zpool.

This is my current /etc/fstab file:
Code:
#DEV            #MOUNT          #FS     #OPTS           #PASS/DUMP
/dev/label/root /               ufs     rw,noatime      1 1
storage/usr     /usr            zfs     rw,noatime      0 0
storage/var     /var            zfs     rw,noatime      0 0
storage/tmp     /tmp            zfs     rw,noatime      0 0
/dev/cd0        /mnt/cdrom      cd9660  ro,noauto       0 0
 
Isn't the 'base system' / and the stuff under /usr as well, like libraries, binaries, etc., things that won't work one without the other? The same goes for /var/db/pkg, which would make updating tedious/impossible if corrupted. So yeah, I really liked your previous setup, where the zpool only holds stuff that can live on its own after a reinstall.

Code:
#def_fs
/dev/label/root         /               ufs     rw,noatime      1 1
/dev/label/usr          /usr            ufs     rw              2 2
pool/var                /var            zfs     rw              0 0
/dev/label/pkg          /var/db/pkg     ufs     rw              2 2
pool                    /pool           zfs     rw              0 0
pool/home               /home           zfs     rw              0 0
pool/usr/obj            /usr/obj        zfs     rw              0 0
pool/usr/ports          /usr/ports      zfs     rw              0 0
pool/usr/ports/distfiles /usr/ports/distfiles zfs rw            0 0
pool/usr/ports/packages /usr/ports/packages zfs rw              0 0
pool/usr/src            /usr/src        zfs     rw              0 0
pool/data               /data           zfs     rw              0 0
pool/var/tmp            /var/tmp        zfs     rw              0 0
proc                    /proc           procfs  rw              0 0

Code:
mount
/dev/label/root on / (ufs, local, noatime, read-only)
devfs on /dev (devfs, local, multilabel)
/dev/label/usr on /usr (ufs, local, soft-updates)
pool/var on /var (zfs, local, noexec, nosuid)
/dev/label/pkg on /var/db/pkg (ufs, local, soft-updates)
pool on /pool (zfs, local)
pool/home on /home (zfs, local)
pool/usr/obj on /usr/obj (zfs, local)
pool/usr/ports on /usr/ports (zfs, local, nosuid)
pool/usr/ports/distfiles on /usr/ports/distfiles (zfs, local, noexec, nosuid)
pool/usr/ports/packages on /usr/ports/packages (zfs, local, noexec, nosuid)
pool/usr/src on /usr/src (zfs, local, noexec, nosuid)
pool/data on /data (zfs, local, noexec, nosuid)
pool/var/tmp on /var/tmp (zfs, local, nosuid)
procfs on /proc (procfs, local)
pool/tmp on /tmp (zfs, local, nosuid)

edit: Is there a need for the pool to have its own mountpoint, i.e. /pool?
 