ZFS partitioning during installation advice

I've been using FreeBSD to serve my git repo and rsync needs, as well as a desktop, for a few years without any problems, using the default partitioning scheme from FreeBSD 9, then 10. I recently upgraded to 11, and everything seems to be working fine. Never one to let things be, I am considering going ZFS so that I can get snapshots (my rationale is naive, but it's mine...).

My question is this: if I have a 250 GB SSD plus a 1 TB hybrid drive (32 GB SSD plus the rest at 5400 rpm), what's a good way to partition things and to ease into ZFS from the install onward? Is there some guidance beyond the quickstart section of the Handbook? Something that talks about simple user scenarios. Everything I've been able to find assumes an already running system and multiple large drives. I'm thinking of something that gives rules of thumb for the install, when to take snapshots, what needs to be backed up and where; practical advice for a non-production-server-cum-desktop user type.
 
First off, that sounds like "important" data. I'd suggest mirroring at least the data portion. I just picked up a current 4 TB Seagate for $120 to my door. The snapshot capabilities of ZFS aren't going to help you with a disk failure!

Personally, I'd configure with a couple $100-class SSDs, mirrored, for your OS and swap. The "default" partitioning provided by the installer is fine for most purposes, likely including yours. I'd then add another zpool with a pair of 3-8 TB drives, mirrored, for your data storage. ZFS lets you mount things in arbitrary places without having to pre-allocate the amount of storage for each piece, so you can decide where you want the storage "on the fly" as your needs evolve.
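For example, carving out space on the fly is just a matter of creating datasets with whatever mountpoints you like (the pool and dataset names below are only placeholders):

zfs create -o mountpoint=/home/git zdata/git
zfs create -o mountpoint=/export/media zdata/media
zfs set quota=200G zdata/media    # optional; quotas can be added or changed later

Each dataset draws from the pool's shared free space, so nothing has to be sized up front.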

I'd recommend compression over trying to use dedup to save space. Even with low-power Celeron processors, I've never "felt" the compression in my day-to-day work.
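Turning compression on is a one-liner, and you can check what it is buying you afterwards (pool name assumed; lz4 is the usual choice on current FreeBSD):

zfs set compression=lz4 zdata
zfs get compressratio zdata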

If you're wanting to keep the budget down even more, you might get away for a while with just adding another 1 TB drive and using your existing SSD for swap.

I used to use a very complex ZFS layout, but eventually the maintenance of that, especially across jails, became more trouble than it was worth. The port of sysutils/beadm to FreeBSD makes a lot of things easier if you keep your root filesystem bootable.
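A typical beadm workflow, roughly (the boot environment name here is made up), looks like:

beadm create pre-upgrade     # clone the current root as a new boot environment
beadm list                   # show the environments and which one is active
beadm activate pre-upgrade   # boot into it on the next reboot; activate the old one to roll back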
 
While jef has some very valid points and logical suggestions that should be followed, if you can't actually add all the drives, I would suggest the following:

Install FreeBSD on your SSD using the defaults recommended by the installer.

Once the install is finished, create a separate ZFS pool out of the 1TB drive you have and use this as your data.
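Assuming the 1 TB drive shows up as ada1 (check with camcontrol devlist or gpart show first), that step can be as simple as:

zpool create rsync-data /dev/ada1

which gives you a pool mounted at /rsync-data that you can create datasets and snapshots in (the pool name is entirely up to you).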

This is basically the same as what jef said but without the mirroring. It will help ease you into ZFS.

Jef's point about not having protection from a drive failure is quite valid. So, be sure you have a good backup solution and test to make sure you can recover all your data and configurations in the event of a drive failure.
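ZFS gives you the building blocks for that; a minimal sketch, assuming a second pool called backup on another disk, would be:

zfs snapshot -r rsync-data@nightly
zfs send -R rsync-data@nightly | zfs receive -F backup/rsync-data

and, just as importantly, practice importing the backup pool and reading files back before you actually need to.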

----

On a separate note, I use SSDs very differently from the way most people recommend. I will explain that here, but to be clear, this is not considered "normal" by most people on this forum.

My setup is a single SSD (typically 128 GB) with a cluster of HDDs. I split the SSD into three partitions:

The first is a FreeBSD boot partition of 32 GB. It is just the base install. It is intended as an emergency recovery partition in the event something goes horribly wrong and is otherwise ignored.

The second partition is currently another 32 GB, used for swap. The size will vary depending on the needs of the system.

The third partition is used as L2ARC for the zfsroot pool created out of the cluster of HDDs.

I don't have room in the system for more disks, or I would have a zfsroot pool for the OS, software, and /tmp, with a separate zfsdata pool for the data. The zfsdata pool would ideally have a separate SSD for L2ARC and/or ZIL (depending on the expected workload).
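For anyone curious, the L2ARC piece of that is a single command once the pool exists (the device name is just an example for the third SSD partition):

zpool add zfsroot cache /dev/ada0p3

A cache device can be removed again later with zpool remove, so there is little risk in experimenting with it.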
 
Installing to the SSD and putting the data on its own pool would be my suggestion too. You can also use UFS on the SSD for the OS and ZFS for the data disk. That's how I started with ZFS: I had a working server booting off a traditional UFS-partitioned disk. I then added 4 disks and configured them with RAID-Z. It was easy to add and allowed me to play around with ZFS. Later on, when I got more confident about ZFS, I reinstalled the server and put the OS on ZFS too, then simply added the previously created RAID-Z pool to that.
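For reference, the two ends of that path look roughly like this (disk and pool names are only illustrative):

zpool create storage raidz /dev/ada1 /dev/ada2 /dev/ada3 /dev/ada4
# ...later, after reinstalling with root on ZFS:
zpool import storage

Run zpool export storage before the reinstall if you can; otherwise zpool import -f will still pick the pool up.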
 
I actually started the same way, except I used a pair of USB-attached hard drives (a mirror instead of RAID-Z). The advantage of the USB drives was that I could see how the pool reacted when one of them was disconnected on a running system, to better understand the failure scenarios and recovery procedures.
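The same failure can also be simulated in software, without unplugging anything; something along these lines (pool and device names are just examples):

zpool offline usbpool da1    # take one side of the mirror away
zpool status usbpool         # the pool keeps running, but reports DEGRADED
zpool online usbpool da1     # bring it back; ZFS resilvers the difference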

The USB drives are now used for backup (but my "server" is a home system, not something for work).
 
I was clever enough to buy an Icy Dock disk enclosure: four disks fitting in three 5.25" bays, with hot swap. So I was able to pull/replace disks without having to open up the server. I have since bought different disk enclosures; the old one had a fan that was difficult to replace. The ones I have now use one or two 'standard' 8 cm fans on the outside of the enclosure, and those are a lot easier to replace (my house is quite dusty, so I need to replace fans from time to time because of the noise they start to make).
 
Disconnecting one of the drives from a running mirror to see how it reacts sounds like an interesting experiment. I'll give it a try.
 
As a somewhat humorous follow-up: I had it all figured out, I was going to use ZFS for my extra disk and start easy. I opened up a terminal to prepare, and df -Th showed:

Filesystem          Type     Size    Used   Avail Capacity  Mounted on
zroot/ROOT/default  zfs      206G    9.0G    197G     4%    /
devfs               devfs    1.0K    1.0K      0B   100%    /dev
fdescfs             fdescfs  1.0K    1.0K      0B   100%    /dev/fd
procfs              procfs   4.0K    4.0K      0B   100%    /proc
rsync-data          zfs      899G    855G     44G    95%    /rsync-data
zroot/tmp           zfs      197G    516K    197G     0%    /tmp
zroot/usr/home      zfs      201G    4.4G    197G     2%    /usr/home
zroot/usr/ports     zfs      198G    637M    197G     0%    /usr/ports
zroot/usr/src       zfs      198G    637M    197G     0%    /usr/src
zroot/var/audit     zfs      197G     96K    197G     0%    /var/audit
zroot/var/crash     zfs      197G     96K    197G     0%    /var/crash
zroot/var/log       zfs      197G    1.4M    197G     0%    /var/log
zroot/var/mail      zfs      197G    152K    197G     0%    /var/mail
zroot/var/tmp       zfs      197G    7.3M    197G     0%    /var/tmp
zroot               zfs      197G     96K    197G     0%    /zroot


So, I went back to my 11 install notes and, sure enough, I chose root on ZFS. Apparently the transition from UFS to ZFS is so painless that, once selected, the change went unnoticed. I'm off to figure out jails on ZFS now :). I am going to try ZFS on external USB drives, though.
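Since snapshots were the whole point of the exercise, the basics I'll be playing with first (dataset name from the df output above; the snapshot name is arbitrary) are:

zfs snapshot zroot/usr/home@before-jails
zfs list -t snapshot
zfs rollback zroot/usr/home@before-jails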
 
OK. So, I thought this was going to be painful, but... ZFS is uhmazing...

I added a third hard drive that was identical to my second and mirrored it with the single command:

sudo zpool attach rsync-data /dev/ada1 /dev/ada2

The result was magnificent:

  pool: rsync-data
 state: ONLINE
status: One or more devices is currently being resilvered. The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
  scan: resilver in progress since Sun Aug 13 15:07:53 2017
        35.6G scanned out of 855G at 79.0M/s, 2h56m to go
        35.6G resilvered, 4.17% done
config:

        NAME            STATE     READ WRITE CKSUM
        rsync-data      ONLINE       0     0     0
          mirror-0      ONLINE       0     0     0
            ada1        ONLINE       0     0     0
            ada2        ONLINE       0     0     0  (resilvering)

errors: No known data errors


I'm loving ZFS!
 