Deploying Multiple Systems ==> Drives, Filesystems, Imaging, Etc.

PROBLEM:

GPT partition tables and gmirror both write metadata at the end of a hard drive, so one can overwrite and corrupt the other's metadata.
The only recommended solution, at least for now, is to use MBR partitions.

Does ZFS somehow get around this issue?
ZFS and gmirror are completely unrelated. You might get spurious messages about a corrupt GPT if you created a ZFS vdev using whole devices (e.g., zpool create zfspool raidz2 da0 da1 da2 da3) that had existing GPT partitions on them. The cure for that is to run gpart destroy -F on the device before adding it to the pool. The messages are harmless in any case.
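For example, a minimal sketch of that cleanup (da0 through da3 stand in for whatever devices you actually use):
Code:
# wipe any stale partition tables so ZFS won't warn about a corrupt GPT
gpart destroy -F da0
gpart destroy -F da1
gpart destroy -F da2
gpart destroy -F da3
# then build the pool from the now-clean whole devices
zpool create zfspool raidz2 da0 da1 da2 da3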
Does RaidZ have this same problem?
No.
 
ZFS and gmirror are completely unrelated. You might get spurious messages about a corrupt GPT if you created a ZFS vdev using whole devices (e.g., zpool create zfspool raidz2 da0 da1 da2 da3) that had existing GPT partitions on them. The cure for that is to run gpart destroy -F on the device before adding it to the pool. The messages are harmless in any case.

So, to clarify, you are saying that:
Using GPT partitions with the ZFS file system does *NOT* have the same corruption problem as "GPT + gmirror," which is caused by (depending on install order) "one overwriting the metadata of the other, at the end of the disk,"
so in other words,
GPT + ZFS works fine, doesn't matter if I use GPT + ZFS mirror, or GPT + ZFS RAIDz?

Am I correct in saying that "gmirror" is used *ONLY* with the UFS file system, and not at all, ever, with the ZFS file system?
So that really, the problem is being caused by a GPT + UFS file system issue?
 
GPT + ZFS works fine, doesn't matter if I use GPT + ZFS mirror, or GPT + ZFS RAIDz?
No need to shout. Yes, GPT + ZFS will work fine. You'll get harmless warnings about corruption if you use a raw device that had a GPT partition on it in a ZFS vdev. This is because you don't have to use partitions at all with ZFS if you don't want. It will work fine on raw devices.
Am I correct in saying that "gmirror" is used *ONLY* with the UFS file system, and not at all, ever, with the ZFS file system?
Gmirror works at the block level, and does not care about what filesystem is used. It should work with msdosfs(5) filesystems as well. Heck, it's probably possible to add a gmirror device to a ZFS vdev. I've never tried this, though.
So that really, the problem is being caused by a GPT + UFS file system issue?
UFS has no problems with GPT.
 
No need to shout. Yes, GPT + ZFS will work fine. You'll get harmless warnings about corruption if you use a raw device that had a GPT partition on it in a ZFS vdev. This is because you don't have to use partitions at all with ZFS if you don't want. It will work fine on raw devices.

Sorry, I wanted to make sure that was the case because I just bought eight 4TB WD enterprise drives, which I would have to return for 2TB drives if forced to run with MBR (2TB capacity limit).
Wasn't trying to shout!

Gmirror works at the block level, and does not care about what filesystem is used. It should work with msdosfs(5) filesystems as well. Heck, it's probably possible to add a gmirror device to a ZFS vdev. I've never tried this, though.

So it's a gmirror issue. Good to know.
 
This is because you don't have to use partitions at all with ZFS if you don't want. It will work fine on raw devices.
Currently trying to figure out how to create two partitions on the first set of 3 mirrored drives: one partition for the OS, one partition for LOCAL_BACKUP.
The second set of 3 mirrored drives will be DATA only, so those can be raw devices.

That can't be done with the normal install routine without breaking out to a shell, and then it looks like I'd be installing everything manually, so things get complicated fast.

I've heard it's a good idea to always install ZFS into a partition slightly smaller than the actual drive size, in case of a size mismatch when moving everything to a new drive.
 
[...] The second set of 3 mirrored drives will be DATA only, so those can be raw devices.
[...] I've heard it's a good idea to always install ZFS into a partition slightly smaller than the actual drive size, in case of a size mismatch when moving everything to a new drive.
Yes, you can use raw disks; the question is: do you want to.
  1. raw disks are not particularly faster than partitioned disks;
  2. raw disks cannot have nice human-friendly formatted names as labels because those labels are given to partitions, not raw disks;
  3. raw disks cannot have boot partitions, so you'll never be able to boot from them;
  4. the number of sectors of a raw disk is fixed, and thereby brand, model and type specific.
#1 although raw disks are the shortest way to "communicate directly" with the disk sectors, removing as many software layers as possible, I cannot imagine that this will bring you any measurable speed advantage.

#2 might not seem particularly significant at first, but that changes when you have to decide exactly which disk to pick when replacing (or moving) one: for example, when a disk has failed and you have to 1) take it out of the pool and 2) physically remove it. I know you do not have to deal with an array of 24 drives or bigger, but in times of stress picking the right drive is important; picking the wrong drive may even be disastrous.

#3 matters only, of course, when you want or need the option of booting from that pool, but leaving room for that (reserving and/or tailoring the necessary partitions on the various drives) requires little extra disk space; it is a small administrative overhead when you partition the drives.

#4 you have no influence over the size (number of sectors) that ZFS will use from each disk. When you create a pool of raw disks, ZFS will see to it that within a vdev the disk space used is exactly the same size for each individual drive, even when the disks themselves are not equal in size. With disks of the same brand, model and type you have identical disks: no problems and no wasted space at creation. However, when you need to replace a raw disk in such a vdev, the new disk must have the same number of sectors or more. Smaller disks will not be accepted, and one new replacement 4TB drive may very well be a bit smaller than the other 4TB drives in the pool. This may result in a prolonged search for a suitable replacement disk, at a time when your pool is in a degraded state.

You said as much in your last sentence ("into a partition slightly smaller"): what holds for a bootable pool holds just as well for data-only disks. When partitioning the drives you can shave off a reasonable number of megabytes; a replacement disk of the same advertised size ("the label on the box") will then not get you into trouble if it happens to be a little smaller.
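As a sketch of that approach (the sizes and the gpt/data-bay* labels are just illustrative): give each data drive one fixed-size freebsd-zfs partition, slightly smaller than the raw capacity, with a human-readable label, and build the pool from the labels:
Code:
# one slightly-undersized, labelled partition per 4TB data drive
gpart create -s gpt da2
gpart add -t freebsd-zfs -s 3900G -l data-bay1 da2
# repeat for the other members (data-bay2, data-bay3), then:
zpool create data mirror gpt/data-bay1 gpt/data-bay2 gpt/data-bay3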

Think about this and decide if you really want raw disks.
 
Think about this and decide if you really want raw disks.

I don't want raw disks. Thank you for your excellent input. You just made up my mind.

So I have my desired drives and partitions; the next step is figuring out how to make it happen. The FreeBSD installer won't do this without (at the very least) escaping to the command line, and I doubt it will do it from there. Looks like I may have to escape to the command line and then complete the whole install using only the command line.

Has anybody done this and developed a sequence of commands? Something like that should be easy to modify for anyone having the same basic need.

I'll get to work and see what I can figure out.

Maybe we should share this when we get it figured out.
 
When partitioning the drives you can shave off a reasonable number of megabytes; a replacement disk of the same advertised size ("the label on the box") will then not get you into trouble if it happens to be a little smaller.

Any idea what percentage of total space to allow for "different size disks"?
I've heard 5%, but that might be a bit much? (4TB x 5% = 200GB)
 
Currently trying to figure out how to create two partitions on the first set of 3 mirrored drives: one partition for the OS, one partition for LOCAL_BACKUP.
The second set of 3 mirrored drives will be DATA only, so those can be raw devices.
I honestly don't understand this part of your setup. What's the point of the OS and backup partitions? If your goal is to reserve capacity for backups or impose limits on how much space some filesystem can take up, you can accomplish that with ZFS reservations and quotas.

I would create two mirror vdevs with three drives each*, and add them both to a single zpool. I would then create OS, backup, and data filesystems in that single pool.
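A rough sketch of what I mean; the pool name, dataset names and the reservation/quota figures are made up, not recommendations:
Code:
# one pool made of two 3-way mirror vdevs
zpool create tank mirror da0 da1 da2 mirror da3 da4 da5
# one filesystem per role
zfs create tank/os
zfs create tank/backup
zfs create tank/data
# guarantee space for backups and cap how much they may grow
zfs set reservation=500G tank/backup
zfs set quota=1T tank/backup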
That can't be done with the normal install routine without breaking out to a shell, and then it looks like I'd be installing everything manually, so things get complicated fast.
Yeah, the installer can really only handle simple setups in my experience.
I've heard it's a good idea to always install ZFS into a partition slightly smaller than the actual drive size, in case of a size mismatch when moving everything to a new drive.
Sounds like it might be a good idea in theory. I've never had occasion to use this in practice. The replacement disk(s) have always been far bigger than the replaced disk.

* This is more robust than the setups I usually do for home systems. I'm following what you said in post #67
 
I would create two mirror vdevs with three drives each*
Two drives is the minimum for a mirror. One more drive = cheap insurance. One drive fails, I have a safety net while replacing. Otherwise, I'm immediately at risk while replacing.
* This is more robust than the setups I usually do for home systems.
This is for a small business.
I honestly don't understand this part of your setup. What's the point of the OS and backup partitions? If your goal is to reserve capacity for backups or impose limits on how much space some filesystem can take up, you can accomplish that with ZFS reservations and quotas.
Correct me if wrong.
With my setup, if I lose the wrong three drives, then I lose either the DATA drives or the OS / LOCAL_BACKUP drives, but not both.
If I make the pair of 3-way mirrors one pool, then if I lose the wrong three drives, I lose everything. Also, wouldn't all the files be scattered across two sets of drives, rather than stored on one set of drives? With one set of drives, each drive is complete in itself; with two sets of drives, no drive is complete in itself?

Again, correct me if wrong.
The point of the OS and LOCAL_BACKUP partitions:
I want a LOCAL_BACKUP drive, on a separate drive, for things like ZFS snapshots/clones/etc., DB server dumps, incremental backups, and whatever other need arises.
General consensus was that it's always best to have the OS on a separate drive.
I was going to have a pair of mirrored drives for OS, but decided (see other posts) that combining OS and LOCAL_BACKUP on one drive would eliminate two (of my 8) hot swap slots
for whatever drive replacement routines I needed to implement.
 
Two drives is the minimum for a mirror. One more drive = cheap insurance. One drive fails, I have a safety net while replacing. Otherwise, I'm immediately at risk while replacing.
Fair enough, but I think this is usually done using online spare drives.*
Correct me if wrong.
With my setup, if I lose the wrong three drives, then I lose either the DATA drives or the OS / LOCAL_BACKUP drives, but not both.
If I make the pair of 3-way mirrors one pool, then if I lose the wrong three drives, I lose everything.
Correct. However, in your setup you will still lose a zpool if you lose the wrong three drives. I wouldn't expect the data pool to be luckier than the other pool. With my setup and some luck you could lose four drives and still be smiling.
Also, wouldn't all the files be scattered across two sets of drives, rather than stored on one set of drives?
Why would you care?
With one set of drives, each drive is complete in itself; with two sets of drives, no drive is complete in itself?
I don't understand this. Edit: OK, I think I understand now. You're thinking you can take a zpool out and put it somewhere else? I'm not sure what would be the point. Also, it seems to me transferring the OS and/or backup filesystem using zfs send/receive would be a whole lot easier.
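Something along these lines, as a sketch (the pool/dataset names, the snapshot name and backuphost are hypothetical):
Code:
# snapshot the OS filesystem tree and replicate it elsewhere
zfs snapshot -r zroot/ROOT@migrate
zfs send -R zroot/ROOT@migrate | zfs receive -u newpool/ROOT
# or push it to another machine over ssh
zfs send -R zroot/ROOT@migrate | ssh backuphost zfs receive -u tank/ROOT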
Again, correct me if wrong.
The point of the OS and LOCAL_BACKUP partitions:
I want a LOCAL_BACKUP drive, on a separate drive, for things like ZFS snapshots/clones/etc., DB server dumps, incremental backups, and whatever other need arises.
ZFS snapshots live in the same pool as the filesystem they're a snapshot of, and therefore have the same reliability guarantees. Dumps, backups, etc., are just bytes on a filesystem at the end of the day. Their reliability depends only on the zpool that contains the filesystem.
General consensus was that it's always best to have the OS on a separate drive.
I dunno about that. Definitely on its own filesystem (ZFS) or partition (everything else).
I was going to have a pair of mirrored drives for OS, but decided (see other posts) that combining OS and LOCAL_BACKUP on one drive would eliminate two (of my 8) hot swap slots for whatever drive replacement routines I needed to implement.
I'm not sure I follow this either.

In any case, I don't see where you're going to put your UEFI partition. It is possible to BIOS boot from a GPT partition. Is that your plan? Also, do you plan on having swap on this array?
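For reference, a bare sketch of a GPT layout per OS disk that leaves room for UEFI boot, legacy BIOS boot and swap (sizes and labels are only examples):
Code:
gpart create -s gpt da0
gpart add -t efi -s 260M -l efi0 da0            # UEFI system partition
gpart add -t freebsd-boot -s 512K -l boot0 da0  # legacy gptzfsboot code
gpart add -t freebsd-swap -s 8G -l swap0 da0
gpart add -t freebsd-zfs -l os0 da0             # rest of the disk for ZFS
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 2 da0  # BIOS boot path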

* Edit 2: I'm not a fan of this approach. You're hoping the spare drives that have been sitting there spinning but unused possibly for years, are good and ready for an intense write load when resilvering happens.

I prefer to bake the reliability guarantees into the RAID level I'm using (i.e., RAIDZn). In my experience, drives usually fail either when they're very new or very old. Do some burn-in when you create the array to make sure no drives fall into the first category, and you'll probably have years of trouble-free operation. I do tend to replace my drives preemptively after 2-4 years. I learned to do this the hard way.
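One way to do that burn-in, as a sketch (da0 is a placeholder; sysutils/smartmontools provides smartctl):
Code:
smartctl -t long /dev/da0    # start the drive's built-in long self-test
smartctl -a /dev/da0         # check results and error counters afterwards
dd if=/dev/zero of=/dev/da0 bs=1m    # optional full write pass (destroys everything on da0!)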

Please keep in mind I've only ever set up smallish systems for my own home use. I'm a weak-minded Java programmer by day, and an amateur FreeBSD sysop by night.
 
* Edit 2: I'm not a fan of this approach. You're hoping the spare drives that have been sitting there spinning but unused possibly for years, are good and ready for an intense write load when resilvering happens.
I'm talking empty hot-swap slots. Spare drives sitting on the shelf.

Correct. However, in your setup you will still lose a zpool if you lose the wrong three drives. I wouldn't expect the data pool to be luckier than the other pool. With my setup and some luck you could lose four drives and still be smiling.
Not following you here. If you have 2 sets of 3-way mirrors, you can lose 2 drives from each mirror, but if you lose 3 drives from one mirror, you lose everything.
Data is backed up in a mixture (yet to be determined) of cloud, tape, external hard drives, etc. The LOCAL_BACKUP is not the main and final backup, but only a local means of doing miscellaneous backups. Also, LOCAL_BACKUP can be used for backups of the system, which are then backed up to an external hard drive, tape or cloud, so that when you do a "compare" it doesn't say things don't match just because the LIVE data set you would otherwise have used has changed. I don't know all the answers, but I see this setup as a good way to experiment with various solutions.
Dumps, backups, etc., are just bytes on a filesystem at the end of the day. Their reliability depends only on the zpool that contains the filesystem.
Added reliability comes from being on a second set of 3-way mirrored drives. All original data exists (obviously) on DATA (one 3-way mirror). If a backup of that same data also exists on LOCAL_BACKUP (a different 3-way mirror), I would have to lose all 6 drives to lose that data.

LOCAL_BACKUP is not for final backups (that would be foolish).
I'm not sure I follow this either.
If interested, please read the previous 4 pages of posts. I am interested in your feedback.
I prefer to bake the reliability guarantees into the RAID level I'm using (i.e., RAIDZn).
I've thought long and hard about this. Problem with RAID is you never have one complete set of data on one drive. Everything is scattered across drives.
It's slower than a mirror.
It's harder to resilver a drive than with a mirror.
Takes more computing power (cpu and hard drive/head movement/etc.)
Overall "sexier" than plain-old-boring mirror (simple duplication) but I'm not sure it best fits my use case, being that its a small business server, and not a multi-terrabyte behemouth running some massive business organization. If I get that big, then things could change, probably to a mixture of mirrors and RAIDZn.
I'm a weak-minded Java programmer by day, and an amateur FreeBSD sysop by night.
We can all learn something from each other.
"iron sharpens iron"
I appreciate your thoughts and feedback.
 
I'm talking empty hot-swap slots. Spare drives sitting on the shelf.
Seems worse to me. You're assuming these brand-new drives will be good when you need them to be, and that's likely to be at a stressful time. Again, I prefer to burn in my drives.

Data is backed up in a mixture (yet to be determined) of cloud, tape, external hard drives, etc. The LOCAL_BACKUP is not the main and final backup, but only a local means of doing miscellaneous backups. Also, LOCAL_BACKUP can be used for backups of the system, which are then backed up to an external hard drive, tape or cloud, so that when you do a "compare" it doesn't say things don't match just because the LIVE data set you would otherwise have used has changed. I don't know all the answers, but I see this setup as a good way to experiment with various solutions.
Still sounds like a ZFS filesystem to me.

Added reliability comes from being on a second set of 3-way mirrored drives. So technically, if something exists on both sets of 3-way mirrored drives, I would have to lose all 6 drives to lose everything. Original data on DATA (3-way mirror). Backup (some form of) on LOCAL_BACKUP (a different 3-way mirror).
Losing the OS should not be a big deal if you have good runbooks. Your backup staging area should be semi-disposable too since you should have at least one copy of what's on there somewhere else. Losing your data is what you should seek to avoid at all costs, and you still have a three drive loss maximum on that pool.

I've thought long and hard about this. Problem with RAID is you never have one complete set of data on one drive. Everything is scattered across drives.
You're hoping to pull out one good drive from the wreckage? Fair enough, but supposing there's only one good drive left after some disaster, you only have a one-in-two chance that it will be a drive from your data pool.

It's slower than a mirror.
Only for reads. I wonder about the write overhead of a three-way mirror like you propose. Most mirror setups I've seen only had two members.

It's harder to resilver a drive than with a mirror.
Not sure about this one. Resilvering a mirror means all data must be copied to the new drive.
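Either way the replacement itself is a one-liner; a sketch with hypothetical pool and device names:
Code:
zpool replace tank da2 da8    # swap failed da2 for new da8
zpool status tank             # watch the resilver progress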

Takes more computing power (cpu and hard drive/head movement/etc.)
Not sure about this one either. I believe the ZFS implementation does clever things in this area. Do you have a reference?
 
Short comments:

I've thought long and hard about this. >> good, it's "your" system.
Problem with RAID is you never have one complete set of data on one drive. Everything is scattered across drives.
>> that isn't an actual problem
It's slower than a mirror. >> probably (most of the time, for reads); for your use case: does that really matter?
It’s harder to resilver a drive than with a mirror. >> I don’t think so.
Takes more computing power (cpu [...] >> Absolutely! And at the same time that hardly ever matters*!
[...] and hard drive/head movement/etc.) >> I really don't think so.

Your case, like most others, means juggling all the possible allocations. So my suggestions may not be 100% satisfactory and could be considered variations on previous suggestions, but decide for yourself. I'm also presuming that the OS part contains only the base install plus root and other management accounts; all user data is independent and stored in the data pool.

You can use three complete drives to form a separate pool with a 3-way mirror for the OS (and other things; these not being "running data"). That is a very safely configured way to do things, IMO, based on what you have written so far about your SMB target environment: overly safe. I think that the data pool is more important than the OS pool; just as Jose mentions. Therefore more redundancy would be required for the data pool. Based on this, I'd say that a pool with a 2-way mirror would suffice for the OS pool. When losing one disk, rebuilding takes about the same time for a 2-way mirror pool as for a 3-way mirror pool. You do not need the speed of a 3-way mirror for the OS pool, speed wise that's overkill, IMO.

Taking this reasoning a step further: for the OS pool you do not need very much space: a 2-way mirror of 2 * 512GB (SATA) SSDs would suffice. Here I'm transitioning from spindles to silicon. If you take a competent set of SSDs then they will be more reliable than spinning platters of rust. The added extra read/write speed is a pleasant side effect; you'll benefit from that when resilvering to a new replacement SSD in case of failure, keeping your vulnerability window with a degraded pool narrower than with spinning platters. Also, in case of absolute disaster when the whole pool must be rebuilt from backups, this will go fast. Added bonus: two or three extra usable physical 3.5" slots in your drive cages, presuming the SSDs can be "tucked away" elsewhere in the system.

This brings me to the last item of the OS pool: your local backups. I don't see a clear basis for that. Perhaps you're thinking of making a quick and efficient backup of the data (perhaps db data) from the data pool, but you're sort of squandering prime online disk space for local backup purposes. With ZFS's snapshots you'll be able to take a snapshot of (part of) a pool. After that you can use that snapshot as the source of your backup; no difficulties with a backup window or with locked files: ZFS has done that already for you.
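A small sketch of that idea (pool, dataset and snapshot names are invented): snapshot first, then back up from the snapshot's read-only view so the live files can keep changing underneath:
Code:
zfs snapshot data/db@nightly
# the snapshot is visible read-only under the dataset's .zfs/snapshot directory
rsync -a /data/db/.zfs/snapshot/nightly/ /backup/db-nightly/
# or stream it to a file, tape or remote host
zfs send data/db@nightly | gzip > /backup/db-nightly.zfs.gz
zfs destroy data/db@nightly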

Next: the data pool. You'll have to make a (business) assessment of how and when a possible expansion might be needed. Start with the number of spindles at your disposal: a 5- or 6-drive RAIDZ2, or, when using 6 or more drives, RAIDZ3 for ultimate redundancy: that is the same level of redundancy as the 3-way mirror option. You'd be using your disk space a lot more efficiently, unless you have a clear use case where speed is that important.

When expansion becomes necessary in the future, you'll have basically two options. First replace each individual drive by a bigger one (say 8TB instead of 4TB). You'll be doing a replacement and resilver action on a disk-by-disk basis. That will probably take several days. The second is adding an extra vdev of new drives to the data pool. When using the same redundancy type that would mean doubling the pool that already exists.
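Both options come down to a handful of commands; a sketch, again with placeholder names:
Code:
# option 1: grow in place, one disk at a time, with bigger drives
zpool set autoexpand=on data
zpool replace data da0 da6    # wait for the resilver, repeat for each member
# option 2: add a second vdev of the same redundancy type
zpool add data raidz2 da6 da7 da8 da9 da10 da11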

While technically your data is indeed distributed, scattered if you prefer, I'm unsure as to why that matters to you in particular and otherwise in any practical way. With a data pool with a 3-way mirror, you only have a space utilisation of 33%. What do you think you could be doing with each separate disk and when would you likely need that? When you expand the pool by adding another 3-way mirrored vdev to the pool, this will not be the case any more. Your data is distributed over the two vdevs and while technically the files on the original vdev with the 3-way mirror are only on that vdev: they won't be of any use as a separate unit anymore once the second 3-way mirror vdev has been added.

While you can reason that there is an advantage to two separate pools for data and OS, in such a relatively small setting that would only be a separation of concerns, which in and of itself might be valid as a matter of personal preference. When combining the two pools you basically have the option of one big RAIDZn pool of 5 or 6 disks or, as Jose mentions, one pool consisting of (a stripe over) two 3-way mirrors. I've found an overview (drawing) of the various options with their available space, space utilisation and redundancy useful.

Finally, to your extra HD on standby on the shelf. That is money paid and not used and, I think, a false sense of security to a certain extent, because that disk, added to the (data) pool, would add extra redundancy, space or speed, depending on how it is deployed. With a shelf spare you only have a slight advantage in case of a failing disk, in the sense that the sysadmin (that would be you alone, I presume, in the SMB use case), when physically at the system, could start the resilvering sooner. When that disk is instead deployed as extra redundancy (RAIDZ3 instead of RAIDZ2) and a disk fails, you're falling back to the same level of redundancy that you would have had with that disk on the shelf and RAIDZ2 deployed. The system only has to endure the extra bit of power while operating.

Weigh your options and make a decision. With your level of preparation and management, you should also be considering the two ZFS (e)books (FreeBSD Development: Books, Papers, Slides); at least the first one. When you're concerned about speed issues: Six Metrics for Measuring ZFS Pool Performance: Part 1 - Part 2 - pdf (2018-2020); by iX Systems

___
* if you’d have a very low specced CPU without any hardware instruction support for the calculations needed, that might be an issue.
 
2-way mirror of 2 * 512GB (SATA) SSDs
Realistically, what minimum size should I consider?
a competent set of SSDs
What would be the criteria for considering an SSD as "competent"?

I see what you're saying.
By using two 3-way mirrored sets, I'm basically duplicating my need for redundancy, with no real benefit.
What you are saying is move OS to SSDs & free up 2 or 3 hot swap slots.
Then go for more drives with RAIDz2 or RAIDz3 to gain my redundancy for everything but the OS.
I have 8 hot-swap slots now, can expand to 12 in future.
I'm thinking I can go with either four drives initially, then add 4 more (= 8 max for now), then add 4 more = 12 max (if I add one more bay).
OR,
go with six drives initially, then 6 more in future (= 12 total, max).
Then place my DATA and LOCAL_BACKUPS (or whatever I end up doing) on that one set of drives, keeping them separate by using pools or datasets.
List and weigh my options, keeping ease of future expansion in mind.

I hadn't considered that once I add a second 3-way mirrored set of drives and assemble all 6 drives into a pool, data would start to spread out across drive sets anyway. If I wanted all data to always be available on one drive, I would be limited to that drive's overall size as the maximum available size for my whole system. Thank you, I wasn't thinking clearly about this. Data across multiple drives is a given for any sizable system.

Thank you Erichans and Jose, you've got me thinking along new lines.
 
Taking this reasoning a step further: for the OS pool you do not need very much space: a 2-way mirror of 2 * 512GB (SATA) SSDs would suffice. Here I'm transitioning from spindles to silicon. If you take a competent set of SSDs then they will be more reliable than spinning platters of rust. The added extra read/write speed is a pleasant side effect; you'll benefit from that when resilvering to a new replacement SSD in case of failure; keeping your vulnerability time window with a degraded pool narrower than with spinning platters. Also, in case of absolute disaster when the whole pool must be rebuilt from backups this will go fast.
This is exactly what I did for my home system. I have two Samsung EVO 250GB SSDs in a gmirror that hosts the OS. I would've used a ZFS mirror if I were to do this again.

Each drive has three partitions, the OS partition which is about 220GB, a swap partition, and a partition that I'm using for the ZFS Intent Log.

Realistically, what minimum size should I consider?
I'm only using 6.3GB out of the 220GB in my OS volume. It's a headless server with just these packages installed:
Code:
databases/postgresql12-server
devel/git
dns/bind916
dns/mDNSResponder_nss
mail/dovecot
mail/postfix
net/netatalk3
net/rsync
ports-mgmt/pkg
security/doas
sysutils/smartmontools
sysutils/tmux
www/dokuwiki
www/nginx
 
The 512 GB value is the result of new drives more and more only starting at 0.5 TB, and 250 GB drives costing more than half the price of a 0.5 TB drive. The shift is moving towards 1 TB as the standard size.
As Jose mentions, you can see that you can easily get by with half of that.

As to competent: my suggestion would be some (semi-)pro SSD, probably MLC/TLC. Don't concentrate on speed; it's not overly important for an OS pool using SATA. I'm no expert in that area and haven't compared reviews either; you'll have to do your own research.
 
Okay, so it's on to the next battle....

I'll have to install from the command line in order to get the custom partitions I need.

I've watched various YouTube videos, which get very complicated. Much too hard for my current pay grade.
Will I need a 4-year degree before attempting to install from the command line?

Does anyone know of a good custom install script that would serve as a starting point,
which could be adapted to my purpose, rather than re-inventing the wheel?

Or, perhaps I should bail on the custom partition idea for now and do a default ZFS auto-partition install (for my O/S drive),
do my build from there, and worry about the custom partitions after gaining in knowledge / experience?

I do need to get some servers running soon, rather than months or years down the road.
 
I would do a basic install first to get your feet wet, hopefully on a system with at least two drives. Ignore one of the drives during setup so you can use it to experiment with partitioning, ZFS, etc. after you have a basic system installed.

FreeBSD system administration is done mostly at the command line. It's one of the things I like about it.
 
Here is the solution to custom / command line install with partitioning of your choice:

1. Do a new install, selecting the options closest to your desired final install. (In my case that is "ZFS auto" partitioning; I can select most of the items I need, and only the partition sizes (default use of the full drive size) are wrong.)

2. Inspect contents of the FreeBSD install log at: /var/log/bsdinstall_log. Find the commands where the drive partitions are created and mirroring (if any) was set up.

3. Find examples of install scripts.
- See "Chapter 11: Complex Installation" in book "FreeBSD Mastery: Storage Essentials" by Michael W. Lucas.
- Misc. YouTube videos; search "FreeBSD command line install" or "FreeBSD manual install." They can get complicated fast, I think probably overly and unnecessarily complicated. We'll see.
- Search the forums. One example:
- Etc.

4. Combine #1, #2 and #3 above, modify as needed to create & test a recipe of your own.
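For what it's worth, a very rough outline of what such a recipe tends to look like for a mirrored ZFS root; this is a sketch only, not a tested script, the pool/dataset/label names are placeholders, and details like boot code, swap and users are left out:
Code:
# partition each OS disk first (gpart, as in the bsdinstall log), then:
zpool create -o altroot=/mnt -m none zroot mirror gpt/os0 gpt/os1
zfs create -o mountpoint=none zroot/ROOT
zfs create -o mountpoint=/ zroot/ROOT/default
zpool set bootfs=zroot/ROOT/default zroot
# unpack the base system from the install media
tar -xpf /usr/freebsd-dist/base.txz -C /mnt
tar -xpf /usr/freebsd-dist/kernel.txz -C /mnt
# minimal config so the system boots and mounts the pool
echo 'zfs_load="YES"' >> /mnt/boot/loader.conf
echo 'zfs_enable="YES"' >> /mnt/etc/rc.conf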
 
I would do a basic install first to get your feet wet, hopefully on a system with at least two drives. Ignore one of the drives during setup so you can use it to experiment with partitioning, ZFS, etc. after you have a basic system installed.

I think this is the best idea to get started.
Install with the provided installer, using ZFS auto partitioning, onto (2 or 3) mirrored (250GB or 500GB) SSDs, O/S only, using the default full size of the SSD.
Mess around with partitioning / pools / etc to set up "DATA" and "LOCAL_BACKUP" on second set of drives.
Get a basic test server running ASAP, with file sharing and backup only. Install on-site for testing in a real-world environment. Especially curious about ZFS file read/write performance, backups, and using it with Samba.
I can continue on a second FreeBSD testing server at my location, over the summer. Nobody cares what I do with that one.
Build out the 3 production servers (with more advanced features) after the summer.
 
These are my notes from when I installed my server a couple of years ago:

ZFS

  • Remove any existing GPT partitions with gpart destroy -F before adding to the pool. Annoying boot messages about a corrupt GPT will happen otherwise

Pool and dataset creation

# Create a RAIDZ2 pool from six whole disks
zpool create zfspool raidz2 da0 da1 da2 da3 da4 da5
# Add a partition on the SSD gmirror as a separate ZFS intent log (SLOG)
zpool add zfspool log /dev/mirror/gm0s1d
# Move home directories onto the pool and symlink the old locations to it
zfs create zfspool/home
cp -rp /home/* /zfspool/home
rm -rf /home /usr/home
ln -s /zfspool/home /home
ln -s /zfspool/home /usr/home
# Additional datasets (zfs create takes one dataset per invocation)
zfs create zfspool/temp
zfs create zfspool/video
zfs create zfspool/postgres
zfs create zfspool/tmachine

Misc

zpool status



These are for the gmirror creation, and maybe not so applicable for your use case:

GEOM mirror setup


  • Boots from a GEOM mirror (two Samsung EVO 850 250GB SSDs). Most of the config is from here:
https://www.freebsd.org/doc/handbook/geom-mirror.html

  • But remember to enable TRIM on the root filesystem: newfs -t -U /dev/mirror/gm0s1a
  • One gotcha is that the installation suddenly stopped seeing the mirror volumes. I think I needed to have the GEOM system "taste" the mirror again
    Code:
    true > /dev/ada0
    true > /dev/ada1
  • That last tip is from here:
https://www.ateamsystems.com/tech-blog/installing-freebsd-9-gmirror-gpt-partitions-raid-1/
 