I'm just going to collect a few small replies to minor points that may have been ignored earlier.
Could I add a card?
Do you know of any non-SCSI SATA cards so I can add another drive?
(I've heard that ZFS does NOT like hardware SCSI. Even though it could be disabled,
it's best not to have a card with SCSI, if possible)???
On the contrary. ZFS is perfectly fine with SCSI cards. There are lots of "large" systems around, running FreeBSD, using LSI/Avago/Broadcom SAS cards, with dozens of disks. You can also use one of those cards to connect extra SATA disks (nearly all SAS/SCSI cards can handle SATA drives), and plain multi-port SATA cards exist as well.
On the other hand: with 6 ports on the motherboard, I really don't think more disk-drive ports will be needed, particularly given modern disk capacities.
Current data size: Approx. 300GB on each server.
That's very small. Modern disk drives typically come in sizes such as 16 or 20 TB. If you were, for example, to buy four of those drives and use them in a RAID-Z2 layout (which can tolerate two failures), you would have 40 TB of usable disk space, and your file system would be less than 1% full. Even if you buy small, inexpensive disks (I think the sweet spot for new, not used, drives may be around 4 TB), capacity is not a problem for the foreseeable future. So you don't need a huge number of drives; you only need several because you want redundancy.
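Purely to make that arithmetic concrete, a minimal sketch; the pool name and the device names (ada0 through ada3) are placeholders, not anything from this thread:

```sh
# Four hypothetical 20 TB drives in RAID-Z2: two disks' worth of parity,
# so usable space is roughly (4 - 2) * 20 TB = 40 TB.
zpool create tank raidz2 ada0 ada1 ada2 ada3

zpool list tank   # raw pool size
zfs list tank     # usable space as the file system sees it
```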
For the price of an LTO tape drive you could buy a bunch of 500 GB USB disks for off-site rotation.
I'm planning on building a backup routine using
External H.D's in quick-swap drive enclosures.
Internal SCSI Tape Drive
External disks in a professional setting? Risky. Now you're assuming regular visits to the site, you're relying on disks that are transported and thrown around, and you're relying on cables that are plugged and unplugged. I know it can be done, but I would try to avoid it. The good news about this approach is that capacity is really cheap: put a 20 TB drive into an external enclosure with a good interface (USB 3 or eSATA), and for less than $1000 you have an enormous amount of backup capacity.
Tapes? That's all the downsides of external disks, and then some. Again you need site visits. The reliability of tape drives is ... not great. On paper they look wonderful, but they have a nasty habit of failing in the real world. If I had to rely on tapes, I would (a) use enterprise-grade drives (3480/3490/3590 style, perhaps LTO if you can tolerate the risk, definitely not small cartridges), and (b) write redundant tapes. But look at the cost of drives and media: last I checked, a good LTO-8 or LTO-9 drive plus a 20-pack of cartridges brings you to roughly $5K to $10K. For that money, you can get a lot of other hardware.
If you have reasonable bandwidth, backing up to the cloud seems like the best option.
Here's an idea for a hybrid: set up most of your data disks to be 2-fault tolerant (for example 4 disks, with big data partitions on them arranged as RAID-Z2). Also put a small backup partition on each drive, and combine those into a non-redundant backup pool (you get more capacity out of them that way). Use that backup pool for a first-level backup, then copy those backups over the network to offsite/cloud storage. That gives you relatively cheap, effectively unlimited capacity in the cloud, rapid access to a local backup (even when the network is slow or down), and disaster recovery: if something destroys the whole server, you still have a (slightly older) offsite copy.
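A very rough sketch of that hybrid, assuming four disks; every device name, size, pool name, and the offsite destination below is invented for illustration:

```sh
# Partition each disk (shown for ada0; repeat for ada1..ada3 with labels
# data1..data3 and bak1..bak3):
gpart create -s gpt ada0
gpart add -t freebsd-zfs -s 18T -l data0 ada0   # big data partition
gpart add -t freebsd-zfs        -l bak0  ada0   # remainder for backups

# Two-fault-tolerant pool over the big partitions:
zpool create data raidz2 gpt/data0 gpt/data1 gpt/data2 gpt/data3

# Non-redundant (striped) pool over the small partitions; less safe,
# but it only ever holds the first-level backup:
zpool create backup gpt/bak0 gpt/bak1 gpt/bak2 gpt/bak3

# First-level backup locally, then ship the same snapshot offsite:
zfs snapshot -r data@nightly
zfs send -R data@nightly | zfs receive -F backup/nightly
zfs send -R data@nightly | ssh offsite-host zfs receive -F tank/server1
```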
Just my opinions:
I like the idea of separate OS and Data drives. Makes upgrading easier.
A fine opinion to have, and I won't disagree with you. As a matter of fact, the moment you start partitioning drives and using the partitions in different ZFS pools, the operator needs to think. For example, if you have four disks, each with 3 partitions (a tiny one for the OS, somewhat redundant; a big one for data, highly redundant; and a small one for backups), then if one physical disk fails you have three slightly sick pools. Orchestrating the disk replacement is perfectly possible, but it requires multiple commands, and not getting anything wrong. Using extra disks just to simplify the system is not a bad idea. A lot of this is a tradeoff: How much training do your operators have? How much will this system be modified in the future? Are you power- or size-constrained?
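To make the "multiple commands" point concrete, replacing one failed physical disk in that three-partition layout might look roughly like this (device names, pool names, and partition indexes are all invented):

```sh
# Clone the partition layout from a surviving disk (ada1) onto the new
# disk (ada4), then fix up labels by hand as needed:
gpart backup ada1 | gpart restore -F ada4

# The dead disk (ada0) was a member of all three pools, so each pool
# needs its own replace, and each will resilver separately:
zpool replace zroot  ada0p1 ada4p1   # tiny OS partition
zpool replace data   ada0p2 ada4p2   # big data partition
zpool replace backup ada0p3 ada4p3   # backup partition

zpool status   # watch all three resilvers finish
```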
Why Mirror boot device? Protection against the boot device physically failing. You may have to be physically present to boot from the other one in the pair, but it should come up and let you replace the failed device.
For a professionally managed system, this is a great idea. For a home system, where your users (in my case spouse and child) can handle a multi-day outage, it's less important.
But one warning: If one of the boot drives fails, you will probably have to be physically present, to convince the BIOS to actually boot. Even worse, I've seen SATA drives that fail so thoroughly, they completely disable the motherboard. So in a failure case, you may have to be physically present to pull disks (one at a time), until the system starts breathing again. Not fun when it happens, but sadly it does.
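For completeness, once the machine is booting again, bringing a replacement boot disk back into the mirror is a sketch along these lines; zroot is the FreeBSD default pool name, and the partition numbers are assumptions about a typical GPT layout:

```sh
# Replace the failed half of the root mirror (placeholder partitions):
zpool replace zroot ada1p3 ada2p3

# The new disk also needs boot code, or the machine can't boot from it.
# Legacy/BIOS boot (the index must point at the freebsd-boot partition):
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada2
# For UEFI, instead copy /boot/loader.efi onto the new disk's EFI system
# partition as /EFI/BOOT/BOOTX64.efi.
```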
In the following post, I'm going to skip all the capacity calculations, but those are clearly important.
I don't want to risk losing any 2 drives *AS LONG AS* they are not in the same VDEV, sounds scary to me.
Modern disk drives are so large that the probability of an unpredicted, uncorrectable single-sector error is becoming significant. And the fastest way to lose data is the following double fault: one drive dies completely (rubber side up, shiny side down). It happens. No problem, you have redundancy, so ZFS will now read a whole disk's worth of data from the other drives to rebuild onto the spare. Unfortunately, during that giant rebuild/read operation, you get a single-sector error. You only lose one sector (one file), but by the "a spoonful of sewage in a barrel of wine gives you a barrel of sewage" theorem, the customer is now (justifiably) pissed off.
To guard against that, for enterprise-grade professionally managed systems, one should really have a system that can tolerate two faults.
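Some rough arithmetic on why (the error rate below is the commonly quoted consumer-drive spec of roughly one unrecoverable read error per 10^14 bits, not a figure from this thread): rebuilding onto a spare after losing one of four 20 TB drives means reading about three full drives' worth of data.

```sh
# Expected unrecoverable read errors while reading 3 x 20 TB = 60 TB
# at a spec'd rate of ~1 per 1e14 bits:
echo "scale=1; (3 * 20 * 8 * 10^12) / 10^14" | bc
# => 4.8 -- so during a big rebuild a second (single-sector) fault is not
#    unlikely, which is the argument for two-fault tolerance.
```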
1.) I've heard about keeping the applications separate from the operating system, but it sounded like there was no clean or simple way to do that short of major surgery by a tech with greater knowledge and ability than what I currently possess.
If by applications you mean "packages and ports": Those go into /usr/local. You could theoretically create separate file systems (or even pools) for that. In practice, that's probably silly, since they are typically quite small (dozens of GB total). In the past, the tradition was to have many separate file systems (for root, /usr, /usr/local, /var, /var/log and so on); these days pretty much the only splitting of file systems that's still commonly done is OS, user data, and backups.
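If you do want to split things, with ZFS it's just separate datasets (or separate pools); a minimal sketch, with the pool and dataset names invented:

```sh
# The OS stays in the installer-created pool (zroot by default on FreeBSD).
# User data and backups get their own datasets on a separate pool:
zfs create -o mountpoint=/home   data/home
zfs create -o mountpoint=/backup data/backup

# Splitting /usr/local (packages and ports) out is possible too, but as
# noted above it is usually not worth the bother:
#   zfs create zroot/usr/local
```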
Automated provisioning, and configuration management, are ubiquitous at the big end of town. But there's a lot to master if you have never been there. And the real benefits come when you have a large fleet of systems.
If you only have a few near-identical systems to deploy, then keeping meticulous records of everything you do (and aiming to script it) may be a satisfactory mechanism. You start with detailed records of how to install the root.
Completely agree. An automated install with updates and customization is possible, but really hard. For a half dozen systems it will probably not gain you anything; on the contrary, you will spend a lot of time learning the tooling.
And I completely agree with the "keep a record" approach. The way I do this: whenever I do system administration, I have a separate window open, and I type into a file exactly what I did (the files live in /root/ and are named YYYYMMDD.txt). If I type a command, I cut and paste it into there. If I need to explain something, I add comments. That way the resulting file is almost usable as a script, which means re-doing the work (for example on another machine) becomes super easy. It also means that if I lose my OS or want to re-install, I can just work through all these files and repeat the required steps.
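Purely as an illustration, with the contents invented, a day's file might look like this:

```sh
# /root/20240315.txt -- quarterly update on the file server (example only)
# Goal: patch the base system and the packages, then verify the pools.
freebsd-update fetch install        # base system patches
pkg update && pkg upgrade -y        # packages living under /usr/local
# Rebooted afterwards; "zpool status" was clean and all services came back.
```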
The obvious advantage of tape is that, occasionally, you can put one away "for ever". But that's not quite true, as the media will eventually become unreadable unless it goes through a routine refreshment cycle.
Media is also REALLY expensive today. I just looked: an LTO-9 cartridge is over $140. Sure, that gets you 18 TB native (45 TB if you believe the quoted 2.5:1 compression), but for that money you can buy an extra disk drive, which is probably more practical.
If your data volume is only 300 GB, and the data doesn't change fast, then I suspect the expected backup volume will be small and easily handled by things more cost-efficient than tape.