OOPS EDIT: When I first wrote this, I assumed you were going to use four 12TB drives. Now I see that you have 12TB total = four 3TB drives. Quite a few of the numbers below changed because of that.
On the important data, where you said you "need redundancy": How valuable is your data? How big would the damage be if one file, or the whole file system, were gone? How much work would it be to restore from an off-site backup (which you certainly have if your data is actually valuable)? How disruptive would a multi-day outage be while you rebuild your system and restore from backup? Most likely the answer is: you really don't want to take that risk.
Here's why I'm asking: with today's very large disk drives (which you are using), and with the rate of uncorrectable read errors not having improved over time (it is still spec'ed at around one error per 10^15 bits read, plus or minus an order of magnitude, and the real-world rate is considerably worse), single-fault tolerant RAID is not good enough any longer. If one drive fails, you need to read all the other drives in full to rebuild, and the probability of hitting an unrecoverable error somewhere on those three drives is uncomfortably high, meaning the RAID reconstruction itself may fail.
You can do the math yourself: 3 TB x 3 drives (after 1 has failed) x 8 bits/byte x 10^-15 errors/bit = 0.072. That's the expected number of read errors when you have to read everything. Since that number is small compared to 1, it is approximately the probability of at least one error, so you can roughly say: the probability that your 1-fault tolerant RAID will be damaged during a rebuild is about 7% (and about 93% of the time you will survive a drive failure unscathed). For really important data, where loss or a multi-day outage would be painful, you don't want to take a 7% risk. That defeats the purpose of RAID.
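If you want to redo that arithmetic for other drive counts, sizes, or error rates, it's a one-liner (the numbers below are the ones from your setup):

    # Expected unrecoverable read errors while rebuilding after one failure:
    # 3 surviving drives x 3e12 bytes/drive x 8 bits/byte x 1e-15 errors/bit
    awk 'BEGIN { printf "%.3f expected errors\n", 3 * 3e12 * 8 * 1e-15 }'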
The good news is that your valuable data is pretty small: only 1TB out of the 12TB of raw capacity. I would definitely store that part of the data with at least 2-fault tolerance. You could use ZFS RAID-Z2 for that. But all parity- or Reed-Solomon-based encoding schemes suffer from bad performance on small writes (ZFS partly cures that by using append-only logs, but only partly). And since your capacity is large relative to your needs, here is a proposal: store that 1TB 4-way mirrored. That uses 4TB of space out of your 12TB available, about 1/3. In reality, the 1TB estimate might even be too optimistic: if you think you need 1TB, you should probably reserve 2TB, use 4-way mirroring, and there goes your first 8TB of disk space.
That leaves you with 4-8TB of disk space for the "unimportant" stuff. Even that I would not store without redundancy. Why? Not because of the risk of losing the data, but because of the work and hassle of having to recreate it or restore it from backup. I would use RAID-Z1 (single-fault tolerant RAID) for that, which gives you a formatted capacity of 3-6TB. That is enough redundancy that when a disk fails, you can usually (about 93% of the time, per the math above) just put a new disk in and reconstruct everything without data loss. If you get unlucky (the other 7%), then it sucks being you, but for unimportant data that's a good tradeoff.
How to implement this? Sadly, I don't know a way to tell ZFS "take this device, logically partition it into two volumes, put the left one into a 4-way-mirrored pool, and put the right one into a RAID-Z1 pool". It would be cool if ZFS could do that itself and resize the volumes dynamically, but I don't know how to do it (or whether it is even possible). So here is how I would do it: take each of the four raw drives and partition it with gpart, creating a 1-2TB partition (give them GPT labels like "valuable1" through "valuable4") and a partition with the remaining 1-2TB labeled "scratch1" and so on. Then use the zpool command to make two storage pools: build the first one from all four "valuable" partitions as a single mirror vdev (which gives you a 4-way mirror), and configure the second one as RAID-Z1 over the four "scratch" partitions. Then set up your file systems. I would definitely use the GPT labels; it makes management so much easier than wrestling with /dev/ada2p3.
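Concretely, it would look something like this (a sketch, not a tested recipe; the pool names "valuable" and "scratch", the 2TB split, and the ada0..ada3 device names are just examples):

    # Partition one of the four 3TB drives: 2TB for the mirrored pool,
    # the remaining ~1TB for the RAID-Z1 pool.  Repeat for ada1..ada3,
    # using labels valuable2..4 and scratch2..4.
    gpart create -s gpt ada0
    gpart add -t freebsd-zfs -s 2T -l valuable1 ada0
    gpart add -t freebsd-zfs -l scratch1 ada0

    # One 4-way mirrored pool for the valuable data...
    zpool create valuable mirror gpt/valuable1 gpt/valuable2 gpt/valuable3 gpt/valuable4

    # ...and one single-fault tolerant RAID-Z1 pool for the scratch space.
    zpool create scratch raidz1 gpt/scratch1 gpt/scratch2 gpt/scratch3 gpt/scratch4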
Lastly, the SSD. Using it as a boot drive is a great idea; it makes booting really fast. But be aware: you have NO redundancy here! On the other hand, there is no valuable data on the boot drive either. Still, if your SSD fails, your computer will be down for many hours (perhaps days) while you drive to the store, get a new SSD, and reinstall the OS. And reinstalling the OS from scratch and getting all the little tuning and configuration right takes a long time (been there, done that; it's tedious).

Now, does this mean you should buy a second SSD right away? I think that's a waste of money for a typical home user. Here is my proposal instead: have a cron job that once a day makes a full backup of the boot SSD onto your scratch ZFS file system (a sketch of such a job is below). Then if the SSD dies, you need to (a) buy a replacement, (b) temporarily boot from a USB stick or DVD, (c) copy the backup to the new SSD, (d) do some minor tweaking to make the new SSD bootable, and (e) go have a drink, because the system is back up and running after just an hour or two of work.

Many other variations are possible. For example, you could reserve a small amount of space on the four big drives for a backup bootable system, and update it regularly. Then if your SSD dies, you just take it out and temporarily boot from hard disk until the replacement SSD arrives in the mail. That is probably easiest if you use root-on-ZFS and update the backup with snapshots and copies.
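The daily backup job could be as simple as this (again a sketch; it assumes the RAID-Z1 pool from above is mounted at /scratch, that a bootbackup directory or dataset exists there, and that rsync is installed from ports; dump(8) or tar would work just as well):

    # /etc/crontab: every night at 03:30, copy the boot SSD's file system
    # to the scratch pool.  --one-file-system keeps rsync from descending
    # into the very ZFS mounts it is backing up to.
    30   3   *   *   *   root   rsync -a --delete --one-file-system / /scratch/bootbackup/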
As you said, most likely your root/boot file system will not fill the SSD. You can partition it with gpart and use the leftover space as a ZFS cache; there are various ways to use it (a separate intent log, an L2ARC read cache, and so on). Personally, I wouldn't even bother. Why? For a home user, the performance of ZFS on spinning disks is typically adequate. Sure, it will run faster with an SSD cache, but will the extra speed actually buy you real-world happiness? Enough to balance the complexity and extra work of setting it up?
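For completeness, if you do decide you want the cache, it is only two commands (the device name ada4 and the label "cache0" are made-up examples; substitute your SSD):

    # Use the leftover space on the SSD as an L2ARC read cache for the
    # scratch pool.  Losing a cache device never endangers the pool itself,
    # so the lack of redundancy is harmless here.
    gpart add -t freebsd-zfs -l cache0 ada4
    zpool add scratch cache gpt/cache0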
There is also a technical argument against it: SSDs don't like being written to; modern flash chips have remarkably limited write endurance. By using your only (!) boot disk as a ZFS cache as well, you shorten its life and increase the probability that it dies. And as described above, if your boot disk dies, it is a big hassle and will ruin your whole day. Write endurance is not a big effect in practice, but for a home user it tips the convenience-versus-performance tradeoff even further against the cache.
Last two pieces of advice: run smartd and check regularly for early signs of trouble on your disks, and scrub your ZFS pools regularly. I scrub mine every 3 days (which is probably excessive), but an interval of 2-4 weeks seems to be standard practice.
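Both are easy to automate on FreeBSD (a sketch; the test schedule and the 14-day scrub threshold are just example values):

    # /usr/local/etc/smartd.conf (smartmontools from ports): watch all disks,
    # run a short self-test daily at 02:00, and mail root on problems.
    DEVICESCAN -a -m root -s (S/../.././02)

    # /etc/periodic.conf: let periodic(8) start a scrub on each pool when
    # the last scrub is more than 14 days old.
    daily_scrub_zfs_enable="YES"
    daily_scrub_zfs_default_threshold="14"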