Samsung 860 Pros would double the cost of drives for significantly better write endurance but not significantly longer warranty.
For most commercial users (whose time is not free, and whose data is worth far more than the hardware it is stored on), the warranty on drives is irrelevant. If you lose a $200 drive, you can throw it in the trash, or you can spend 10 hours of your time arguing with vendors and manufacturers to get a used $100 drive as a warranty exchange; at $200 per hour for your time, that is not a good investment. What matters is that you didn't lose the $20K worth of data, because you had redundant copies elsewhere. Where warranties matter is either for consumers who have ample spare time (retirees?), or for large companies: if Dell/EMC/HP/IBM/Amazon has accumulated 20 pallets of prematurely failed disks, which have all gone through in-house post-mortem testing, and returns them in a big truck to Seagate or Hitachi or WD, they do get a few million dollars back. And that money is a powerful factor in the next price negotiations with disk vendors.

One time in my professional career I helped organize returning over 1000 drives (a few weeks old, all from a single manufacturing batch, with an insanely high failure rate) to vendor X, and vendor X then gave us about 2x the purchase price for those drives, so we could buy equivalent drives from vendor Y and give them to our customer. Several million dollars changed hands, and I think a VP of quality control at vendor X is now unemployed (sad for him, but deserved).
Next topic: write endurance. Please measure your write traffic, and compare it to the published write endurance of the drives you are considering. Most people will NEVER get anywhere close to the write traffic that causes write endurance problems. On the other hand, if you think you might get there because you really are overwriting data that fast, there are two solutions: either you consider your SSDs to be disposable (short-lived; just replace them when the write endurance is exceeded; treat them as a consumable), or you go for enterprise SSDs (where you pay a huge premium for not having to replace them).
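To make that concrete, here is a back-of-the-envelope sketch in Python; every number in it is an assumption you should replace with your own measured traffic and the TBW figure from your drive's data sheet:

    rated_tbw = 600            # drive's rated endurance in TB written (check the data sheet)
    daily_writes_gb = 50       # your measured write traffic in GB/day (assumption)
    write_amplification = 2.0  # rough guess; real WA depends on workload and drive

    days = rated_tbw * 1000 / (daily_writes_gb * write_amplification)
    print(f"rated endurance exhausted after roughly {days / 365:.1f} years")
    # with these example numbers: ~16 years, i.e. not your problem

If that calculation comes out at 3-5 years or less, that's when the disposable-vs-enterprise decision above actually matters.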
There is a jumper on the motherboard to use them as 4xSATA ports using a mini-SAS cable. I was not planning on using U.2 drives, and I don't really see the point of particularly-fast OS storage in this case (a mirror seems fine).
It's not about speed, it's about convenience. If you were building a hard-disk based server, having the OS and boot on SSDs would still be a good idea, because the machine comes up much faster. And U.2 drives are physically tiny, use little power, and are not very expensive.
There are two viewpoints on whether to mirror the boot/OS drive. One is: don't bother; if it fails, you can just reinstall the OS in a few hours. Nothing on the OS disk is valuable; it only costs time. The problem with this theory is that in reality the OS install never goes quite that flawlessly, unless you have OS backups on other media (with local customizations!), or you are really organized about recording all local customizations so they are easy to redo. The other viewpoint is: mirror them, because then if one fails you don't have to waste time reinstalling; you just buy a new drive and re-mirror. Clearly, the balance between these viewpoints depends on the cost of your time versus the cost of downtime.
I've also read multiple articles saying that other than obvious exceptions, the number of drives in a vdev isn't a big deal. Is that not the case?
It is a giant deal.
Let's start with data reliability. You need some redundancy, meaning at least 2 copies of the data. With the size of modern drives, as ondra_knezour already said, single-fault tolerance is no longer sufficient, since the probability of finding a second fault while repairing the first dead drive is now high. So you need at least 3 copies, or the equivalent redundancy spread over more drives (using parity-based codes, such as RAID-Z2 and Z3). Personally, I'm actually still running with just a 2-way mirror, but I also have ZFS, which is really good at recovering after a double fault (it will typically destroy only one file, not the whole RAID array), and I have another 2 copies in backups, one of which is never older than two hours, and one of which is off-site for disaster recovery. For a server with good reliability, you need at least 3 drives.
Now, if you have exactly 3 drives, you will be storing 3 identical copies of the data (mirroring), and your efficiency overhead will be 200% (for every byte stored, you have another 2 bytes of redundancy). If you have 12 drives and use RAID-Z2, then it will store 10 drives' worth of capacity (the extra two are redundancy), and your overhead is just 20%. So more drives gives you better space efficiency, at the same redundancy.
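In code, that comparison is a one-liner (a sketch; it ignores ZFS metadata, padding, and allocation overhead):

    def overhead(data_drives, parity_drives):
        """Bytes of redundancy stored per byte of user data."""
        return parity_drives / data_drives

    print(overhead(1, 2))   # 3-way mirror: 2.0 -> 200% overhead
    print(overhead(10, 2))  # 12-drive RAID-Z2: 0.2 -> 20% overhead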
But: more drives also increases the probability that you will have drive failures. Clearly, 12 drives will have a failure about 4x more often than just 3 drives (but they will have individual read errors at the same rate, since that rate is per byte, not per drive). At 2-fault tolerance, drive failure no longer dominates data reliability, but every drive failure is a big hassle (you need to temporarily survive with less redundancy, identify and remove the bad drive, add a new one, and do a rebuild). All these processes involve humans, who are error prone, and humans are the greatest cause of data loss. So having fewer drives is good, both in saved effort and in reliability.
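If you want to put a rough number on the "second failure during repair" risk, here is a sketch; the failure rate and rebuild window are assumptions you should adjust:

    afr = 0.05        # annualized failure rate per drive (assumption)
    rebuild_days = 3  # time spent at reduced redundancy (assumption)
    survivors = 11    # remaining drives in a 12-drive vdev

    p_one = afr * rebuild_days / 365      # chance a given survivor dies in the window
    p_any = 1 - (1 - p_one) ** survivors  # chance at least one more dies
    print(f"~{p_any:.2%} chance of a second failure during one rebuild")
    # ~0.45% with these numbers; small per incident, but it compounds over
    # years of rebuilds, and it ignores correlated failures from shared batches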
The other side is the performance argument. Each hard disk is typically capable of 100 MByte/s and 100 random seeks/s (a.k.a. IOps). In a redundant system, all writes have a performance cost (for example, with 3-way mirroring writes cost 3x more, and with RAID-Z2 over 12 drives they cost 20% more), which does not apply to reads. And small updates in place (which ZFS doesn't do anyway, being copy-on-write) have an even higher write cost. Still, more drives in parallel run faster. With 3 drives, your read/write speed will probably top out at 300/100 MB/s; with 12 drives, it's probably 1200/1000 MB/s (assuming the rest of the system is capable of it, which is very hard to predict, and even harder to achieve in the real world).
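The same arithmetic as a sketch: reads can be striped across all drives, while writes only deliver the data-drive fraction of the total. The 100 MB/s per disk is the ballpark figure from above; real throughput varies a lot.

    def streaming_mb_s(total_drives, data_drives):
        per_disk = 100  # MB/s for large sequential I/O (assumption)
        return total_drives * per_disk, data_drives * per_disk  # (read, write)

    print(streaming_mb_s(3, 1))    # 3-way mirror:     (300, 100)
    print(streaming_mb_s(12, 10))  # 12-drive RAID-Z2: (1200, 1000)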
BUT: Do you have any workload that really needs that speed? More on that below.
My personal experience with SSDs is ~12 disks since around 2006 and a few brands; it shows no differences between brands, and generally better reliability than HDDs.
Better than HDD, yes. Good, no. I've seen first-hand and heard too many horror stories about SSDs that brick themselves, or lose data. Still, HDDs fail so often that by comparison SSDs look good. At one point, I was working with one installation that was replacing on average 5 spinning disks per week, and each replacement was "a bit" of work.
I absolutely cannot afford enterprise SSDs. That price jump is ridiculous. If it's not viable to use consumer SSDs in a NAS, ...
Well, they are expensive for a reason. They have better write endurance (which means more internal flash and better-grade flash chips; there is a complicated science to MLC) and much better quality control. And that doesn't just mean an extra hour on the burn-in stand before shipping; it also means, for example, much better auditing of their internal firmware (which runs to zillions of lines). You pay for that.
Personally, if you need SSD speed or want SSD noise/power, I would go with good brand consumer SSDs (Crucial, Intel, Samsung, many others), and keep the redundancy up. Maybe even with a mix of drives from different brands, so common firmware faults are not a single point of failure.
Can a 3-way mirror of 10TB disks over 10Gb ethernet rival the performance (for network boot) of a SATA SSD? If it can, I'll consider it.
The first step in performance is: You need to specify what you need. Not what you want, but what you need to operate. Ideally, you should put a $ figure on how much extra performance is worth to you.
For example: You seem to think that the 10Gb network will be the bottleneck. That means your server needs to deliver roughly 1GByte/second. With spinning disks, that requires 10 disks, and good system tuning. With SSDs, the answer is trickier, I think real-world delivered bandwidth for large reads on SATA SSDs is about 300-500 MByte/s each, so you need 2-3 SSDs to accomplish that. But do you actually have any system that is capable of consuming a GByte/second sustained? Or do you have any tasks where dropping the speed to a mere 100 MByte/s (one tenth of your goal) will be a significant slowdown? Are your clients really all connected via 10gig? Or do you have many clients running in parallel (which opens another whole can of worms)?
All performance engineering has to start with "speeds and feeds": what's the speed of your devices, and how fast can your users feed or consume data. If you really will lose money by having less than 1 GByte/s, then you need roughly 10-12 spinning disks.
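As a sketch of that sizing arithmetic (the per-device speeds are the assumed real-world rates from the discussion above):

    import math

    target_mb_s = 1000  # ~1 GByte/s, enough to fill a 10Gb link
    per_device_mb_s = {"spinning disk": 100, "SATA SSD": 400}  # assumptions

    for device, speed in per_device_mb_s.items():
        print(f"{device}: {math.ceil(target_mb_s / speed)} needed, before tuning losses")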
There are a lot of people using those Norco cases with better fans with good results. I'm open to advice about a redundant power supply, ideally under $500.
Redundant fans are good; fans will fail. Use, for example, a good-quality push fan plus another pull fan. Then put in some monitoring of fan rotation if possible, and/or some monitoring of disk temperature (for example with smartctl). If the disks get significantly warmer than 40 deg C, raise alarms.
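As an example of the kind of monitoring I mean, here is a minimal temperature watchdog built on smartctl. The device names and the threshold are assumptions, and it expects the classic ATA attribute table that smartctl -A prints; adapt it to your drives.

    import subprocess

    DEVICES = ["/dev/sda", "/dev/sdb"]  # your drives (assumption)
    LIMIT_C = 40

    for dev in DEVICES:
        out = subprocess.run(["smartctl", "-A", dev],
                             capture_output=True, text=True).stdout
        for line in out.splitlines():
            if "Temperature_Celsius" in line:
                temp = int(line.split()[9])  # RAW_VALUE column of the attribute table
                if temp > LIMIT_C:
                    print(f"ALARM: {dev} is at {temp} C")

Run it from cron every few minutes and mail yourself the output, and you have most of what a fancy monitoring system would give you.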
Redundant power supplies are, I think, overkill for amateurs. They really help if your data center is wired with redundant power distribution and redundant power sources: then the electrician can work on one breaker box, or the hydro-electric source can go offline during a drought while the nuclear plant keeps supplying power. Most households are not wired that way (data centers often are). And in my personal experience, power supplies themselves are quite reliable, and being able to hot-swap them is rarely useful.
It would be possible to use a 3-disk mirror of 2TB SSDs for the network boot, a 3-disk mirror for server boot, and then use HDDs for other bulk storage.
The ideal solution is to segregate your data. For each file, decide whether it needs to be on fast storage, on reliable storage, and/or on cheap storage. Make the good/fast/cheap tradeoff every single time. Then aggregate your files into "storage classes", and provision a different type of storage for each (like non-redundant fast SSD for OS boot and a few high-frequency temp files, redundant high-quality SSD for valuable and frequently read files, high-quality disk for bandwidth-intensive but not IO-intensive files, and finally redundant but cheap disks with frequent maintenance for archival storage). For amateurs, this is impractical on a file-by-file basis, but you can do it coarsely.
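One way to write down the coarse version is a simple table of classes; all the names and layouts here are illustrative examples, not recommendations:

    STORAGE_CLASSES = {
        "os/boot":        ("fast SSD, mirror optional",   "reinstallable"),
        "hot valuable":   ("good SSD, 3-way mirror",      "backed up hourly"),
        "bulk bandwidth": ("HDD, RAID-Z2",                "backed up daily"),
        "archive":        ("cheap HDD, RAID-Z2 + scrubs", "off-site copy"),
    }

    for cls, (storage, recovery) in STORAGE_CLASSES.items():
        print(f"{cls:15} -> {storage:28} ({recovery})")

The point is not the code, it's the discipline: every dataset you create should land in one of a handful of classes you decided on in advance.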