A lot of what is said above is correct. I want to add one observation on the relationship between ZFS and ECC DRAM.
It is often said that ZFS "needs", "wants" or "benefits from" ECC. That is partially wrong: ZFS does not keep particularly more data in memory than other file systems do. As a matter of fact, on a typical Unix machine, fundamentally all otherwise unused memory is used as a cache for recently accessed data, so in a nutshell all memory is in use at all times, and is therefore also vulnerable to memory errors. This is true for all Unixes and all file systems, including ZFS on FreeBSD. So from this viewpoint, ZFS doesn't need ECC any more than other file systems do.
All Unix file systems keep data and, more importantly, metadata in memory before it is written to disk, typically for seconds (5 or 30 seconds are common flush intervals). During that time, the data is highly vulnerable to memory errors, because if the in-memory copy is corrupted now, the corrupted data will be written to disk. ZFS is not particularly better or worse in how long it keeps data in memory. One thing that helps ZFS is that it calculates checksums of data very early on, so if the copy in memory is corrupted after that and then written to disk, the next read will flag the corrupted data as invalid because the checksum no longer matches. Which is good (at least we're not operating on wrong data), and bad (you will now get a checksum error, which most people will (mis-)interpret as a disk error).
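The mechanism described above can be sketched in a few lines. This is a toy illustration, not ZFS code: ZFS actually uses fletcher4 or sha256 checksums inside its I/O pipeline, and the variable names here are made up. It shows why a bit flip that happens after the checksum is computed gets detected on the next read, but reported as if the disk were at fault:

```python
import hashlib

def checksum(block: bytes) -> str:
    # ZFS uses fletcher4 or sha256; sha256 stands in here
    return hashlib.sha256(block).hexdigest()

# "Write path": the checksum is computed while the block sits in memory
block = bytearray(b"important data" * 4)
stored_checksum = checksum(bytes(block))

# A memory error flips one bit AFTER the checksum was computed
block[3] ^= 0x01

# Both the corrupted block and the (correct) checksum land on disk
on_disk_block, on_disk_checksum = bytes(block), stored_checksum

# "Read path": verification fails, so the bad data is never returned,
# but the error looks exactly like a disk-level checksum failure
corruption_detected = checksum(on_disk_block) != on_disk_checksum
print(corruption_detected)  # True
```

Note that the flip is caught (good), but nothing in the error tells you the memory, not the disk, was the culprit (bad).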
I don't think ZFS keeps checksums of all its in-memory data structures at all times. Doing so would be prohibitively expensive, in particular in the free software arena, where too many decisions are driven by (often stupid) benchmarks; all I need to say is "Larabel". Some commercial storage systems do much more extensive checksum protection in memory, not just of data blocks, but also of internal data structures, such as allocation tables; I know of no such technology in the free software world.
The fact that ZFS keeps checksums of data on disk makes it more valuable to have ECC, but that logic is a little difficult to explain: For most storage systems, the biggest source of data corruption and data loss is hardware problems, with the disks themselves (both spinning rust and flash are far from perfect), and with interfaces, including good-quality commercial ones (a colleague used to cut SAS cables that caused CRC errors in half with wire cutters, to make sure they didn't get reused). To prevent data loss, you use RAID-like technologies, which ultimately rely on redundancy (trading more storage use for higher durability), and ZFS does a good job supporting that. That leaves silent data corruption as the next biggest problem. ZFS uses checksums to guard against most of these problems; most other file systems do not use checksums at all. Therefore, on ZFS the largest remaining source of data corruption is memory problems; other file systems are still dominated by disk and interface hardware problems. Therefore, on ZFS it is RELATIVELY (not absolutely!) more valuable to invest in memory protection, once you have invested in disk redundancy, and that gives you a much better hardened system.
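The redundancy trade-off mentioned above (more storage for higher durability) can be shown with a toy XOR parity scheme, which is the principle behind RAID-5-style recovery; real RAIDZ is considerably more involved, and everything below is an illustration only:

```python
# Toy RAID-5-style parity: one extra block buys the ability to
# survive the loss of any single data block.
data = [b"AAAA", b"BBBB", b"CCCC"]

# Parity is the byte-wise XOR of all data blocks
parity = bytes(a ^ b ^ c for a, b, c in zip(*data))

# Simulate losing the middle block, then rebuild it by XORing
# the surviving blocks with the parity block
lost = data[1]
rebuilt = bytes(a ^ p ^ c for a, p, c in zip(data[0], parity, data[2]))
print(rebuilt == lost)  # True
```

The cost is visible directly: four extra bytes of parity protect twelve bytes of data, a 33% overhead in exchange for tolerating one lost block.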
I hope the above explanation is at least slightly clear. And having said that: Do as I say, not as I do: my personal server at home does NOT have ECC memory. I would like it to, and on the next motherboard upgrade it's going to happen, but last time I bought a new motherboard, other concerns made ECC impractical.