There are various statements in this post that are wrong, yet contain a grain of truth:
RAID 5 (neither hardware nor software RAIDZ1 included) should not be used in production, period.
What is true: with modern very large drives and their uncorrectable read error rates, any RAID scheme that is only single-fault tolerant (which includes RAID 5, ZFS RAIDZ1, and simple mirroring) is insufficient on its own if you want better than two nines of data reliability. That doesn't mean it shouldn't be used in production, only that systems which do require higher reliability need to combine it with other measures. And not everything needs such high standards; for quite a few uses, single-fault-tolerant RAID (and even non-RAIDed disks) can be used in production.
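To make that concrete, here is a rough back-of-the-envelope sketch of the chance of hitting an unrecoverable read error (URE) while rebuilding a degraded 4-disk RAID 5. The 2 TB drive size and the 1-in-1e14-bits URE rate are illustrative assumptions of mine (typical consumer spec-sheet figures), not numbers from the post:

```python
# Back-of-the-envelope: chance of hitting an unrecoverable read error (URE)
# while rebuilding a single-fault-tolerant array after one disk has failed.
# Assumed figures (not from the post): consumer drives with a quoted URE
# rate of 1 per 1e14 bits read, four 2 TB drives in RAID 5.

URE_PER_BIT = 1e-14          # typical consumer-drive spec-sheet figure
DRIVE_BYTES = 2e12           # 2 TB per drive
SURVIVING_DRIVES = 3         # a 4-disk RAID 5 rebuild must read the other 3 in full

bits_read = SURVIVING_DRIVES * DRIVE_BYTES * 8
p_clean_rebuild = (1 - URE_PER_BIT) ** bits_read

print(f"bits read during rebuild : {bits_read:.2e}")
print(f"P(rebuild without a URE) : {p_clean_rebuild:.2%}")
print(f"P(at least one URE)      : {1 - p_clean_rebuild:.2%}")
```

Under those assumptions the sketch gives roughly a 38% chance of at least one URE during the rebuild, which is the kind of number behind the "single-fault tolerance alone is not enough" argument.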
On top of it, RAID is typically used in industry for high availability.
And for high data reliability. A system that is up and running but returns EIO when you try to access certain files is available, but not reliable. Another very simplified way to look at it: there are two ways for a disk to fail. One is for the disk to stop responding completely (an electronics failure); here RAID helps maintain high availability of the system as a whole. The other is for the disk to remain functional but return read errors for certain sectors; here RAID helps maintain high reliability of the particular file stored on the offending sector.
High availability is not typically needed for home users.
Have you ever had your home server go down just when your teenage child needs to access the web to finish their homework and your spouse needs to print some documents for the meeting they're about to drive to? While home users typically don't need the 5 or 7 nines of availability that can be needed in commercial settings (and not in all of them, by the way), an availability of about 3-4 nines makes life at home much easier.
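For reference, here is what those nines translate into as downtime per year; this is simple arithmetic, not anything from the original thread:

```python
# What "N nines" of availability means in allowed downtime per year.
MINUTES_PER_YEAR = 365.25 * 24 * 60

for nines in (2, 3, 4, 5, 7):
    unavailability = 10.0 ** -nines
    downtime = MINUTES_PER_YEAR * unavailability
    print(f"{nines} nines: about {downtime:8.2f} minutes "
          f"({downtime / 60:6.2f} hours) of downtime per year")
```

Three nines is roughly nine hours of downtime a year; four nines is under an hour, which is a very different experience for the household.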
I am having a hard time imagining who would need a 4-disk RAID 5 in industry these days.
It is still used all the time, for example when a system needs to use local storage (perhaps the network can't handle full-speed access to a storage server) and the data is small enough to fit on three disks' worth of space. In that case, a 4-disk RAID 5 combined with good backups (or snapshots), with the backup/snapshot data moved off-host asynchronously, gives you decent QoS with little network load at a very good cost.
Four 2TB HDDs are going to cost about $240 in the U.S.
Only if you use consumer-grade hard disks. Enterprise-grade hard disks (even high-capacity near-line drives), which have much better MTBF, tend to run a few hundred dollars each.
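To put those figures side by side as cost per usable terabyte of the 4-disk RAID 5 discussed above: the $240 for four consumer 2 TB drives is the quoted post's number, while the $300-per-drive enterprise price below is only an assumed stand-in for "a few hundred dollars each":

```python
# Cost per usable terabyte of a 4-disk RAID 5.
# $240 for the consumer set is the quoted post's figure; $300 per
# enterprise drive is an illustrative assumption, not a quoted price.

DRIVES = 4
DRIVE_TB = 2
USABLE_TB = (DRIVES - 1) * DRIVE_TB   # RAID 5 gives N-1 disks of usable space

for label, total_cost in (("consumer set", 240), ("enterprise set", 300 * DRIVES)):
    print(f"{label:>15}: ${total_cost:>5} total, "
          f"${total_cost / USABLE_TB:.0f} per usable TB")
```

Either way, RAID 5 on four drives keeps the capacity overhead at 25%, which is part of why the layout is still attractive at this scale.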
A new hardware RAID controller is about $700.
In many cases they are de facto free, built into the motherboard (that's true of the better server motherboards); all you need to enable the RAID functionality is to buy the battery. And for add-on cards, an LSI (Broadcom) 4-port internal RAID card (the 9266-4i) can be had at Newegg for $299. Not that I particularly endorse Newegg or LSI, but that sets a price point.
Plus a party should be ready to monitor and replace the RAID battery, which is about $100.
True. But you also need to monitor your disk drives, which have failure rates comparable to that of the RAID battery.
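As a sketch of what that monitoring can look like on a Linux software-RAID box (assuming /proc/mdstat and smartmontools are available; a hardware controller would instead be polled through its vendor's management CLI):

```python
# Minimal monitoring sketch: flag degraded md arrays and failing drives.
# Assumes Linux software RAID (/proc/mdstat) and smartctl on the PATH;
# device names below are placeholders.

import re
import subprocess

def degraded_md_arrays(mdstat_path="/proc/mdstat"):
    """Return md array names whose member map (e.g. [UU_U]) shows a missing disk."""
    degraded, current = [], None
    with open(mdstat_path) as f:
        for line in f:
            if line.startswith("md"):
                current = line.split()[0]          # e.g. "md0"
            m = re.search(r"\[([U_]+)\]\s*$", line)
            if m and current and "_" in m.group(1):
                degraded.append(current)
    return degraded

def drive_passes_smart(device):
    """Run 'smartctl -H <device>' and report whether it says PASSED."""
    out = subprocess.run(["smartctl", "-H", device],
                         capture_output=True, text=True)
    return "PASSED" in out.stdout

if __name__ == "__main__":
    print("degraded arrays:", degraded_md_arrays() or "none")
    for dev in ("/dev/sda", "/dev/sdb"):           # adjust to your drives
        print(dev, "SMART health:", "OK" if drive_passes_smart(dev) else "CHECK")
```

In practice you would run something like this from cron or a monitoring agent and alert on any degraded array or failing drive, just as you would alert on a dead RAID battery.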
If file integrity, snapshots, and built-in backup are not needed, hardware RAID wins hands down over ZFS.
The debate between hardware RAID and software RAID (RAID built into the file system itself, not into a separate RAID layer below it) is complex. There is no single correct answer for all cases; it depends on many factors.
The only modern file system suitable for hardware RAID is HAMMER1.
I think that statement is ridiculous. Many other file systems function very well on hardware RAID. Please explain why only Hammer can do it.