There are roughly three general reasons why one would want to use ZFS.
1. Performance. That definitely needs a lot of memory and CPU.
2. Ease of maintenance. How much this improves things depends on how complex the specific disk layout is, and obviously at some point the advantage is consumed by bad behaviour from a lack of resources.
3. Copy-on-write allows certain configurations which wouldn't be safe otherwise. For instance, in PostgreSQL the "full_page_writes" option can be switched off, and that can shrink the backup size (WAL archive) by a factor of 10 (see the example below).
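For illustration, this is the relevant postgresql.conf setting (full_page_writes is a stock PostgreSQL parameter; switching it off is only safe when the storage never tears a page on crash, which ZFS's copy-on-write guarantees):

full_page_writes = off    # torn-page protection not needed on copy-on-write storage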
Now it makes a real difference whether you need ten tapes or one tape for backup, and therefore I wanted ZFS when it appeared in FreeBSD. The machine at that time had 256MB RAM installed, and that did not work. With 384MB it did work, but it was no fun. Finally, with 750MB it worked well, and then I moved most of the filesystems to ZFS (3 pools, ~50 filesystems). It ran solid for years. The config was:
#vm.kmem_size="512M"        # size of the kernel memory map
#vm.kmem_size_max="512M"    # upper bound when kmem is auto-sized
#vfs.zfs.arc_max="160M"     # never let the ARC grow beyond this
#vfs.zfs.arc_min="26M"      # never shrink the ARC below this
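(To see what the ARC actually uses at runtime, sysctl kstat.zfs.misc.arcstats.size reports the current ARC size on FreeBSD, and newer versions of top show an ARC line as well.)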
The database is mostly busy collecting the backup file-lists from some other systems, so performance wasn't important. And reliability was always perfect.
There is another issue, not with memory, but with weak CPUs. On certain operations, disk access may stall for a couple of seconds. Depending on the kind of work, this may or may not be a problem. It definitely is a problem when scrubbing, because then the loadavg climbs to 20 and more, and the machine becomes unresponsive. A possible workaround is to set vfs.zfs.scan_idle to an arbitrarily high number of milliseconds, so that the pool is always considered busy (there will always have been an access to it within that timespan), and then to adjust vfs.zfs.scrub_delay accordingly; this makes the scrub run very slowly (but resilver will also become slow). A sketch follows below.
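For illustration, the corresponding /etc/sysctl.conf entries might look like this (the values are made up for the example, not recommendations, and these tunables only exist in the legacy ZFS port; OpenZFS on FreeBSD 13+ throttles scans differently):

vfs.zfs.scan_idle=3600000    # ms; pool counts as busy if it was touched within this window
vfs.zfs.scrub_delay=20       # ticks to sleep between scrub I/Os while the pool is busy
vfs.zfs.resilver_delay=5     # same throttle for resilver; the default is lower, raise with care

These are runtime sysctls, so they can be tried with the sysctl command before making them permanent.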
Overall, if there is a really good reason to use ZFS, it can be done, but it will bring along some work.