First: If your desktop has multiple disks, ZFS is the easiest way to get RAID protection against disk failure. That can be achieved with other file systems too (usually by using separate software or hardware RAID), but ZFS makes it easy, and integrates it nicely into a single management infrastructure.
About ZFS data integrity: it is recommended that a computer running ZFS use ECC RAM, which is expensive server-grade hardware. Do I really need it for a desktop?
The argument is basically: using ZFS without ECC memory is better for data integrity than not using ZFS at all.
Do you need ECC memory? Well, that depends. How valuable is your data to you? Is (value of your data) * (risk of corruption due to an error that ECC would have corrected) > (cost of ECC)? To evaluate that inequality, you need to know three quantities. The first and last (value of your data, cost of ECC) only you can answer. The middle one (risk of corruption) is virtually impossible to figure out. Let's try anyhow.
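To make the inequality concrete, here is a toy back-of-the-envelope calculation. Every number in it is a made-up placeholder; you would have to plug in your own estimates of the data's value, the yearly risk, and the ECC price premium.

```python
# Toy expected-value comparison for the ECC decision above.
# All numbers are hypothetical placeholders, not measurements.
value_of_data = 5000.0   # what a corruption of your data would cost you
p_per_year = 1e-3        # guessed yearly risk of a corruption ECC would have caught
years_in_service = 5     # how long this machine will hold that data
ecc_premium = 150.0      # extra cost of ECC RAM plus a board/CPU that supports it

expected_loss = value_of_data * p_per_year * years_in_service
print(f"expected loss without ECC: {expected_loss:.2f}")
print(f"price premium for ECC:     {ecc_premium:.2f}")
print("ECC pays off" if expected_loss > ecc_premium else "ECC does not pay off")
```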
ZFS is a really good file system. From a data reliability and availability viewpoint, that is mostly for three reasons: (a) it has RAID built in, which protects against failures of disks (complete failures, but also failures of individual sectors and read errors), (b) it has checksums, which protect against silent corruption of the data anywhere on the storage path (from the buffer memory inside the host, through the write, then the read, and back into memory), and (c) it scrubs, so it can find latent disk errors early.
Let's talk about (b). There are well-known cases of the storage stack corrupting data. The most talked-about one is the "off-track write": you send a write request to the disk; at the moment the write happens, the head is not exactly on the track but next to it (which can happen due to mechanical vibration or bad servoing); the data actually gets written, but future reads follow the track's servo information and find the old data. This is a case where the drive returns wrong data (actually old data) without telling the host that it had a read error: the classic example of an undetected read error. What to do about that? The obvious answer: before writing, take a checksum of the new data to be written; after reading, verify the checksum. The checksum must not simply be stored next to the data itself, but somewhere else (ZFS keeps it in the block pointer that references the data), so that the off-track write above can be detected: the drive returned internally consistent data with a valid checksum, but not the checksum the file system expected. Together, RAID and checksums take care of a large fraction of all failure modes of hardware; let me jokingly say that they handle 90% and 9% of the problems.
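To illustrate the idea, here is a minimal sketch (toy code, not ZFS code, and all names are hypothetical) of "store the checksum away from the data": the checksum travels with whatever references the block, so a read that returns stale but internally consistent data still fails verification.

```python
import hashlib

disk = {}  # pretend disk: block address -> bytes

def checksum(data):
    return hashlib.sha256(data).hexdigest()

def write_block(addr, data):
    # Store the data, and hand the checksum back to the caller (the "parent"),
    # instead of storing it next to the data itself.
    disk[addr] = data
    return {"addr": addr, "cksum": checksum(data)}   # a toy "block pointer"

def read_block(ptr):
    data = disk[ptr["addr"]]
    if checksum(data) != ptr["cksum"]:
        raise IOError("checksum mismatch: stale or corrupted block")
    return data

ptr = write_block(0, b"old contents")
print(read_block(ptr))                    # normal case: data verifies

# Simulate an off-track write: the parent records the checksum of the new
# data, but the new data never actually lands on the platter.
lost_write_ptr = {"addr": 0, "cksum": checksum(b"new contents")}
try:
    read_block(lost_write_ptr)
except IOError as err:
    print("detected on read:", err)       # stale data is caught, not returned silently
```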
But there is one problem that isn't solved yet: what if the user data gets corrupted while it is in memory, in the buffer pool? The file system itself cannot guard against that completely, because it doesn't actually have full control over the data buffers at all times (think of mmap, for example). And this is where ECC hardware comes into play. With my joking numbers above, I could now say the following: since ZFS has already taken care of 99% of all data loss/corruption problems, ECC is the single most important thing to do next, because memory errors are now the largest remaining source of bad things happening to good data. In reality, that statement is false; the largest sources of data loss/corruption are (x) user error, usually by a sysadmin, and (y) software defects in the OS and file system. But people tend to ignore those as unsolvable, since they involve humans, while investing effort to protect against hardware problems is considered good style.
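A small sketch (again toy code, nothing from ZFS) of why the file system cannot save you here: if a bit flips in the buffer before the checksum is computed, the checksum faithfully covers the already-corrupted data, and every later read verifies cleanly.

```python
import hashlib

buffer = bytearray(b"important user data")
buffer[3] ^= 0x01   # a single bit flip that ECC memory would have corrected

# The file system computes its checksum over the already-corrupted buffer...
stored_checksum = hashlib.sha256(buffer).hexdigest()
# ...so when the block is read back later, the checksum verifies just fine.
assert hashlib.sha256(buffer).hexdigest() == stored_checksum
print("checksum verifies, yet the data is silently wrong:", bytes(buffer))
```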
This helps explain why many people say "if you use ZFS, you should have ECC": people who use ZFS are typically people who care about data integrity and availability and invest time and money into it; having deployed RAID and checksums, adding ECC makes good sense. But it also helps explain why ZFS makes sense even if you can't afford ECC (and even if you don't have multiple disks to use for RAID): checksums and scrubbing alone already help with some problems.