This is a very complicated question, and one on which there isn't much agreement in the storage research or industry community, as far as I know.
To begin with, as others said above, scrub is only one part of a strategy to keep a redundant storage system healthy. Another part of the strategy is SMART, along with other forms of checking disk health (like tracking disk performance, which can be an early indicator of reliability problems). SMART is neither a panacea (something that reliably predicts disk failure long before any data is lost on the drive), nor completely useless. Instead it is somewhat reliable: it often predicts disk data loss, but sometimes drives fail (completely or gradually) without SMART giving any warning, and sometimes SMART flags a drive with a PFA (predictive failure) warning and the drive then continues functioning for a long period. But one should not ignore SMART just because it isn't perfect.
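For concreteness, here is a minimal sketch of what routine SMART checking can look like with smartmontools; the device name /dev/ada0 is just a placeholder for whatever your OS calls the drive:

```
# Overall health verdict (the pass/fail PFA-style judgement):
smartctl -H /dev/ada0
# Raw attributes: reallocated sectors, pending sectors, error counters, etc.:
smartctl -A /dev/ada0
# Kick off a long self-test; read the results later with "smartctl -l selftest":
smartctl -t long /dev/ada0
```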
Another vital tool is backup, because even with the best failure prediction and failure detection, the system will fail occasionally, for example due to effects that redundancy can't help against, such as correlated failures. Those are often failures of the wetware: a human doing something very wrong. One classic example is "rm -Rf /", which no amount of RAID guards you against, but from which a backup lets you recover.
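As a sketch of the kind of backup that survives such mistakes, ZFS snapshots sent to a separate pool or machine work well; the pool and dataset names below (tank/home, backup/home) and the snapshot label are placeholders:

```
# Snapshot the dataset, then copy the snapshot to a separate backup pool:
zfs snapshot tank/home@nightly
zfs send tank/home@nightly | zfs receive backup/home
# To recover, send it back (or simply copy files out of the backup snapshot):
# zfs send backup/home@nightly | zfs receive tank/home-restored
```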
That leaves the question: how often should one scrub? That's a terribly difficult question, because there are three forces in play.
First, scrub catches errors: both CRC and metadata inconsistency errors that the disk drive (and SMART) can't even begin to catch, and latent errors in the disk hardware. Since the number of errors increases over time (sometimes stepwise, when a whole drive fails at once and remains failed), scrubbing early can only help, since it might (not always!) catch errors while their number is still small. This argues that one should scrub as much as possible. A theoretically optimal implementation would have the drive always scrubbing (as fast as it can) whenever there is no foreground workload. In reality this is not practical, since moving the head in the short idle gap between two IOs of the foreground workload will destroy performance. But using QoS techniques, such as disk schedulers that are aware of different classes of service (emergency resilvering, foreground workload, and background scrub), one can approximate this. AFAIK, ZFS does have some IO scheduling mechanisms of this kind. If they were perfect, then scrub should not affect foreground workload performance, which brings us to ...
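For what it's worth, here is a sketch of how one can nudge those scheduling knobs to keep scrub in the background; the tunable names vary between platforms and OpenZFS versions, so treat these as examples rather than a recipe, and the pool name "tank" is a placeholder:

```
# Start a scrub and keep an eye on it:
zpool scrub tank
zpool status tank
# On Linux (OpenZFS), lowering the number of concurrent scrub IOs per vdev keeps
# scrub further in the background relative to foreground reads and writes:
echo 1 > /sys/module/zfs/parameters/zfs_vdev_scrub_max_active
# On FreeBSD the corresponding knobs live in the vfs.zfs sysctl tree,
# e.g. sysctl vfs.zfs.vdev.scrub_max_active=1 on recent OpenZFS versions.
```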
Second, scrub does affect foreground workload performance. Whether the effect is huge (system basically unusable while scrubbing, so all services that use the file system need to be shut down before scrubbing) or small but measurable (slight slowdown while scrubbing) is a matter of a lot of debate. I think the reason for the debate is that it depends heavily on the setup of the system. My personal experience: while scrubbing, the file system is very slow, so much so that human activities that are IO intensive (like building large software, or organizing and moving lots of files) are painful. If I hit the system simultaneously with scrub, backup (which walks all file system metadata) and the nightly periodic run, it may become so slow that I need to reboot to regain control of the system. And since I have a disk that is shared between two zpools, if I scrub both pools simultaneously, performance becomes ridiculously low. For this reason, I have arranged my scrubs so they finish in a few hours: I start at most one scrub in the middle of the night (at 1:15, right after the last hourly backup at 1am), and I suspend most other nighttime maintenance activities (such as periodic and backup) while scrub is running, so scrub is done by 7am, when normal human activity may resume. But: my system is very small, only three disk drives in use by ZFS (of which one is a slow backup disk), and very little memory (only 3 GiB in use, due to the limitations of a 32-bit system). And since it is a home server, there is virtually no activity during the night (the humans who could cause activity are asleep).
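Expressed as crontab entries, that kind of schedule looks roughly like the sketch below; the pool name and the path to zpool are placeholders, and the 07:00 entry is just a safety net that stops a scrub that overran (newer OpenZFS can pause instead, with "zpool scrub -p"):

```
# Sunday 01:15, right after the 01:00 backup:
15 1 * * 0    /sbin/zpool scrub tank
# Sunday 07:00, stop the scrub if it is somehow still running:
0 7 * * 0     /sbin/zpool scrub -s tank
```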
Part of the question of how scrub interacts with foreground workload performance is the converse: how fast does scrub run? That depends crucially on (a) how large and complex the zpool is, and (b) what the foreground workload is doing. As an example, on my system the largest pool is a 3 TiB pool on two mirrored drives, and that scrub takes about 3-4 hours, so it is reliably done before the 7am deadline. But I know that other people (with much larger pools containing more and bigger drives) have scrubs that take a day or two, and in some cases a week. That pretty much means that scrub has to run while there is foreground activity, which in turn makes scrub run even longer.
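A back-of-envelope estimate of scrub time is simply allocated data divided by sustained read throughput; the throughput figure below is an assumption for an otherwise idle two-drive mirror, not a measurement:

```
allocated_gib=3072      # ~3 TiB of allocated data on the mirrored pool
throughput_mib=250      # assumed sustained scrub read rate, MiB/s
echo "scale=2; $allocated_gib * 1024 / $throughput_mib / 3600" | bc   # ~3.5 hours
```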
Third, and this is the most difficult tradeoff. As I argued above, scrub improves reliability by detecting errors early. Great. But does scrub also cause errors? The answer is: unfortunately, yes. That's because with modern disks and extremely low head fly-heights, any access to the disk (both read and write) causes "wear and tear", and makes the data less reliable. One way this is visible is that disk drive vendors now specify a maximum workload rate (the number is typically 550 TB/year), and above that rate the warranty on the drive becomes void. The disk vendors do that for a good reason: any IO increases the number of data errors, and above a certain IO rate, their published and contractually warranted error rate (around 10^-14 or 10^-15) can no longer be met. But note that 550 TB/year means that a 20 TB drive can only be read 27.5 times per year. Which means: if there were absolutely no foreground workload, scrubbing every 2 weeks would already use up nearly all the available IO. So just from this simple bookkeeping argument, one should probably not scrub more often than once a month on a system with modern large drives. Since my personal disks are much smaller (the largest ones are 4 TB), I scrub once a week. Note that ZFS only scrubs allocated data (files and metadata), so if a file system is 50% full, only half the platter will be scrubbed.
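The bookkeeping argument above, as a tiny calculation (the numbers are the ones already quoted: a 20 TB drive and a 550 TB/year workload rating):

```
drive_tb=20
workload_limit_tb=550
# Full-drive passes per year allowed by the workload rating:
echo "scale=1; $workload_limit_tb / $drive_tb" | bc                    # 27.5
# Implied minimum interval between scrubs, if scrub were the only IO:
echo "scale=1; 365 / ($workload_limit_tb / $drive_tb)" | bc            # ~13 days
```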
But: nobody really knows how much extra IO activity accelerates disk errors. An accurate measurement of that effect would be required to analytically optimize the scrub rate, i.e. to get it to the point where scrub catches the most problems without causing more problems than it catches. The disk manufacturers have some internal measurements (which they do not share with the public, for good reasons); those measurements (and competitive pressure) are where that 550 TB limitation comes from. Big disk users (the companies that buy disks by the million) have internal measurements too, which are also absolutely not shared. And the academic/research literature has virtually nothing in this area (though I know some groups are working on it). So for now, I would be a little careful with scrubbing, and try to limit it to far less than 550 TB per drive per year.