The assumption that "low-end" RAID has to make is that disks are fail-fast: a disk is either working and working perfectly, or it has failed completely and will not return any data. Fundamentally, this legislates away disks returning "wrong" data. Fortunately, this assumption is mostly true: the common failure modes of disks involve either the whole disk going away entirely, or the disk being able to detect when it has a data error (for example with the error-correcting codes that are stored on the platter) and returning an error instead.
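The fail-fast contract can be sketched as an interface: every read either returns correct data or raises an explicit error; the disk never silently hands back wrong bytes. This is a hypothetical toy model for illustration, not any real driver API:

```python
class FailFastDisk:
    """Toy model of the fail-fast assumption: a read either succeeds
    with correct data or fails loudly. (Hypothetical illustration.)"""

    SECTOR = 512

    def __init__(self, nsectors):
        self.sectors = {i: b"\x00" * self.SECTOR for i in range(nsectors)}
        self.dead = False   # whole-disk failure: nothing comes back
        self.bad = set()    # sectors whose on-platter ECC check fails

    def write(self, lba, data):
        if self.dead:
            raise IOError("disk gone")
        self.sectors[lba] = data

    def read(self, lba):
        if self.dead:
            raise IOError("disk gone")                  # detectable: disk failed entirely
        if lba in self.bad:
            raise IOError("unrecoverable read error")   # detectable: ECC caught the error
        return self.sectors[lba]                        # otherwise the data is correct

d = FailFastDisk(4)
d.write(0, b"A" * FailFastDisk.SECTOR)
assert d.read(0) == b"A" * FailFastDisk.SECTOR
d.bad.add(0)
try:
    d.read(0)
except IOError as e:
    print("read failed loudly:", e)
```

The key property is that there is no code path that returns stale or corrupt data without an error; that is exactly the property the rest of the post shows real disks can violate.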
The reason I wrote "wrong" in quotes above is that it is awfully hard to define in practice what "right" is. There is a theoretical algorithmic definition, which involves single-copy serializability and tracking the most recent write to the address. In practice, the (very real) problem of off-track writes makes this difficult: if data is written while the drive is mechanically being vibrated, you may end up with two copies of the track next to each other on the platter. Future small reads can return either one track or the other, in both cases without detecting errors (if the reads are short enough and you get lucky/unlucky with seeking). Both sectors that are returned are "right" in the sense that both were actually written at some point in the past; one is more recent, but which one is, in practice, impossible to determine at read time. This means that the disk becomes byzantine: it will sometimes return different (but valid-looking) data for a specific sector.
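The off-track hazard can be modeled as a sector that ends up with two physically coexisting versions after a vibrated write, where each read nondeterministically lands on one of them and both pass the drive's internal checks. A hypothetical sketch:

```python
import random

class OffTrackSector:
    """Sketch of the off-track-write hazard: after a vibrated write,
    two versions of the sector coexist on adjacent tracks, and a short
    read may return either one, both looking valid. (Hypothetical
    model, not real drive behavior code.)"""

    def __init__(self, data):
        self.copies = [data]          # versions physically present on the platter

    def write(self, data, vibrated=False):
        if vibrated:
            self.copies.append(data)  # old track survives next to the new one
        else:
            self.copies = [data]      # clean write: the sector has one version again

    def read(self):
        # The head may settle on either track; ECC passes for both.
        return random.choice(self.copies)

s = OffTrackSector(b"v1")
s.write(b"v2", vibrated=True)
seen = {s.read() for _ in range(1000)}
print(sorted(seen))   # repeated reads of the same sector can return both versions
```

Note that from the reader's point of view every individual read looks perfectly healthy; only comparing reads over time reveals the inconsistency.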
In addition, a carelessly implemented RAID1 already has a byzantine read problem: if the two copies on the two disks ever diverge (which typically happens during error handling), future reads will return one copy or the other. That makes the consumer of the data coming out of the RAID (typically a file system) pretty unhappy.
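The naive RAID1 read path that causes this is simply "serve each read from whichever mirror is convenient". Once a write reaches only one mirror, that policy makes the array byzantine even though both disks individually behave perfectly. A hypothetical sketch:

```python
import random

class NaiveRaid1:
    """Toy RAID1 read path: each read is served from an arbitrary
    mirror. If the mirrors ever diverge (e.g. a write reached only
    one disk during error handling), reads become byzantine.
    (Hypothetical sketch, not any real driver.)"""

    def __init__(self):
        self.disk_a = {}
        self.disk_b = {}

    def write(self, lba, data, drop_on_b=False):
        self.disk_a[lba] = data
        if not drop_on_b:              # drop_on_b simulates a write lost in error handling
            self.disk_b[lba] = data

    def read(self, lba):
        # Load-balance across mirrors without checking they agree.
        return random.choice([self.disk_a, self.disk_b])[lba]

r = NaiveRaid1()
r.write(7, b"old")
r.write(7, b"new", drop_on_b=True)     # mirrors now diverge
seen = {r.read(7) for _ in range(1000)}
print(sorted(seen))   # both old and new data can come back for the same sector
```

Without per-sector metadata, the array cannot even tell which mirror holds the newer copy, which is why resynchronization after an unclean shutdown has to be so conservative.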
The reason I wrote "low-end RAID" in quotes above is that to me, this is the definition of low-end: RAID systems that don't keep checksums and timestamps on a sector-by-sector basis. Unfortunately, calculating checksums takes a lot of CPU resources, and storing checksums and timestamps for each data block requires not only a lot of extra storage, but also very fast extra storage (in practice, this can only be done with flash assisting the disks, if you want high performance and fault tolerance). Life is hard.
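To make the cost concrete, here is a minimal sketch of the per-sector metadata scheme: each copy carries a checksum (to reject corrupt data) and a monotonically increasing write stamp (to pick the newest valid copy when mirrors diverge). The names and layout are hypothetical; real systems keep this metadata in separate, fast storage:

```python
import hashlib
import itertools

class ChecksummedMirror:
    """Sketch of non-low-end mirroring: every sector copy carries a
    checksum and a write stamp. On read, corrupt copies are rejected
    and the stamp disambiguates diverged mirrors. (Hypothetical
    illustration of the idea, not a real implementation.)"""

    def __init__(self):
        self.stamp = itertools.count(1)
        self.disks = [{}, {}]          # lba -> (data, checksum, stamp)

    @staticmethod
    def _sum(data):
        return hashlib.sha256(data).digest()

    def write(self, lba, data, skip=None):
        rec = (data, self._sum(data), next(self.stamp))
        for i, disk in enumerate(self.disks):
            if i != skip:              # skip simulates a mirror missing the write
                disk[lba] = rec

    def read(self, lba):
        best = None
        for disk in self.disks:
            if lba in disk:
                data, csum, ts = disk[lba]
                if self._sum(data) == csum:        # checksum rejects corrupt copies
                    if best is None or ts > best[1]:
                        best = (data, ts)          # stamp picks the newest valid copy
        if best is None:
            raise IOError("no valid copy of sector")
        return best[0]

m = ChecksummedMirror()
m.write(3, b"old")
m.write(3, b"new", skip=1)   # disk 1 missed the update; mirrors diverge
assert m.read(3) == b"new"   # the write stamp disambiguates
```

The sketch also shows where the cost comes from: a hash over every block on every write, plus a metadata record per sector that must be read on the critical path of every read.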
In the meantime, low-end RAID is definitely better than nothing: it handles 99% or 99.99% of all failures, and the fact that certain byzantine behaviors don't get fixed shouldn't stop anyone from buying it. Now, if you can use better RAID (like the thing that's built into ZFS), that's obviously preferable.