It sounds like geli can offer this.
Every layer of the stack can theoretically do checksumming, error detection, and error correction. The lowest layer (the drives, both HDD and SSD) definitely does that. You typically don't actually see it, unless you use tools like SMART to check on them (or your disk dies). The transport layer (SATA, or SCSI in the form of SAS) definitely does it too, but there you hardly ever see it: errors on the wire are immediately fixed by the receiver noticing and requesting a retransmit. Inside the computer, either the HBA or the memory is typically the weakest link; but memory is supposed to have ECC, some file systems have checksum protection of data structures in memory (been there, done that, got the T-shirt), and HBAs are supposed to be bug-free (ha ha, funny). From that point upwards, it is all software, and every software layer could do it. But software is also theoretically intended to be bug-free.
The SCSI committee, which tends to focus on commercial environments that care a great deal about data durability and availability, thought about this and built the T10DIF standard, which allows transporting checksums all the way from the user layer to the platter and checking them at every layer of the stack. Nice theory. It is even implemented sometimes, though rarely correctly, and at most from the drive up to the file system layer (I've never seen it reach user space or applications, but maybe for lack of looking).
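For the curious, the protection information T10DIF appends to each 512-byte block is an 8-byte tuple: a 16-bit guard tag (a CRC of the block data, using the T10-DIF polynomial 0x8BB7), a 16-bit application tag, and a 32-bit reference tag (for Type 1 protection, the low 32 bits of the LBA). Here is a minimal Python sketch of how such a tuple could be computed; the straightforward bitwise CRC and the big-endian packing are my assumptions for illustration, not lifted from any particular implementation.

```python
import struct

T10DIF_POLY = 0x8BB7  # CRC-16 generator polynomial used by the T10-DIF guard tag

def crc16_t10dif(data: bytes) -> int:
    """MSB-first, non-reflected CRC-16 with init 0, as used for the guard tag."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ T10DIF_POLY) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

def dif_tuple(block: bytes, lba: int, app_tag: int = 0) -> bytes:
    """8-byte protection info: guard (CRC of the data), application tag,
    reference tag (low 32 bits of the LBA, as in Type 1 protection).
    Big-endian packing is an assumption here, not quoted from the spec."""
    assert len(block) == 512
    guard = crc16_t10dif(block)
    ref_tag = lba & 0xFFFFFFFF
    return struct.pack(">HHI", guard, app_tag, ref_tag)

# Example: protection info for one zero-filled block at LBA 1234
print(dif_tuple(bytes(512), lba=1234).hex())
```

The point of the scheme is that the same eight bytes travel with the block through the HBA and the transport, so every layer that understands DIF can re-check the guard and reference tags instead of trusting the layer below.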
Why do we care about T10DIF? For some illogical reason, people are always too worried about their hardware (disks, SSDs, memory) screwing them over. But that hardware has been getting very reliable, nearly laughably reliable when combined with standard recovery mechanisms (various flavors of RAID and backup). On disk drives, only one worrisome error mechanism remains: off-track writes: the disk accepts a write for a block, but on some subsequent read returns the old (overwritten) content of the block. This happens in spinning rust (due to servo problems), and in SSDs (due to firmware bugs in the FTL). The drive vendors are doing all they can to prevent that, but to really address it, they need help from the layer above, and T10DIF is exactly that. Bummer it's so hard to use.
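To make the "help from the layer above" concrete: a stale block returned after an off-track write is internally consistent, so any checksum stored with the block (including the drive's own ECC) still verifies. Only a layer that remembers, out of band, what it expects to read can catch it. Here is a toy Python sketch with a hypothetical in-memory "drive" that silently drops one write; everything in it is made up for illustration.

```python
import hashlib

class FlakyDrive:
    """Toy block device that silently drops one write (simulating an off-track write)."""
    def __init__(self, nblocks: int):
        self.blocks = {i: bytes(512) for i in range(nblocks)}
        self.drop_next_write = False

    def write(self, lba: int, data: bytes) -> None:
        if not self.drop_next_write:
            self.blocks[lba] = data
        self.drop_next_write = False  # the write "succeeds" either way

    def read(self, lba: int) -> bytes:
        return self.blocks[lba]

# The upper layer keeps its expected checksums separately from the data blocks.
expected = {}
drive = FlakyDrive(16)

def upper_write(lba: int, data: bytes) -> None:
    expected[lba] = hashlib.sha256(data).digest()
    drive.write(lba, data)

def upper_read(lba: int) -> bytes:
    data = drive.read(lba)
    if hashlib.sha256(data).digest() != expected[lba]:
        raise IOError(f"stale or misplaced data at LBA {lba}")
    return data

upper_write(3, b"new contents".ljust(512, b"\0"))    # this write lands
drive.drop_next_write = True
upper_write(3, b"newer contents".ljust(512, b"\0"))  # this one is silently lost
upper_read(3)  # raises: only the layer that remembers what it expected notices
```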
The real problem with this is the following: if you implement checksums in one layer (like GELI) and you detect a checksum error, what do you do? The only safe answer is "fail stop", for two reasons. First, you cannot just return the erroneous block or file to the layer above and log an error: returning wrong data is always unsafe. And traditional operating systems (the design of Unix is about 50 or 60 years old) have no way for a lower layer to tell the upper layer "I could read this thing, but something smells fishy". They can only say "OK" or "EIO", and those semantics are not rich enough for the next layer to do something useful. Second, as I said above, all the layers below should already have done good error detection and correction. If an undetected checksum error managed to get through all that safety, it probably means a systemic problem exists: the error was probably not an alpha particle (see ECC above), it was probably something worse. The best thing to do would be to crash the computer, and by extension the whole worldwide internet. That's just not practical, so it is usually easier to just not look and hope that the layer above takes care of it (the ostrich school of management: head in the sand).
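To illustrate the "OK or EIO" problem, here is a minimal sketch (names and digest choice are mine) of a verifying layer wedged into a traditional read path. When the checksum does not match, it has exactly two honest moves: return the data anyway (unsafe) or fail the whole read; there is no standard way to say "here is the data, but I don't trust it."

```python
import errno
import hashlib

def verified_read(read_block, lba: int, expected_digest: bytes) -> bytes:
    """A checksumming layer sitting on top of a read(2)-style interface.
    The traditional contract allows only two outcomes: data, or an errno."""
    data = read_block(lba)
    if hashlib.sha256(data).digest() != expected_digest:
        # There is no "data plus a warning" return value in this interface,
        # so the only safe option is to fail the I/O (or panic the machine).
        raise OSError(errno.EIO, f"checksum mismatch at LBA {lba}")
    return data
```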
In the real world, the largest causes of data loss are NOT bit rot or disk failures. They are (and the order is pretty accurate): (a) user errors (the infamous "rm -Rf *", and more elaborate versions); (b) software bugs (well-known examples include ReiserFS and BtrFS: the first one murders your files, not your wife; the second one is a machine for destroying files); (c) site disasters (fire, flood, hurricane, explosion).
So all in all: One software layer, in particular one that is not knowledgeable about redundancy at the neighboring layers, implementing checksums by itself is usually not a good investment of time and money.
The downside, if I understand correctly, is that you know there's been bitrot, but no chance for recovery. Not even with RAID 1, right?
How would GELI even know that RAID 1 exists? It is a block encryption mechanism, for one block device at a time.
Unless maybe you had GELI on individual drives and did RAID 1 over those?
You just turned GELI into a RAID system. That is about 10x or 100x more complex than what GELI already does. And making it work just for RAID 1 (mirroring) isn't going to solve many people's problems: disk space is getting quite expensive, so the bulk of the world's data is stored in some parity- or erasure-coding-based RAID (RAID 5 or higher), which is even more complex.
This is mostly hypothetical here. I know most people should probably just use ZFS, ...
Given that ZFS exists for FreeBSD, I think putting checksums into GELI alone would be doubly foolish: Not only is it a bad investment (see above), but a good solution already exists. Here is something I would like to see instead: Teach ZFS how to work with T10DIF, and teach it to export/import checksums to user programs, in particular the large middleware layers (such as databases and object stores) that already know how to operate on blocks, and often internally have checksums of their own. This is a gigantic amount of work, and it's not clear to me it's feasible in the open source world (because it requires close cooperation across layers of the stack that have no common management or financial interest in OSS).
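To sketch what "export/import checksums to user programs" could even look like: the application hands the file system the checksum it already computed for a block, the file system verifies and carries it end to end, and on read the application gets back both the data and the checksum that was verified on its behalf. The interface below is purely hypothetical; none of these calls exist in ZFS or FreeBSD today.

```python
from dataclasses import dataclass
from typing import Protocol

@dataclass
class CheckedBlock:
    data: bytes
    checksum: bytes   # the checksum the storage stack verified end to end
    algorithm: str    # e.g. "sha256" or "fletcher4"

class ChecksumAwareStore(Protocol):
    """Hypothetical interface between middleware (databases, object stores)
    and a checksumming file system."""

    def write_block(self, offset: int, block: CheckedBlock) -> None:
        """Store the block; the file system verifies the caller's checksum on
        the way down and keeps it alongside its own on-disk metadata."""
        ...

    def read_block(self, offset: int, length: int) -> CheckedBlock:
        """Return the data together with the verified checksum, so the
        application can re-check it against its own records."""
        ...
```

The hard part is not the interface; it is getting every layer in between (VFS, GEOM, drivers, drive firmware) to carry and honor the same checksum, which is exactly the T10DIF idea all over again.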