That means there is potential for software to think "data is written" when it hasn't actually reached the device yet. If you lose power at just the wrong instant, you can lose data.
Yes, and this is true for nearly all (POSIX) file systems. When an application writes to a file (and even when it closes the file), there is no guarantee that the data is actually on disk and will be readable in the future. That guarantee only exists once the application calls some form of fsync, or has opened the file in a sync mode. But this is not what the OP is seeing here.
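To make that concrete, here is a minimal sketch of the usual "make it durable" pattern on a POSIX system, written with Python's `os` module (the helper name `durable_write` is mine, not from any library): fsync the file itself, then fsync the containing directory so the directory entry survives too.

```python
import os
import tempfile

def durable_write(path: str, data: bytes) -> None:
    """Write data and return only once it should survive a power loss."""
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
    try:
        os.write(fd, data)
        os.fsync(fd)  # flush file data and metadata to the device
    finally:
        os.close(fd)
    # The file's *name* lives in the directory, so sync that as well;
    # otherwise a crash could leave the data durable but the name gone.
    dfd = os.open(os.path.dirname(path) or ".", os.O_RDONLY)
    try:
        os.fsync(dfd)
    finally:
        os.close(dfd)
```

Without the fsync calls, a successful `write()` and `close()` only mean the data reached the kernel's cache, not the disk.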
Now extend that to metadata and I can imagine you can get inconsistency.
No. A correctly implemented file system should NEVER develop an internal inconsistency. What the OP is seeing is that a file exists when looked at one way, and doesn't exist when looked at another way. That must be impossible. Obviously it is possible in the real, imperfect world (the OP is seeing it).

But you are right with the following observation: the fact that disk writes can be delayed is usually the mechanism by which file systems develop inconsistencies. For this reason, file system implementations are super careful about writing things in the correct order, or rely on mechanisms such as fsck after a crash to put things back into a consistent state.

It seems that in this example, something went wrong. Given that the OP has not been seeing memory errors, the likely explanation is a bug in ZFS. I've implemented file systems for a living, and I've seen and fixed many of these things. Usually ZFS is extremely good about this, since several fundamental design decisions (in particular the log-structured writing) make it easy to do right and hard to do wrong. But mistakes happen.
But I think every file system has this potential if subjected to a sudden loss of power at exactly the wrong instant. Think of a basic gjournal device: writes go to the journal first, then to the "filesystem". If you lose power when only half the data has been written to the journal, then on reboot, replaying the journal recovers only what actually made it into the journal.
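The torn-write scenario above can be sketched with checksummed journal records. This is a hypothetical toy format, not gjournal's actual on-disk layout: each record carries a CRC of its payload, and replay stops at the first record that fails verification, so a half-written tail is dropped rather than applied.

```python
import struct
import zlib

def append_record(journal: list, payload: bytes) -> None:
    """Prefix each payload with a big-endian CRC32 of its contents."""
    journal.append(struct.pack(">I", zlib.crc32(payload)) + payload)

def replay(journal: list) -> list:
    """Replay records in order, stopping at the first torn (half-written)
    record, which is detected by its checksum failing to verify."""
    applied = []
    for rec in journal:
        crc = struct.unpack(">I", rec[:4])[0]
        payload = rec[4:]
        if zlib.crc32(payload) != crc:
            break  # torn write at the tail: stop replay here
        applied.append(payload)
    return applied
```

This is why "only what's in the journal" is still a *consistent* state: the incomplete tail is simply ignored, and the data that never reached the journal is lost but doesn't corrupt anything.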
Yes and no. File systems make certain guarantees. The big ones are explained above: if a problem occurs (a detected disk write error, a crash, or power loss), the system will be in a consistent state, but perhaps not the state the user (= application) wished it to be in. If the application requests a sync write, the data will be "on disk" in the sense that a future read will find it as written (barring future read errors). Note that the state does not have to be transactionally or ACID-correct across multiple files, only for one file at a time: if the application first creates file A and then file B, and the power fails, B may be on disk while A is missing.
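If an application's own invariant is "whenever B exists, A exists", it must enforce that ordering itself by forcing A to disk before creating B. A minimal sketch of that pattern, using Python's `os` module on POSIX (the helper name is mine, purely illustrative):

```python
import os
import tempfile

def create_a_then_b(path_a: str, path_b: str) -> None:
    """Create A durably before B, so a crash can never leave
    B on disk without A (the reverse could happen without fsync)."""
    fd_a = os.open(path_a, os.O_WRONLY | os.O_CREAT, 0o644)
    os.fsync(fd_a)  # A's data/metadata reach the device
    os.close(fd_a)
    # Sync the directory so A's directory entry is durable too.
    dfd = os.open(os.path.dirname(path_a) or ".", os.O_RDONLY)
    os.fsync(dfd)
    os.close(dfd)
    # Only now create B; any crash from here on leaves A present.
    fd_b = os.open(path_b, os.O_WRONLY | os.O_CREAT, 0o644)
    os.close(fd_b)
```

Without the intermediate fsyncs, the kernel is free to write B's creation to disk before A's, which is exactly how "B exists but A doesn't" can survive a crash.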
Few applications use sync writes, because of the massive performance penalty on spinning disks (and the moderate penalty on SSDs). The idea is that you usually just restart the (idempotent) program and get the correct result. That theory doesn't always hold; in particular, make with sloppy makefiles can easily get confused. The way I think of it: it's not data loss if the application didn't call sync at the correct time.
But none of this discussion is relevant to the OP's problem: somehow their ZFS got into a broken state.