Both systems can mount a UFS2 filesystem on the same block device.
That's odd; I would expect the second one to see that the file system is already mounted and claim that it wasn't properly dismounted.
They can't seem to see the changes the other made, though.
Welcome to caching. Both file systems cache the content of the disk in RAM, and neither has any way of knowing when the other host has written to the disk underneath it, so neither knows when its cache needs to be invalidated.
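A minimal C sketch of the effect (the two "nodes" and the in-memory "disk" are stand-ins for real hosts and a real block device, not anything from an actual kernel): each node keeps a private cached copy of a block, and nothing ever tells node B that node A rewrote that block on the shared disk.

```c
/* Simplified model: two independent buffer caches over one shared "disk". */
#include <stdio.h>
#include <string.h>

#define BLOCK_SIZE 16

/* The shared disk, as both nodes see it at the block-device level. */
static char disk[BLOCK_SIZE];

/* Each node's private buffer cache entry for block 0. */
struct cache { char data[BLOCK_SIZE]; int valid; };

static void node_read(struct cache *c, const char *name) {
    if (!c->valid) {                   /* only the first read hits the disk */
        memcpy(c->data, disk, BLOCK_SIZE);
        c->valid = 1;
    }
    printf("%s reads: \"%s\"\n", name, c->data);
}

static void node_write(struct cache *c, const char *text) {
    strncpy(c->data, text, BLOCK_SIZE - 1);  /* update own cache...      */
    memcpy(disk, c->data, BLOCK_SIZE);       /* ...and the shared disk   */
    /* No message goes to the other node: its cache is now stale. */
}

int main(void) {
    struct cache a = {0}, b = {0};
    strcpy(disk, "v1");

    node_read(&a, "node A");   /* both caches now hold "v1" */
    node_read(&b, "node B");

    node_write(&a, "v2");      /* A updates the shared disk */
    node_read(&b, "node B");   /* B still prints "v1": stale cache */
    return 0;
}
```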
I'm fairly sure there are other issues with using it this way; e.g., just how does the OS on the second server know that a write has finished on the first one?
That's only a small part of the problem. Imagine what happens when both servers want to create a new file. Both will see the same free space and allocate it, and both will write into the same blocks; whoever writes second wins. Then both will update the directory to record where the new file is, and those two directory writes will also land on top of each other. Unfortunately, "who wins" depends solely on the luck of timing, so on disk you will probably end up with a salad: finely chopped bits of data. As long as nobody has to read back from disk and everything works out of cached content in RAM, this will actually seem to work reasonably well. The moment disk reads happen, all hell breaks loose.
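A toy C sketch of that allocation race (the "bitmap" here is an illustrative stand-in, not UFS2's real on-disk layout): both nodes load the free-space map into their own cache, both pick the same "free" block, and the later writeback silently clobbers the earlier one.

```c
/* Simplified model: two nodes allocating from private copies of one bitmap. */
#include <stdio.h>

#define NBLOCKS 8

static unsigned char disk_bitmap[NBLOCKS];   /* 0 = free, 1 = allocated */

struct node { unsigned char bitmap[NBLOCKS]; };

static void load_bitmap(struct node *n) {          /* read map into cache */
    for (int i = 0; i < NBLOCKS; i++) n->bitmap[i] = disk_bitmap[i];
}

static int alloc_block(struct node *n) {           /* first-fit allocator */
    for (int i = 0; i < NBLOCKS; i++)
        if (!n->bitmap[i]) { n->bitmap[i] = 1; return i; }
    return -1;
}

static void flush_bitmap(const struct node *n) {   /* write map back */
    for (int i = 0; i < NBLOCKS; i++) disk_bitmap[i] = n->bitmap[i];
}

int main(void) {
    struct node a, b;
    load_bitmap(&a);                /* both read the same on-disk state */
    load_bitmap(&b);

    int blk_a = alloc_block(&a);    /* both find block 0 "free"... */
    int blk_b = alloc_block(&b);
    printf("node A allocated block %d, node B allocated block %d\n",
           blk_a, blk_b);           /* ...so their file data will collide */

    flush_bitmap(&a);               /* whoever flushes last "wins";       */
    flush_bitmap(&b);               /* directory updates collide the same way */
    return 0;
}
```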
I just find it odd that a block device can be attached to more than one machine, yet the software to turn it into a file system shared between N servers (where N > 1) is not readily available or easy to find.
Well, early on, disks were connected to exactly one machine. Connecting a disk to another machine required physically moving cables (and if you have ever worked with bus+tag cables, you know that this is not easy: the cables are 1" in diameter, and the connectors as big as a brick). So the whole tradition of file systems for the last ~60 years grew up around single-attach disks. Actually, that is not quite true: even in the 70s and 80s, quite a few disk drive models could be connected to two hosts and moved from one to the other with a front-panel switch; this was for active/passive standby configurations: one computer goes down, you switch all the disks over and run on the second one.
SANs are a relatively new invention. Excluding the Digital CI and its first distributed file system / cluster file system technology (from 1983, extremely early), multi-accessor disks only became popular with Fibre Channel in the late 90s (theoretically it was possible to use parallel SCSI with multiple initiators, but the cabling limitations made it impractical, with rare exceptions like Sequent). And as soon as Fibre Channel was broadly available, SAN file systems sprang up like mushrooms after rain. I worked on a few of them (my signature is on the very first shipping box that SAN-FS was delivered in).

But building a shared-disk file system is surprisingly difficult, because you need *all* the computers to agree exactly on who is in charge of what. That is quite the task, given that communication between the computers can be disrupted at any moment. It also comes with significant penalties; for example, caching is so hard that SAN file systems initially struggled to match the performance of single-node file systems on single-node workloads. And the software complexity is overwhelming; getting this to work bug-free and efficiently takes an enormous amount of effort, which is why the commercial offerings are few and expensive, while the free software is stuck in niches (like Ceph and Lustre serving the supercomputer market, surviving on very lucrative support contracts from large customers, given that installing and operating a cluster file system is very hard).
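To make "agreeing who is in charge of what" concrete, here is a conceptual C sketch of one common building block, a lease: a node may touch a piece of metadata only while it holds an unexpired lease on it, so a crashed or partitioned node automatically loses authority once its lease runs out. This is a toy illustration of the idea, not any particular product's protocol.

```c
/* Toy lease-based ownership: one lease guarding, say, a directory. */
#include <stdio.h>
#include <time.h>

#define LEASE_SECONDS 5

struct lease {
    int    owner;     /* node id, 0 = unowned */
    time_t expires;   /* expiry of the current lease */
};

/* Claim (or renew) the lease for `node`; succeeds if it is unowned,
   expired, or already held by this node. */
static int lease_acquire(struct lease *l, int node, time_t now) {
    if (l->owner == 0 || now >= l->expires || l->owner == node) {
        l->owner   = node;
        l->expires = now + LEASE_SECONDS;
        return 1;
    }
    return 0;   /* someone else holds a live lease */
}

int main(void) {
    struct lease dir_lock = {0, 0};
    time_t now = time(NULL);

    printf("node 1 acquires: %d\n", lease_acquire(&dir_lock, 1, now)); /* 1 */
    printf("node 2 acquires: %d\n", lease_acquire(&dir_lock, 2, now)); /* 0 */

    /* Node 1 stops renewing (crash or network partition); once the
       lease expires, node 2 can safely take over. */
    now += LEASE_SECONDS + 1;
    printf("node 2 after expiry: %d\n", lease_acquire(&dir_lock, 2, now));
    return 0;
}
```

Even this toy version hints at the hard parts: the nodes' clocks must be roughly in sync, renewals must arrive despite network hiccups, and a node whose lease expired mid-write must be fenced off from the disk.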
The thing is: today, with SAS, it is nearly trivial to build a 2- or 4-initiator SAN; most JBOD disk enclosures have enough connectors that you can attach 4 initiators, no problem. But actually using that hardware capability is very hard. Nearly all users are better off setting up a single server and then using a network protocol.
If you think NFS has problems (you mention locking), then just switch to a modern protocol, like NFSv4 or CIFS. It is quite possible to build high-performance, efficient clusters on top of NAS technology.
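On the locking point: NFSv4 builds locking into the core protocol (there is no separate lockd as in NFSv3), so plain POSIX byte-range locks taken with fcntl() are coordinated by the server and visible across clients. A minimal sketch, assuming a hypothetical NFSv4 mount at /mnt/nfs4:

```c
/* Take an exclusive whole-file POSIX lock; on an NFSv4 mount the server
   arbitrates, so two clients doing this serialize correctly. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
    /* Placeholder path: any file on the NFSv4 mount would do. */
    int fd = open("/mnt/nfs4/shared.dat", O_RDWR | O_CREAT, 0644);
    if (fd < 0) { perror("open"); return 1; }

    struct flock lk = {0};
    lk.l_type   = F_WRLCK;    /* exclusive write lock  */
    lk.l_whence = SEEK_SET;
    lk.l_start  = 0;
    lk.l_len    = 0;          /* 0 = lock the whole file */

    if (fcntl(fd, F_SETLKW, &lk) < 0) {   /* blocks until granted */
        perror("fcntl");
        return 1;
    }

    /* ... read/modify/write the shared file safely here ... */

    lk.l_type = F_UNLCK;                  /* release the lock */
    fcntl(fd, F_SETLK, &lk);
    close(fd);
    return 0;
}
```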