One more idea, but you won't like it: it could theoretically be that your system has memory errors or disk IO errors. What I mean is that UFS itself has no bug, but the data on disk doesn't reach fsck intact. If your IO stack contained only plain SATA/SAS interfaces, this theory would be very far-fetched: from your OP it seems the problem you had initially was repeatable, and memory errors and IO errors are usually random, so you should see problems in different places each time. So this theory seems to be nonsense.
BUT: I've seen random IO errors that are completely repeatable before. In one famous example, a colleague and I were using SAS disks with hardware end-to-end checksums (a.k.a. T10-DIF), and one particular SAS cable always caused a checksum error on one particular sector, which went away after replacing the cable. And in your case, you have a complex IO system underneath the file system, namely a RAID controller. It is an elderly RAID controller at that, and I don't know whether the 3ware controllers were ever intended to have disk arrays this big created on them. Perhaps you have managed to find a bug in the 3ware firmware that consistently mangles data? It wouldn't be the first bug in a RAID implementation. In particular, low-end RAID implementations tend to be astonishingly bad when used outside their comfort zone.
So here are my three suggestions. None of them are very easy to implement, so you won't like them.
#1: Assume the problem is an IO error in the stack underneath the file system, including memory. To debug this, replace your motherboard and memory (definitely make sure the new memory has ECC), replace your RAID controller, replace your disks, and try again. Yes, I know this is probably unrealistic for an amateur or a small business; if you're in the big leagues with ample spare hardware, it should be no problem.
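Before swapping hardware wholesale, you can at least test whether the corruption is repeatable at the block level, below the file system. Here's a minimal sketch: read the same region of the device several times and compare checksums. A *repeatable* mismatch against what the file system expects points at firmware or cabling; checksums that differ between reads point at memory or transient IO errors. The device path and offsets below are placeholders (I'm using a scratch file so the sketch is self-contained); substitute your real array device, e.g. /dev/da0.

```shell
# Placeholder: a scratch file stands in for the real device in this sketch.
# On your system, set DEV to the raw array device instead, e.g. DEV=/dev/da0.
DEV=$(mktemp)
dd if=/dev/urandom of="$DEV" bs=512 count=2048 2>/dev/null

SEEK=0        # block offset of the region fsck complains about (placeholder)
COUNT=2048    # number of 512-byte blocks to read

# Read the same region 5 times; count how many distinct checksums we see.
reads=$(for i in 1 2 3 4 5; do
    dd if="$DEV" bs=512 skip="$SEEK" count="$COUNT" 2>/dev/null | cksum
done | sort -u | wc -l)
echo "$reads distinct checksum(s) across 5 reads"   # 1 means every read returned identical data
```

If every read is identical but fsck still sees garbage there, the corruption happened on the write path or in the controller, not in transit on reads.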
#2: Assume the problem is a bug in UFS. In that case, it's time for the FreeBSD developer mailing lists. Ask explicitly there whether any developer has tested UFS on a file system this size. Open a bug report against UFS, see what happens.
#3: Sidestep the problem, and come up with an overall solution. What are you really trying to accomplish? Creating a large file system (you said 36TB, which means at least 3 disks, probably more). Perhaps UFS is just not an appropriate solution, or perhaps the combination of UFS + your hardware (in particular a perhaps geriatric RAID controller) just isn't going to cause you joy. So replace UFS with ZFS. ZFS is designed with a built-in RAID layer, and it is heavily used for large file systems spanning many disks. Ideally, you should also replace the 3ware RAID controller, but at the very least take it out of RAID mode and give ZFS the raw disks directly. Building a file system across many disks is easy with ZFS, and you can use its excellent internal redundancy implementation to get the long-term reliability of a redundant file system (at that size, you really should, if you care about your data at all). Furthermore, ZFS checksums every block, so if a hardware problem corrupts data, ZFS will detect and report it instead of silently passing bad data up the stack.
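For concreteness, here's a hypothetical sketch of what that setup looks like once the controller hands over raw disks. The pool name (tank) and device names (da0 through da5) are placeholders; substitute your own. raidz2 survives any two simultaneous disk failures, which is a sensible level of paranoia at the 36TB scale. These commands are destructive to the named disks, so this is a sketch of the shape, not something to paste blindly.

```shell
# Hypothetical: six raw disks handed to ZFS directly (no hardware RAID).
# Pool name and device names are placeholders -- adjust to your system.
zpool create tank raidz2 da0 da1 da2 da3 da4 da5

zpool status tank              # verify the pool topology and redundancy
zfs set compression=lz4 tank   # optional: cheap transparent compression
```

Afterwards, a periodic `zpool scrub tank` will walk every block and verify its checksum, which is exactly the kind of end-to-end check your current stack is missing.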
Good luck ... you'll need it.