Let me guess: The file you're looking at is on a ZFS file system?
The problem here is more fundamental. In the old days, one file system used exactly one disk, and each disk was used by (at most) one file system. (In the above sentence, the term "disk" really means block device.) The stat structure was born in those old days, when it made sense to identify a file system by the numeric ID of its disk, and for that numeric ID the major/minor device number (= dev_t) was perfect.
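Just to make that concrete, here is a small sketch (nothing more than stat(2) on a path you pass in; the path is whatever you want to inspect) that prints a file's st_dev and splits it into its historical major/minor parts:

    #include <stdio.h>
    #include <stdint.h>
    #include <sys/types.h>
    #include <sys/stat.h>
    #ifdef __linux__
    #include <sys/sysmacros.h>   /* major()/minor() live here on glibc */
    #endif

    int main(int argc, char **argv)
    {
        struct stat st;

        if (argc < 2) {
            fprintf(stderr, "usage: %s <path>\n", argv[0]);
            return 1;
        }
        if (stat(argv[1], &st) != 0) {
            perror("stat");
            return 1;
        }
        /* st_dev is a dev_t; historically it simply packed the disk's
         * major and minor device numbers into one integer. */
        printf("st_dev = %ju  (major %ju, minor %ju)\n",
               (uintmax_t)st.st_dev,
               (uintmax_t)major(st.st_dev),
               (uintmax_t)minor(st.st_dev));
        return 0;
    }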
Why did this make sense? The only test one can usefully do with the st_dev field is to check whether two file system objects (typically files) are on the same file system, which is necessary, for example, when deciding whether to hardlink them to each other, or whether a file can be rename(2)'d to another path without copying it. So the only operation one should do with st_dev is to compare two of them for equality.
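That one useful operation could look roughly like this (a minimal sketch; the two paths in main() are made up for illustration):

    #include <stdio.h>
    #include <sys/stat.h>

    /* Check whether two paths live on the same file system by comparing
     * st_dev for equality -- the only meaningful operation on that field. */
    static int same_filesystem(const char *a, const char *b)
    {
        struct stat sa, sb;

        if (stat(a, &sa) != 0 || stat(b, &sb) != 0)
            return -1;                 /* one of the paths is not accessible */

        return sa.st_dev == sb.st_dev; /* 1 = same fs, 0 = different fs */
    }

    int main(void)
    {
        /* hypothetical paths, purely for illustration */
        int same = same_filesystem("/home/user/a.txt", "/home/user/b.txt");
        if (same < 0)
            perror("stat");
        else
            printf("same file system: %s\n", same ? "yes" : "no");
        return 0;
    }

Note that nothing here cares what the actual value of st_dev is, only whether the two values are equal.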
But today we don't live in this simple world any more. Modern file system software (such as ZFS) no longer has a 1-to-1 correspondence between disk drive and file system. For example, the home directory of my server is physically stored on the disks /dev/ada2p1 and /dev/ada3p8 (it is mirrored), which have device numbers 0x98 and 0xb2, but because I use GPT labels, ZFS finds them under /dev/gpt/hd1[46]_home, which have device numbers 0xa0 and 0xcb. And in ZFS, one pool (which corresponds to a set of block devices, which in turn are parts of physical disks) can contain multiple file systems, so it wouldn't even work to construct a fake device ID, for example by concatenating the IDs of the physical disks. Today, the file system ID has an m-to-n relationship with the device IDs of the disks.

The solution is that ZFS (and other such file systems) has to create virtual (that is: fake!) st_dev numbers. It so happens that ZFS chooses very large 64-bit numbers for st_dev. On your machine it happens to have the highest bit set; on my machine it happens to be 3876826178434374726 for the home file system (which is a little smaller and doesn't happen to have the highest bit set).
If the Cython unit tests have a problem with "negative" 64-bit st_dev numbers (they only appear negative because the tests interpret an unsigned 64-bit value as signed), the problem is with the unit tests.
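To show what such a test trips over, here is a tiny sketch with a made-up ZFS-style st_dev value (the constant is invented, not from any real system): the same 64-bit pattern prints fine as unsigned and only looks "negative" once the bits are reinterpreted as a signed integer.

    #include <stdio.h>
    #include <stdint.h>
    #include <inttypes.h>

    int main(void)
    {
        /* made-up value in the style of a ZFS st_dev with the highest bit set */
        uint64_t st_dev = UINT64_C(0xCAFEF00DDEADBEEF);

        /* correct: print it as the unsigned 64-bit quantity it is */
        printf("unsigned: %" PRIu64 "\n", st_dev);

        /* wrong: reinterpret the same bits as a signed 64-bit integer;
         * on two's-complement machines this is exactly how the value
         * ends up looking "negative" in a careless test */
        printf("signed:   %" PRId64 "\n", (int64_t)st_dev);

        return 0;
    }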