In all my experience with FreeBSD, in most cases you either get an error message, or the process ends. It may take a long time, maybe even days, but eventually it ends. Then it's either done, or you get a (late) error message. But a process simply getting stuck indefinitely, without any sign of life whatsoever, is a very rare exception under FreeBSD. To be clear: I'm not talking about the software you add with ports/packages. I'm talking about FreeBSD itself.
This ain't Windows

But I also know there can be circumstances where this is not the case, and errors are not detected and handled correctly in time, depending on many factors. And I don't know your directory, how it got into this state, or what else might be involved...
I thought about attaching an extra SSD to my machine and doing some time measurement experiments on dirs with 5k, 10k, and 15k files, just to get some values one can extrapolate from, to get at least a rough idea of what times to expect for certain actions on >31M files. Going by gut feeling alone, I would say you have to wait several hours until something like
ls
can even finish on such a large number of files - you are way beyond the "normal default" 10k ralphbsz mentioned. But that doesn't mean the OS couldn't handle it at all.
Anyway, one cannot expect the OS to react as fast with 31M files as with directories containing <=10k files, simply because there is a lot more to handle. Even the fastest hardware, working at light speed, needs time.
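Something like this little script is what I had in mind for that experiment - a rough, untested sketch; the mount point /mnt/testssd and the file counts are just examples I made up:

#!/bin/sh
# Create test directories with 5k, 10k and 15k empty files on a spare disk
# (assumed here to be mounted at /mnt/testssd), then time a plain ls on each,
# to get some numbers one can extrapolate from.
for n in 5000 10000 15000; do
    dir="/mnt/testssd/test_$n"
    mkdir -p "$dir"
    i=0
    while [ "$i" -lt "$n" ]; do
        touch "$dir/file_$i"
        i=$((i + 1))
    done
    echo "=== $n files ==="
    time ls "$dir" > /dev/null
done

Creating the files one by one takes a while itself, but the interesting numbers are the ls timings at the end of each round.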
However, I'm also thinking practically, which means:
If there is no valuable data that needs to be rescued, what is the easiest, quickest solution?
Of course, you already arrived at that on your own: copy the valuable data, wipe the crap, and start all over on a clean drive. I would do it exactly the same way.
The point is, there are lessons to be learned from this (I say this because this is an open forum that anybody on the internet can read, so don't take it personally):
Maybe you kind of "inherited" this directory. But if you "produced" it yourself, there was some error during testing. If, for example, someone wants to log data to files, after a while one looks into the directory just to check the shit works as intended. In that case it should have caught your attention: "Almost 400 files within 3 minutes?! Crap. Something must have gone wrong." So one would have had to check whether the number of files produced per minute was what was intended, or think of a routine that limits the number of files and automatically deletes the old ones - see the sketch below.
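Such a routine doesn't need to be anything fancy. A minimal sketch that could be run from cron - the directory, the limit, and the assumption that file names contain no spaces or newlines are all just my examples:

#!/bin/sh
# Keep only the newest 10000 files in the log directory, delete the rest.
# /var/log/mylogger and the limit of 10000 are placeholders.
DIR="/var/log/mylogger"
LIMIT=10000
cd "$DIR" || exit 1
# ls -t lists newest first; everything after line $LIMIT gets removed.
ls -t | tail -n "+$((LIMIT + 1))" | xargs rm -f --

Run something like that every few minutes and the directory can never grow beyond the limit, no matter what the logger does.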
On the other hand, there are cases where such an amount of data really needs to be saved, e.g. measurements from a technical device, a physics experiment, a data-collecting buoy in the ocean... whatever.
Then one has to think about how to organize this data, since 31M files is nothing any human will ever analyze by hand - it will be processed by computers.
And even if, for whatever reason, there is no other way than to put it all into individual files, then those files for sure don't get random names, but are named by some kind of comprehensible system, and are better distributed into directories that are also named sensibly - because 31M files with random names are garbage, no matter what they contain.
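Just to illustrate what I mean by a comprehensible system (the base path, the buoy name, and the .csv sample are purely made-up examples): instead of dumping everything into one flat directory, the collecting side could do something like

#!/bin/sh
# Store each new sample under BASE/year/month/day with a UTC timestamp in the
# file name, so no single directory grows huge and the name already tells you
# what the file is. BASE and measurement.tmp are assumptions for illustration.
BASE="/data/buoy42"
now=$(date -u +%s)
dir="$BASE/$(date -u -r "$now" +%Y/%m/%d)"
stamp=$(date -u -r "$now" +%Y%m%dT%H%M%SZ)
mkdir -p "$dir"
cp measurement.tmp "$dir/sample_$stamp.csv"

That way every directory stays at a size ls and friends handle instantly, and you can find any sample by date without ever listing millions of entries.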




