I've maintained both FreeBSD and RHEL/CentOS HPC clusters for 10 years, serving a wide variety of disciplines (biology, business, chemistry, engineering, physics, political science, psychology, etc.).
Some takeaways from that experience:
Many technologies in HPC and HTC (high-throughput computing) are vastly overhyped solutions looking for problems. People hear about the next big thing in parallel computing and assume everybody needs it to compete, but that's never the case. CUDA is a great example: it's indispensable for a few niche applications (e.g. machine learning) where it provides vastly better performance and cost/performance, but it's of no use to the vast majority of HPC users. Programs have to be completely rewritten to use CUDA, and there's no justification for doing that in most cases. Our main general-purpose cluster has ~2000 CPU cores across 100 nodes that have averaged ~70% utilization, and 2 GPU nodes with two boards each that sit idle much of the time. Some colleagues in physics with their own cluster invested in several GPU nodes that never once got used and ended up scrapped after sitting idle for ~5 years.
Machine learning itself is a software solution looking for problems. Again, indispensable to a small fraction of the population, but most people have no use for it.
Parallel filesystems are only useful on large clusters or when unusually I/O-intensive jobs are run frequently. Most HPC jobs are CPU- and memory-bound. That said, FreeBSD has parallel filesystem options now if you need them: Gluster and Ceph are both in the ports tree. Sites I'm aware of that are really serious about parallel I/O use an appliance such as Panasas, NetApp, or Isilon (all of which are based on FreeBSD, BTW).
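If you do decide you need one, both are a package install away. A sketch (the port origins were net/glusterfs and net/ceph when I last looked, but check the tree):

    pkg install glusterfs    # GlusterFS from ports
    pkg install ceph         # Ceph from ports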
Don't get swept up in the hype about "cool" new technologies. Find out if they're really useful to you before making any decisions.
We run CentOS on our big clusters primarily due to the need for mature InfiniBand drivers for MPI apps and support for commercial software. If you only need to run one or a few closed-source Linux binaries, FreeBSD might be a good choice. The Linux compatibility module works fine for most scientific apps; it's just a matter of installing the right Linux shared libs, same as on RHEL/CentOS. It only tends to have issues with system software that uses more esoteric Linux system calls. If you have a wide variety of closed-source software, it's probably easier to run CentOS. I did experiment with FreeBSD InfiniBand, and it was almost ready for prime time as of about a year and a half ago. I built one FreeBSD+ZFS file server into our CentOS cluster and it performed about as well as the CentOS servers (better in some benchmarks, slower in others). The InfiniBand driver was still a bit glitchy at that time, but maybe that's been fixed by now.
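Going back to the Linux compat layer: getting it running is only a few steps. A minimal sketch; the linux_base package name varies by release:

    sysrc linux_enable="YES"    # load the Linux ABI modules at boot
    service linux start         # load them now (recent releases; otherwise kldload linux linux64)
    pkg install linux_base-c7   # CentOS 7 userland under /compat/linux
    # Older setups may also want linprocfs in /etc/fstab:
    # linprocfs  /compat/linux/proc  linprocfs  rw  0  0

After that it's mostly a matter of pkg-installing or copying in whatever extra Linux shared libraries the binary links against.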
Outside engineering, the vast majority of HPC software is open source and much of it is already in FreeBSD ports. There are a few challenging apps (e.g. SRA Toolkit), but most of them become more portable over time. Qiime comes to mind: it was a pain to port to FreeBSD 6 or 7 years ago, but Qiime 2 got a new, very clean design and porting the modules I've tried has been trivial. Contrary to popular belief, the ports tree already contains a lot of scientific software, plus the vast majority of dependencies for apps that aren't yet ported, so deploying most popular scientific software is easy.
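As a concrete example, deploying a typical bioinformatics tool chain is a one-liner with binary packages, or a short ports build if you want custom options (samtools and bwa are just examples; browse the biology/ category for the full list):

    pkg install samtools bwa                  # prebuilt binary packages
    # or, to build from source with your own options:
    cd /usr/ports/biology/samtools && make install clean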
I've generally found FreeBSD better at holding up under load. We frequently had CentOS compute nodes and file servers freeze up while the FreeBSD cluster handled the same jobs without a hiccup. I've narrowed my focus to bioinformatics now and do all my current work on a FreeBSD cluster.
I found the canned cluster-management suites like Rocks to be buggy and incomplete when it came to routine management tasks like user management, OS updates, etc. Rocks actually prevented us from installing critical security updates during our brief test run with it.
So we developed our own portable cluster-management tools for maintaining both the FreeBSD and CentOS clusters:
http://acadix.biz/spcm.php
I just committed this to FreeBSD ports. It's still alpha quality, but gradually progressing. It should be good enough for anyone interested in playing with a FreeBSD HPC cluster. I do most testing under VirtualBox using a NAT network (with DHCP disabled, so the head node can provide DHCP and PXE install services).
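For anyone who wants to reproduce that test environment, the key is creating the NAT network with VirtualBox's own DHCP server off, so the head node's DHCP/PXE services answer instead. A sketch (the network name, CIDR, and VM name "head0" are arbitrary):

    VBoxManage natnetwork add --netname spcm-test --network "192.168.100.0/24" --enable --dhcp off
    VBoxManage modifyvm head0 --nic1 natnetwork --nat-network1 spcm-test   # attach each VM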