I don't think so. *BSD is no longer used in HPC, and hasn't been in a decade or more (including its derivatives, such as Ultrix).
The only restriction the BSD license imposes is that the copyright notice be retained in the source code.
I have not heard "HPC" and "PC" in the same sentence, ever. HPC used to be done on large machines (360/91, 370/195, CDC Cyber, ETA, Cray, ...), then on minisupers (Convex, Sequent, ...), then on clusters, sort of starting with the SP2. A lot of the issues that arise when doing HPC do not even exist on a single PC.

Some beginner HPC users are interested ... in their PCs and are not using clusters.
OK, I find the answers not at all convincing; they seem a bit biased towards using Linux. For example, several universities use H100 and RTX 6000 GPUs in PCs for HPC, AI, and data mining, which is a very economical way to do HPC. This is just a small example of using a PC for HPC, but never mind. Thank you.
I'm sure that if you want to do numeric computation on a single BSD machine, the main stumbling block would be the lack of already-ported software. And porting from Linux should not be too hard, as long as the compilers are available and the code is not too Linux-specific. The performance might not be as good (since the tuning is lacking), but it might be viable. The only question is: why would one want to start that on BSD, when the growth path (scaling out to multiple machines) is 100% occupied by Linux?