Compile with more CPUs

Hi, I would like to modify my /etc/make.conf to use 4 out of the 8 CPUs I have for compiling all my ports. Could anyone help with the code needed?
thanks
nedry
 
Like tingo mentions, many builds need to go in a certain order.
 
Hi, I would like to modify my /etc/make.conf to use 4 out of the 8 CPUs I have for compiling all my ports
I can't tell if a make.conf setting is possible, but you can assign processor sets to processes (commands) with cpuset(1), e.g.: cpuset -c -l 0-3 /usr/bin/make. You can set that command up as an alias for make, for example.
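A minimal sketch of that approach (the alias line below assumes csh/tcsh; adjust the syntax for sh-style shells):
Code:
# run a single port build pinned to CPUs 0-3 (run from the port's directory)
cpuset -c -l 0-3 make install clean

# or wrap make in an alias so interactive builds get the same pinning (csh/tcsh)
alias make 'cpuset -c -l 0-3 /usr/bin/make'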

Out of curiosity, why do you want to limit the build process to 4 processors?
 
  • Compiling with more CPUs is in general a good idea, because you get parallelism.
  • Don't expect it to scale linearly. Compiles are typically IO limited (meaning file system or disk limited). The big step is to have two compiles going at the same time, so one compile can use the CPU while the other is waiting for IO. But ...
  • The optimal number of parallel tasks per CPU depends heavily on the type of compile and the system (relative CPU to IO speed). I did an extensive study about 25 years ago (on an 8-CPU HP-UX machine with more fibre channel cards than you can imagine, and several hundred disks), and found the optimum speed at about 8-10 tasks per CPU (meaning about 60-80 tasks total). More recently, I did this on an Intel chip with a cluster file system using 10gigE, and the optimum was much lower, about 4 to 8 tasks.
  • However, makefiles and the make system need to be set up for parallel compiles, usually with the -j switch (a minimal example is sketched below). And this tends to really stress makefiles; any sloppiness in the dependencies tends to kill parallel compiles. So don't get your hopes up.
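As a bare-bones illustration of the -j switch (the job count here is illustrative, not a tuned value):
Code:
# run up to 4 make jobs in parallel; this only helps if the
# makefile's dependencies are declared correctly, as noted above
make -j4 buildworld
For the ports tree specifically, the usual knob is MAKE_JOBS_NUMBER in /etc/make.conf (if memory serves), which is closer to what the original question asked for.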
 
Keep in mind that clang is a memory hog. Building world with 8 parallel processes on a Core i7 with 16 GB of RAM crashed the system because it ran out of memory. Building with 4 parallel processes goes fine.
 
Putting together a new system with a Ryzen 7 3700X (8 cores/16 threads). What would be the optimal number (for maximum speed) to use with the -j switch on that system?
 
It depends on how big/convoluted the build tree is, how much RAM you have, and your I/O speed. If you are not limited by RAM and are using hard disks rather than SSDs, try 17.
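That reads like a "hardware threads plus one" rule of thumb, which you can wire up directly instead of hard-coding the number (a sketch in sh; hw.ncpu is the sysctl that reports logical CPUs):
Code:
# use one more job than there are hardware threads (sh syntax)
make -j $(( $(sysctl -n hw.ncpu) + 1 )) buildworld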
 
extensive study about 25 years ago ..on an .. HP-UX machine
  • So don't get your hopes up.

Ralph, perhaps it will soon be time to upgrade your hardware from your honorable HP-UX 😁
25 years later it's really (though of course not always) possible to compile with e.g. -j16, and it's absolutely fun to complete a make buildworld in 90 minutes or so ....
  • So get your hopes back up.
 
I don't do buildworlds unless I have to, but I do kernel builds, and maybe some ports once I get a handle on synth or poudriere. It's always nice when builds don't take long. The old system I was using was pretty slow. I actually tried a buildworld on it once, but it would have taken way too long, so I gave up. The new machine will have 8 cores/16 threads, 32 GB of memory, and NVMe drives; it will be interesting to try -j16 on a kernel build and maybe give buildworld a try just to see.
 
Ralph, perhaps it will soon be time to upgrade your hardware from your honorable HP-UX 😁
25 years later it's really (though of course not always) possible to compile with e.g. -j16, and it's absolutely fun to complete a make buildworld in 90 minutes or so ....
  • So get your hopes back up.
Honestly, in the last ~5 years most of my compiles have no longer been parallel compiles on a single machine, but builds distributed over large clusters of dozens to thousands of machines. In that configuration, I typically run a very small number of compilers (often just one) per host, and let the network take care of the IO. So the question of "how many jobs per core or per CPU chip" has become less relevant.

I do understand that few users have access to clusters that size, other than at work.
 
.... I actually tried a buildworld on it once, but it would have taken way too long so I gave up. ......
In the normal case, the (really) way too long world build only happens once.
You can then continue with
Code:
NO_CLEAN=yes
.
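For example (a sketch; as far as I recall the knob can also be spelled -DNO_CLEAN on the command line):
Code:
# rebuild world without deleting the previous object tree first
make -j8 buildworld NO_CLEAN=yes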
... in the last ~5 years ... thousands of machines. .......
[attached image: greta-thunberg-seitenhieb-gegen-donald-trump.jpg]
😁
 
Anecdote: About 20 years ago, when building the Linux kernel and gcc + libc still took overnight on a typical machine (which at the time was a Pentium with one IDE disk), a colleague performed a Linux compile on a Cray. It went so fast that he initially thought he had turned on a "pretend only" mode. No, within O(minutes), the machine had recompiled everything. And then Sequent and Convex mini-supercomputers showed up in significant numbers, and builds were never the same.

As to the picture of Greta: Today there are literally tens or hundreds of millions of computers involved in distributing images, facts and videos of and about Greta. The CO2 footprint of millions of people watching a video of her giving a speech in Davos is very large, and completely dwarfs the footprint of crossing the ocean on a sailboat. We can now argue whether that's a good or a bad thing, but I'd rather not argue that, since it will get emotional.
 
Honestly,....
... The CO2 footprint...
O.K., back to honesty: Honestly, the footprint on my electricity bill was the reason I stopped using self-financed clusters. 😁

Now running a world build with the -j4 flag on a single RPI4: started the compile yesterday ...
still compiling, that slug ....
The first little step FreeBSD could take toward saving the world would be to make GENERIC-NODEBUG the default
for -CURRENT images, to save compile time 😁
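In the meantime, a sketch of picking that config by hand (KERNCONF name as it appears in the -CURRENT source tree):
Code:
# build the non-debugging kernel config instead of the default GENERIC
make -j4 buildkernel KERNCONF=GENERIC-NODEBUG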

P.S.: I can imagine what happened on a Cray at that time; even today a -j16 is a liberation.
For today I thought about e.g. a ThunderX, but that would be the same step back to unacceptable electricity bills and CO2 footprints.
We are hanging here (for the first time) ... kernel panic :)
 
Based on the size of the CPU cooler going into my new desktop system, I don't think it's going to be too carbon-footprint friendly. My laptop computer is a real power miser in comparison.
 