Insights from building Ungoogled-Chromium on a modern desktop machine.

A recurring question is why Chromium is so slow to build. Here is a build of Ungoogled-Chromium on a reasonably large desktop PC.
Total build time: 2 hours and 3 minutes.

CPU: Intel Core Ultra 9 285K, 24 cores, no hyperthreading.
GPU (not relevant for this build run): Nvidia RTX PRO 4000.
Memory: 64 GB DDR5-5600, dual channel (2x 32 GB).
Storage: 1 TB M.2 Gen5 NVMe.
Chassis: Shuttle SB860R8 barebone.

OS: FreeBSD 15.1-STABLE amd64.
Desktop in use while building: KDE Plasma 6.6.4.
Filesystem: OpenZFS.

Memory allocation during the run:

ZFS ARC cache: up to 30 GB (not limited)
Active memory: up to 28 GB
Inactive memory: up to 8.5 GB
Allocated swap: up to 500 MB (increases as the run progresses)

Resident sizes of up to 2.5 GB per compile task.
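
For anyone who wants to watch the same numbers on their own run, the stock tools are enough. A rough sketch (OpenZFS sysctl names as shipped with recent FreeBSD, assumed rather than verified against 15.1):

  # current ZFS ARC size, in bytes
  sysctl kstat.zfs.misc.arcstats.size
  # active / inactive memory, reported in pages
  sysctl vm.stats.vm.v_active_count vm.stats.vm.v_inactive_count
  # per-process resident sizes (sorted largest first) and swap usage
  top -o res
  swapinfo -h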

The build consumes up to 2.5 GB of resident memory per compile task at its hungriest, and memory is also needed for the filesystem cache. If this build is attempted with only 16 GB of RAM, you can run 4 or 5 compile jobs and limit ZFS to a 5 GB ARC, and you are looking at a completion time of maybe 8-10 hours. With 32 GB of RAM, maybe 10 compile jobs and a 10 GB ZFS ARC, running for about 5 hours.

A machine with 16 GB of RAM and the default 2 GB of swap will run out of virtual memory unless both the number of build jobs and the filesystem cache (ZFS ARC) usage are restricted.
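
For reference, the two knobs in question would look roughly like this on FreeBSD; the 5 GB / 4-job values are the ones suggested above, not something measured on an actual 16 GB box:

  # /boot/loader.conf -- cap the ZFS ARC at ~5 GB (value in bytes)
  vfs.zfs.arc_max="5368709120"

  # /etc/make.conf -- limit the ports framework to 4 parallel compile jobs
  MAKE_JOBS_NUMBER=4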
 
With my 12 cores and 64 GB it takes me about 33 hours.
One hour on Redcore Linux, but the difference is largely down to the thousands of patches needed for FreeBSD and ninja.
Now I just go for the binary package...
 
The metric presented seems within reason. Given the complexity of modern browsers I'd probably disable all parallelism in the build and expect the machine to chug away for hours/days/weeks. Intelligent rebuilds seem to be non-existent in modern complex projects; it seems like "make clean; make all" is about all anyone ever does anymore. The reasons why open up a whole other topic of discussion.
 
I built it just now in a bit over 4 hours while also working at the computer, nothing heavy, just browsing and some text editing. It is a horrible piece of software with 2 GB of source code...
 
So the problem of building anything still comes down to "it takes too long". The simple fact of being able to build Chromium on a system with a GUI, while the GUI remains relatively responsive, is incredible to anyone who has had to submit batch builds of Ada on a VAX and "come back tomorrow" to find out the build failed.

Parallel builds are good, but just like multiprocessing you need to pay attention to the rendezvous points and to what should happen if a thread fails. A lot depends on the "makefiles" being correct.
 
I think there are probably a couple of graduate dissertations still to be written in the field of build systems research.
 
Parallel builds are great for utilizing "all the resources" (mostly CPU). A problem arises when build step N depends on build step N-5 completing correctly. That's where "depends" comes into play; step N should not start until step N-5 completes successfully.

I've seen builds that complete correctly with zero parallelism but fail once the job count rises above some "X". It takes lots of time and effort to track down and fix, but once it is fixed for the parallel case, it still works single-threaded.
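
A toy illustration of that failure mode, with made-up targets rather than anything from the Chromium build:

  # Hypothetical Makefile fragment: 'generate' writes gen.h, 'app' compiles against it.
  all: generate app

  generate:
  	./make_header.sh > gen.h

  app:
  	cc -o app main.c        # quietly needs gen.h to exist

  # With -j1 this happens to work because 'generate' is listed first and runs first.
  # With -j8 both targets start at once and the compile can race ahead of gen.h.
  # The fix is to declare the dependency so parallel make serializes the two steps:
  #
  #   app: generate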

Yeah, just my opinions. I have zero idea what is "best": "build as fast as possible" or "build correctly every single time"? Flip a coin, ask the magic eight ball.
 
I have seen all kinds of odd things happen while building ports. However, parallel builds fail so rarely because of the parallelism itself that it is not worth turning it off unless you actually have a build fail.

Ungoogled Chromium takes roughly 2.5 hours to build for me, on the only desktop of mine that has it: an AMD Ryzen 3900X (12 cores, 24 threads) with 32 GB RAM and mirrored SSDs on zroot. ccache4 is in use.
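
For anyone who wants the same setup, ccache for port builds is, as far as I remember, a single make.conf switch once the ccache package is installed (the cache directory line is optional and may need adjusting for your layout):

  # /etc/make.conf
  WITH_CCACHE_BUILD=yes
  # optional: put the cache on a dataset with room to grow
  CCACHE_DIR=/var/cache/ccache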
 
I tracked the entire build today. The first half is really fast, but somewhere around the 30k-file mark the engine stuff starts being built and things slow down to the point of seeing a static screen for a few seconds. I don't think there is a problem with parallelism, considering what kind of code is being built and under what conditions. Just look at those Blink engine compile commands for a single file; they're larger than some Python projects out there...

On one hand, a Xeon E5-2699v4 performing at 50% of the speed of a Core Ultra 9 285K I'd say is about right, but on the other hand, I should have disabled HT and run 22 jobs instead of 44, because the way I see it HT can badly hamper massively parallel builds like this one. The jobs are long-running, and waiting for something to complete on the 'virtual' core can be painful; I think this is what is happening. The speed dip at about 60%-80% of the way through the build is large; that stretch took over 75% of the build time.
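
If anyone wants to test that theory without a trip to the BIOS, a sketch of the two knobs on FreeBSD (the 22-job figure is the one from above, and the loader tunable is assumed to still behave the same on current releases):

  # /boot/loader.conf -- stop scheduling on the SMT sibling threads
  machdep.hyperthreading_allowed="0"

  # /etc/make.conf -- or simply pin the port build to the physical core count
  MAKE_JOBS_NUMBER=22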
 
What is interesting:
On Redcore (Gentoo) it takes one hour.
On FreeBSD, 30 hours.
Same thing, all from source, no binary packages.

I'd say neither figure is plausible: there is no chance you're building Chromium from source in one hour on a 12-core machine, and 30 hours is far too long if the build system is set up correctly. You'd need an 8 GHz CPU for that 1-hour figure. Check whether you have make.conf flags that could interfere with building the port in parallel.
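
A quick way to check is to grep make.conf and ask the ports framework what job count it actually resolves for the port (port origin assumed to be www/ungoogled-chromium):

  grep -nEi 'jobs|makeflags|ccache' /etc/make.conf
  # what the framework will actually pass to -j for this port
  cd /usr/ports/www/ungoogled-chromium && make -V MAKE_JOBS_NUMBER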
 

I've never seen hyperthreading be slower than not using it when running compilers.

Security is a different matter of course...
 