[Solved] Synth, a few questions

Greetings all,

as the title says, I have a few questions regarding synth(1).

1. If I understand John Marino's explanation on GitHub correctly, synth(1) keeps track of the options of built ports (cf. cached port options), so it can detect obsolete/incorrect options when rebuilding a port. Would it therefore be wise to bootstrap synth(1), i.e., install it initially from a package (or compile it by hand), and then rebuild synth(1) with synth(1) itself?
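For concreteness, the sequence I have in mind is roughly the following; the package name and the ports-mgmt/synth origin are my assumptions of the usual ones, so treat this as a sketch rather than a recipe:

Code:
# Install the pre-built synth package to get started:
pkg install ports-mgmt/synth

# Set up the build profile (paths, tmpfs, builders, jobs) interactively:
synth configure

# Rebuild and reinstall synth with synth itself, so its options end up
# in synth's own cache of port options:
synth install ports-mgmt/synth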

2. The man page says the following about builders and jobs:
Code:
"Num. concurrent builders:  The represents the number of ports that can be
                  simultaneously built on the system. The
                   selected    value is influenced by the number of
                   physical    cores and hyperthreads the system has,
                   the amount of memory on the system, the amount
                   of available swap, and if that swap is a    solid-
                   state drive or not. Generally memory is the
                   limiting    resource when tmpfs is used, so    the
                   default value for the number of builders    is
                   generally 75% of    the number of CPUs of the sys-
                   tem.  The user is free to balance jobs versus
                   builders    as well.

 Max. jobs per builder:     If memory is constrained, it's often a better
                   performance tradeoff to reduce the number of
                   builders    and increase the number    of jobs    per
                   builder.    The default value varies depending on
                   the system, but it will never exceed 5."

What would be a recommended setting for a machine with an i5 processor (2 physical cores, hyper-threading enabled), 8 GB RAM, 4 GB swap, ccache(1), and tmpfs(5) enabled?

Kindest regards,

M
 
What would be a recommended setting for a machine with an i5 processor (2 physical cores, hyper-threading enabled), 8 GB RAM, 4 GB swap, ccache(1), and tmpfs(5) enabled?
My experience on large-class machines: absolutely at least one builder per "CPU", where "CPU" means the thing the OS sees as a CPU. In your case, that would probably be 2 cores x 2 for hyper-threading = 4. If you run with fewer, the OS scheduler will leave one of the hyper-threaded CPUs idle, and you will probably notice the slowdown.

Given that most compile jobs do a lot of disk IO, I've seen a benefit from running even more parallel jobs, so that if one gets stuck waiting for a file, something else can run (and in the meantime, the file system prefetcher/write-behind can do some large IOs). Personally, I've been running with 2-4 jobs per virtual CPU, which for you would mean 8 or 16. But then, that's against a file system with many disks. If you have a single disk drive, you'll probably just end up disk limited, so the extra jobs won't help (they probably won't hurt from the disk IO viewpoint either).

And depending on how much memory the machine has, you might end up swapping (it never happened to me, but then I didn't have to pay for memory myself, so I typically had many dozens or hundreds of GB per machine). As long as you watch for swapping, the largest possible value should be best.
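A simple way to watch for that during a build run, using nothing more than the stock FreeBSD tools (the exact flags are from memory, so double-check the man pages):

Code:
# How much swap is currently in use (run now and then during a build run):
swapinfo -h

# Or watch paging activity continuously; sustained non-zero values in the
# "po" (pages paged out) column while the builders are busy means the
# builder/job count is too high for the available RAM:
vmstat 5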
 
Hi ralphbsz,

thank you for the reply. If I understand you correctly, in your experience it is worth setting the number of builders to 100% of the virtual cores instead of the 75% suggested in the man page, and adjusting the number of jobs to the maximum that prevents, or at least minimizes, swapping. Is this understanding correct?

Kindest regards,

M
 
Please search this forum for the builders/jobs suggestions by John Marino, the author of synth. IIRC he explained very well what to use for a given number of CPUs and amount of RAM.
 
Hi tankist02,

does my original post, citing from the man page, not imply that I have already searched? I admit that, given the long introduction thread and its follow-up, I might have missed the information, hence my request for help from people who have some experience. So, what added value does your reply have?

Kindest regards,

M
 
I looked at the long thread where John Marino introduced synth. There is indeed discussion there about the number of parallel jobs. But there is even more discussion about the heavy use of swap that happens if you have too many jobs (and swapping makes things run very slowly).

As I said above, my experience (which fundamentally says: start at two compiles per virtual CPU and work your way up until you reach diminishing returns) comes from machines with at least 64 GB of memory, often much more. On those machines, running a dozen or two dozen compiles in parallel will not swap, so memory restrictions are simply not an issue; the goal on those machines is to maximize use of the CPU, which is the ultimate bottleneck there. On low-memory machines (anything with a single-digit number of GB), the answer has to be tempered, though: once you start swapping significantly, additional compile jobs will hurt, not help. So watch the swap.

And using a RAM-based /tmp file system should be a pretty obvious requirement.
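If you want to verify that tmpfs is actually in play, a plain mount listing is enough (standard FreeBSD, nothing synth-specific); if I recall correctly, synth's own tmpfs usage is toggled in synth configure:

Code:
# List currently mounted tmpfs file systems:
mount | grep tmpfs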
 
Hi ralphbsz,

thank you for the response. I think it confirms the initial strategy: set the number of builders to the number of virtual cores and adjust the number of jobs so that swapping is avoided, or at least minimal. The deleterious effect of swapping has been mentioned in several references I found.
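Concretely, my plan is to run synth configure and set the two values quoted from the man page above roughly like this, lowering the jobs value if swap starts being used (the numbers are just my starting point for this machine, not a recommendation from the documentation):

Code:
synth configure
#   Num. concurrent builders : 4   (one per virtual core)
#   Max. jobs per builder    : 2   (raise only while swap stays unused)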

Kindest regards,

M
 