Building the entire pkg repo with Poudriere

True, but I assume it will run 12 build jails simultaneously. I don't have a Ryzen 5600X yet. My i7-4785T is a 4-core, but it reports 8 CPUs in htop and runs 8 build jails comfortably.
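For reference, the number of simultaneous builders can be capped in poudriere.conf(5); a minimal sketch (option names as documented in the sample config shipped with poudriere, values here just illustrative for a 4-core/8-thread box):
Code:
# /usr/local/etc/poudriere.conf (excerpt)
# How many ports to build in parallel (one builder jail each);
# defaults to the number of CPUs the system reports.
PARALLEL_JOBS=4
# Optionally allow selected heavy ports to use parallel make jobs themselves:
ALLOW_MAKE_JOBS_PACKAGES="pkg ccache llvm*"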
It has never been clear to me: what are "virtual cores"? Some circuit state doing fast context switching to make it look like there are more cores than physical ones? Processors should still have a glass window like long ago, so you can check it with a magnifying glass. What are we paying for? 😁
 

It's hyperthreading: basically two front ends feeding one back end of a core. It brings total throughput to about 118% of using just one front end.
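On FreeBSD you can read the physical/logical split out directly; a quick check (sysctl names from memory, so verify against your release; the output shown is what I'd expect on a 4-core/8-thread part, not captured from a real run):
Code:
# sysctl hw.ncpu kern.smp.cores kern.smp.threads_per_core
hw.ncpu: 8
kern.smp.cores: 4
kern.smp.threads_per_core: 2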
 
If I remember correctly, the CPU checks whether it can execute two assembly instructions in parallel.
And then there is also branch prediction.
There is an architecture, and there is a micro-architecture.
The CPU behaves like a "small operating system", with microcode.
 
My knowledge at this level doesn't go beyond the Intel 8086. I can imagine a construction that combines short binary values from different threads into one long modern register, to save a separate instruction.
 
It's not that. CPUs evolved: as you know, first the 80286 with protected memory, then the 80386 with virtual memory.
It's not about saving one register now; CPUs examine the assembly code they must process, then check whether they can do those things in parallel.
No human intervention is needed to save a register.
[ LDA was an instruction on the Commodore 64's MOS 6502. ]
 
I want to build the entire pkg repo using Poudriere.
If there are no custom port configurations, pkg-fetch(8) [2] could be used instead to get a complete official FreeBSD package repository in considerably less time than building the ports tree, which takes days or weeks [1] and will almost certainly involve build failures that require manual intervention, prolonging the process even more.

All the ports' distfiles could be fetched on a regular basis (after updating the ports tree), to keep building packages available as a fallback plan.
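For the distfile fetching, a sketch using the stock ports framework targets (fetch and fetch-recursive are standard targets; the whole-tree run needs a lot of time and disk):
Code:
# git -C /usr/ports pull
# fetch distfiles for one port plus all of its dependencies:
# make -C /usr/ports/editors/vim fetch-recursive
# or fetch distfiles for the entire tree (huge):
# cd /usr/ports && make fetch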

[1]
I couldn't find much data on the official package-building cluster hardware; the one source I could find is "https://freebsdfoundation.org/our-work/infrastructure-support/", which mentions 48-core Cavium ThunderX machines for the arm64 package sets (gcc compiler benchmarks here).

If it takes 120:49:45 hours for the (assumed) ThunderX to build 36798 packages in this example, I can imagine that on an
i7-4785T (35 W TDP, 2.2 GHz, 4-core/8-thread)
it takes a lot longer.


[2]
Example "all-at-once":
Code:
# pkg fetch -r FreeBSD-ports -o /packages/FreeBSD:15:amd64 -a
...
The following packages will be fetched:

New packages to be FETCHED:
        0ad: 0.28.0 (1 GiB: 0.92% of the 159 GiB to download)
        0d1n: 3.8_1 (245 KiB: 0.00% of the 159 GiB to download)
        0verkill: 0.16_2 (243 KiB: 0.00% of the 159 GiB to download)
        1password-client: 1.12.4 (3 MiB: 0.00% of the 159 GiB to download)
        1password-client2: 2.32.1 (10 MiB: 0.01% of the 159 GiB to download)
        1password-client2-beta: 2.33.0.b.02 (10 MiB: 0.01% of the 159 GiB to download)
        2048: 0.9.1_1 (11 KiB: 0.00% of the 159 GiB to download)
        2bsd-diff: 2.11.1_2 (22 KiB: 0.00% of the 159 GiB to download)
        ...
        zziplib: 0.13.80_1 (106 KiB: 0.00% of the 159 GiB to download)
        zzuf: 0.13_2 (129 KiB: 0.00% of the 159 GiB to download)

Number of packages to be fetched: 36871

The process will require 159 GiB more space.
159 GiB to be downloaded.

Proceed with fetching packages? [y/N]:

Example "in-groups": in case of download interruptions, this is easier to resume:
Code:
  # pkg fetch -r FreeBSD-ports -o /packages/FreeBSD:15:amd64  -g '0*'
...
New packages to be FETCHED:
        0ad: 0.28.0 (1 GiB: 99.97% of the 1 GiB to download)
        0d1n: 3.8_1 (245 KiB: 0.02% of the 1 GiB to download)
        0verkill: 0.16_2 (243 KiB: 0.02% of the 1 GiB to download)

Number of packages to be fetched: 3

The process will require 1 GiB more space.
1 GiB to be downloaded.


# pkg fetch -r FreeBSD-ports -o /packages/FreeBSD:15:amd64  -g '1*'

New packages to be FETCHED:
        1password-client: 1.12.4 (3 MiB: 12.99% of the 24 MiB to download)
        1password-client2: 2.32.1 (10 MiB: 43.47% of the 24 MiB to download)
        1password-client2-beta: 2.33.0.b.02 (10 MiB: 43.54% of the 24 MiB to download)

Number of packages to be fetched: 3

The process will require 24 MiB more space.
24 MiB to be downloaded.

etc.
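The "in-groups" approach could be scripted, e.g. looping over the leading character (a hypothetical sketch; the list would need to cover every first character that actually occurs in package names, including uppercase ones like "R-cran-*"):
Code:
#!/bin/sh
# fetch the repository in resumable slices, one leading character at a time
for p in 0 1 2 3 4 5 6 7 8 9 a b c d e f g h i j k l m \
         n o p q r s t u v w x y z; do
    pkg fetch -y -r FreeBSD-ports -o /packages/FreeBSD:15:amd64 -g "${p}*"
done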

A local repository for distribution via LAN or WAN can be created with pkg-repo(8)
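A sketch of that last step (the server name and config path on the client side are assumptions for illustration; signing is optional and omitted here):
Code:
# generate the repository catalogue in place:
# pkg repo /packages/FreeBSD:15:amd64

Clients then point at it with a repo config, e.g. /usr/local/etc/pkg/repos/local.conf:

local: {
    url: "http://pkgserver.example.lan/FreeBSD:15:amd64",
    enabled: yes
}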
 
Surprisingly, the most important question hasn't been asked yet:

WHY?

Usually you only build the packages you have actually installed in your infrastructure (resp. on the hosts using your repository). Building *all* ports is just a waste of resources and energy; and let's not ignore the fact that it will take literally forever (with *A LOT* of build failures) on that puny desktop system.
Bloatware like rust, so-called 'modern' browsers, or stuff like electron, iridium etc. will take multiple days until they fail due to memory exhaustion and/or are killed as runaway builds. It's completely pointless to (attempt to) build those if they aren't even needed.
 
One must avoid a few ports, as I mentioned: electron, zed, chromium.
Also, when one base port upgrades from version X to X+1, it can trigger a rebuild of 1000 dependent ports. This happened to me, even for a small security fix. So one ends up in an endless loop.
 