Solved: how to properly buildworld in 2024?

I have a script that I run about once a day that builds current, stable/14, and stable/13; I build current for arm64 as well. Another script shows me the git log since the last commit at which each of these three branches was built locally, to keep an eye on what is changing. Anyway, here is a brief log:
# mkw curr 14 13 -A aarch64 curr &
### when it is done:
# grep 'built in' ~/a*64/*/err
/root/aarch64/current/err:>>> World built in 119 seconds, ncpu: 16, make -j18
/root/aarch64/current/err:>>> Kernel(s) GENERIC-NODEBUG built in 244 seconds, ncpu: 16, make -j18
/root/amd64/current/err:>>> World built in 129 seconds, ncpu: 16, make -j18
/root/amd64/current/err:>>> Kernel(s) GENERIC built in 300 seconds, ncpu: 16, make -j18
/root/amd64/stable/err:>>> World built in 89 seconds, ncpu: 16, make -j18
/root/amd64/stable/err:>>> Kernel(s) GENERIC built in 114 seconds, ncpu: 16, make -j18
/root/amd64/stable13/err:>>> World built in 135 seconds, ncpu: 16, make -j18
/root/amd64/stable13/err:>>> Kernel(s) GENERIC built in 48 seconds, ncpu: 16, make -j18

This build machine is a 5-year-old Ryzen machine, with WD "Gold" disks.

For every successful build I install the kernel and world on at least the VMs and the Pi 4, but may also do so on the x86-64 machines (all except the build machine, which gets updated about monthly).
Have you considered only building if changes are present? Though there are often multiple commits per day, some days don't get any. If there are no changes, you can skip the build and install, saving time and reducing flash wear if, for example, the Raspberry Pi boots from a memory card. You could monitor git log for changes; the most recent commit is supposed to come first, but the dates recorded on commits do not always stay chronological. Maybe `git show` would make for a more efficient workflow?
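A minimal sketch of that idea, assuming a git checkout and a stamp file recording the last-built commit (the function name and paths are hypothetical, not from the post):

```shell
#!/bin/sh
# Hypothetical helper: run a build command only when HEAD has moved since
# the last successful build, tracked via a stamp file.
build_if_changed() {
    repo=$1; stamp=$2; shift 2
    head=$(git -C "$repo" rev-parse HEAD) || return 1
    if [ -f "$stamp" ] && [ "$(cat "$stamp")" = "$head" ]; then
        return 0                        # no new commits: skip the build
    fi
    "$@" && echo "$head" > "$stamp"     # record HEAD only on success
}
```

Used as, for example, `build_if_changed /usr/src /var/db/src-built.stamp make -C /usr/src -j18 buildworld` (paths are illustrative).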
 
I was asking about all of them, though I think I mainly used to use it with /usr/src, and I wondered what happened. Today I found out, but that makes the partial support in make update seem inconsistent. Since running git manually works to update src, doc, and ports, I would recommend that over `make update`, so only one tool needs to be learned and used everywhere, unless support comes back.
 
If you are using zfs, you could use beinstall.sh script, which will install the new kernel and world in a new boot environment. This is what I do:

1. cd /usr/src
2. make buildworld
3. make buildkernel
4. tools/build/beinstall.sh

This will install the kernel and world, merge config files, and update packages in a new boot environment. If it fails, it deletes the boot environment, leaving your current environment untouched.
5. shutdown -r now
6. make -DBATCH_DELETE_OLD_FILES delete-old
7. make -DBATCH_DELETE_OLD_FILES delete-old-libs
 
Noob question: Is it possible to build a fresh v14.1 system to a fresh disk?

I have lots of ESXi VM resources, and no time constraints.
My thoughts are about a fresh build of FBSD 14.1, stripped down to only ZFS and SAMBA for use as my private LAN file server for Windows clients.
 
Noob question: Is it possible to build a fresh v14.1 system to a fresh disk?

I have lots of ESXi VM resources, and no time constraints.
My thoughts are about a fresh build of FBSD 14.1, stripped down to only ZFS and SAMBA for use as my private LAN file server for Windows clients.
I don't see why not... if you can install 13.2-RELEASE on an ESXi VM using directions you find on the Internet, 14.1 should not be a problem with the same directions. And the Handbook actually has pretty good, usable directions that cover installation on ESXi...
 
My thoughts are about a fresh build of FBSD 14.1, stripped down to only ZFS and SAMBA
As you have the time and the resources, build a custom VMware image using the release(7) scripts. You can rip out everything you don't need from the base (src.conf(5)) and the kernel, and add Samba to the image, ready to go.
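For the stripping-down part, a hedged sketch of what such a src.conf might contain; the knob names come from src.conf(5), but which ones you can safely live without depends on the server's role:

```
# /etc/src.conf -- example knobs only; verify against src.conf(5) for your release
WITHOUT_SENDMAIL=yes
WITHOUT_TESTS=yes
WITHOUT_LLDB=yes
WITHOUT_DEBUG_FILES=yes
```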
 
[...]

and here is where I don't do anything:
delete-old and delete-old-libs
because I have to type 'y' and press Enter for every file
I can't remember the reference, but this seems to work:
yes | make delete-old
yes | make delete-old-libs
 
A quick addition that may be useful to others. Those on ZFS have a faster (and still fool-proof) method of cleaning world:
1. do your last make cleanworld ever on this system
2. remove any remainders (possible when building e.g. i386 on a default amd64 build etc): rm -rf /usr/obj/*
3. create a snapshot of your /usr/obj dataset, in my case: zfs snapshot r/usr/obj@clean
4. from now on, make cleanworld is replaced by: zfs rollback r/usr/obj@clean
 
A quick addition that may be useful to others. Those on ZFS have a faster (and still fool-proof) method of cleaning world:
1. do your last make cleanworld ever on this system
2. remove any remainders (possible when building e.g. i386 on a default amd64 build etc): rm -rf /usr/obj/*
3. create a snapshot of your /usr/obj dataset, in my case: zfs snapshot r/usr/obj@clean
4. from now on, make cleanworld is replaced by: zfs rollback r/usr/obj@clean
Isn't there a lot more time and overhead with rolling back a snapshot then there is with destroy+create a new dataset?
 
Isn't there a lot more time and overhead with rolling back a snapshot then there is with destroy+create a new dataset?
Code:
# zfs list -t all -r r/usr/obj
NAME              USED  AVAIL  REFER  MOUNTPOINT
r/usr/obj        6.87G   427G  6.87G  /usr/obj
r/usr/obj@clean    64K      -    96K  -
# time zfs rollback r/usr/obj@clean
0.012u 0.000s 0:00.81 1.2%      68+136k 7+0io 0pf+0w
It's basically next to immediate. Destroying and recreating the dataset is more cumbersome, assuming certain options are not inherited, e.g. there is no need to try to compress data that is known to compress very badly or not at all. No matter how good the compression heuristic is, a simple boolean check will always be faster.
Code:
# zfs get all r/usr/obj | grep local
r/usr/obj  compression           off                       local
# zfs get all r/usr/obj | grep compress
r/usr/obj  compressratio         1.00x                     -
r/usr/obj  compression           off                       local
r/usr/obj  refcompressratio      1.00x                     -
 
You can also do a `mv` on the contents of /usr/obj and remove the old one with a background job in parallel to a new build.
I've certainly used this technique myself for some large tasks. If /usr/obj is its own dataset, then you need to move its contents to a subfolder within it and delete that, so you don't have to wait for a cross-dataset move (which runs as a copy+delete). It seems the real work ZFS has to do for either a destroy or a rollback to an empty dataset happens in the background: the initiating command, and even a `zpool sync`, return quickly while the drive is still chugging for many seconds afterwards.
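A sketch of that trick, assuming /usr/obj is its own dataset; the function name is made up, and the glob deliberately skips dotfiles, which is where the trash directory hides:

```shell
#!/bin/sh
# Hypothetical helper: empty a directory quickly by renaming its contents
# into a hidden subdirectory (same dataset, so mv is a cheap rename) and
# deleting that subdirectory in the background while the next build runs.
fast_clean() {
    objdir=$1
    trash="$objdir/.old.$$"
    mkdir -p "$trash"
    for f in "$objdir"/*; do
        [ -e "$f" ] || continue     # glob matched nothing
        mv "$f" "$trash"/
    done
    rm -rf "$trash" &               # deletion overlaps the new build
}
```

For example, `fast_clean /usr/obj` followed immediately by the next buildworld.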
 
Code:
# zfs list -t all -r r/usr/obj
NAME              USED  AVAIL  REFER  MOUNTPOINT
r/usr/obj        6.87G   427G  6.87G  /usr/obj
r/usr/obj@clean    64K      -    96K  -
# time zfs rollback r/usr/obj@clean
0.012u 0.000s 0:00.81 1.2%      68+136k 7+0io 0pf+0w
It's basically next to immediate. Destroying and recreating the dataset is more cumbersome, assuming certain options are not inherited, e.g. there is no need to try to compress data that is known to compress very badly or not at all. No matter how good the compression heuristic is, a simple boolean check will always be faster.
Code:
# zfs get all r/usr/obj | grep local
r/usr/obj  compression           off                       local
# zfs get all r/usr/obj | grep compress
r/usr/obj  compressratio         1.00x                     -
r/usr/obj  compression           off                       local
r/usr/obj  refcompressratio      1.00x                     -
With a build from the same stable/14, a reboot, and then timing the ZFS commands:
time zfs rollback puddle3/91/usr/obj@clean
1.59 real 0.00 user 0.00 sys
time zfs destroy puddle3/91/usr/obj
1.62 real 0.00 user 0.00 sys

so I'd put that within the margin of measurement error. I'd presume a move is still faster, as long as it stays within the same dataset. The disk was under load for many seconds afterwards, so the work takes time, but I haven't found a way other than a stopwatch to measure it; even my usual `zpool sync` returns while work is still being done. Does anyone have a better way to measure the time it takes?
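One possible answer, assuming OpenZFS 2.x (FreeBSD 13 and later): `zpool wait` blocks until a named background activity finishes, so timing it should capture the deferred freeing (pool name taken from the post above):

```
# sketch: block until background freeing completes, and time it
time zpool wait -t free puddle3
```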
As for compression, turning it off seems detrimental even for my "uncompressible" /usr/obj:
Code:
# zfs get all usr/obj | grep com
usr/obj  compressratio     2.38x  -
usr/obj  compression       on     inherited from puddle3
usr/obj  refcompressratio  2.38x  -
# du -hd0 /usr/obj/; du -Ahd0 /usr/obj/
5.8G    /usr/obj/
 13G    /usr/obj/

Which brings up the question of what happens if we allocate more relevant resources...
If a little is good:
Code:
# zfs get all usr/obj | grep com
usr/obj  compressratio     3.42x  -
usr/obj  compression       zstd   local
usr/obj  refcompressratio  3.42x  -
# du -hd0 /usr/obj/
4.1G    /usr/obj/

more is better:
Code:
usr/obj  compressratio     3.68x    -
usr/obj  compression       zstd-12  local
usr/obj  refcompressratio  3.68x    -
3.8G    /usr/obj/

and too much is just right:
Code:
usr/obj  compressratio     3.78x    -
usr/obj  compression       zstd-18  local
usr/obj  refcompressratio  3.78x    -
3.7G    /usr/obj/

A hard drive is a common bottleneck in a computer, and when compression increases its effective throughput 2.4x (in raw data throughput; the gain is a little less on SSDs and much less on magnetic media due to seek times) for very little additional CPU load, it's hard to justify turning it off, and hard for the savings of a boolean check to ever make up the difference. Obviously compression isn't free: at the highest settings my writes bottleneck on the CPU for anything except miserably slow USB sticks, but compiling has never shown itself to be fully bottlenecked on writes, either to my slow magnetic drive or with an excessively high ZFS compression setting (a balance is reached before that point).

Even on a high-speed magnetic hard drive, the impact should be tested to find if/when it negatively affects the build job or the rest of the system. There is normally enough idle CPU left over to handle "some" compression (though requesting more jobs than CPUs further reduces idle CPU), decompression is much faster than compression, and ARC holds more data when the on-disk compression ratio is higher, at the cost of the CPU+RAM needed to decompress it.

Compression is so fast with lz4 that many users don't have fast enough disks to justify turning it off for most datasets, even if there weren't an early abort, which there is.

If performance were the goal, you would keep /usr/obj, use WITH_META_MODE, and use ccache, instead of restarting the build from scratch for both rebuilds and upgrades. If your concern is space, then even compressing /usr/obj with paq8 compressors (better ratios than ZFS's zstd-19 can achieve) is still not as good as deleting the data when not in use, so deleting is likely the best choice.
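For reference, a sketch of how those two might be enabled; file locations follow build(7) and src.conf(5), and this is a starting point, not the poster's exact setup:

```
# /etc/src-env.conf -- meta mode needs the filemon kernel module loaded
WITH_META_MODE=yes

# /etc/src.conf -- build with ccache (install devel/ccache first)
WITH_CCACHE_BUILD=yes
```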
 
As for ZFS properties being cumbersome to set, they can be given on the creation command and changed with a later `zfs set`. Tedium is easy to overcome by putting the command in a shell script that performs that and the other build steps all in one. If you insist that the properties must be inherited to be easy enough to accept, you can make an unused/unmounted dataset with the desired properties and create /usr/obj as a child that inherits from it. There are other settings worth considering besides compression, too.
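A sketch of the creation-time form, using the dataset name from the earlier posts; the specific properties are examples only:

```
# set properties at creation time...
zfs create -o compression=zstd -o atime=off r/usr/obj
# ...or change them later
zfs set compression=zstd-12 r/usr/obj
```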
 