Re-use downloads for freebsd-update

Dear community,
usually an upgrade of FreeBSD using freebsd-update(8) is nothing to worry about. Just run fetch and install as documented, take care of the packages, and that's it. But there might be a way to speed things up and save bandwidth if more than one system is to be updated.

As far as I remember, the downloaded files end up in /var/db/freebsd-update. And if I remember correctly, it is safe to delete everything there from time to time, so the mechanism seems to be fairly robust. This leads to an idea - if my assumptions are correct.

Depending on the internet connection, the download can be time-consuming. If I have upgraded one system from, for example, FreeBSD 11.4-RELEASE to FreeBSD 12.1-RELEASE, a lot of files have been downloaded. Can I simply copy all files from /var/db/freebsd-update to a different machine with an empty /var/db/freebsd-update so that no additional download is needed? Or will this end in a mess?
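If copying does work, this is roughly what I have in mind - a sketch only, assuming the cache really is just plain files that can be moved as-is. "otherhost" is a placeholder for the machine with the empty /var/db/freebsd-update; run as root.

```shell
#!/bin/sh
# Sketch: replicate the freebsd-update download cache to another machine
# in one tar-over-ssh pipe.  Assumption: the cache is plain files and
# safe to copy; "otherhost" below is a made-up host name.
copy_update_cache() {
    target=$1
    # Pack the whole cache and unpack it on the target in a single pipe.
    tar -C /var/db -cf - freebsd-update | ssh "$target" 'tar -C /var/db -xf -'
}

# Usage:
#   copy_update_cache root@otherhost
```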
Dear Zvoni,
thank you for the hint. This sounds more systematic than copying tarballs of directories around. From my understanding the setup tool of the server will fetch the DVD image. According to the documentation, the DVD ISO image contains
-dvd1.iso: This file contains all of the files needed to install FreeBSD, its source, and the Ports Collection. 
It also contains a set of popular binary packages for installing a window manager and some 
applications so that a complete system can be installed from media without requiring a 
connection to the Internet. This file should be burned to a DVD using a DVD burning application.
For new installations this sounds perfect. For now I have two questions.
  1. Does a freebsd-update upgrade only require files which are contained in the ISO?
  2. Sooner or later binary patches will be provided. According to the documentation it is possible to upload a new base and kernel, compiled somewhere by the local admin, to the FreeBSD-update server. But is it also possible to download the patches as freebsd-update fetch does and distribute them via the FreeBSD-update server?
The first point seems to be necessary if the FreeBSD update server is supposed to support upgrades. The second point would be more of a nice-to-have, because the bandwidth required to download only the patches is much smaller.
I have found a related posting in the freebsd-questions mailing list from 2018.
References: <>
Message-ID: <>
Date: Fri, 10 Aug 2018 11:46:56 +0200
The message leads to an older thread.
SirDice proposes a caching proxy and shows the configuration of www/apache.
A second link leads to a page dated 2016. A little bit of scrolling shows some ideas about pre-seeding freebsd-update.
One statement is
freebsd-update is said to use some kind of streamed http transfer which plays badly with squid proxies!
Is this still correct or applicable for freebsd-update(8)?
Below is the part related to my question.
Preseeding freebsd-update
What you can do to speed up your major updates is to use a script that uses the freebsd ISO to pre-seed the update directory. If freebsd-update finds all the right files in its cache dir, it will happily use them, which can save you many hours. Extracting the files from the loopback mounted iso takes like a minute with this script.

Found at:
The header of the script says
# freebsd-update is a clever script that downloads a lot of bsdiff
# patches and whole files when patches are not suitable. The result of
# this process is a collection of files in
# /var/db/freebsd-update/files. If the files already exist, it will
# not fetch them again.
Then simply copying all files from /var/db/freebsd-update/files should be the solution. Is this OK or will it mess things up?
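For reference, the seeding idea from that script can be sketched like this. The naming scheme is my assumption based on the referenced script: the cache stores each file under the SHA256 hash of its uncompressed contents, with a .gz suffix. The /mnt/extracted path in the usage note is a made-up example.

```shell
#!/bin/sh
# seed_file <file> <cachedir>: store one file the way freebsd-update
# expects it (assumption: cache entries are named <sha256-of-contents>.gz).
seed_file() {
    f=$1; cache=$2
    # sha256(1) exists on FreeBSD, sha256sum(1) elsewhere.
    hash=$(sha256 -q "$f" 2>/dev/null || sha256sum "$f" | cut -d' ' -f1)
    gzip -c "$f" > "$cache/$hash.gz"
}

# Usage, after extracting the distribution sets from the mounted ISO:
#   for f in $(find /mnt/extracted -type f); do
#       seed_file "$f" /var/db/freebsd-update/files
#   done
```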
At the office we've a caching proxy (www/squid) and run updates through it. On FreeBSD, in ~/.cshrc:
setenv http_proxy http://proxy:3128/
setenv HTTP_PROXY http://proxy:3128/
On Debian systems, in /etc/apt/apt.conf.d/90proxy:
Acquire::http::Proxy "http://proxy:3128/";
Acquire::https::Proxy "http://proxy:3128/";
But I believe updates have to run via HTTP (not HTTPS) in order to be cacheable.
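A quick way to check whether a given URL actually got cached - a little helper of my own, not from the squid docs; it relies on squid's default X-Cache response header not being suppressed, and "proxy:3128" is the proxy name from above:

```shell
#!/bin/sh
# check_cache <proxy> <url>: fetch <url> through <proxy> and print
# squid's X-Cache header - "HIT" means it was served from the cache.
check_cache() {
    curl -s -o /dev/null -D - -x "$1" "$2" | tr -d '\r' | grep -i '^x-cache:'
}

# Usage (HTTP, since HTTPS cannot be cached here):
#   check_cache http://proxy:3128 http://update.freebsd.org/
```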
Dear Bobi B.,
it seems to work, but there have been no big updates so far - just the packagesite.tgz and meta.txz. For the record, here is what I have configured:
The acls are needed in both the server's and the clients' squid configs and describe which patterns are to be cached. I guess the update pattern could be skipped because I have seen txz files only.
acl vuln url_regex vuln\.xml\.bz2$
acl pkgs url_regex [a-zA-Z0-9\/\._\-]+\.txz$
acl updt url_regex ^http://update[0-9]*\.freebsd\.org.*
This tells the server to allow caching of the items above but to skip the rest.
cache allow pkgs
cache allow vuln
cache allow updt
cache deny all
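To make sure the patterns do what I expect, they can be checked outside squid with grep -E (squid's url_regex uses POSIX extended regular expressions, as far as I know). The patterns below are the same as above, written without the unnecessary backslashes; the sample URLs are made up.

```shell
#!/bin/sh
# Sanity-check the acl regexes with grep -E before reloading squid.
pkgs='[a-zA-Z0-9/._-]+\.txz$'
updt='^http://update[0-9]*\.freebsd\.org.*'

# Made-up sample URLs that should match:
echo 'http://pkg.freebsd.org/All/curl-8.0.1.txz' | grep -qE "$pkgs" && echo 'pkgs: match'
echo 'http://update4.freebsd.org/12.1-RELEASE/i386/f.gz' | grep -qE "$updt" && echo 'updt: match'
```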
This part is about refresh and expiry: the data is considered fresh for 1440 minutes (60 min x 24 h), which is one day. ignore-private seems to be necessary; I am not sure why. I have also not fully understood the effect of the percentage and how data is expired. The dot is the regular expression which matches everything; caching is limited by the acls.
refresh_pattern .   0   100%    1440  ignore-private
Finally the instructions on the client. First the path to the router is specified; this is taken if the update server is down. The update server listens on port 3128. The path is configured for the same items as in the first block.
cache_peer router-IP parent 80 0 no-query no-digest default
cache_peer update-server parent 3128 0 no-query no-digest
cache_peer_access update-server allow pkgs
cache_peer_access update-server allow vuln
cache_peer_access update-server allow updt
cache_peer_access update-server deny all
never_direct allow all
The following lines make traffic not matching the acls leave the proxy the normal way - this is the standard traffic.
always_direct deny pkgs
always_direct deny vuln
always_direct deny updt
always_direct allow all
To see what is happening I have made two small scripts. Apart from one line they are identical. They have only been tested in my environment with a few test files. The first one just lists the content of the cache.

for datei in $(find /var/squid/cache/ -mindepth 2 -type f); do
        address=$(xxd -s 80 -g 0 -c 256 "$datei" | head -n 1 | sed -e 's/.*\ http/http/' | sed -e 's/\.\..*//')
        echo "$address"
done
The second one purges the cache file by file. I think I will do that before huge updates to be sure that the cache is fully available.

for datei in $(find /var/squid/cache/ -mindepth 2 -type f); do
        address=$(xxd -s 80 -g 0 -c 256 "$datei" | head -n 1 | sed -e 's/.*\ http/http/' | sed -e 's/\.\..*//')
        echo "$address"
        squidclient -m PURGE "$address"
done

Nevertheless the real update server is still something to be checked - and so is the simple copying of files.