Local (intranet) cache for distfiles & packages

This should be simple, but I could not find the answer.

I would like to set up a central pool on my intranet for downloaded distfiles and for any packages previously compiled or cross-compiled for slower i386 clients. In Debian, a tool like apt-cacher handled this nicely: you simply prepended
Code:
servername:port/
to the beginning of the source URL. If a package was not in the pool, apt-cacher would fetch it and place it there. What would be the FreeBSD way to do this?
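For comparison, the Debian mechanism described above looks like this in a client's sources.list (the host name is a made-up example; 3142 is apt-cacher's usual default port):

```sh
# /etc/apt/sources.list on a Debian client behind apt-cacher
# "cacheserver" is a hypothetical intranet host
deb http://cacheserver:3142/ftp.debian.org/debian stable main
```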
 
Breaking the steps down simplifies the question:

1. It's easy enough to point a client at the pkg repository on the intranet server (with PACKAGESITE set in the environment) and to tell that client to prefer packages over building from ports (using pkg_add or portmaster -P). The server is already running an HTTP service, so pointing it at the relevant directories is also easy.

2. If the package is not in PKGREPOSITORY, # portmaster -Pg will create a package for the port being built, so that the package can then be placed on the server. Again, easy enough.

3. When a client ends up building a port because it was missing on the intranet server, how can the client then send the downloaded distfile and the built package to their respective directories? (HTTP or FTP would require read/write access.)

4. Here is the tougher problem: how should the directory structure on the server be set up and managed, considering that there will be two different architectures (i386 + amd64) for the distfiles, and that the resulting packages/binaries are in the same predicament? It seems the management of the pool should be left to a version tracker or something like it (lightweight solutions preferred).
 
The simplest way is to NFS-export /usr/ports/. You can export it read-only; clients can still build ports but may need WRKDIRPREFIX set to a writable filesystem.
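A minimal sketch of that setup (the host name, network range, and scratch path are all assumptions):

```sh
# --- server: /etc/exports ---
# Export the ports tree read-only to the intranet
/usr/ports -ro -network 192.168.1.0 -mask 255.255.255.0

# --- client: mount the tree, then build in local scratch space ---
mount_nfs -o ro server:/usr/ports /usr/ports
echo 'WRKDIRPREFIX=/var/tmp/ports' >> /etc/make.conf
```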

Another option is to use Apache (or another web server) and export it that way. Clients can then set PACKAGESITE to the web server's URL.

When you build packages (with make package, portmaster, or portupgrade), just make sure /usr/ports/packages/ exists; the rest of the directory structure will be filled in correctly as needed. For different architectures you obviously need separate package directories. Have a look at the official repositories for ideas about the structure.

It could be as simple as http://www.example.com/FreeBSD/i386/usr/ports/packages/ and http://www.example.com/FreeBSD/amd64/usr/ports/packages/.
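With a layout like that, each client could derive its own URL from its architecture. A small sketch, assuming the example host and directory layout above (Latest/ is the conventional PACKAGESITE target):

```shell
# Build the per-architecture PACKAGESITE URL from this machine's arch.
# The host "www.example.com" and the layout are assumptions from above.
ARCH=$(uname -p)   # i386 or amd64 on FreeBSD
PACKAGESITE="http://www.example.com/FreeBSD/${ARCH}/usr/ports/packages/Latest/"
export PACKAGESITE
echo "$PACKAGESITE"
```

pkg_add -r would then fetch from that URL instead of the official mirrors.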
 
The simplest way is to NFS export... Another option is to use Apache or another webserver
The server is running an http service
NFS seems like an additional and unnecessary "resource requester" on the client side. HTTP/FTP is already running for other purposes anyway, and that is what I meant by setting PACKAGESITE.

For any package not found on the intranet HTTP site, the client will go to the outside PACKAGESITE, download the distfile, and build the package. What then? How do the package and its distfile get included in the central repository? The client could upload them to a shared, writable folder on the server, but then the admin has to place them in their respective repository directories manually. This is where the proxy comes into use, since any downloaded distfile would then pass through the server's central repository.

And what about distfiles common to both architectures, like flashplayer or any other 32-bit-only port? They will end up being downloaded twice, or will require admin intervention.
 
You need to define your purpose, then select the technology that suits you best:

1. Do you need to build everything on a centralized system?

2. Do you need a centralized system to store distribution files, while building everything on each particular system?

3. Do you need a centralized system to store packages?

In FreeBSD, the distfile and the package file are two completely different things. The distfile is (usually) the source code of the package. The package is built from the code in the distfile, after applying FreeBSD patches and your selected build options. You may build different packages (with different options, for example) from the same distfile.
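The distinction shows up directly in the ports make targets; a quick sketch (the port www/lynx is just an example):

```sh
# Download only the distfile (upstream source tarball) into /usr/ports/distfiles
make -C /usr/ports/www/lynx fetch

# Apply the FreeBSD patches, build with the selected options, and create a
# binary package under /usr/ports/packages
make -C /usr/ports/www/lynx package
```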
 
Beeblebrox said:
3. When a client ends up building a port because it was missing on the intranet server, how can the client then send the downloaded distfile and the built package to their respective directories? (HTTP or FTP would require read/write access.)

4. Here is the tougher problem: how should the directory structure on the server be set up and managed, considering that there will be two different architectures (i386 + amd64) for the distfiles, and that the resulting packages/binaries are in the same predicament? It seems the management of the pool should be left to a version tracker or something like it (lightweight solutions preferred).
For distfile caching, that's fairly easy: squid. If that's not good enough, you need to set up a distfile FTP mirror and configure it in your /etc/make.conf. The latter is much more bandwidth-hungry unless you have enough machines that benefit from it (same as APT mirrors).
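A sketch of the squid approach (the cache sizes, paths, and host name are assumptions): squid caches the tarballs that the ports system downloads, and each client routes fetch(1) through the proxy via FETCH_ENV in make.conf.

```sh
# /usr/local/etc/squid/squid.conf (fragment)
# Raise the object-size limit so large source tarballs get cached
maximum_object_size 256 MB
cache_dir ufs /var/squid/cache 10240 16 256

# /etc/make.conf on each client: route the ports' fetch(1) through the proxy
FETCH_ENV=HTTP_PROXY=http://cacheserver:3128 FTP_PROXY=http://cacheserver:3128
```

Because every client fetches through the same proxy, a distfile is only downloaded from the outside world once, which also answers the duplicate-download concern for architecture-independent distfiles.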

As for caching package builds, I've been toying with this recently, but there are two hurdles I haven't crossed yet:

1. The ports system contains some hackery to work around a design shortfall in pkg_add's "-r" feature: it does not follow dependencies recursively/hierarchically. Instead it relies on each port creating a complete, correctly ordered, single-level-deep dependency list, and for reasons unknown, if packages are built with "make package" this list isn't always created correctly, which breaks pkg_add -r. Hard to explain without seeing it happen.

2. Handling i386/amd64 package differentiation is easy; that's already handled by pkg_add's "-r" feature. What gets slightly complicated when you start building various custom systems is that ports built on a single system link against base libraries present on that build system, which may or may not exist on your other systems. Classic example: Kerberos. If you build your systems with Kerberos disabled, you have probably noticed that some packages from the official mirrors don't work. I don't think there is a reasonable fix for this, though.

I have a really neat build tool that I use for i386 and amd64 package builds. It fetches and updates packages from the official mirrors if no custom build options are defined, builds locally otherwise, and caches the resulting packages locally for future runs (updates). I mostly use it for keeping my NanoBSD systems up to date at the moment. Not much point taking it further until (1) is resolved, though. :/

If you've got some coding skills, there are some really cool things that can be done to improve FreeBSD's binary packaging system. Take a look at pkg_add source sometime... :)
 
Better late than never

@ aragon: Sorry for the late response, I was off-planet for some time...

Your comment re squid: my idea exactly, and thank you for reading between the lines of my post. In fact, I am also questioning privoxy (problem posted here), and I am trying to learn why it is not possible to combine privacy + package cache + web cache functionality into one app => squid. It seems to me this would solve a number of problems, if you consider the total on-disk cache usage from the multiple browsers many of us use.

Coding experience in C: none. Why? So much to say, and so little of it of significance.

Thanks again for the post : )
 