Airgapped FreeBSD hosts, how do we install stuff on them and upgrade them?

Hiya,

So, we have quite a few hosts that are on networks with no internet access. And of course I'd like to keep them updated.

We have the possibility of setting up an internal repository for this (one that can be allowed limited internet access); we already have those in place for Red Hat, AlmaLinux and Debian. But I haven't been able to figure out how to do a similar thing with FreeBSD.

We need to be able to install from ports (not necessarily from packages), and we need to be able to upgrade whole systems too, like when using freebsd-update.

Can anyone help me out? I haven't really found anything useful when searching.

TIA

/tony
 
I've not done this, but a lot of people here have experience setting up and running their own package repos using ports and poudriere: https://www.digitalocean.com/commun...m-to-create-packages-for-your-freebsd-servers

As for doing the same for base, I'd guess it's possible using a source tree; I just don't have the specifics on how.
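
From what I've read the initial poudriere setup is only a couple of commands; a rough sketch (the jail name, release version and ports tree name are just placeholder examples):
Code:
# create a build jail matching the release of the target hosts, plus a ports tree
poudriere jail -c -j builder -v 14.1-RELEASE
poudriere ports -c -p default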
Hi mer,

Yeah, I've already got that up and running. However, I don't think it'll be easy to keep track of which packages are needed on which systems, and I'll probably end up with terabytes of weird packages. And it doesn't solve the issue of updating the OS, as you mention.

/tony
 
Yeah, I've already got that up and running. However, I don't think it'll be easy to keep track of which packages are needed on which systems, and I'll probably end up with terabytes of weird packages.
That's the problem with your own repo; you need to build everything or keep track of what you need and then build only that. Keeping your repo source up to date is also your responsibility.
On each host served by the repo you can run pkg prime-list to find out what it has installed.

I think there are hooks in the ports building to say "build only these" or "build everything but these".
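Something along these lines, I believe (the jail/tree names and file paths below are just examples):
Code:
# on each host: list the packages that were installed on purpose, as port origins
# (same query the pkg prime-list alias uses, but printing origins for poudriere)
pkg query -e '%a = 0' '%o' > /tmp/$(hostname).origins

# on the build box: merge the per-host lists and build only those ports
# (flavored ports may need an @flavor suffix added by hand)
cat *.origins | sort -u > /usr/local/etc/poudriere.d/pkglist
poudriere bulk -j builder -p default -f /usr/local/etc/poudriere.d/pkglist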
 
freebsd-update(8) can be pushed through a caching proxy. This has several advantages: all systems get their updates from a single source on your network, only the first one actually downloads the updates from the internet, and subsequent systems get them from the local cache, which greatly speeds things up.

Example nginx.conf:
Code:
{...}
    proxy_cache_key "$scheme$request_method$host$request_uri";
    proxy_cache_path /var/cache/fbsd-update levels=1:2 keys_zone=fbsdupdate_cache:10m
                      max_size=5G inactive=14d use_temp_path=off;
{...}
    server {
      listen 192.168.1.1:80;
      server_name fbsd-update.example.com;

      root /var/cache/fbsd-update;

      access_log /var/log/nginx/proxy-access.log;

      location / {
        proxy_cache fbsdupdate_cache;
        proxy_cache_lock on;
        proxy_buffering on;
        proxy_http_version 1.1;
        proxy_cache_revalidate  on;
        proxy_cache_valid      200  7d;
        expires max;
        add_header X-Proxy-Cache $upstream_cache_status;

        proxy_pass http://update.freebsd.org;
      }
    }
I've changed /etc/freebsd-update.conf and set ServerName to fbsd-update.example.com (internal DNS points this to the nginx host).
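On the clients that boils down to a single line in /etc/freebsd-update.conf:
Code:
# /etc/freebsd-update.conf
# point freebsd-update at the caching proxy instead of update.FreeBSD.org
ServerName fbsd-update.example.com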
 
However, I don't think it'll be easy to keep track of which packages are needed on which systems
This directly contradicts a "secure" network. For one, you need to know exactly what's installed, where, and why. Second, standardize: all systems should have the same packages (for their role, of course), the same versions, the same configuration (standardize how applications are configured), etc. This is why building and setting up a local repository can help immensely, because you keep control over what gets updated, at what time, and with which configuration options enabled or disabled. And because there's only a single source, you can be sure everything has the same version.

Sure, it's not going to be easy to get this done, especially if this is a network that's been growing kind of 'organically' and people are used to configuring everything themselves. But you need to do this in order to get a better grip on the overall security and stability.

Finally, a small nitpick: these systems are not airgapped. They're networked.
 
This directly contradicts a "secure" network. For one, you need to know exactly what's installed, where, and why. Second, standardize: all systems should have the same packages (for their role, of course), the same versions, the same configuration (standardize how applications are configured), etc. This is why building and setting up a local repository can help immensely, because you keep control over what gets updated, at what time, and with which configuration options enabled or disabled. And because there's only a single source, you can be sure everything has the same version.

Sure, it's not going to be easy to get this done, especially if this is a network that's been growing kind of 'organically' and people are used to configuring everything themselves. But you need to do this in order to get a better grip on the overall security and stability.

Finally, a small nitpick: these systems are not airgapped. They're networked.
I agree, it kinda does. What I meant is that there's going to be some work figuring out which packages should be built. The hosts are going to be standardized; I think I just have to figure out a bulletproof way of getting poudriere to build only what's actually needed on the hosts.

And yes, you're absolutely right. They're networked but running in close-to-isolated networks.

Thanks for your help. This is my understanding so far:
1. Use a caching proxy for updating the OS.
2. Create a list of all needed packages and get poudriere to build them and put them in a local pkg repo.
3. Install packages only from the internal repo (a config sketch follows below).
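
For step 3 I'm thinking of a repo config along these lines (the hostname and package-set path are placeholders, and the stock FreeBSD repo gets disabled so nothing is pulled from outside):
Code:
# /usr/local/etc/pkg/repos/internal.conf
FreeBSD: { enabled: no }

internal: {
  url: "http://pkg.example.internal/packages/builder-default",
  enabled: yes
}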

Does this sound about right?

/tony
 