Make pkg connect to my Poudriere repo

Discussion: I set up a Poudriere repo on host A (192.168.1.149). It has all the packages needed to bootstrap pkg on another host with a barebones install. My issue seems to be with bootstrapping pkg on a bare-bones host B (192.168.1.120), a practice VM. On host B, I pointed pkg(8) to the Poudriere repo, and intentionally disabled the official repos.

Doing some research, I learned that pkg(8) uses fetch(1) to get just about everything. Reading through the manpages, including fetch(3), led me to conclude that there are three different URL schemes available to feed to fetch(1):
  • HTTP (Port 80): I succeeded in turning that off: Thread http-https-on-apache-2-4.82266
  • HTTPS (Port 443): Yeah, I did set that up, but self-signed certs have been a time-consuming struggle that never got resolved properly. I don't want to use Let's Encrypt on an internal, homebrew project.
  • FTP (Port 20/21) : I'm aware of pitfalls here. Also, seems to be off by default on 13.0-RELEASE (I know 13.1 is out there).
Other URL schemes (sshfs://, sftp://, etc.) just don't seem to be available on a barebones install where pkg is not even bootstrapped.
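(Side note: fetch(1) itself lives in the base system, so whichever scheme I end up using, I can at least sanity-check that the repo is reachable from host B before pkg is bootstrapped. The repo path below is a placeholder:)

```
# fetch(1) is in base, so no bootstrapping needed; the path is a
# placeholder for wherever poudriere actually put the repo
fetch -o /dev/null http://192.168.1.149/path/to/repo/meta.txz
```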

Question: I'm thinking of editing /usr/local/etc/pkg/repos/custom.conf to put in the URL like this: ftp://192.168.1.149:22/path/to/repo. Basically, using port 22 in the FTP URL scheme. Will that work? Or are there other ways to accomplish the bootstrapping?

If there are other ways to make my Poudriere repo actually usable for bootstrapping pkg on remote hosts, let's talk, please! :)
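For concreteness, the kind of custom.conf I have in mind looks roughly like this (the repo path is a placeholder for whatever poudriere generated):

```
# /usr/local/etc/pkg/repos/custom.conf -- sketch only
FreeBSD: { enabled: no }          # intentionally disable the official repo
custom: {
  url: "ftp://192.168.1.149/path/to/repo",
  enabled: yes
}
```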
 
The sane way is http(s). Setups for nginx (or, if you prefer, apache) are quite simple. In your internal network, there shouldn't really be a necessity for encryption, but that's of course up to you.

Note that, IIRC, ftp got removed from pkg just a few days ago.
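For illustration, a minimal nginx server block serving a poudriere package directory over plain http could look like this (paths and IP are examples; adjust to your layout):

```
server {
    listen 80;
    server_name 192.168.1.149;

    # poudriere puts finished repos under .../data/packages by default
    location /packages {
        root /usr/local/poudriere/data;
        autoindex on;   # optional: lets you browse the repo in a browser
    }
}
```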
 
So, it looks like one way to do that is to set up an Apache vhost on port 8080 (like http://192.168.1.149:8080) that only accepts internal connections and does not require SSL certs?
 
Why a nonstandard port? Sure, you can do it that way. Does the machine in question also serve "public" http content? Don't you have a firewall and/or reverse proxy?

In my setup, I have a public http(s) host running nginx in a DMZ, it's the only one reachable from the outside and acting as a reverse proxy for anything that should be exposed, TLS is implemented there. The host in the internal zone serving my poudriere repos isn't reachable from the internet....
 
There are multiple ways to accomplish this. Initially I had set up an NFS share for this purpose and used,

url: "file:///pkg/..."

and later,

url: "pkg+http://..." -- I use http because my lab network is separate from the rest of my network in the house.

Yes, you can use FTP, but the pkg maintainer announced a few months ago that FTP is deprecated and will be removed in a future release.

FTP is a fairly chatty protocol -- there is one control session, and it opens a separate data session for each file transferred -- so it's not the most efficient way to fetch files. You're probably better off using nginx. Here is a good primer.
 
I've set up an nginx server in a jail, which serves the repo.
nginx.conf:
Code:
location /data {
    alias /usr/local/poudriere/data/logs/bulk;
}

location /packages {
    root /usr/local/poudriere/data;
}

Local.conf:
Code:
Poudriere: {
    url: "http://127.0.0.1:14000/packages/ap-ports",
}
 
Why a nonstandard port? Sure, you can do it that way. Does the machine in question also serve "public" http content? Don't you have a firewall and/or reverse proxy?

In my setup, I have a public http(s) host running nginx in a DMZ, it's the only one reachable from the outside and acting as a reverse proxy for anything that should be exposed, TLS is implemented there. The host in the internal zone serving my poudriere repos isn't reachable from the internet....
Nice ideas here! I haven't gotten to public-facing content yet, my goal is to first iron out the toolchain and make sure it works end to end, and THEN polish it.

My reasoning for non-standard port is that it could be a way around self-signed SSL certs not working properly. Getting both ends set up correctly proved to be pretty time-consuming.
 
My reasoning for non-standard port is that it could be a way around self-signed SSL certs not working properly.
It isn't. Certificates have nothing to do with ports. You can serve via a self-signed certificate on port 443 just as you can use a non-self-signed certificate on port 10240.
There is nothing wrong with using self-signed certificates while listening on port 443.
 
It isn't. Certificates have nothing to do with ports. You can serve via a self-signed certificate on port 443 just as you can use a non-self-signed certificate on port 10240.
There is nothing wrong with using self-signed certificates while listening on port 443.
Well, when I use self-signed certs while serving content on port 443, pkg complains 'Self-signed cert!' and refuses to proceed. I was thinking maybe I didn't install the certs correctly on the client, or I didn't generate/specify them correctly on the server to begin with.

I tried looking for a good HOW-TO on the Internet that walks me through the steps (and avoids lengthy explanations of tangents along the way), but gave up. jbodenmann, I even tried following your blog - I replicated all the steps you outlined, tried to pay attention to order, hosts, and a lot of other details, but still no go.

In all honesty, I like the idea of self-signed certs, but after spending too much time on them, I figured I gotta try something else just to get things moving on my project. I have intentions to come back later and polish/change some steps to reflect 'Best Practices'.
 
Note that there are (possibly) two certificates involved here: one that you use to sign the packages generated by poudriere, which mainly serves the purpose of making it harder for an attacker to inject modified packages into your clients, and a second certificate which would be used for the TLS connection when serving packages over HTTPS.

I'm using self-signed certificates for signing the packages built by poudriere (this is illustrated in my blog post). The `pkg` client can certainly handle that situation.
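On the client side, trusting those signed packages boils down to a repo config along these lines (the url and paths are examples, not my actual setup):

```
# /usr/local/etc/pkg/repos/Poudriere.conf -- sketch; adjust url and paths
Poudriere: {
  url: "http://192.168.1.149/packages/13amd64-default",
  signature_type: "pubkey",
  pubkey: "/usr/local/etc/pkg/keys/poudriere.pub",
  enabled: yes
}
```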

When it comes to serving the packages, on an internal network you might want to just use HTTP. In my setup, local clients directly talk to the webserver running on my poudriere host via HTTP. External clients (fetching the packages over "the internet") communicate over HTTPS.
You can set up NGINX to terminate those TLS connections.

In any case, using self-signed certificates for the HTTPS connections is certainly possible. You just need to make sure that your clients have those certificates registered.
Other than that, you can get free TLS certificates from Let's Encrypt which take the pain out of this.
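As a debugging aid: pkg fetches through libfetch, and fetch(3) documents a few environment variables that control TLS verification. They are handy for isolating whether a failure is a missing-trust problem or a hostname mismatch (for diagnosis only, not something to leave enabled):

```
# Point libfetch at the self-signed certificate explicitly (path is an example)
env SSL_CA_CERT_FILE=/path/to/repo.crt pkg update

# Or temporarily skip verification to confirm everything else works
env SSL_NO_VERIFY_PEER=1 SSL_NO_VERIFY_HOSTNAME=1 pkg update
```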

I have recently published a blog post illustrating how to setup a "central" reverse proxy for all your web services which handles (selective) TLS termination in one central spot including the scripts I created to acquire, renew and revoke Let's Encrypt certificates: https://blog.insane.engineer/post/freebsd_simple_hosting/

I even tried following your blog - I replicated all the steps you outlined, tried to pay attention to order, host, a lot of other details, but still no go.
What issue(s) did you run into? Something different from the self-signed certificate issue?
 
What issue(s) did you run into? Something different from the self-signed certificate issue?
Nope, same issue. That leads me to think that I didn't install it correctly, or that I didn't generate it correctly in the first place. There are quite a few places where I could have messed up, and lining those ducks up proved to be a time-consuming challenge for me. If I get the SSL details to line up, then it won't matter how I make the HTTPS connection.

I plan to save the Let's Encrypt for a later, public-facing project that's on the back burner until I figure Poudriere out end to end.

But man, just connecting with smart people on the Forums - that gets my own thinking unstuck and out of a rut. I now have some ideas. :D
 
I'll keep questioning the value of TLS inside your own private LAN ... depends of course, but I'm doing fine without. Just as a little reminder.
 
I'll keep questioning the value of TLS inside your own private LAN ... depends of course, but I'm doing fine without. Just as a little reminder.
One might go as far as stating that even over a public connection downloading FreeBSD packages over HTTP is sufficient if the packages are signed. But you know...
In my case I just have the infrastructure setup to handle TLS termination conveniently so I just use HTTP for clients on the same network and HTTPS for clients connecting over "the internet".
 
In my case I just have the infrastructure setup to handle TLS termination conveniently so I just use HTTP for clients on the same network and HTTPS for clients connecting over "the internet".
That's exactly what I do, briefly explained above. The one "www" host that's exposed to the outside enforces TLS (by redirecting to https) and acts as a reverse proxy for several internal hosts without TLS. My private LAN is only used by family members and me, and I'd say in such a LAN, even signed packages aren't necessary. TLS would be needed for web apps/services requiring authentication, but I don't have those so far, therefore plain http will do. I do have guest access in my Wifi, but these clients are placed in a separate guest network zone without any access to the internal LAN.

Of course, this public www host uses a public cert (issued by Let's Encrypt). If you really think you need TLS internally and don't want to use public certs for that, I'd recommend creating your own CA. Self-signed certs really don't scale; you always have to provision them to any machine that should trust them. With your own CA, you only have to provision your root cert once, and any internal cert you issue will be trusted automatically.
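For anyone going that route, the whole CA dance is only a handful of openssl commands. A sketch (the names, the IP and the lifetimes are just examples):

```shell
# 1. CA key + self-signed root certificate; this is the one cert you
#    provision to every client
openssl req -x509 -newkey rsa:2048 -nodes \
    -keyout ca.key -out ca.crt -days 3650 \
    -subj "/CN=Home Lab Root CA"

# 2. Key + certificate signing request for the internal repo host
openssl req -newkey rsa:2048 -nodes \
    -keyout server.key -out server.csr \
    -subj "/CN=192.168.1.149"

# 3. Modern TLS clients match on subjectAltName, so include the host's IP
printf 'subjectAltName=IP:192.168.1.149\n' > san.ext

# 4. Issue the server certificate, signed by your CA
openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key \
    -CAcreateserial -out server.crt -days 825 -extfile san.ext

# 5. Sanity check: the server cert should verify against the root alone
openssl verify -CAfile ca.crt server.crt
```

On FreeBSD 12.2 and newer, clients can then drop ca.crt into /usr/local/etc/ssl/certs/ and run certctl rehash; after that, libfetch (and therefore pkg) trusts anything this CA issues.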
 