Pkg versus Ports (mixed approach) confusion

Hi,
I have a newbie question related to being careful not to mix the PKG binary approach with the (building from) Ports approach. The handbook warns against using a mix of these two approaches and the likely eventual collision of binaries, but I think I might be reading it too strictly.

Does it mean that if you install any single piece of third-party software using pkg, for example Git, then you should not also use the building-from-ports approach? This seems to be what it's saying, but I think I'm misreading it. It would also seem to imply a contradiction: how do you clone the ports tree without having Git in the first place?

Perhaps what it really means is that you must not mix the two approaches for the same piece of third-party software. For example, installing 'firefox' from pkg but compiling 'kicad' from ports (and installing the locally compiled 'kicad') would be fine.

Hopefully my confusion is clear and easy to put right!

Many thanks.
 
No, the issue you may run into is that the packages are all built using the default options. If you build from ports you can change those options, and that can lead to discrepancies in dependencies. The package might depend on application A, while the port (with different options set) might depend on B. Then a pkg-upgrade(8) might decide to reverse that, removing B and (re)installing the package with dependency A.

If you don't change the options of a port (any port) then there's really nothing to gain by building from ports in the first place.
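To make that concrete, here's a rough sketch of how options come into play on the ports side versus inspecting what was built (the port and option names vary per port, so treat these as illustrative):

```shell
# Build a port with non-default options:
cd /usr/ports/devel/git
make config           # interactive dialog to toggle build options
make install clean    # build and install with those options

# Inspect which options an installed package was built with:
pkg query '%Ok: %Ov' git
```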
 
Thank you, but I'm still a little confused, so I'll use an extreme case as an example. The handbook says that if you did not install the ports tree during installation, you must use Git to clone the ports repo as a first step to building from ports. The problem is that Git is not there, so the handbook advises using pkg to install Git, and then using Git to create the clone. But haven't we already failed at that point? We now have a mixed bag and we haven't even started. What if we want to change some of the build options of Git itself?

I hope you can see my specific confusion ... perhaps it's a narrow case, but I can't see past it.

Thank you for your help
 
Ah, right. Yeah, that's a bit of a chicken-and-egg problem. The base system includes another tool for getting a ports tree: portsnap(8). But that tool is slated to be removed in a future version, so the handbook doesn't mention it.
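For reference, on releases that still ship it, the portsnap workflow was just:

```shell
portsnap fetch extract   # first run: download a snapshot and populate /usr/ports
portsnap fetch update    # later runs: update the existing tree in place
```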

The 'problem' with portsnap(8) is that it requires additional infrastructure to generate the right files for it, so when the ports tree was migrated from Subversion to Git this caused additional issues. Additionally, portsnap(8) can only fetch the 'latest' ports tree; you can't use it to fetch the 'quarterly' branches.

For now we just don't have a 'good' tool included in the base system. In a way this is the same situation as when I started with FreeBSD (around the 3.0 era), when the only way to get a ports tree (or even an OS update) was to install a CVS tool via a package.

If you don't want to install a 'full' Git client, you could use the more basic 'lite' flavor: pkg install git-lite. Alternatively, net/gitup is also very useful (and hopefully may someday be included with the base OS).
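A sketch of both routes (the clone URL is the official FreeBSD ports mirror; the gitup section name comes from its config file):

```shell
# Lighter git flavor, then clone the ports tree:
pkg install git-lite
git clone https://git.freebsd.org/ports.git /usr/ports

# Or the small, ports-focused fetcher:
pkg install gitup
gitup ports              # section name defined in /usr/local/etc/gitup.conf
```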
 
Well, base does include fetch(1), so you can run fetch https://cgit.freebsd.org/ports/snapshot/ports.tar.gz ... that's what I do on a brand-new installation: download the tarball into an empty /usr/ports (very carefully, making sure the result is still the traditional /usr/ports structure), compile a few things (with my options), and get going that way. But then again, it's easier to convert /usr/ports into something that's tracked by Git.
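A sketch of that fetch(1) route. At the time of writing, the cgit snapshot tarball unpacks into a top-level ports/ directory, so strip it on extraction to keep the traditional /usr/ports layout:

```shell
mkdir -p /usr/ports
fetch -o /tmp/ports.tar.gz https://cgit.freebsd.org/ports/snapshot/ports.tar.gz
# Drop the leading 'ports/' component so files land directly in /usr/ports:
tar -xzf /tmp/ports.tar.gz -C /usr/ports --strip-components=1
```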
 
I install git and poudriere-devel. Then I use poudriere to build the packages I want, including git and poudriere-devel. Configure pkg to point at my local repo, pkg upgrade -f and I'm good to go.
 
I install git and poudriere-devel. Then I use poudriere to build the packages I want, including git and poudriere-devel. Configure pkg to point at my local repo, pkg upgrade -f and I'm good to go.
And that's exactly the approach I would recommend. Always use pkg to manage your installation (using binary packages). If you need custom build options, build your own repository. That avoids a ton of possible issues.
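A minimal sketch of that last step (the repo path depends on your poudriere jail and tree names, so treat it as illustrative):

```shell
mkdir -p /usr/local/etc/pkg/repos
cat > /usr/local/etc/pkg/repos/local.conf <<'EOF'
FreeBSD: { enabled: no }
local: {
    url: "file:///usr/local/poudriere/data/packages/13amd64-default",
    enabled: yes
}
EOF
pkg update
pkg upgrade -f    # force-reinstall everything from the custom repo
```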
 
I install git and poudriere-devel. Then I use poudriere to build the packages I want, including git and poudriere-devel. Configure pkg to point at my local repo, pkg upgrade -f and I'm good to go.
Isn't that a bit too much for the OP? I spent months on my Poudriere project; pkg can finally see my repo (after a lot of trial and error), and I'm still not done. To be fair, I did have to set the project aside for a few months (circumstances beyond my control), but I am looking for opportunities to get back to it and get it back up and running. My next step is frankly Git: getting a handle on the Git commands, and then figuring out whether I'm even doing the right thing with my repo.
 
isn't that a bit too much for OP? I spent months on my Poudriere project, pkg can finally see my repo (after a lot of trial and error), and I'm still not done.
This just leaves me wondering what you're doing?

When I first tested poudriere, I went through the well-commented config file, and less than an hour after installing it I had it running its first bulk build. :-/

Invested some time later to actually make it use ZFS (the first test was on a VM using UFS) and to set up nginx to serve the repo via HTTP, but that didn't really take MUCH time either...
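For what it's worth, serving a poudriere repo over plain HTTP needs only a few lines of nginx config (the server name is hypothetical; the root is poudriere's default output directory):

```nginx
server {
    listen 80;
    server_name pkg.example.lan;               # hypothetical LAN name
    root /usr/local/poudriere/data/packages;   # default poudriere package dir
    autoindex on;                              # handy for browsing builds
}
```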
 
This just leaves me wondering what you're doing?
I tried to set up Apache with self-signed certs so that I'd have an https:// (as opposed to unsecured http://) URL on my LAN, but self-signed certs (not Let's Encrypt; I don't want that) proved to be a hurdle to set up properly. I ended up giving up on Apache altogether and set things up so that the repo is served over ssh:// 😅 There are options to consider at just about every step of the way, and it's a tradeoff between quite a few factors:
  • Ease of initial setup. Apache can be quite the rabbit hole. Yeah, it's powerful and flexible, and set up right it can be VERY useful beyond just serving up the pkg repos.
  • Best practices. I don't exactly want the server to be unsecured with a default/initial setup, so I want to make sure to not neglect some basic security setup chores beyond the defaults.
  • Making sure the data can flow after all that setup. There are a few places to check. If pkg can't get to the repo, something's up, and I need to back up and re-evaluate my choices.
 
FWIW, I mix in a few self-compiled ports with my otherwise pkg-driven application set. I have very few problems. I keep close track of what is custom (usually to enable jackd audio) and reinstall them after global pkg upgrade. So far it is working fine, although it requires some manual work.
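If it helps, pkg-lock(8) can stop pkg upgrade from replacing the custom builds in the first place (the package name here is illustrative):

```shell
pkg lock jackit     # refuse to modify/replace this package during upgrades
pkg lock -l         # list currently locked packages
pkg unlock jackit   # release the lock when you want to rebuild the port
```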
 
astyle it's not necessarily "best practice" to use TLS on some internal(!) web server serving nothing but static(!) content without any authentication. I'd say in most scenarios, you just don't need/want TLS for that. All it would counteract here (as long as your firewalling works correctly and it's really only reachable in your LAN) is someone in that very LAN impersonating the server (e.g. by manipulating DNS) and serving fake content. Is this really a concern in your LAN?

Then, if you do use TLS, self-signed certificates are the most cumbersome solution for anything other than local testing. If you don't want to use an existing CA, better to set up your own and issue "real" certificates. There are lots of script-based OpenSSL solutions that make this easy.

And finally, I personally think the nginx configuration is a bit simpler and easier to grasp than Apache's, but that might be personal taste.

All that said: None of this is necessary to use poudriere. If you just use it on a local machine, you can just access the repository locally with pkg, no webserver needed at all.
 
So I guess if I follow a mixed approach I have to realise that after each 'quarter' the pkg repository will point to the (new) most recent quarterly branch, and thus I must also update the Git repo to follow suit. Then, on top of that, if I perform a pkg-upgrade, I will have to recompile and reinstall my custom-built applications over any that have been clobbered by the pkg-upgrade.

The Poudriere project sounds a little scary to me ... I don't mind learning, but I don't want to disappear down a rabbit hole for a year. I also read about the 'Synth' project ... is that an easier avenue? At present I am only using my setup at home for hobby use.
 
astyle it's not necessarily "best practice" to use TLS on some internal(!) web server serving nothing but static(!) content without any authentication. I'd say in most scenarios, you just don't need/want TLS for that. All it would counteract here (as long as your firewalling works correctly and it's really only reachable in your LAN) is someone in that very LAN impersonating the server (e.g. by manipulating DNS) and serving fake content. Is this really a concern in your LAN?

Then, if you use TLS, self-signed certificates are the most cumbersome solution for anything other than just local testing. If you don't want to use an existing CA, better setup your own and issue "real" certificates. There are lots of script-based solutions for that making it easy with OpenSSL.

And finally, I personally think the nginx configuration is a bit simpler or easier to grasp than apache, but that might be personal taste.

All that said: None of this is necessary to use poudriere. If you just use it on a local machine, you can just access the repository locally with pkg, no webserver needed at all.
Ahh... I do have rather Napoleonic plans for my LAN - like being able to take my FreeBSD-based laptop somewhere out of my home, and properly update it (using my custom-compiled stuff) while on the road. I set my stuff up with that kind of thing in mind.

My thinking goes, "Simple stuff that works on a carefree home LAN is good, but I'd rather hone my craft using enterprise-grade practices"...
 
So I guess if I follow a mixed approach I have to realise that after each 'quarter' the pkg repository will point to the (new) most recent quarterly branch, and thus I must also update the Git repo to follow suit. Then, on top of that, if I perform a pkg-upgrade, I will have to recompile and reinstall my custom-built applications over any that have been clobbered by the pkg-upgrade.

The Poudriere project sounds a little scary to me ... I don't mind learning, but I don't want to disappear down a rabbit hole for a year. I also read about the 'Synth' project ... is that an easier avenue? At present I am only using my setup at home for hobby use.
I tried Synth... and what I discovered is that it offers a subset of Poudriere's features. Yeah, it's simpler to set up (I had it going within a day), but it still involves really long compilations if you recompile every single package on your system.

I decided that Synth doesn't fit what I personally want to do, so I went with Poudriere. My take is: the OP should consider how long the list of custom-compiled packages is. ALL of my packages are compiled with custom options (that's nearly 1300 packages!), but I only want to be able to upgrade KDE against the rest of those packages, so my list of custom-compiled packages (the one I feed to Poudriere) has 183 lines. That's too much to do by hand. If the OP has just a few packages (10 or so) that need to be recompiled after getting clobbered by pkg upgrade, then doing it by hand in /usr/ports may be a more viable option.
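For the curious, feeding that list to poudriere looks roughly like this (the jail and tree names are illustrative):

```shell
# pkglist: one port origin per line, e.g. devel/git, www/firefox, ...
poudriere bulk -j 13amd64 -p default -f /usr/local/etc/poudriere.d/pkglist
```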

It actually took me a few years of messing around in /usr/ports before I figured out that I need to get beefier hardware and to take a look at Poudriere.
 
astyle ok, I see where you come from. But then, let me challenge a few points:
  • "Enterprise grade" and "self-signed certificates" is pretty much a contradiction. Enterprises will either buy certificates from some public CA (if global trust is needed), or operate their own PKI issuing their own certificates (if organization trust is needed), most often a combination of both. Well, there will be one single self-signed certificate in that scenario: the one of the organization's root CA.
  • An enterprise will do risk assessments, ROI-calculations, and so on. If the question is whether TLS is needed in some internal network segment (which definitely causes maintenance cost), the result can very well be: no. It depends on lots of factors of course, like, how is this segment protected, who has access, what nature are the internal services used there, etc.
  • This isn't even a contradiction to still exposing some of these services to the outside as well. The "enterprisey" solution is always to use a reverse proxy for that purpose (which has other benefits, like allowing some application-level firewalling). There's nothing wrong with terminating TLS at that reverse proxy. I do that, btw, for my private infrastructure: I'm running nginx as a reverse proxy in my DMZ. It uses Let's Encrypt certificates, because they're simple, automated, free of charge and, as far as I can tell, secure.
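A sketch of that last setup, with TLS terminating at an nginx reverse proxy (the server name, certificate paths, and backend address are all illustrative):

```nginx
server {
    listen 443 ssl;
    server_name pkg.example.org;                # hypothetical public name
    ssl_certificate     /usr/local/etc/ssl/fullchain.pem;  # e.g. from Let's Encrypt
    ssl_certificate_key /usr/local/etc/ssl/privkey.pem;

    location / {
        proxy_pass http://10.0.0.5:80;          # internal server, plain http
    }
}
```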
 
Yeah, sometimes people discover that there's a mismatch between what they want to do and the hardware they have access to. For example, I had no appreciation for how important beefy specs are for compiling until I started messing around in /usr/ports... That's when I learned that the extra coin can be worth the extra capacity. That said, I'm still not gonna get a Threadripper; I'm gonna stick with an R7...
 
I mix in a few self-compiled ports with my otherwise pkg-driven application set. I have very few problems.
Same here. Sometimes there are good reasons to use options: for performance, or to remove default options for security.
All the problems I've ever experienced came from applying the defaults, and they could always be solved by mixing in pkg.

The arguments of both camps hold water; if you know what you're doing, it's OK.
(But to reach that level you need experience, which you gain especially by making mistakes...)
 
Sometimes there are good reasons to use options, for performance, or to remove default options for security.
And being able to alter the code sometimes comes in handy as well. That's why I use ports.
However, for some inhabitants of (mostly) /usr/ports/lang that take a very long time to build, I make an exception: if an update is needed I use packages for those, then start portupgrade to do the remaining work. It makes running portupgrade a lot faster.
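That routine can be sketched as follows (the package names are illustrative; portupgrade comes from ports-mgmt/portupgrade):

```shell
# Update the long-building toolchains from binary packages...
pkg upgrade rust llvm
# ...then let portupgrade rebuild the remaining outdated ports locally:
portupgrade -a
```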
 