Ports huh? What are the advantages and do they outweigh the disadvantages?

Haha, what's a normal user?
If it's an end-user with a single-workstation graphical desktop, then there is no need (and probably not the necessary compute power either) to compile locally.
But if you run your own webserver, your own nameserver, your own application servers, mailservers, VPN servers and so on (and to me it feels very normal to do so ;) ), then there comes a point where some things need patching to fit the environment.
Could be professional sysadmins too. Most of the people I work with use nano. I don't but it's a different world today than it was 20 years ago.
 
Could be professional sysadmins too. Most of the people I work with use nano. I don't but it's a different world today than it was 20 years ago.
nano, ugh. It's definitely not in my top 10 list of editors, though I do like it better than emacs; then again, I prefer edlin to emacs, not to mention ed, which is in my top 10. I know this is the ports thread, but what brings nano up is that in Gentoo, it was the first (only) package I depcleaned out of the install...
 
nano, ugh. It's definitely not in my top 10 list of editors, though I do like it better than emacs; then again, I prefer edlin to emacs, not to mention ed, which is in my top 10. I know this is the ports thread, but what brings nano up is that in Gentoo, it was the first (only) package I depcleaned out of the install...
Ouch.... editors/nano is the first port I install... I do like it, it's easy to use, and has very minimal deps, even with all options turned on.

cy@ : The very reason they're pros is BECAUSE they're anything BUT normal users... I don't think normal users know what brainfuck even is. :p
 
Ouch.... editors/nano is the first port I install... I do like it, it's easy to use, and has very minimal deps, even with all options turned on.

cy@ : The very reason they're pros is BECAUSE they're anything BUT normal users... I don't think normal users know what brainfuck even is. :p
Well, nano's not horrible, but it's really annoying to be dropped into it when you're thinking vi, and some variant of hjkl and i and :wq slips out 3 seconds in....
 
I personally feel that, apart from toggling some specific options/flavors (with whatever dependencies that pulls in), the default packages are good enough for most users. I think only users of servers with a lot of security hardening need to spend time on ports. Once you set the options, there's no need to redo them for every upgrade. But I have seen that some ports take a lot of CPU time and get compiled again and again alongside other ports, e.g. rust, llvm etc. Maybe someone has a list of those ports and a way to avoid the repeated compiling using portmaster.
 
For normal users there is no need these days to use ports directly.
I have to disagree, unfortunately. If you add an "almost", I'm fine, but there are special cases, and we're missing a few features to handle them well with binary packages. Just two examples coming to my mind quickly:
  • Sometimes there's no "sane" set of default options, e.g. because some upstream source contains lots of plugins, each useful for "someone" but pulling in its own dependencies (seen such a case recently). Depending on upstream nature, we have workarounds available like e.g. slave-ports or flavors, but the real solution would be sub-packages.
  • Dependencies on different "flavors" of the same thing, like e.g. different major versions which are supported in parallel. Our current solution is DEFAULT_VERSIONS which is build-time only. For a flexible binary solution, we'd need something like Debian apt's virtual packages together with a "Provides:" metadata field. If the dependency involves linking a shared lib with incompatible ABI, this won't help (and there's no way to solve this with binary packages ever), but often enough that's not the case.
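To make the DEFAULT_VERSIONS point concrete: the build-time selection lives in /etc/make.conf. A sketch (the version numbers here are just examples; use whatever the tree currently supports):

```makefile
# /etc/make.conf -- build-time only: ports compiled on this machine
# are built against these versions; binary packages from the official
# repository know nothing about this choice.
DEFAULT_VERSIONS+= python=3.11 perl5=5.36 ssl=openssl
```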
 
I have to disagree, unfortunately. If you add an "almost", I'm fine, but there are special cases, and we're missing a few features to handle them well with binary packages. Just two examples coming to my mind quickly:
Agree. Also there are different types of end-users who are completely "normal". Some run servers in datacenters, some use FreeBSD inside a company, some use it on a private desktop or laptop. I think all these users are normal, but their needs might be very different. Personally, I compile everything from ports, on servers and also on desktops. Still, I agree that there are completely valid use cases for pre-built packages.
 
Maybe someone has a list of those ports and a way to avoid the repeated compiling using portmaster.
The ones that typically take a long time are indeed compilers (gcc/llvm/rust) and other building stuff that is not needed during runtime. Check /var/log/messages to see how long the build took. You can prevent pkg autoremove from removing these ports with e.g. pkg set -A 0 devel/llvm15. But you will still have to update these ports every now and then, and the updated port will usually be available before the package.
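On a FreeBSD system, that pinning, plus a quick check of what pkg autoremove would touch, might look like this (the llvm version is just an example; adjust it to what you actually have installed):

```sh
# Flag the package as non-automatic so `pkg autoremove` keeps it
# (-A 0 = "installed on purpose", -A 1 = "pulled in as a dependency"):
pkg set -A 0 devel/llvm15

# List everything still flagged automatic, i.e. the candidates
# `pkg autoremove` would remove (origin and version):
pkg query -e '%a = 1' '%o %v'
```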
 
The ones that typically take a long time are indeed compilers (gcc/llvm/rust) and other building stuff that is not needed during runtime. Check /var/log/messages to see how long the build took. You can prevent pkg autoremove from removing these ports with e.g. pkg set -A 0 devel/llvm15. But you will still have to update these ports every now and then, and the updated port will usually be available before the package.
The longest I have built is probably www/chromium. In general, server software is smaller and builds faster; desktops are more complex systems.
 
So, if you know ports and gentoo, is there significant common ground that they're kinda like each other, or are there significant differences - either in tasks performed or in outcomes that they are worlds apart? And, what drives you to choose ports and is it all compile and wait, 7 days a week, or what? Maybe I'm over simplifying things, what are the actual advantages/disadvantages? More than just speed/customizability vs compile cycling all the time?
Gentoo is a rolling release distribution, meaning there is no standardized release cycle. It was[*] also built around the idea that you compile everything you want to use yourself.

[*] Gentoo has for a long time shipped binaries for applications many users probably want, but which require quite capable hardware and lots of time to compile, like Chromium, Firefox, Rust, LibreOffice etc. Since 29/12/2023, though, Gentoo also offers daily binaries for x86-64 and ARM64 covering all ebuilds in the official sync tree, which moves it in the direction of Arch Linux here.

Gentoo is also an outlier in the Linux world in that it does not use systemd as its default init system; instead it uses OpenRC.

FreeBSD, on the other hand, is a binary distribution with a stable release cycle, where you can compile stuff on your own but don't have to. Sometimes, though, it might be necessary, if certain flags you want are not compiled into the default binaries.

So for Gentoo, until the recent introduction of official binaries for everything in Portage, compiling most stuff yourself was the default experience. For FreeBSD it is entirely optional.

So far I haven't had much reason to use ports in FreeBSD, since its binaries with default settings work well enough for me. Portage, though, is another matter. Since it has to deal with far more complexity than ports does, it is much more prone to break.

The official Gentoo forums are full of people asking for help when an upgrade, e.g. an OpenSSL or Python version switch, does not work as expected. And since Portage is written in Python, once Python no longer works on your Gentoo system you're basically screwed. Also, QA on ebuilds is sometimes not where it should be, and then they do not compile properly.

Anyway, managing a whole system with Portage is a fucking nightmare. Sooner or later in its lifetime it will bite you, hard: stuff not compiling, circular dependencies, dependency ebuilds that have vanished, and other nifty things, even if you follow plain amd64 and not ~amd64. Also be very sure to read "eselect news" regularly, otherwise you will be fucked as well.

Ports is not, because its default use case is to supplement a binary base system, not to build the whole OS. People who do run all of FreeBSD from ports sometimes seem to hit problems similar to Portage's. Even then, FreeBSD's release cycle seems to make this happen much less frequently.
 
When y'all talk about building, what kinds of systems are we talking about for the build environment - what's a reasonable configuration to build on? I know it's not a simple question, but I'm an end user, I consume FreeBSD. I've always used package (99% of the time). I do have spare hardware laying around, or can eventually lay hands on a new machine. What's entry level that isn't going to take 36 hours to build the 961 packages I've got installed on my desktop?
 
I have to disagree, unfortunately. If you add an "almost", I'm fine, but there are special cases, and we're missing a few features to handle them well with binary packages. Just two examples coming to my mind quickly:
  • Sometimes there's no "sane" set of default options, e.g. because some upstream source contains lots of plugins, each useful for "someone" but pulling in its own dependencies (seen such a case recently). Depending on upstream nature, we have workarounds available like e.g. slave-ports or flavors, but the real solution would be sub-packages.
  • Dependencies on different "flavors" of the same thing, like e.g. different major versions which are supported in parallel. Our current solution is DEFAULT_VERSIONS which is build-time only. For a flexible binary solution, we'd need something like Debian apt's virtual packages together with a "Provides:" metadata field. If the dependency involves linking a shared lib with incompatible ABI, this won't help (and there's no way to solve this with binary packages ever), but often enough that's not the case.
Thank you, this is something to chew on. The problem I see: to create "sub-packages", somebody (i.e. the poor victim port maintainer) would need to work this out, and then write some test cases to verify that the components still play together after the next upgrade.

My usual approach would be to separate things between an interactive design phase and an automated build phase. I did this with the firewall: there is a design step where I declare which communications I need, and afterwards the deployment is done by scripts. All the usual pitfalls get coded into the scripts, so they happen once and then never again.
From that point on, one might simplify the design phase into fewer, predefined/grouped choices suitable for an end-user. The point is that the actual work, creating and rolling out the configurations, is already scripted and no longer changes during this.

Now when I try to do the same with the ports configuration, I face the problem that it is highly recursive. Let's assume that this quarter the vlc port gets a new option foo. I would like to get a message "you have these new options: ... what are your choices?", make the choices, and then push the actual build to some available headless node where I don't pay for the electricity. ;)
But that doesn't work just like that, because the foo option then requires the foo library and the bar subsystem, and these tend to bring along their own options and prerequisites.

But given that this worked, one could then go on and simplify the decision step to a few easy-to-understand choices, like "Do you want to use JACK for your audio?", and for that limited number of possibilities one could create pre-built packages (or sub-packages, or whatever).
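For the recursive-options part specifically, the ports tree does ship a partial answer: `make config-recursive` walks the dependency tree and shows all the option dialogs up front, so the build itself can afterwards run unattended. A sketch, using the vlc example from above:

```sh
cd /usr/ports/multimedia/vlc
# One interactive pass over the options of this port and all of
# its (current) dependencies...
make config-recursive
# ...then the actual build can run unattended, e.g. overnight or
# pushed off to a build host:
make install clean
```

It's not a complete fix, though: options chosen in the first pass can pull in new dependencies with options of their own, so the config pass may need to be repeated.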

But that's just a dump from my current mindset - maybe there are other approaches...
 
When y'all talk about building, what kinds of systems are we talking about for the build environment - what's a reasonable configuration to build on? I know it's not a simple question, but I'm an end user, I consume FreeBSD. I've always used package (99% of the time). I do have spare hardware laying around, or can eventually lay hands on a new machine. What's entry level that isn't going to take 36 hours to build the 961 packages I've got installed on my desktop?
My assessment is at a minimum a Ryzen 5000 (or an Intel that is also from around 2020/2021) and 32 GB of RAM.

If your build is lined up well (no errors that stop the compilation process entirely), expect at least an overnight build - like, start it, go to bed, and be greeted by freshly compiled packages in the morning.

I actually build with a Ryzen 5 1400 from several years ago (2017, iirc). Yeah, it's not nearly as fast as more recent stuff, but I have learned to schedule the work for overnight.
 
When it comes to X or Wayland I don't trust having that many options and just prefer to use the shipped binary packages. In the case of Linux, openSUSE Tumbleweed & Fedora are the only rolling-release distros I can trust, because of openQA, which actually tests the graphics stack. Debian & QubesOS also use it. openQA works by running a VM, capturing its output, and clicking on any part of it or typing commands. The whole intention is to make sure the whole system runs as expected.

System & development packages I prefer to compile them on my own, as they're fun to play with. Graphics is just a way for me to run a web browser and other apps. Their internals are not that interesting to me. I just expect that they work.
 
When it comes to X or Wayland I don't trust having that many options and just prefer to use the shipped binary packages. In the case of Linux, openSUSE Tumbleweed & Fedora are the only rolling-release distros I can trust, because of openQA, which actually tests the graphics stack. Debian & QubesOS also use it. openQA works by running a VM, capturing its output, and clicking on any part of it or typing commands. The whole intention is to make sure the whole system runs as expected.

System & development packages I prefer to compile them on my own, as they're fun to play with. Graphics is just a way for me to run a web browser and other apps. Their internals are not that interesting to me. I just expect that they work.
Did you know that FreeBSD uses Jenkins (https://ci.freebsd.org/) for continuous testing and integration of its own builds? Jenkins seems to be pretty similar to OpenQA in some respects.
 
electron is also a big one.

Which is no wonder, because it contains Chromium and adds Node.js on top. LaTeX can be surprisingly time-consuming as well.

decuser: if you are serious about compiling stuff on your own, the rule of thumb is simple: the more cores you've got at your disposal, the better. Compilation in most languages can be spread wonderfully across cores. Some languages, like Rust, at least in the past had the nasty habit of compiling some stages single-threaded and only the rest on multiple cores, but I've heard this has been resolved by now.

If you've got a small network, there is also stuff like distcc, which can distribute the compile load across all the computers in the network.

And if you feel confident enough having an object-file cache on your system, you can also add ccache to the mix, which of course has its own set of problems.
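For the ports tree, both the parallelism and the ccache knobs live in /etc/make.conf. A sketch, assuming devel/ccache is installed and with the jobs count and cache path adapted to your machine:

```makefile
# /etc/make.conf
# Allow each port's build to use up to 8 parallel make jobs:
MAKE_JOBS_NUMBER=8
# Route compiler invocations through ccache:
WITH_CCACHE_BUILD=yes
CCACHE_DIR=/var/cache/ccache
```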
 
This brings up another issue to take care of when compiling a large number of ports. I think circular dependencies are well managed by the package building system behind the FreeBSD repo, which builds from the ports tree.
Yeah, avoiding circular deps is one reason for the conservative option selection. I like to turn on as many options as I can, and then do trial and error on my ports. Once that is done, I can generate a list to feed to Poudriere... But even then, I've seen the ports themselves break out of circular-dep hell: sometimes an older version of a port has a circular dependency, but later versions don't (even if you turn on all the available options).
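That "generate a list to feed to Poudriere" step can be sketched like this (the jail name is made up for the example):

```sh
# Dump the origins of everything installed on purpose (non-automatic):
pkg query -e '%a = 0' '%o' > ~/pkglist

# Build exactly that list in a clean poudriere jail; dependency
# cycles and broken options fail there instead of on the live host:
poudriere bulk -j 14amd64 -f ~/pkglist
```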
 