Why Isn't Software Allowed to Pack Its Own Libraries in 2018?

Hi folks,

It's 2018. I've decided to install the xpdf binary using pkg. 1 GiB of files to be downloaded - mostly Qt libs. Okay... And, after installing all this stuff, it broke my Firefox. It doesn't start (sqlite lib mismatch). It broke VLC, too. Maybe this is radical, but why the heck, in 2018, can't we allow each app in FreeBSD, no matter how small it is, to pack whatever libraries it needs along with its binary distribution? Pack everything statically. Let the file size grow to tens or hundreds of megabytes. Do we really value small footprint and reusable libraries more than the time we spend fixing those complex systems? Why?
 
Let me guess: do you perhaps mix the installation of ports with binary packages? That could explain something here.

Anyway: check /rescue. There are statically compiled binaries available on FreeBSD, but the fact of the matter is that this is highly inefficient, for storage space as well as at runtime. Just because you may have 20 TB of space to waste doesn't mean the same is true for every FreeBSD user out there. If you then consider that nearly every binary uses libc... uhm...

Code:
peter@zefiris:/home/peter $ ls -l /usr/bin | wc -l
     487
peter@zefiris:/home/peter $ ls -l /usr/sbin | wc -l
     244
So: 731 binaries.

Code:
peter@zefiris:/home/peter $ ls -lh /lib/libc.so.7
-r--r--r--  1 root  wheel   1.7M Jul  8 11:15 /lib/libc.so.7
2MB * 731 = 1462MB, aka roughly 1.5GB.

And that's just one library for just two binary collections.
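A rough way to put numbers on that sharing is to sum the on-disk sizes of one binary's direct shared-library dependencies. This is illustrative only: library names differ between systems, and on-disk size says nothing about how much of each lib a binary actually uses.

```shell
# Sum the sizes of /bin/ls' direct shared-library dependencies.
# Each of these would be duplicated into every consumer if everything
# were linked statically (ignoring that the linker only pulls in the
# objects it actually needs).
total=0
for lib in $(ldd /bin/ls | awk '$2 == "=>" && $3 ~ /^\// { print $3 }'); do
    total=$((total + $(wc -c < "$lib")))
done
echo "$((total / 1024))K of shared libraries behind /bin/ls"
```

Multiply a figure like that by hundreds of binaries and the duplication adds up quickly.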

Code:
peter@zefiris:/home/peter $ ls -l /usr/local/bin | wc -l
    3577
I don't think this is a good idea :p

You're also ignoring a very important detail... Most ports were not specifically built for FreeBSD. In fact, their developers sometimes have no idea whatsoever that their software is also being used on FreeBSD. So why assume that this is a FreeBSD-only issue? The concept of dynamic linking really isn't limited to FreeBSD; it's applied all over.

Fact is that we can link statically if we want to, but we usually don't, because it has dozens of side effects. Having to rebuild an environment such as LibreOffice every time there's a small change in one of its libraries? No, thank you!
 
"ls -1 /usr/bin /usr/local/bin | wc" -> I have 1239 executables on my system (some of those might be shell scripts, but then there are also executables in other places). My system doesn't have X or a GUI installed. Let's guess there are 1000-2000 executables on a normal system. I might be off by a factor of 2 or 3.

I have a development machine where all executables are statically linked (don't ask, it's a strange project). They seem to be roughly 500 to 2000 MB in size. On the other hand, we know that simple utilities (like BusyBox or the stuff in /rescue) are about 10MB. For grins, let's say 100MB per executable on average. Again, I might be off by a factor of 2 or 3 or 5 or 10. Multiply these numbers out, and you have 100-200 GB of disk space used for executables alone. Which is ridiculous; a normal FreeBSD installation (including configuration files, log files, ...) fits easily into 3GB.

Now let's calculate how much memory this will use while running. "ps aux | wc" tells me that I have 68 processes running, and that's without a GUI or X. Multiply that by 100MB of static binaries, and we have used about 7GB just for loading executables. Double that for a machine with X, and the smallest FreeBSD machine would need 16GB of RAM to run correctly without swapping. For the stuff that's linked against the full X libraries, my estimate of 100MB above is probably low by a large factor. Oh, and loading a 1GB executable from a normal hard disk takes 10s.

Statically linked executables are simply no longer feasible in normal production environments, excluding strange research environments.
 
To answer your question/complaint directly, blackhaz:
It is precisely because it's 2018 that this still occurs. It's called a "symbol clash", and it will keep occurring until you, or somebody else really clever, comes up with a language other than C or C++ for writing the majority of programs on all operating systems, including the operating systems themselves. Here's what happened: you built some programs that also build their support libraries, all from one point in time and at their respective versions. You then decided to put some prebuilt versions from a different point in time on your system, which ultimately overwrote much of the already established support framework (libraries). Then you complain, because you didn't think it through. Nothing personal, but it's a bit like having a pile of different car parts from different years, and complaining because they don't all fit together to create the car you want.
But because it's FreeBSD, you have the convenience of creating any number of jail(8)s, and building whatever you want. So long as you build the different versions in their respective jails, you can run them all w/o interference, or problem. :)

--Chris
 
But because it's FreeBSD, you have the convenience of creating any number of jail(8)s, and building whatever you want. So long as you build the different versions in their respective jails, you can run them all w/o interference, or problem. :)
And how does one proceed correctly in constructing a standard jail, or a number of jails? The description in the Handbook is quite confusing.
 
Maybe this is radical, but why the heck, in 2018, can't we allow each app in FreeBSD, no matter how small it is, to pack whatever libraries it needs along with its binary distribution? Pack everything statically.
Yes, so if there's a security issue in one of those libraries you need to rebuild a gazillion applications instead of just one library? That's really smart!
 
I don't understand this whole argument. I built sysutils/flashrom with STATIC_ARG's and it worked just fine.

So what I have is a motherboard firmware updater on a UFS-formatted thumb drive that works on FreeBSD 9, 10, 11 and 12.
What libraries? They are baked in.

I see no reason, with some work, why you couldn't build a whole Xorg for kiosk use as a static build.
It's on my bucket list: Xorg, statically compiled.

The problem is you have to mod every dependency's makefile too for a static build, and every dependency of that dependency.
And so on.
So this only works for your own custom frozen /src branch. Complete with security flaws, as SirDice points out.
Oh yeah, and what about updates?
Then you must keep your custom /src tree up to date and rebuild yourself.

Looking at your question, I bet xpdf can be built statically. One line in the makefile - and chase the dependencies too.
 
I guess what I am saying is FreeBSD gives you the freedom to do it however you want. Just read up a bit.
They have a default setup with freebsd-update keeping you safe. I understand 1GB for xpdf is ridiculous.
The other side of the fence is: do it yourself.
Think of the children. My gosh, build it yourself? Isn't that illegal? It probably will be soon.
 
FWIW, the 1GB size imposition for xpdf is, one, the exception to the rule, and two, due to the t1 (Type 1) fonts required to read many (e)PDF documents (for compatibility). Technically speaking, it's otherwise not a requirement. In fact, IIRR tex(1) is optional if you build graphics/xpdf from the ports(7) tree.
As to built-in (static) libs; the ports(7) framework being what it is, it could be as simple as make -DSTATIC_LIBS, or a knob in make.conf(5) that reads:
Code:
STATIC_LIBS=true
But doing as blackhaz did above, and installing packages (pkg(8)) afterwards, would quite probably still cause problems.

--Chris
 
And how does one proceed correctly in constructing a standard jail, or a number of jails? The description in the Handbook is quite confusing.
From 0 to 10 jails, in ~90 seconds: by Chris Hutchinson ( Chris_H ) :)

Prerequisites:
A previous build/install world/kernel has already been performed on the host box
(the box the jails will be used, and run on).
The host box already has a copy of src, and ports.

What's missing:
The necessary bits to provide access to the internet. I have intentionally
left this part out. As it adds another layer of complexity to this process, and
does not lend itself to the task of basic jail(8) creation. Those even
somewhat familiar with jail will have no difficulty adding the
few necessary bits to permit internet access from within the jail(8)s. :)

File system layout:
Code:
/var/jails/one
/var/jails/two
/var/jails/three
/var/jails/four
/var/jails/five
/var/jails/six
/var/jails/seven
/var/jails/eight
/var/jails/nine
/var/jails/ten

rc.conf(5) (jail(8)) additions:
Code:
jail_enable="YES"
# redundant, but placed here for clarity
jail_list="one two three four five six seven eight nine ten"

jail.conf(5):
Code:
exec.start = "/bin/sh /etc/rc";
exec.stop = "/bin/sh /etc/rc.shutdown";
exec.clean;
mount.devfs;

path = "/var/jails/$name";

The following could easily be condensed into one small(er) script.
But I've broken the steps out in hopes of making things clearer. :)

Create ten jail dirs:
Code:
jails="one two three four five six seven eight nine ten"
for name in $jails; do
    mkdir -p /var/jails/$name
done

Assuming you've already built "world", and a custom kernel:

Code:
cd /usr/src/
make installworld DESTDIR=/tmp/jailprimer
make distribution DESTDIR=/tmp/jailprimer

Code:
jails="one two three four five six seven eight nine ten"
for name in $jails; do
    cd /var/jails/$name && rsync -a /tmp/jailprimer/ .
done

Code:
jails="one two three four five six seven eight nine ten"
for name in $jails; do
    mount -t devfs devfs /var/jails/$name/dev
done

Copy the jail section from /etc/defaults/devfs.rules to /etc/devfs.rules.

Code:
jails="one two three four five six seven eight nine ten"
for name in $jails; do
    devfs -m /var/jails/$name/dev rule -s 4 applyset
done

The following two steps are a complete waste of time and space,
as they would be much better implemented with nullfs(5), or
symlink(2)s to read-only copies of the file trees, within
each of the jails. But plain copies are used here for illustrative purposes.

Place copy of src in each of the jails:
Code:
jails="one two three four five six seven eight nine ten"
for name in $jails; do
    cd /var/jails/$name/usr && rsync -a </path/to/virgin/src/tree> .
done

Place a copy of the ports tree within each of the jails:
Code:
jails="one two three four five six seven eight nine ten"
for name in $jails; do
    cd /var/jails/$name/usr && rsync -a </path/to/virgin/ports/tree> .
done

Finally:
inspect, then copy /etc/resolv.conf to /var/jails/<jail-name>/etc/

Log in to perform tasks -- root password, adduser(8), newaliases(1), tzsetup(8).

Do so for each of the jails (<jail-name>) thusly:
Code:
jail -c path=/var/jails/<jail-name> command=/bin/sh


done!
and hope this helped. :)
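For the record, here's one way the loop steps above could be condensed into a single script. It's deliberately a dry run - it only prints the plan - since installworld and the mounts are destructive; drop the echos (and run as root) once the paths match your system:

```shell
#!/bin/sh
# Dry-run sketch of the walkthrough above: prints each step instead of
# executing it. Remove the "echo"s to actually run the steps (as root).
jailroot=/var/jails
jails="one two three four five six seven eight nine ten"

for name in $jails; do
    echo mkdir -p "$jailroot/$name"
done

echo make -C /usr/src installworld DESTDIR=/tmp/jailprimer
echo make -C /usr/src distribution DESTDIR=/tmp/jailprimer

for name in $jails; do
    echo rsync -a /tmp/jailprimer/ "$jailroot/$name/"
    echo mount -t devfs devfs "$jailroot/$name/dev"
    echo devfs -m "$jailroot/$name/dev" rule -s 4 applyset
done
```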

--Chris
 
ralphbsz, sorry to bust your Milchmädchenrechnung (naive back-of-the-envelope calculation), but I fear you are not off by a factor of 10, but by a lot more.

Static linking pulls in what you reference, not the complete lib. And demand loading is also a benefit for memory usage. But static linking is still a bad idea to combat mixing ports and packages.
 
(edit) These examples, although entertaining, are massively flawed; please see Phoenix's post below for an explanation of that. In my defense, I didn't study or check in on this (as mentioned below), but in all fairness I still managed to massively overlook the obvious :rolleyes: Oh well :p

Static linking pulls in what you reference, not the complete lib. And demand loading is also a benefit for memory usage. But static linking is still a bad idea to combat mixing ports and packages.
Then picture me surprised...

Code:
peter@zefiris:/rescue $ ls -lh ls
-r-xr-xr-x  139 root  wheel    11M Jul  8 11:18 ls*
peter@zefiris:/rescue $ ls -lh /bin/ls
-r-xr-xr-x  1 root  wheel    35K Jul  8 11:17 /bin/ls*
peter@zefiris:/rescue $ ldd /bin/ls
/bin/ls:
        libxo.so.0 => /lib/libxo.so.0 (0x800828000)
        libutil.so.9 => /lib/libutil.so.9 (0x800a45000)
        libncursesw.so.8 => /lib/libncursesw.so.8 (0x800c59000)
        libc.so.7 => /lib/libc.so.7 (0x800eb8000)
peter@zefiris:/home/peter $ ls -lh /lib/libxo.so.0
-r--r--r--  1 root  wheel   121K Jul  8 11:16 /lib/libxo.so.0
peter@zefiris:/home/peter $ ls -lh /lib/libutil.so.9
-r--r--r--  1 root  wheel    88K Jul  8 11:16 /lib/libutil.so.9
peter@zefiris:/home/peter $ ls -lh /lib/libncursesw.so.8
-r--r--r--  1 root  wheel   383K Jul  8 11:16 /lib/libncursesw.so.8
peter@zefiris:/home/peter $ ls -lh /lib/libc.so.7
-r--r--r--  1 root  wheel   1.7M Jul  8 11:15 /lib/libc.so.7
All libraries combined are 592K + 1700K, so roughly 2292K (2.3M). Yet the statically linked ls is still much larger than that, as we can see.

So then I wondered what could have caused this. Suddenly a theory hit me: what about the dependencies of those libraries themselves? That led me to this:
Code:
peter@zefiris:/rescue $ ldd /lib/libxo.so.0
/lib/libxo.so.0:
        libutil.so.9 => /lib/libutil.so.9 (0x80121d000)
        libc.so.7 => /lib/libc.so.7 (0x800823000)
peter@zefiris:/rescue $ ldd /lib/libutil.so.9
/lib/libutil.so.9:
        libc.so.7 => /lib/libc.so.7 (0x800823000)
peter@zefiris:/rescue $ ldd /lib/libncursesw.so.8
/lib/libncursesw.so.8:
        libc.so.7 => /lib/libc.so.7 (0x800823000)
peter@zefiris:/rescue $ ldd /lib/libc.so.7
/lib/libc.so.7:
So then I did a new calculation: (3 x 1.7M) + 88K, which is roughly 5188K (5.1M).

5.1M + 2.3M = 7.4M, which sits much closer to the actual size of the statically built ls, and seems to make the theory that everything gets built in somewhat plausible, I think.

Note: I haven't read up on this or anything; I'm simply basing myself on findings within /rescue, which seem to contradict the idea of partially used libraries (I haven't tested everything ;)).
 
it broke my Firefox

What GUI system does Firefox use? Can Xpdf not just use that rather than dragging in pointless dependencies?

The answer is generally no and that is the main issue with FOSS. Developers drag in so many dependencies causing this issue in the first place.

My favorite solution was in fact how Solaris 10 handled parallel package repos (/opt/sfw, /usr/sfw, /usr/csw, etc.) and then used the system GTK+ for everything else. I wish ports did similar, e.g. /usr/firefox, /usr/xpdf, containing all their deps as well as the program itself. The closest I can get to this is actually an entire jail.
 
What GUI system does Firefox use? Can Xpdf not just use that rather than dragging in pointless dependencies?

The answer is generally no and that is the main issue with FOSS. Developers drag in so many dependencies causing this issue in the first place.

My favorite solution was in fact how Solaris 10 handled parallel package repos (/opt/sfw, /usr/sfw, /usr/csw, etc.) and then used the system GTK+ for everything else. I wish ports did similar, e.g. /usr/firefox, /usr/xpdf, containing all their deps as well as the program itself. The closest I can get to this is actually an entire jail.
I already alluded to just this when I brought up jail(8)s, which is effectively the same thing. What's more, FreeBSD's ld(1) does (unlike most, if not all, other OSes) allow one to run different versions of libs within the same application. This is a fairly complex topic to cover in a simple fashion. But for illustrative purposes, as an example: I built a copy of the Apache web server some years back that could, and did, run both the php4 extension and the php5 extension - both within the same Apache instance - and suffered no symbol clash.
But that's as far as I'm willing to take this, as the effort necessary to fully impart the necessary knowledge far exceeds the time I have here without preventing me from participating elsewhere on these Forums, as well as tending to my other daily tasks. :)

--Chris
 
Note: there's only a single binary under /rescue. The individual programs are just hardlinks to it. It's not a great example to use for static vs dynamic linking binary size.

But it's a great example of what one can do with static linking, crunchgen, and creative coding. It's basically a giant case statement where the value of argv[0] (the name of the running program) determines what functions are called.
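The argv[0] trick is easy to demonstrate with a toy multicall program (a sketch using a shell script in place of a real crunchgen-built C binary):

```shell
# One file, several hard-linked names; behavior follows the invoked name.
cat > multicall <<'EOF'
#!/bin/sh
case "$(basename "$0")" in
    hello) echo "hello" ;;
    bye)   echo "goodbye" ;;
    *)     echo "usage: invoke me as hello or bye" ;;
esac
EOF
chmod +x multicall
ln multicall hello   # hard links: same inode, same bytes on disk
ln multicall bye
./hello              # prints: hello
./bye                # prints: goodbye
ls -li multicall hello bye   # identical inode numbers, link count 3
```

That's why every "binary" in /rescue shows the same size: they're all the same file, and only the name it's invoked under changes what it does.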
 
Note: there's only a single binary under /rescue. The individual programs are just hardlinks to it.
<facepalm> :oops:

Thanks for the update. Now that I look at /rescue again with Midnight Commander, I have a hard time understanding how the heck I missed that all the binaries have the same size o_O

You learn something new every day :beer:
 
Hi folks,

It's 2018. ... Maybe this is radical, but why the heck, in 2018, can't we allow each app in FreeBSD, no matter how small it is, to pack whatever libraries it needs along with its binary distribution? ... Why?

Wow, what a wakeup call -- it's 2018!? (/sarcasm). The bundling approach that you describe was invented in the late 1980s for NeXTSTEP, later to be adopted by Apple for the Mac OS X system. So that idea is 30 years old. It primarily allows the distribution of proprietary applications as black boxes of bundled libraries, resources, and executables, and it creates many problems - unpatched and unknown security vulnerabilities, and mysterious, impossible-to-debug crashes chief amongst them. I think that there are modern ways of avoiding those problems. As I see it, the modern approaches that solve the problem of putting together compatible versions of a multitude of dependencies are (1) pkg(8) and other modern package managers, along with the CI infrastructure to continuously build coherent sets of packages, and (2) the Nix or Guix package managers, which allow many "environments" of packages to co-exist such that each application can use whatever library versions, compilers, tools, ... it needs. (1), as used by FreeBSD, lets us get security updates applied to all installed software without leaving any bad, vulnerable old library versions lurking about.
 
Let me guess: do you perhaps mix the installation of ports with binary packages?

Nope, I'm running a 100% binary system, 1200+ packages.

I still think MacOS (NeXTSTEP?) bundles are the best approach for packaging software for workstations. This works really well on MacOS. When I want to upgrade, say, a web browser or something else major in FreeBSD, and it wants a new library, pkg immediately wants to upgrade gigabytes of other stuff. The dependencies often go all the way down to some very basic system level. If I want to upgrade Firefox, I may be forced to upgrade, say, Thunderbird, TeXLive, Qt Creator, or some other part of my working apps. (That's just an illustration. I always run into forced upgrades with different software...) I think this shouldn't happen. I don't think the approach of sharing all the libraries works for desktop systems.

Is FreeBSD a good choice for the desktop - that's a different question, of course, but I'm very close to switching to FreeBSD as my main desktop now, from MacOS, and I'm feeling very excited about it. I've replicated pretty much the whole workflow on FreeBSD (office suite, mail, web, Photoshop CS6 under Wine or Krita, Scribus for DTP, TeX, rsync for automated backups, and so on). The only bit that's scaring me is binary software upgrades, and finding a good compatible laptop with a non-16:9 IPS "Retina-class" screen.
 
blackhaz,

I actually tend to dislike the "package manager paradigm" that UNIX-like operating systems currently all seem to follow. I find that it makes us too dependent on the internet and servers. However, I do agree that it is convenient, especially compared to obtaining, e.g., development libraries for Windows. But I feel it is possible to get the best of both worlds.

This is probably not good advice but what I tend to do is download the entirety of the FreeBSD package repo for my architecture (amd64) (http://pkg.freebsd.org/FreeBSD:11:amd64/latest/All/).
It is about 60 gigs as I recall. I then store it on a USB hard drive. Any time I need a new package, I simply plug in the hard drive, mount it and install the package. That way if the server changes packages I don't need to update lots of them and risk breakage.
If I need the very latest version of a package (e.g. GIMP, Blender), I build it from a port instead (because an updated binary package would require updated dependencies that I do not have); a port will compile the software against my current versions.

Every 6 months, I tend to do a clean wipe with the very latest version of FreeBSD and packages again. The 60 odd gigs takes me about a day to download. I am actually quite pleased that the FreeBSD packages repo is a little bit smaller than Debian or Fedora :). My only gripe is that I have to scrape the pkg.freebsd.org website for everything (via curl). I much preferred when it was on FTP.

This is for my personal laptops, workstations. For production servers, I do not do this. Because they have very few packages on them, the risk of breakage is minimal anyway so I just keep them up to date like normal via pkg which includes security updates.

You might already do this but try to avoid big bloated software like Gnome, KDE, etc. They are one of the big causes for all these dependency breakages. This isn't FreeBSD's fault. They are sloppy software.

And for temp stuff that I just want to try out, I don't quite trust that uninstalling a package always removes the cruft so I recommend Jail or VirtualBox.
 
FreeBSD takes a very strict stance on dependencies in its ports/packages infrastructure; no fuzzy logic allowed there. This means that if a port/package needs libfoobar.so.0.1, there had better be a libfoobar.so.0.1 provided by another port/package - or no dice. This is the reason why mixing ports/packages from different origins is a very terrible idea on FreeBSD. The ports system, however, doesn't exclude the possibility of a port including all its shared libraries in the port itself and packaging them with the main application/library/whatever the port provides. I'm pretty sure some ports are already doing this on a small scale, and in theory you could have OS X-like bundles in FreeBSD using the already existing ports system and package manager.
 