Curing libhell

The one thing that really makes every Linux/Unix distro incredibly difficult to use is libhell. Ever install something that was built for KDE 3.5.1 while you're running KDE 3.5.2, so libqtcore* is all set up for 3.5.2 rather than 3.5.1, the key functionality is gone, and now Pidgin/Skype/etc. won't work? Windows basically solves this by saying "if you need the lib, ship your own copy with it; if we update and your stuff doesn't work, too bad for you."

Anyway, long story short, my suggestion is to keep a reference copy of older versions of every library and never clean them out. An application gets the most recent version first, and if it fails on that we can do one of two things: A) set up a standard that allows the application to load a specific set of libraries by name and version instead of just by name, or B) if an application fails to run due to an out-of-date library, have the OS iterate through each stored version of that library until it finds one that works, then associate that library version with that application so that every time it loads its libs it loads the proper ones. This honestly wouldn't be that hard; it would just have to be loaded into the OS class loader framework.

My question for you guys is this: I could probably branch a copy of the distro off and add this in, but would the FreeBSD Foundation want something like this? I'd rather not waste my time if I'd only be fixing it for myself, and I don't feel like making RasperinBSD. It's a fairly core change, but it would solve a lot of serious headaches for both your advanced and your run-of-the-mill users.

If I've confused anyone I can go into more detail; I'm running on one hour of sleep (a three-day-old just joined my wife and me) and I'm assuming a lot about how the class loader works. But I have a few ideas on how to make a system like this work.
 
Great idea, interested in hearing/seeing where this thread leads. I may be able to lend a hand as well if needed somewhere. PM me anytime.

Keep up the good work (and congratulations on the new addition to the family!)
 
You might want to take a look at what portupgrade does with old libraries - see the contents of /usr/local/lib/compat in particular.

Also, you might want to consider the PC-BSD way of solving this problem.
 
I agree that this is a problem, but my solution is this: for any software I write for FreeBSD that uses libraries outside of base + the xorg meta-package, I ship those libraries with the software (similar to Windows).

I just tell my users to extract it into /opt and I provide a script that sets PATH and LD_LIBRARY_PATH (kinda similar to how I knocked together that PortableFlashFirefox application a while back).
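Something along these lines works for me (just a rough sketch; the /opt/myapp layout and the "myapp" name are only examples):

Code:
#!/bin/sh
# launcher sketch: make the app prefer the libraries bundled under /opt/myapp
# ("/opt/myapp" and "myapp" are example names, adjust to taste)
APPDIR=/opt/myapp

PATH="${APPDIR}/bin:${PATH}"
LD_LIBRARY_PATH="${APPDIR}/lib${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}"
export PATH LD_LIBRARY_PATH

exec "${APPDIR}/bin/myapp" "$@"

The bundled copies in /opt/myapp/lib generally get picked up ahead of whatever is in /usr/local/lib, so ports upgrades don't pull the rug out from under the app.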

This doesn't really work on Linux because there isn't such a thing as a "base".

Other than this, I guess you can just sell your soul to FreeBSD ports / packages :D

I think if there were going to be a new "distro" based on FreeBSD, I would love to see one that doesn't use a package manager but instead ships all the libraries an application needs along with that application. Sure, there will be duplicate libs... but hey, Windows and Mac OS X do it and they seem to do just fine ;)

It might just about be possible to use OpenOffice then haha
 
kpedersen said:
I agree that this is a problem, but my solution is this: for any software I write for FreeBSD that uses libraries outside of base + the xorg meta-package, I ship those libraries with the software (similar to Windows).
The biggest problem I have with this type of setup is when there's a security problem in that library. Instead of just updating that library's port and any port that depends on it (easily done with portmaster/portupgrade), I now have to find out (usually by hand) which ports bundle it and what version each one contains. Then I have to hope every port that has the vulnerable version 'built in' actually gets updated.

It's the *nix equivalent of DLL hell.

Not at all welcome, at least not on my ship.
 
At least the software will work with minimum effort.

It may not be such a great idea from a security point of view, but perhaps that is why FreeBSD and package management in general have never really taken off with home users.
 
That may actually be a better idea: write a port that patches the kernel so that it stores a "local" copy (a symlink to that version) of all libs, excluding the core FreeBSD system libs, which could still cause a problem but a much smaller one. I hadn't considered the security concern, but on that point I'd rather have all of my software work than have half of it work while being mildly/severely more secure.

For example, imagine you're running a production server, say a production OpenLDAP and MySQL box, currently on OpenLDAP 2.4.x and MySQL 5.1.x, with MySQL 5.5.x being the newer release with better support. Both of these share a requirement on the Kerberos libs (and more, iirc), and 5.5.x, released a whole lot later, requires a much newer copy of those libs. Now you're stuck choosing: upgrade those shared libs to get the far more secure, better-performing MySQL server (and hope OpenLDAP still works), or you're SOL.
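(To see the kind of clash I mean, you can check which Kerberos lib each daemon actually records as a dependency; the binary paths and version numbers below are made up for illustration:)

Code:
# check which Kerberos lib each daemon links against
# (binary paths and soname versions here are hypothetical)
$ ldd /usr/local/libexec/mysqld | grep krb
        libkrb5.so.26 => /usr/local/lib/libkrb5.so.26 (0x800a00000)
$ ldd /usr/local/libexec/slapd | grep krb
        libkrb5.so.25 => /usr/local/lib/libkrb5.so.25 (0x800b00000)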

Now, what you're doing updating production before your test/staging environments is another matter; we won't address that here.

That's a case where someone looking for a great server is out of luck. Now consider those of us running FreeBSD as a desktop too (pick me! pick me!). I can't count how often software won't work because it needs a lib from KDE 4.2.(x+1) while the installed lib is from KDE 4.2.x, making stuff just NOT work and making me throw up my hands and go back to Windows. :D

Last but not least, like I said earlier, there are many ways to approach this problem:

A) Every application keeps a local copy of ALL of its libs; when it goes to access those libs it pulls them from its local resource (think a separate VM for every application, if you're thinking of this in a Java way). To do mass updates the OS would only need to keep a reference table of where all the libs live (which can cause disconnects for applications whose libs were updated manually without the table being updated; that could be solved by a background process pinging libs with a version request).

B) Keep the same local concept, but instead of storing the libs themselves locally, store one lib per revision system-wide and put a symlink to it in the application's local directory. Then a mass update would only update the symlink and never actually lose a library. As an OO programmer I find that a much better solution.

C) As suggested above, have the OS actively hunt through the stored versions for the lib it's looking for.

D) Etc.: however PC-BSD probably does it (but I don't know).

E) Give everyone free beer and free pizza and shut up :D

Some food for thought: I'm going to look into what PC-BSD actually does differently from FreeBSD, because I was under the impression it was just a modified kernel with a special installer and that was it. Sorry, I also have to ask: can a port even apply kernel patches? Maybe set this up as an option in the kernel config that can trigger it? And lastly, I'm fairly new to this, so how does one go about making a major change to the kernel like this and getting it released with all future iterations of FreeBSD?
 
And since I can't find the edit button: F) Look at what portupgrade does with /usr/local/lib/compat/pkg. I use portupgrade and still run into this issue all the time, so I may have an active bug to relay. :D
 
rasperin said:
A) Every application keeps a local copy of ALL of its libs; when it goes to access those libs it pulls them from its local resource (think a separate VM for every application, if you're thinking of this in a Java way). To do mass updates the OS would only need to keep a reference table of where all the libs live (which can cause disconnects for applications whose libs were updated manually without the table being updated; that could be solved by a background process pinging libs with a version request).
All fine and dandy until that reference table gets corrupted.

B) Keep the same local concept, but instead of storing the libs themselves locally, store one lib per revision system-wide and put a symlink to it in the application's local directory. Then a mass update would only update the symlink and never actually lose a library. As an OO programmer I find that a much better solution.
Which is more or less how it's done now.

Code:
# ll /usr/local/lib/libsomething.*
-r--r--r--  1 root  wheel  176990 Feb 19 13:21 /usr/local/lib/libsomething.a
lrwxr-xr-x  1 root  wheel      11 Feb 19 13:21 /usr/local/lib/libsomething.so -> libsomething.so.5
-r--r--r--  1 root  wheel  150700 Feb 19 13:21 /usr/local/lib/libsomething.so.3
-r--r--r--  1 root  wheel  150700 Feb 19 13:21 /usr/local/lib/libsomething.so.4
-r--r--r--  1 root  wheel  150700 Feb 19 13:21 /usr/local/lib/libsomething.so.5

Just opening libsomething will get you the latest, but there's nothing stopping you from opening a specific version.
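And if you need to pin one particular application to one particular version, the runtime linker can already remap libraries per executable through libmap.conf(5). Something like this (the program and library names are just examples):

Code:
# /etc/libmap.conf -- per-executable library remapping
# ("someapp" and "libsomething" are example names only)
[/usr/local/bin/someapp]
libsomething.so.5	libsomething.so.3

Every other program keeps getting libsomething.so.5; only someapp is redirected to the old revision.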
 
What's the problem?
There's already a thing called "statically compiled": look at the www/opera port, for example; it uses a statically compiled Qt. IMHO only a vanishing minority of software has issues when its dependencies are upgraded by one least-significant version number (i.e. 3.5.1 -> 3.5.2). And if there IS such software, is that a problem of the operating system or of the software? I'd say the latter.
If, though, the version numbers of dependencies change too often, there's always the option of compiling the software statically and forgetting about its dependencies altogether.
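For a simple program that can be as easy as passing -static at link time (just a sketch; "myapp" and "-lsomething" are made-up names):

Code:
# copy the needed library code into the binary itself instead of
# resolving shared libs at run time ("myapp"/"libsomething" are examples)
$ cc -static -o myapp myapp.c -L/usr/local/lib -lsomething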
Is there any better solution? (Local libs are basically the same idea as static linking.) Surely not multiple library versions :) And if it's already implemented, why reinvent the wheel? Just use it.
 
When possible I always do a static compile; unfortunately that isn't always an option.

What I do think is really quite funny is when programmers try to use GTK (or GTKmm) on Windows. Because GTK does not support static compiling (for Windows at least), the developers have to distribute well over 10 megs of runtimes with an application that may itself be only 1 meg :p

They tried to have a standard GTK installer (it installs to Program Files\Common Files), but of course all the different GTK applications ALWAYS need a different version.

Moral of the story: dependencies are a pain :p
 
How do you specify a static compile with make/pkg_add? Also, you ask why? I gave a good example above, but here's another: I had one machine running the latest ports of PostgreSQL and MySQL, and after installing PostgreSQL I got an out-of-date error on Kerberos.

Actually, what prompted all this was work I was doing to cure a problem with libqt_dbus and Kopete.

Thanks!
 
rasperin said:
How do you specify a static compile with make/pkg_add?
If this can't be done now due to the imperfections of the ports/packages system :) that doesn't mean it can never be done. I've also thought about many features that could be very useful for the ports system, among them static compilation and automatic stripping of libs.

rasperin said:
I gave a good example above, but here's another: I had one machine running the latest ports of PostgreSQL and MySQL, and after installing PostgreSQL I got an out-of-date error on Kerberos.
Well, of course static compilation is in no way a cure-all, as there is no cure-all at all :) but I bet 90% of such problems could be solved that way, and trying to solve the rest within the ports system would probably make it way too complicated for 'everyday' use. At least adding some 'USE_STATIC' make flag would not be a big hack, while redefining the way libs are located would be.
(What I think could also be useful is a way to define which libs should be compiled statically and which should not, but that seems to be a 'big hack', too.)
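Just as a thought experiment, such a USE_STATIC knob might look something like this inside a port's Makefile (to be clear: no such flag exists in the ports framework today, this is purely hypothetical):

Code:
# purely hypothetical sketch -- the ports framework has no such knob today
.if defined(USE_STATIC)
LDFLAGS+=	-static
.endif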
 