Mac OS X has been doing it since the NeXTSTEP days. And it is doing fine. Way better than Linux, actually.
The problem is not a lack of desktop integration: there has always been something like FreeDesktop to define a proper way to integrate applications. But even after establishing it, the two bloated and crappy desktop environments (KDE and GNOME) still prefer to do things their own way.
The problems were:
- ideology (see Ulrich Drepper);
- libc (GNU libc was designed to be unfriendly to static linking for ideology reasons; see #1).
And do you know what is ironic? Ulrich Drepper, who always badmouthed static linking, is now doing research on the UKL project, to build UNIKERNELS based on Linux. And unikernels are just the next evolution of static linking. LOL
Static linking is NOT about linking the entire library into your binary: it's about linking only the object files that contain the routine(s) you actually use from that library.
It's dynamic linking that forces you to reference the *entire* library. Always.
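A minimal sketch of what I mean, with hypothetical files foo.c, bar.c and main.c (build commands assume a standard cc/ar/binutils toolchain):

    /* foo.c -- the only routine main() actually uses */
    int foo(void) { return 42; }

    /* bar.c -- lives in the same archive but is never referenced */
    int bar(void) { return 1000; }

    /* main.c */
    int foo(void);
    int main(void) { return foo(); }

    /*
     * cc -c foo.c bar.c
     * ar rcs libfoo.a foo.o bar.o
     * cc main.c libfoo.a -o main
     *
     * The linker extracts only foo.o from libfoo.a, because that is the
     * member that resolves the undefined reference to foo(); bar.o never
     * ends up in the final binary (nm main | grep bar prints nothing).
     */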
There's no problem with static linking, as I said.
Also, library sharing is the EXACT reason why Linux sucks on the desktop. I don't want to update the entire OS just to update a stupid application.
The Flatpak approach is insane. It's the SAME horrible approach as WinSxS/Visual C++ Runtimes on Windows.
And it sucks. It always sucked.
As I said before, this is not related to dynamic or static linking.
It's a matter of API and ABI lifecycle, something that no one in Linux land seems to understand.
After all, they find it funny to link against a bloated libc with versioned symbols.
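To make that concrete: even a trivial dynamically linked program ends up carrying GLIBC_x.y version requirements. A sketch (the exact version tags depend on the glibc you build against):

    /* hello.c -- nothing exotic, just one libc call */
    #include <stdio.h>

    int main(void) {
        puts("hello");
        return 0;
    }

    /*
     * cc hello.c -o hello          dynamic by default; readelf -V hello lists
     *                              version needs such as GLIBC_2.2.5 or GLIBC_2.34,
     *                              so the binary demands a glibc at least as new
     *                              as the one it was built against.
     * cc -static hello.c -o hello  no interpreter, no GLIBC_* version needs.
     */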
With a statically linked binary you could just binary-patch it, you know? Or even replace it entirely if you are lazy.
Are you saying that downloading ONE binary is more difficult than downloading a package, extracting it and doing something else?
And look, since statically linked binaries do not depend on the loader, you wouldn't even see things like your application suddenly crashing because you upgraded it while it was running.
Linux distributions nowadays are as huge as Mac OS X.
And Mac OS X forces you to statically link your applications, or to mix-link them if you're sharing code while distributing many of them in a single package, like an application suite.
How so? I already answered you about that.
In fact, it's way simpler than with dynamically linked binaries, since you don't have to deal with things like dependency hell and circular dependencies.
This has nothing to do with statically linked binaries.
In fact, it has to do with the shitty decision to dynamically link them, since a statically linked binary can be distributed WITHOUT any worries on any Linux-based operating system out there, with no regard for dependencies or for the libc it was built against.
That does not hold for dynamically linked binaries, since they are TIED to the environment they were built in.
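For instance (a sketch, assuming musl and its musl-gcc wrapper are installed; plain gcc -static works too, with glibc's usual caveats):

    /* portable.c -- build it once, copy the single file to any Linux box */
    #include <stdio.h>

    int main(void) {
        puts("same binary, any distro, any libc");
        return 0;
    }

    /*
     * musl-gcc -static -O2 portable.c -o portable
     * file portable   ->  "statically linked", no ld.so interpreter entry
     * ldd portable    ->  "not a dynamic executable"
     *
     * No DT_NEEDED entries, no dependence on the target's libc: the same
     * file runs on Debian, Alpine, NixOS, whatever.
     */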
This does not solve the problem.
In fact, it could be considered a weakness.
Think about having two copies of LLVM, one in base and another one as a dependency of Mesa.
Are you saying that this kind of bloat is better than having a smaller, more compact statically linked binary?
The purpose of Flatpak and Snap, just like the purpose of Docker, was to circumvent the dynamic linking problem.
And they are circumventing it in a HORRIBLE way, since packaging an ENTIRE environment just to be sure that your application won't break is INSANE, and it defeats the point of dynamically linking it in the first place.
Dynamic linking has only ONE sane use case: making plugins. STOP.
And that's why mixed linking exists: it helps you deal with exactly that kind of thing.
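Concretely, the plugin case is just dlopen()/dlsym() at runtime. A rough sketch of the host side, with a hypothetical ./plugin.so exporting int plugin_init(void):

    /* host.c */
    #include <dlfcn.h>
    #include <stdio.h>

    int main(void) {
        /* load the (hypothetical) plugin shipped next to the binary */
        void *h = dlopen("./plugin.so", RTLD_NOW);
        if (!h) {
            fprintf(stderr, "dlopen: %s\n", dlerror());
            return 1;
        }

        /* look up the agreed-upon entry point */
        int (*plugin_init)(void) = (int (*)(void))dlsym(h, "plugin_init");
        if (!plugin_init) {
            fprintf(stderr, "dlsym: %s\n", dlerror());
            dlclose(h);
            return 1;
        }

        int rc = plugin_init();
        dlclose(h);
        return rc;
    }

    /*
     * cc host.c -o host -ldl     (on glibc >= 2.34 the dl* calls live in libc
     *                             itself, but -ldl still works)
     *
     * Mixed linking is then one linker flag away with GNU ld, e.g.:
     *   cc -c host.c
     *   cc host.o -Wl,-Bstatic -lmystuff -Wl,-Bdynamic -ldl -o host
     * (libmystuff is hypothetical: your own code linked statically, while
     *  the loader-facing parts stay dynamic.)
     */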
Other than that, it's just a technically inferior, insecure and slower solution.
Ideology is the main reason for these idiotic "solutions", like Flatpak.
Thank God Rust and Go are reversing this stupid trend.
harmful.cat-v.org