Flatpak and Snap are just obscene.
They were never the solution in the first place, just like containerization was never the solution in server space. And more and more people are finally opening their eyes.
The myth that commercial vendors couldn't port their software to Linux because of the fragmentation of its ecosystem was always a plain lie.
And the fact that even now, with Flatpak and Snap here to "solve" this problem, no one is porting any famous commercial application to Linux proves my point. No Photoshop, no Microsoft Office, etc.
If someone wanted to distribute proprietary software on Linux, they could simply:
- statically link the application (a minimal example follows this list);
- distribute a dynamically linked binary with its own dependencies packaged in the same tarball, like VirtualBox, VMware Workstation and even ancient games like Uplink have always done;
- produce binaries for the most common distributions (Ubuntu/RHEL).
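If you want to see how trivial the first option is, here is a minimal sketch, assuming nothing more than gcc and a toy program (the file name app.c and the program itself are mine, purely illustrative):

    /* app.c -- toy program to show fully static linking */
    #include <stdio.h>

    int main(void)
    {
        puts("Hello, statically linked world!");
        return 0;
    }

    /*
     * Build it fully static (with GLIBC this can print warnings for
     * programs that use things like getaddrinfo; a libc like musl
     * doesn't even complain):
     *
     *     gcc -O2 -static -o app app.c
     *
     * Check the result: `ldd ./app` answers "not a dynamic executable"
     * and `file ./app` reports "statically linked". That single file
     * can be copied to any Linux box with the same architecture and a
     * compatible kernel, and it just runs.
     */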
That said, the real problem with Linux (and modern UNIX in general) was pushing the concept of dynamic linking so far that it produced dependency hell just to run an application. And all of that because of false arguments like:
- you could just recompile a dependency without recompiling the entire application;
- you could get better security;
- you could run applications faster;
- you could get smaller applications.
The first point was invalidated many times by the likes of OpenSSL and GLIBC, with their versioned symbols. It was also invalidated by the rise of Continuous Integration, where you're going to recompile your software many times anyway. And there are tools like ccache that help when you're recompiling the same stuff over and over.
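If you want to see why "just swap the shared library" is not that simple in practice, here is a sketch of what GLIBC's versioned symbols look like from the application side (the file name and the GLIBC_2.2.5 version are just examples taken from x86-64; the mechanism itself is standard):

    /* symver.c -- sketch: how a binary pins a GLIBC versioned symbol */
    #include <string.h>

    /*
     * GLIBC exports multiple versions of some symbols (on x86-64:
     * memcpy@GLIBC_2.2.5, memcpy@GLIBC_2.14, ...). A binary records
     * exactly which version it was linked against, so any "drop-in"
     * replacement library has to provide those same versioned symbols.
     * The .symver directive below explicitly requests the old one.
     */
    __asm__(".symver memcpy, memcpy@GLIBC_2.2.5");

    int main(void)
    {
        char dst[16];
        memcpy(dst, "versioned", 10);
        return dst[0] == 'v' ? 0 : 1;
    }

    /*
     * Build with:   gcc -fno-builtin-memcpy -o symver symver.c
     * Inspect with: objdump -T symver | grep GLIBC
     * Every listed FUNC@GLIBC_x.y is a contract the replacement library
     * must honour before you can "just recompile the dependency".
     */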
The second point was invalidated by gaping attack vectors like LD_PRELOAD and LD_LIBRARY_PATH.
The Umbreon rootkit was a clear demonstration of how false the security of dynamic linking is.
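A toy illustration of the kind of interposition an Umbreon-style rootkit relies on (the hook below just lies about the real UID; an actual rootkit hooks readdir() and friends to hide its own files):

    /* fakeuid.c -- toy LD_PRELOAD hook: the real UID is always reported as 0 */
    #define _GNU_SOURCE
    #include <dlfcn.h>
    #include <unistd.h>

    uid_t getuid(void)
    {
        /* A real hook would normally forward to the original libc symbol: */
        uid_t (*real_getuid)(void) = (uid_t (*)(void))dlsym(RTLD_NEXT, "getuid");
        (void)real_getuid;   /* unused here: this toy simply lies */
        return 0;            /* pretend to be root */
    }

    /*
     * Build and inject (this only works on dynamically linked programs):
     *
     *     gcc -shared -fPIC -o fakeuid.so fakeuid.c -ldl
     *     LD_PRELOAD=./fakeuid.so id -ru     # prints 0
     *
     * A statically linked `id` ignores LD_PRELOAD completely, because
     * there is no dynamic loader in the picture at all.
     */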
The third point was invalidated in computer science as far back as the debate between microkernels and monolithic kernels. Eons ago.
And we're still waiting for a technical demonstration of how a binary that has to rely on the ELF loader to recursively scan for its dependencies can start faster than a binary that has every routine it needs built in.
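Anyone who wants to run that measurement can do it in five minutes; GLIBC's own loader will even report how much time it spends (the program is just a stand-in, while LD_DEBUG=statistics is a standard ld.so feature):

    /* bench.c -- the same trivial program, built twice for comparison */
    #include <stdio.h>

    int main(void)
    {
        puts("started");
        return 0;
    }

    /*
     * Build the two variants:
     *
     *     gcc -O2 -o bench-dyn bench.c
     *     gcc -O2 -static -o bench-static bench.c
     *
     * GLIBC's dynamic loader can report its own overhead:
     *
     *     LD_DEBUG=statistics ./bench-dyn
     *
     * prints the total startup time spent in the dynamic loader and the
     * time spent processing relocations; the static binary has no such
     * phase at all. For a toy program the difference is tiny, but it
     * grows with every extra DT_NEEDED library the loader has to map,
     * resolve and relocate.
     */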
The fourth point was invalidated by using a saner libc than that huge pile of crap that is GLIBC, and also by Mac OS X, which has used static and mixed linking since forever.
Even its progenitor, NeXTSTEP, adopted that kind of linking, in a period when memory/CPU/storage were a tiny fraction of what we have now.
Think about it: with a statically linked binary you could run a 32-bit application in a 64-bit environment without installing any 32-bit library.
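A quick sketch of that last point, assuming an x86-64 machine and a 32-bit-capable toolchain available only at build time (gcc-multilib is just the Debian/Ubuntu package name for it):

    /* hello32.c -- a 32-bit static binary for a pure 64-bit userland */
    #include <stdio.h>

    int main(void)
    {
        printf("sizeof(void *) = %zu\n", sizeof(void *));  /* prints 4 */
        return 0;
    }

    /*
     * Build on any machine with 32-bit toolchain support installed
     * (e.g. the gcc-multilib package on Debian/Ubuntu):
     *
     *     gcc -m32 -static -o hello32 hello32.c
     *
     * The resulting binary runs on a 64-bit system with no 32-bit
     * libraries installed at all; the only requirement is a kernel
     * built with 32-bit syscall emulation (CONFIG_IA32_EMULATION),
     * which mainstream distributions enable. A dynamically linked
     * 32-bit binary would instead drag in an entire i386 library stack.
     */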
And the adoption of workarounds like huge tarballs with the binaries and their dependencies distributed together, or like Docker containers, is just more proof of that. Even Go and Rust, which were designed to statically link their binaries by default, are further proof of that.