Snap is a walled garden made by Ubuntu

It doesn't even mention that you could have different snaps installed, each with a vast amount of ancient and potentially vulnerable dependencies baked inside. Hurray for embedded libraries and such :rolleyes:
 
Yep, and no resource sharing... if you run 15 snaps all of which use the same version of a dependency (as a shared library), you'll find yourself loading that dependency into memory 15 times.
 
They even require some random running [clown]snapd[/clown] service to work.
Didn't know about that one. But I guess something has to unpack the snap before the (core) OS can run it regularly.

EDIT: I'm glancing over the snapd man page and it seems like it has some network connectivity thing going on. At least there seems to be a way to query the network connectivity status of snapd. What?
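For what it's worth, this is probably what the man page refers to: snapd serves a REST API on a local UNIX socket, and the snap tool has a debug subcommand for checking store connectivity. A sketch (socket path and subcommands as shipped on recent Ubuntu; verify against your snapd version):

```shell
# Ask snapd whether it can reach the snap store
# (reports per-host reachability)
snap debug connectivity

# snapd also answers on a local UNIX socket; querying it directly
# shows it really is a long-running, network-facing daemon
curl --unix-socket /run/snapd.socket http://localhost/v2/system-info
```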
 
Indeed. All of the disadvantages and none of the benefits of statically linked binaries. Well done, Linux community! ;)

They even require some random running [clown]snapd[/clown] service to work.
Hopefully it hangs consuming 100% of CPU from time to time.
 
Had to (re)install Ubuntu for a client a couple of months ago (old system, upgraded many times, /boot had become way too small). That snap stuff was the first thing I removed after the initial install. Yeah, no thanks. I'll stick to apt if you don't mind.
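For reference, ripping it out usually looks something like this (package, unit, and snap names as on current Ubuntu releases; "firefox" is just an example of an installed snap, and snaps have to go before snapd itself can be purged):

```shell
# See what snaps are installed, then remove each one
snap list
sudo snap remove --purge firefox   # repeat for every listed snap

# Stop the daemon and its socket, then purge the package
sudo systemctl disable --now snapd.service snapd.socket
sudo apt purge snapd

# Optionally keep apt from pulling it back in later
sudo apt-mark hold snapd
```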
 
The sad part is, the push towards snap/flatpak is more about simplicity/laziness than anything else. Instead of distros curating and making sure packages work, they push all that work upstream. So instead of having a team dealing with bug fixes, they now only need a simple script closing bugs and telling people to complain somewhere else.

I can see the developer side of going to snap/flatpak, in that they don't have to deal with their version of dependency hell: checking all the distros to see what the common minimum versions are that everyone supports.

Sad part is that we didn't learn from the hell/mess of the Log4j and OpenSSL vulnerabilities; the entire Linux community is going to be bitten even harder when the next major storm comes around.
 
They give the impression that the Linux kernel version does not count anymore. Which is of course false.
It kind of doesn't as Linus is (in)famous for ranting "WE DO NOT BREAK USERSPACE!" The irony is that userspace is still a hot mess because notably Ulrich Drepper and others didn't get the memo. One of the advantages of having an OS that releases kernel + base at the same time, and with care to not break backwards compatibility. Now let's talk about Pkgbase ?
 
It kind of doesn't as Linus is (in)famous for ranting "WE DO NOT BREAK USERSPACE!" The irony is that userspace is still a hot mess because notably Ulrich Drepper and others didn't get the memo. One of the advantages of having an OS that releases kernel + base at the same time, and with care to not break backwards compatibility. Now let's talk about Pkgbase ?
I'll just do a simple
Code:
make installkernel
make installworld
etcupdate
make -DBATCH_DELETE_OLD_FILES delete-old
make -DBATCH_DELETE_OLD_FILES delete-old-libs
This is so simple I do not consider pkgbase a real improvement. Although the idea is good.

In src.conf i have:
Code:
WITHOUT_SENDMAIL=yes
So no sendmail when I build world
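For the curious, src.conf(5) documents a whole pile of such knobs; a sketch of a src.conf that trims a few more base components (knob names are from src.conf(5), but the exact set and effects depend on your FreeBSD version):

```
# /etc/src.conf -- build-time exclusions for buildworld
WITHOUT_SENDMAIL=yes   # no sendmail in base
WITHOUT_LPR=yes        # no lpr/lpd printing tools
WITHOUT_PROFILE=yes    # skip profiled libraries
WITHOUT_TESTS=yes      # skip installing /usr/tests
```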
 
Now let's talk about Pkgbase ?
I really don't get that take in your context. Actually, I think pkgbase is a very nice idea, allowing users to leave out stuff they won't need without compiling themselves. It also creates problems of course, e.g. you have to start actually tracking dependencies to "base packages" from ports. So, not sure whether it will ever take off.

But in any case, there's one thing pkgbase will never change, and it's exactly this:
[…] having an OS that releases kernel + base at the same time, and with care to not break backwards compatibility […]

Seriously, it would never change the integrated development model of the base system. All it would change is the distribution of the binaries, from rather monolithic tarballs to individual packages.

So, what's your concern about that in this context?
 
Nix (and Guix) solved all those problems better than Snap or Flatpak...
The problem with Guix is mainly that Guile is slow. They use a lot of GNU Guile in their operating system, e.g. for the package manager, but also for initialization processes and other things. If they had chosen Chez Scheme, they would have had a very similar language that performs much better.

NixOS is therefore faster than Guix in terms of performance in some things, but NixOS is more a mix of different programming languages, and less structured in its general approach.

Out of Flatpak, AppImage and Snap, I observe that AppImage usually gets the best performance in practice (startup times, total disk usage, raw performance after startup, etc.)
Snap ranks the worst of the three here overall.
 
So, what's your concern about that in this context?
My concern is that Pkgbase only makes sense if you want to release kernel and base independently. It only makes things more complicated otherwise.

More, smaller, tarballs = more work to install, and as you said yourself "It also creates problems of course, e.g. you have to start actually tracking dependencies to 'base packages' from ports."
 
I tend to agree. If we don't like to discuss old unsupported versions of FreeBSD because they are time consuming for us to debug, I think trying to debug some guys half-installed Frankenstein ('s monster) FreeBSD base is going to be much worse.
 
My concern is that Pkgbase only makes sense if you want to release kernel and base independently.
No. Then you just got it wrong. Maybe you should have had a look first; the base Makefiles have long since supported building packages. It's basically just an additional variable telling which package the built binaries belong to.

Splitting base, releasing parts individually, is and was never planned (and would make no sense, if you wanted that, you could just convert everything to ports). All pkgbase was ever about is not having to install everything if you don't need it, pretty similar to how you can use all these WITHOUT_* knobs when building from source. A possible benefit is simplified distribution of the binaries, e.g. for security patches.

More, smaller, tarballs = more work to install
Again, no. There's already pkg; together with the also-existing concept of meta-packages, it will just install everything by default.
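To make that concrete, here's a sketch of what day-to-day use looks like once base is packaged (package names such as FreeBSD-runtime and FreeBSD-sendmail are taken from current pkgbase builds; repository setup is assumed and not shown):

```shell
# List installed base packages -- pkgbase names all start with "FreeBSD-"
pkg query -e '%n ~ FreeBSD-*' '%n %v'

# Drop a base component you don't need, e.g. sendmail
pkg delete FreeBSD-sendmail

# Security fixes to base then arrive as ordinary package upgrades
pkg upgrade
```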

I think trying to debug some guys half-installed Frankenstein ('s monster) FreeBSD base is going to be much worse.
If you imply you could mix packages from different base versions, then no, this will never work. Dependencies will be on exact versions, so pkg would refuse to install such a mixture. If your concern is just "missing" base packages causing problems for someone, well, that's exactly what I meant by ports having to track base dependencies as well for pkgbase to be production-ready.

You can already leave out stuff building from source, but ppl doing that are expected to understand the problem if they're missing something some port would require to work. But once you support binary packages, of course you'll have to make sure all dependencies are correctly set.
 
Splitting base, releasing parts individually, is and was never planned (and would make no sense, if you wanted that, you could just convert everything to ports). All pkgbase was ever about is not having to install everything if you don't need it...
These two sentences directly contradict each other.
 