Linus Torvalds Begins Expressing Regrets Merging Bcachefs

TBH, ZFS only working well on FreeBSD is a downer. I'd rather have it accessible everywhere. FreeBSD having to rebase onto OpenZFS (formerly ZoL) now is kinda funny, though.

It's a little weird that two BSDs with cranky userbases (Open, Free) have features they just dangle over everyone else (openssh, ZFS). I'd prefer people pick FreeBSD cause they like it not because they think they have to.
 
It's usually not about doing a better job, it's people simply doing the job. That's why ZoL took the OpenZFS lead: more people doing the work.
 
But does this mean more people will come here from Linux,
and ask how, and why doesn't FreeBSD become more like Linux?
NO NO NO my friend!!!!

I fled Linux a week ago! I was on Arch on my laptop and Debian on my PC.

At some point I had enough of systemd shenanigans ( sometimes sleep works, sometimes not...sometimes you shut down your computer and it goes down in 10 seconds, the next time it blocks on a service and eats the whole two and a half minutes allotted...and other horror stories ). Then, at some point, I was forced to use more and more flatpak crap. Why? Because repos aren't always updated. The problem child in my case was Telegram ( let's pray for Durov :'‑( ). Currently I am using the 5.2.3 release on FreeBSD ( they have a manpower scarcity, but they manage to keep ports updated to an acceptable degree...if only some Linux distros could do it too ), not the latest and greatest but one that supports everything. On Debian I was forced to use an older version ( many posts/channels showed the infamous warning about having a Telegram version too old to show the content ). And yes, use the flatpak version...so you can happily forget about clicking on links and having the browser open them automatically!!! Heck, how many years has flatpak been around? And they still have such a bug!?!

And yeah, what are they really trying to do with flatpak? Do we really need to download gigabyte-sized "frameworks" just to install a user program? And then another program wants another version of some already installed framework/component. And the thing has walls like a prison, but a badly designed prison where common users have trouble running software ( I was using the flatpak versions of vscodium and neovim too...another can of worms! ), while attackers can surf over the bugs in their super duper "packaging, deployment and execution environment".

And what are they trying to achieve by merging /bin and /sbin with their /usr counterparts, getting rid of /usr/sbin in the process too? Not all of them even agree, and they don't really know why this is needed in the first place. Maybe Poettering will convince them some day that /usr/local is apostasy, because, you know, we only need flatpak. At least distros like GoboLinux have a clear vision for solving specific problems. Like it or not, they are coherent. The coherence in the Red Hat-driven Linux world is nowhere to be seen.

Sorry for the rant, but I had been patient with them for 10+ years, partly because FreeBSD was too immature on the hardware front ( I run more or less new hardware ).

So, no please, don't become like Linux!! Do you want to force me to migrate to Open/NetBSD? Or maybe Haiku ( if it gets better at supporting hardware ).

Another absurdity I have found no explanation for is KDE being utterly buggy on many, many Linux distros. The only one working as expected is Neon. But you know what? I am using KDE 5 ( because they say KDE 6 still has some nasty bugs to squash ) on FreeBSD and I have had only one crash of the main panel. Maybe the Linux distros add some not-so-well-tested patches.
 
The Flatpak love is why I don't use Ubuntu at all anymore. Insane.
Agree. The big problem is that it is badly designed. Flatpak and Snap are horrible solutions to a moderate problem.

The obvious solution is to enforce a bit of stability in the base part of the userland. After all, glibc has robust backward compatibility built in. Gtk and Qt give the same guarantees within their major versions. These are the main pain points when porting proprietary software to Linux. I honestly don't understand why the overengineered solution proposed by flatpak is even needed. Sandboxing? That can be done in a separate, fine-grained, per-application way, using seccomp and containers. And of course don't do it for the entire desktop environment, because that isn't needed at all.
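To make the "per-application, not whole-desktop" idea concrete, here is a minimal sketch using unshare(1) from util-linux. It is purely illustrative, assumes a Linux box with unprivileged user namespaces enabled, and is not a claim about how flatpak works internally:

```shell
# Minimal per-application sandbox sketch: give one program its own
# mount, PID, and (empty) network namespaces, instead of sandboxing
# the entire desktop environment.
unshare --user --map-root-user --mount --pid --fork --net \
    sh -c 'echo "inside sandbox, uid=$(id -u)"'
# A seccomp filter restricting the allowed syscalls could then be
# layered on top in the launcher (e.g. via libseccomp), giving a
# fine-grained, per-application policy.
```

The point of the sketch is that the kernel primitives (namespaces, seccomp) are available directly, without a gigabyte-sized runtime around them.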
 
It's usually not about doing a better job, it's people simply doing the job.
Of course.
But there is doing the job, and doing the job.

As I said in my first post:
Doing the job is one thing.
To decide what's going to be released is something else.

Any development engineer who has seen immature rubbish released into the market, faced the consequences it caused, footed the bill for decisions made too quickly by others, and maybe was even taunted for being part of the team working on that crap while knowing better, only to have his or her concerns ignored, knows what I mean.

Just because most do it that way is no reason to do it likewise.
(As we say in Germany, sarcastically: 'Eat crap! Billions of flies do. They can't be wrong.')

It's simply bad style to blame whoever did the better job.
That should either be an incentive to do it better oneself,
or one stands by one's own way, and lives with it and its consequences.

In the end all that counts is whether the user/customer buys it.
If not, it's not the customer who is to blame.
(It's also human to blame others. It's so much easier than working on oneself.)

Especially for some software developers it's sometimes hard to accept
when users reject their ideas, supposedly too stupid to recognize the ingeniousness, while actually it's still unfinished garbage, simply not needed, or useless.
But, depending on the environment, they can sometimes push it onto the market anyway.
I don't mean putting it at free disposal (website, GitHub, ports tree, etc.);
I mean feeding it unasked to the users via 'upgrades' they didn't ask for.

And the mere fact that a lot of work was done on it neither makes it good, nor is it a reason to press it on the user.

It's not about better or worse developers doing a good or better job.
The 'job' also includes whether, when, how, and under what criteria the decision is made to release,
or to keep it in the lab.

As far as I have experienced Linux, it sometimes seems a bit too open to me, so immature work is more often released too quickly into the wild when it had better stayed in the lab a while longer.
Within FreeBSD, meanwhile, I feel the release process is stricter, which I appreciate, because it leads to more reliable products, and less crap.
And I object to anyone who wants to soften that, because I smell 'but I want my crap released; even if it may not be perfect yet; but what is; and I will work out the flaws later.'
As I know from many cases: 'No, you won't. You got your release. Now you are off to another fancy idea.'
(I meant the 'royal you', not you.)

It's not always the developers who are to blame for that.

And when this results in something users don't like, or even reject,
then it's neither the user nor the programmer coding some source sections who is to blame;
rather, one should think about other things, like reconsidering QM, doing projects the way engineers are taught, the criteria for commits, etc.

That's all I was trying to say in short in my former posts.
 
NO NO NO my friend!!!!
If that was meant to be funny, forget this post of mine.
Sorry, I read your post too quickly, and answered too quickly, so I misunderstood you. :cool:

Otherwise:
I didn't say 'all' from Linux came to FreeBSD, I meant 'some' - I am capable of making distinctions (except when I'm joking, 'cause being reasonable kills the joke).

And while we are counting beans:
I'm fully aware that only some (few?) FreeBSD users use FreeBSD exclusively,
while most also work with, and sometimes even on, other systems, including Linux.

But I cannot help feeling that those people are quickly pissed off when someone says something against Linux/Windows.
And I also cannot help thinking this may come from another humanlike trait:
because there is something right in it? :cool:

After all, we are in a FreeBSD forum, not a Linux one.
 
Native ZFS seems to be a huge draw toward FreeBSD for a while now.

And a good one I think. People with enough knowledge to prefer certain filesystems over others are welcome in my book.
I jumped to FreeBSD because of Beastie :D Plus I think people will be drawn to a solid FS; it makes life better.
Agree. The big problem is that it is badly designed. Flatpak and Snap are horrible solutions to a moderate problem.
I believe Ubuntu is almost forcing you to use flatpaks, right?

It's a little weird that two BSDs with cranky userbases (Open, Free) have features they just dangle over everyone else (openssh, ZFS). I'd prefer people pick FreeBSD cause they like it not because they think they have to.
People tend to avoid researching and spending time testing, so they stick to things they know; only when danger kicks in or a lesson is learned do they tend to start exploring. People tend to name OpenSSH or ZFS as key features because they read it somewhere :)
FreeBSD is known for its stability too, and there's no systemd, plus jails and bhyve. When you look deeper, you figure out that the userland is way better and easier to manage, and if you look at the packages you can install, it is one of the top leaders. The AUR gets a lot of attention for its amount of packages, but FreeBSD is close behind; for me the comparison makes little sense anyway, as I think NixOS is the leader in packages you can get. Oh, did I mention the FreeBSD wiki and Handbook? The Handbook gives you almost everything you need; the wiki gives you the rest of it. P.S. I'm sure that if CUDA were available on BSD, many more people would jump from Linux to FreeBSD or the other BSDs. And still, you can switch to FreeBSD for almost everything, as the majority of people still only do web browsing and the occasional document print :D
 
Well, I trust it way more than BTRFS.

Actual use here is ZFS when on Linux, of course ;)
Early on, BTRFS was completely awful from a reliability, resilience and durability viewpoint. A well known file system developer (probably the best known one today) referred to it as "a machine for destroying data". I think one of the reasons is that it bit off too much. It started with Ohad Rodeh's (pretty genius) idea of how to use modifiable pages in a B-tree for a file system; alas Ohad was not heavily involved in the day-to-day software engineering. It also took a lot of ideas from ReiserFS (except obviously the murdering your wife part). The total amount of design work to be done probably overwhelmed the very small implementation team (for much of its early life: one person!), which led to the train wreck we got. It was also mainstreamed way too early; I think part of it was that the big distros (who have large development teams) were tired of ext2/3/4, and it got endorsed by the leader of the ext2/3/4 team as being the design for the future (damning it with faint praise).

I have not used or seen heavy use of Bcachefs, but I think the history repeats itself, just more extreme. The design goals are even more complex, now including built-in RAID (and some silver bullet to fix the write hole). The implementation team seems to be a single person, roughly one or two orders of magnitude too small for a system of that scope. People I know who decide which file systems to use on Linux (for big and important deployments) laugh about it, and go back to ext4.

Here is my hierarchy for the most reliable and durable storage systems:
  1. Commercial systems that are only sold to very large customers, such as Spectrum Scale, Lustre, or well-supported Ceph. All these are very expensive, and difficult to support. On the other hand, the commercial support organizations are fantastic, and worth every M$ (no, you don't get them for pennies).
  2. Good cloud storage solutions, such as Amazon's S3 or Google's GCS. Per byte, they are not very expensive, and they have fabulous resilience, and for commercial customers decent support. Big disadvantage: your data is at the other end of a potentially slow and unreliable network link (like your home DSL line). And you have to deal with a support organization for which 99.99% of all customers are small.
  3. ZFS. Of all the work of the late 90s and early 00s, this one has had the benefit of great designers, architects and coders early on, and it has stood the test of time well. Plus it is available for free. I trust nearly all my data to it.
  4. ext4 or XFS. Both very, very good file systems; the people who work on them have a lot of experience (duh) and have learned from their own mistakes (kudos for that). But they don't have built-in RAID, which is a sea change in good things happening. So only for single-disk single-node systems. FreeBSD's UFS is similar in quality, but with a smaller feature set, and fewer developers.
  5. On Macintoshes, the new Apple File System is very good. But given that it is de-facto only used on Macs and iPhones, it is not really relevant to this discussion.
  6. Everything else I know about is either crap, has rotted away, or a special-purpose solution. There might be good things out there that I don't know about.

EDITed: Added XFS to the category of "very good single disk file systems", thank you for that input.
 
Only supports one version of electron. Way better! Top leader! Packages disappear out from underneath you if a security fix breaks them. Can't mix ports and packages. Easier management!
You can mix and match ports and packages in sane ways. Electron? Who needs Electron?! :))) And all these disappearances, one-version issues, etc.: you think that doesn't happen on Linux? My understanding of management is a bit different, not just having a package or multiple versions of a package. P.S. How many packages can break your system on Linux, or not break it but have security problems? :) There are some, right? There is no best-of-the-best OS in today's world, and hopefully there never will be, so you have trade-offs either way.
 
You can mix and match ports and packages in sane ways
No, you can't, and I encourage you to get into an argument about it with the people who have participated in all the threads about whether this is possible or not.
who needs electron
Everyone who needs an app that depends on electron.

I've used yum and apt and all other sorts of Linux package management and they either work or they don't, vs FreeBSD where maybe something will break, hope you read UPDATING, GLHF. I appreciate that I can fix it (I could probably fix Linux if I actually had to, but... NEVER HAD TO), but I don't appreciate the hosing. The forum is littered with examples.

Yeah, sure, there is no "best" but let's not pretend FreeBSD is something it is not.
 
No, you can't, and I encourage you to get into an argument about it with the people who have participated in all the threads about if this was possible or not.
Although you can use poudriere-devel to build a local repository, leveraging pre-built packages where possible and your custom settings where needed. So it's very close to mixing ports (custom options, locally compiled) and packages (buildbot / pre-compiled).

Certainly, the easiest I’ve come upon if you need to do this in a sane way, compared to what’s available in the linux distros I’ve tried.
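For the record, that workflow looks roughly like this. The jail name and port are made up for illustration, and the -b flag (fetch official pre-built packages where possible) is, as I understand it, a poudriere-devel feature; check poudriere(8) on your system:

```shell
# Hypothetical poudriere-devel workflow: build a local repository,
# pulling official pre-built packages where possible and compiling
# locally only the ports whose options you customized.
poudriere jail -c -j builder -v 14.1-RELEASE   # create a build jail
poudriere ports -c -p default                  # fetch a ports tree
poudriere options editors/vim                  # set custom port options
echo editors/vim > ~/pkglist
poudriere bulk -j builder -p default -b latest -f ~/pkglist
# then point pkg(8) at the resulting local repository
```

Everything with stock options comes from the official buildbots; only your customized ports get compiled locally.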
 
Well, once the instructions hit the handbook on how to do that, LMK. If it were that "easy" I would think somebody would have suggested this every time...
 
ZFS is equally large and complex and would have suffered the same fate, but it had the benefit of Sun QA and resources to get over the hump.

This is really what it comes down to. It'll be nearly impossible for a one man dev team to replicate this.

I watched an interview about ZFS some time ago. They mentioned that one of the design constraints of ZFS was complete data-path integrity: it could never corrupt data, under any circumstance. Only once that stability baseline was met did they start adding other features. I just wish the VM subsystem and the ARC played more nicely together for better write I/O. I don't think the illumos folks have fixed that issue yet. Oracle did, unfortunately.
 
ext4. A very very good file system, the people who work on it have a lot of experience (duh) and have learned from their own mistakes (kudos for that). But it doesn't have a built-in RAID, which is a sea change in good things happening. So only for single-disk single-node systems.
I think that "only for single-disk single-node systems" is a little unfair. You can sit md on top of block devices to form RAID sets (also block devices) -- and that's what I do on Linux to get a mirror for /boot (ext2 file system). You can also manage aggregations of block devices with the Logical Volume Manager. Admittedly, md and LVM are not integral to the file system, but LVM is pretty much de rigueur for any serious Linux system (typically using ext4 or xfs file systems).
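A rough sketch of that layering, for anyone unfamiliar with it. Device names are placeholders; all of this needs root and real block devices, so treat it as a command outline rather than something to paste:

```shell
# Mirror two partitions with md, then put ext4 on top of the array.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
mkfs.ext4 /dev/md0

# Or aggregate block devices with LVM and carve out a logical volume:
pvcreate /dev/md0
vgcreate vg0 /dev/md0
lvcreate -L 10G -n data vg0
mkfs.ext4 /dev/vg0/data
```

The file system still sees a single block device in both cases; the redundancy lives entirely in the layer below it.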
There might be good things out there that I don't know about.
XFS is now the default file system on Red Hat Enterprise Linux. I have used it extensively on large file systems without any problems. One of its great virtues is that (unlike ext4) the time to fsck is zero. That matters quite a lot if you have an Internet-facing application with paying customers and a few hundred terabytes to fsck before normal service can be resumed. [I didn't design that, just inherited it!]
 
I think that "only for single-disk single-node systems" is a little unfair. You can sit md on top block devices to form RAID sets (also block devices) ...
Absolutely, you can run a single-disk file system on top of a separate RAID system. In the bad old days, we would buy or build RAID array boxes (physical pieces of sheetmetal or whole racks) that had a single SCSI/FC port and looked to the host like a single "disk" (in reality block device), and internally had anywhere from a handful to a thousand disks. But there is enormous power in integrating the RAID layer with the file system layer. The single biggest one: when a disk fails, you don't have to immediately resilver unallocated space, and you can treat metadata and data separately, and ... many more things. All that is not available when using ext4 or other "single disk file systems".
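For contrast, the integrated approach in ZFS looks like this (disk names are placeholders, and this needs root on a machine with spare disks):

```shell
# With ZFS the RAID layer and the file system share one view of the
# disks: the pool itself is the mirror, and every dataset uses it.
zpool create tank mirror da0 da1
zfs create tank/home

# After replacing a failed disk, only allocated blocks are resilvered,
# because the pool knows which blocks the file system actually uses:
zpool replace tank da1 da2
zpool status tank
```

That allocation-aware resilver is exactly the kind of win that a separate md-style RAID layer, which sees only opaque blocks, cannot deliver.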

XFS is now the default file system on Red Hat Linux. I have used it extensively on large file systems without any problems.
Indeed, XFS was a very good file system. One of its senior developers is the brother of a neighbor of ours (the neighbor is not in hi-tech); another senior developer was a colleague for many years in a previous job (after Silicon Graphics for him). I didn't know that XFS is still getting maintained so well, and that it is now the default on RHEL. Well, good to know, and I'll add it to my category of "file systems good enough to be seriously used".

One of its great virtues is that (unlike ext4) time to fsck is zero. That matters quite a lot if you have an Internet facing application with paying customers ...
Absolutely, instant (or deferred) fsck is a great thing. And "instant" may mean: there is no fsck; normal operation maintains all the necessary invariants, and after a system crash there is no need to do anything special. Many modern systems do that by internally using logs (which may not be visible to users) and replaying the logs on every restart. One of my favorite colleagues (sadly no longer with us) actually turned that into a design principle: never implement "clean shutdown". Instead, just crash the system, and restart it. Because you need to implement "restart after crash" anyway, and if you make it the normal mode of operation, you will first design it to be reliable and fast, and second it will get tested extremely well.
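The "crash-only" idea above can be sketched in a few lines of shell. This is a toy: the log file name, format, and the addition-based "state" are all invented purely to show the shape of log replay, where restart is the only recovery path:

```shell
# Toy sketch of crash-only operation: every update is appended to a
# log, there is no clean-shutdown path, and startup always rebuilds
# state by replaying the log from the beginning.
log=$(mktemp)

append() { echo "$1" >> "$log"; }   # normal operation: log each update

replay() {                          # the only startup path there is
    state=0
    while read -r delta; do
        state=$((state + delta))
    done < "$log"
    echo "recovered state: $state"
}

append 5
append 7
# simulate a crash: no shutdown hook runs, we just restart and replay
replay                              # prints: recovered state: 12
rm -f "$log"
```

Because the replay path runs on every single start, it gets exercised constantly, which is precisely the testing argument made above.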
 