Kent responded and argued that "Bcachefs is _definitely_ more trustworthy than Btrfs"
Excuse me, what's the point of comparing yourself to the worst one?
> But there is enormous power in integrating the RAID layer with the file system layer. The single biggest one: when a disk fails, you don't have to immediately resilver unallocated space, and you can treat metadata and data separately, and ... many more things. All that is not available when using ext4 or other "single disk file systems".

I think that the utility of ZFS depends on the usage perspective.
The Linux Logical Volume Manager is able to virtualise (optionally) redundant storage, creating, replicating, deleting, growing, or shrinking logical volumes on-line at will. In these circumstances, ext4 and xfs continue to provide sound options. And with no mainstream commercial support for ZFS (I'll ignore Oracle), this is the prevailing (and absolutely overwhelming) commercial model.
Having said all that, I agree with you that ZFS is certainly better in many ways. It's just not used in most serious commercial applications, where ext4 and xfs prevail (and, in my commercial experience with thousands of Linux systems, work pretty well)
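To make the on-line LVM claim above concrete, here is a minimal sketch (the devices, the volume group vg0, and the LV data are hypothetical; it assumes an ext4 filesystem on the LV):

```sh
# Pool two disks into a volume group (hypothetical devices /dev/sdb and /dev/sdc)
pvcreate /dev/sdb /dev/sdc
vgcreate vg0 /dev/sdb /dev/sdc

# Carve out a mirrored (RAID1) logical volume and put ext4 on it
lvcreate --type raid1 -m 1 -L 100G -n data vg0
mkfs.ext4 /dev/vg0/data
mkdir -p /srv/data
mount /dev/vg0/data /srv/data

# Later, grow the LV and the filesystem in one step, while it stays mounted
lvextend -r -L +50G /dev/vg0/data
```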
I have used both, quite extensively, and for a very long time (since before you could boot from ZFS). So I get that. What I am saying is that: you can't compare LVM to a zpool. They're fundamentally different.
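For contrast, a rough zpool sketch (disk names are hypothetical): ZFS builds the redundancy into the pool itself, and a resilver after a replacement only copies blocks that are actually allocated, which is the kind of integration benefit described above.

```sh
# Build a pool with single-parity RAID directly from whole disks
zpool create tank raidz1 da1 da2 da3 da4

# Datasets are thin; no fixed-size volumes to plan in advance
zfs create -o compression=lz4 tank/home

# If da3 dies, swap in a spare; only allocated data gets resilvered
zpool replace tank da3 da5
zpool status tank
```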
> You might want to take a look at iXsystem's clients. I beg to differ.

iXsystems have great products. But they have only a few hundred employees. They are simply too small to count significantly in a global assessment of what's deployed and in use. I expect that will change, as companies like iXsystems grow and go public. But I don't see a lot of hope for ZFS to be widely deployed on Linux because of the licensing issues. That's probably good news for FreeBSD.
> In situations where Unix-like systems are deployed in significant numbers (large data centres including the cloud) they are almost inevitably virtualised Red Hat Linux.

For customers who need Linux support, it may be RHEL. The really large customers (the FAANG and friends) instead roll their own distributions, often "based loosely" on a well-known one like Debian or CentOS. If you deploy 10 million Linux machines, you'll have enough engineering; you don't need to pay Red Hat.
> For virtualised systems, dealing with a dead disk is usually handled transparently in a hypervisor or storage array. ...

Absolutely. In the large deployments, individual clients do not run against real disks. Their "block devices" are in reality virtual volumes that run on complex layers of storage, usually with error checking, snapshots, load balancing, internal logging (partially to get the speed of SSD at the cost of HDD), and so on. In the ones I'm familiar with, the Linux LVM isn't even used, since there are better things in the layers below. And those virtual disks de facto don't fail: most cloud vendors advertise 11 nines of durability, and those claims are not a lie.
With physical placement and rectification of disk hardware maladies off the list of issues for the client operating system (or volume manager) to address, the advantages of ZFS on that client are somewhat less pronounced.
> That's why I said your description of ext4 as "only for single-disk single-node systems" was a little unfair.

Absolutely! My comment was targeted at the "home" user who actually has physical disks (that includes small commercial systems): they are better served by ZFS, in particular if they have 2 or 5 disks available. If you are inside a giant cloud deployment, the game changes. Or if your data is disposable. For example, I treat Raspberry Pis as disposable computers: if the SD card fails (happens occasionally), or if the OS has a hiccup, I just put a blank SD card in and re-image them.
> ZFS isn't just some meme. It's the last (latest?) word on filesystems. IIRC only NetApp and Apple have similar filesystems.

Similar and better things exist in other parts of the commercial space. Not just at NetApp, who used to be head and shoulders above the competition and had the intellectual leadership, but that was 25 years ago. These better solutions are not commonly bought by small and medium users. I mean, who has a PureStorage or DDN or Spectrum Scale or Data Domain at home or under the desk in the office? And if you were able to look inside the big cloud providers, they have stuff that's at least as good.
> ZFS does not work well when the underlying storage is virtualised.

I would word that differently: if the underlying storage is virtualized extremely well (so it is reliable, balanced, fast, error-proof), then the power of ZFS is wasted on it, and one needlessly pays for the overhead of it.
> No, you can't, and I encourage you to get into an argument about it with the people who have participated in all the threads about if this was possible or not.

In my opinion, the only advantage is using CSS draggable regions in a frameless application window. I find a PWA to be a much simpler and more consolidated option than making mobile apps and Electron apps or other web application solutions.
Everyone who needs an app that depends on electron.
I've used yum and apt and all other sorts of Linux package management, and they either work or they don't, vs FreeBSD where maybe something will break, hope you read UPDATING, GLHF. I appreciate that I can fix it (I could probably fix Linux if I actually had to, but... NEVER HAD TO), but I don't appreciate the hosing. The forum is littered with examples.
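For what it's worth, checking UPDATING before an upgrade is scriptable; a small sketch, assuming pkg(8) and a checked-out ports tree (the date is just an example cut-off):

```sh
# Show UPDATING entries that mention packages actually installed
# (reads /usr/ports/UPDATING by default)
pkg updating

# Or only entries newer than a given date
pkg updating -d 20240101

# Or just read the whole file
less /usr/ports/UPDATING
```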
Yeah, sure, there is no "best" but let's not pretend FreeBSD is something it is not.
> Only supports one version of electron. Way better! Top leader!

That's wrong. There are multiple electron versions available. What's true is that only one is built as a binary package at the moment, because of a shortage in builder resources ... (I'm the one who opened the PR about that! From what was written there, it seems there will be new/more hardware eventually, at least.)
> Packages disappear out underneath you if a security fix breaks it.

Packages can disappear from the repository. Not from your machine, unless you deliberately ignore what pkg is telling you and just hit "yes" ... a common bad habit. Regarding the repository, I very much prefer that approach to the alternative: just keep the old package and hope nothing breaks in weird ways because of changed dependencies.
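If someone really does want a particular package to stay put no matter what the repository does, one option (shown only as a sketch; the package name is an arbitrary example) is pkg's lock mechanism:

```sh
# Prevent pkg from deleting, upgrading, or reinstalling this package
pkg lock -y chromium

# List what is currently locked, and undo it later
pkg lock -l
pkg unlock -y chromium
```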
Can't mix ports and packages. Easier management!
> No, you can't, and I encourage you to get into an argument about it with the people who have participated in all the threads about if this was possible or not.

Wrong again, nobody ever said you can't. To do it manually, you DO need a good thorough understanding of how things work, and it's still easy to mess up; that's why it's never a recommended thing to do. There's been a perfectly safe option for a while now: let poudriere do it while building your own repo. Poudriere will only add pre-built packages that are a perfect match to your configuration.
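For anyone curious what "let poudriere build your own repo" looks like in practice, a heavily abbreviated sketch (the jail name, FreeBSD version, and package list are placeholders):

```sh
# One-time setup: a build jail and a ports tree
poudriere jail -c -j 141amd64 -v 14.1-RELEASE
poudriere ports -c -p default

# Pick ports, set custom options where wanted, then build
echo "editors/vim" > /usr/local/etc/poudriere.d/pkglist
poudriere options -p default editors/vim
poudriere bulk -j 141amd64 -p default -f /usr/local/etc/poudriere.d/pkglist

# Point pkg at the resulting repository instead of the official one
cat > /usr/local/etc/pkg/repos/local.conf <<'EOF'
local: {
  url: "file:///usr/local/poudriere/data/packages/141amd64-default",
  enabled: yes
}
FreeBSD: { enabled: no }
EOF
```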
> There are multiple electron versions available

Pedantic. You know what the problem is and why it's a problem. It's not better than other distros that have multiple binaries available.
> common bad habit

I think you missed where I was trying to express that it was a bad thing that it happened at all. It's only a "bad habit" because it can happen.
> you DO need a good thorough understanding of how things work, and it's still easy to mess up; that's why it's never a recommended thing to do.

Ah yes, thank you for more pedantry! I believe my comment on this was "easier management" in comparison to other systems, and you explaining how complicated it is to do illustrates how FreeBSD does not have easier package management than other systems. I expect your next comment to be about how ports is not "package management", and then we can go round and round again with the electron problem!
> My point was that FreeBSD is not inherently superior to these other systems.

Conversely, FreeBSD is not inherently inferior to these other systems.
> trigger to get into that kind of bashing

IMO, people see "linux" on this forum and get defensive.
> I'm not sure what exactly you're arguing

These things could be better. I don't think other things are worse.
> IMO, people see "linux" on this forum and get defensive.

People often came to this forum to get away from broken ideas. In many ways you can expect they don't want to discuss it, or when they do, it won't be with joy and happiness in their hearts.
> That's interesting. Could you explain why? I'd wager this could be a problem worth fixing. Especially with regards to bhyve. I haven't seen fixes for the ARC and mmap/page cache issue either.

I think that there's a fair bit written on it. Off the top of my head:
> So, defensive and also unable to resist getting closer to posts involving the "broken ideas" they wanted to get away from.

Better the enemy you know. You don't run from broken ideas, you tackle them head on, expose them and discard them quickly.
> ZFS makes assumptions about redundancy when constructing a RAID set. The assumption that the "disks" are independent, and distributed parity can be used to recover if one "disk" fails.

ZFS, being designed for small systems (a few disks to dozens of disks), has no notion of failure domains or of correlated failures. It is designed around the assumption that every block device is independent of all others. This assumption is sensible if the main source of failures is disk drives, and each block device corresponds exactly to a disk drive. It also doesn't try to deal with interesting failure scenarios: a disk either works perfectly, or it has an individual sector error, or a whole disk goes away (fail-stop). In the real world of large storage systems, both assumptions are wrong.

For example, groups of disks may have correlated failures. A system with 100 disks may have 10 backplanes, each with 10 disks. Failure of a backplane will knock out 10 disks at once. When doing RAID layout, you need to make sure you never use two disks from the same backplane in a RAID set. On a virtualized storage system, the assumption of failure independence is more complex than the simple failure-domain example I gave.

Or a disk may be slowly failing, and you want to begin slowly draining data from it, but without starting a full resilvering, which is equivalent to the disk being dead: the data that is still on the disk remains readable, but no new data should be written to it (with HAMR/MAMR disks, this will become a common syndrome, where one part of a disk goes read-only). Or a disk may have developed performance problems, and the optimal system configuration is to decrease the load on just that disk, by allocating less data onto it.

These are all things that a file system integrated with the underlying virtual storage layer can do well, but ZFS doesn't know that the underlying layer is virtualized.
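To illustrate the backplane point: ZFS will happily accept whatever disk grouping you give it, so spreading each raidz set across failure domains is left entirely to the administrator. A hand-written sketch (device names and the one-disk-per-backplane mapping are hypothetical):

```sh
# Suppose da0-da9 sit on backplane 0, da10-da19 on backplane 1, and so on.
# Build each raidz2 vdev from one disk per backplane, so losing a whole
# backplane costs each vdev at most one member.
zpool create bigtank \
  raidz2 da0 da10 da20 da30 da40 da50 \
  raidz2 da1 da11 da21 da31 da41 da51

# ZFS itself has no idea these groupings reflect backplanes; it only sees
# independent block devices.
zpool status bigtank
```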
> ZFS is a copy on write (CoW) file system. The conventional wisdom is that using a CoW file system to provide virtualised storage to another CoW file system is undesirable. I think that the main problem is write amplification.

Depending on how the underlying storage system is implemented, the workload presented by ZFS's CoW-style writing can either lead to terrible performance or to great performance. The thing is that the average user doesn't know ahead of time which it is going to be.
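If anyone wants a knob to experiment with here, the usual one is record/block size; whether it actually helps depends entirely on the layer underneath. A sketch only (the dataset names and the 16K figure are placeholders, not recommendations):

```sh
# Guest-side ZFS on a virtual disk: match the dataset record size to the
# assumed allocation unit of the storage below it
zfs set recordsize=16K tank/db

# Host-side mirror of the same problem: a zvol exported to a VM, with the
# block size chosen at creation time
zfs create -V 50G -o volblocksize=16K tank/vm-disk0
```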
> The Linux Logical Volume Manager is able to virtualise (optionally) redundant storage, creating, replicating, deleting, growing, or shrinking logical volumes on-line at will. In these circumstances, ext4 and xfs continue to provide sound options. And with no mainstream commercial support for ZFS (I'll ignore Oracle), this is the prevailing (and absolutely overwhelming) commercial model.

Been there, done that, never again. Linux LVM is a mess of poorly designed tools. Its configuration is large and confusing, and changes randomly from release to release. You're left wondering why your machine won't boot (if you were foolish enough to use LVM on the boot drive), or at least why it won't mount the volumes that worked just yesterday. Hard pass.
I get what you mean by "single disk file system", but LVM makes them highly flexible, and storage arrays take away physical disk management. That's why I said your description of ext4 as "only for single-disk single-node systems" was a little unfair.
> Only supports one version of electron. Way better! Top leader! Packages disappear out underneath you if a security fix breaks it. Can't mix ports and packages. Easier management!

Hijacking the thread with BS on stuff only very few people use, some wrong statements, and a lot of ranting. That is what makes a classic troll post, in my book. Welcome to my (very small) ignore list on this forum.
> Been there, done that, never again. Linux LVM is a mess of poorly designed tools. Its configuration is large and confusing...

I find myself wedged into the uncomfortable position of defending LVM against ZFS.
> I'd like to see an example of a volume whose size changed "on-line at will". Use ZFS datasets, and this is not even a problem.

I have lost count of the number of times I have grown ext4 and xfs file systems on-line with lvextend. I'll admit that shrinking file systems on-line is dangerous, and I'd always prefer to do it off-line. But shrinking is exceedingly rare (read: never happens).
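For the record, the grow operation being described looks roughly like this (volume group, LV, and mount point names are hypothetical; both ext4 and xfs can be grown while mounted):

```sh
# Grow the LV while the filesystem stays mounted...
lvextend -L +20G /dev/vg0/data

# ...then grow whichever filesystem sits on it:
resize2fs /dev/vg0/data      # ext4 grows online
xfs_growfs /srv/data         # xfs_growfs takes the mount point

# Or let lvextend do both steps at once with -r/--resizefs
lvextend -r -L +20G /dev/vg0/data
```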