Although you can do poor man's error correction if you set copies=2 (or 3) in the dataset properties. ... The other issue with this is that it affects performance: writing data takes 2 or 3 times longer, as it doesn't do the writes in parallel or in the background.
It can easily be much slower than 2x or 3x: it can turn a big sequential write of many GB into a random hop-and-skip across the disk. But not always. Whether the performance penalty (which is large and hard to predict) is worth the reliability gain (which is small but non-zero) depends on the needs and wants of the user.
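If you want numbers for your own hardware instead of guesswork, the penalty is easy to measure. Here is a minimal sketch in Python; the pool name tank and the dataset names are made up for illustration, and it needs root on a machine with a ZFS pool you can scribble on:

```python
#!/usr/bin/env python3
"""Rough write-throughput comparison of copies=1 vs copies=2.

Assumes a pool named 'tank' (adjust!) and must run as root.
"""
import os
import subprocess
import time

POOL = "tank"                     # assumption: replace with your pool
SIZE = 2 * 1024**3                # 2 GiB test file
CHUNK = os.urandom(1024 * 1024)   # 1 MiB of incompressible data

for copies in (1, 2):
    ds = f"{POOL}/copies-test-{copies}"
    subprocess.run(["zfs", "create", "-o", f"copies={copies}", ds], check=True)
    start = time.monotonic()
    with open(f"/{ds}/testfile", "wb") as f:      # default mountpoint
        for _ in range(SIZE // len(CHUNK)):
            f.write(CHUNK)
        f.flush()
        os.fsync(f.fileno())      # make sure the data really hit the pool
    secs = time.monotonic() - start
    print(f"copies={copies}: {SIZE / secs / 1024**2:.0f} MiB/s")
    subprocess.run(["zfs", "destroy", ds], check=True)
```

Keep in mind this writes to a fresh, empty dataset, so it mostly shows the raw bandwidth cost; the hop-and-skip penalty grows as the pool fills up and the extra copies land far apart.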
In fact, for the moment I do not need most of the features of ZFS. I need a stable file system with journaling ...
The one feature of ZFS that you (and everyone else) do need is checksum verification on data read from disk. Disks and data volumes have become so big, while error rates have not improved, that we are now in a world where undetected/uncorrected read errors are a reality. Not using checksums on big modern systems is beginning to be reckless. Some other file systems are starting to recognize this and also have checksums, at least on some metadata structures. But in the free/open software realm, ZFS is miles ahead of everyone else in this respect.
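To make concrete what checksum-on-read buys you, here is a toy sketch of the principle in Python. (This is a simplification: ZFS actually keeps a fletcher4 or SHA-256 checksum in the parent block pointer, not next to the data, but the effect on reads is the same.)

```python
import hashlib

class ChecksumError(Exception):
    pass

class ToyBlockStore:
    """Keeps a checksum, taken at write time, for every block."""
    def __init__(self):
        self.blocks = {}   # block_id -> (data, checksum)

    def write(self, block_id: int, data: bytes) -> None:
        self.blocks[block_id] = (data, hashlib.sha256(data).digest())

    def read(self, block_id: int) -> bytes:
        data, stored = self.blocks[block_id]
        # Recompute on every read: a flipped bit can no longer be
        # returned silently as if it were your data.
        if hashlib.sha256(data).digest() != stored:
            raise ChecksumError(f"block {block_id} is corrupt")
        return data

store = ToyBlockStore()
store.write(0, b"important data")

# Simulate bit rot on the medium: the data changes, the checksum doesn't.
data, digest = store.blocks[0]
store.blocks[0] = (b"importent data", digest)

try:
    store.read(0)
except ChecksumError as e:
    print(e)   # "block 0 is corrupt" -- an error, not silently wrong bytes
```

With redundant storage (mirrors, raidz, or copies=2), ZFS goes one step further and repairs the bad block from a good copy instead of just reporting it.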
The other thing is: you don't need journaling. Nobody needs journaling. Journaling is a technique that tries to solve a particular problem: making file systems consistent and reducing loss of unacknowledged data in case of a system stop (power loss, crash). There are many other ways to solve the same problem. What you do need is a file system that doesn't get corrupted after a system stop. Typical ingredients for solving this include journaling, CoW, logging or log-structured file systems, write buffering, and so on. Because journaling is the technique used by the most popular Linux file system (which has a huge market share), people tend to think that journaling is the one and only answer.
Saying "I need journaling" is like saying "I need a Ford to get from home to work". Sorry, wrong. What you really need is reliable, inexpensive, and safe transportation. There are many options there, including Chevrolet, Honda, and Volkswagen; for particular situations there are also horses, rollerskates, and the subway. Ford is just one possible solution, not always the best.
SSD controllers routinely acknowledge data that is still in volatile cache. That's one of the reasons they are so fast. The controller writes the page to flash in the background.
True, and particularly common with consumer hardware. By the way, hard disks do the same thing.
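Which is why applications that actually care about durability have to ask for it explicitly rather than trust the default write path. A short sketch of the standard escape hatch, fsync; this works the same on SSDs and hard disks:

```python
import os

def write_durably(path: str, data: bytes) -> None:
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
    try:
        os.write(fd, data)
        # write() only hands the data to the OS. fsync() asks the OS to
        # push it to the device and (on most systems) tells the device
        # to flush its volatile write cache as well.
        os.fsync(fd)
    finally:
        os.close(fd)

write_durably("/tmp/ack-test", b"acknowledged only after the flush")
```

Of course, whether the drive honors the flush request is another matter: a controller that lies about cache flushes defeats fsync(), ZFS's intent log, and everything else upstream.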
Expensive SSDs, usually tagged "enterprise class", have "power loss data protection" (usually via capacitors), which flushes the volatile cache to flash when power is lost unexpectedly.
Also true. Although I've seen plenty of cases where expensive enterprise class SSDs also lost data during a power shutdown or reset. The firmware installed on SSDs is of frighteningly bad quality (spinning hard disks are leagues better), and even some expensive enterprise SSDs are far from perfect.
In practical terms, people use consumer grade SSDs and non-ECC memory all the time, mostly without significant issues.
It's a cost/benefit/risk tradeoff. I happen to have an enterprise SSD from a reputable vendor at home, but non-ECC RAM. Given the pricing at the time, this was the least bad option.
But the design assumptions underlying ZFS do rely on "enterprise class" hardware.
Careful: while your statement is not outright wrong, it can be misinterpreted. With enterprise-class hardware (in particular ECC RAM, and using redundant storage), ZFS can reach levels of data durability that are common in the enterprise/cloud market, and way better than the amateur/consumer/discount market. Even without ECC RAM, and with non-redundant storage, ZFS is still better than most other file systems as far as data durability and error detection are concerned ... but it is not as good as it could be. On the other hand, if you take a piece-of-crap file system (say a FAT implementation written by a second-year college student who was drunk most of the time), it will still suck, even on million-dollar hardware.
I think the right way to express it is something like this: ZFS is so good, that it exposes the reliability and data durability bottlenecks in the rest of the system. To get the best value out of ZFS, you should also make the rest of the system stronger, which will cost more money.
Hallelujah. Exactly that.
Just felt this link may be relevant to the above discussion.
Well, it is slightly relevant, but also contains lots of wrong and obsolete information, and is terribly Linux-FS centric. There is a world outside of extN, XFS and BtrFS, but many people are gleefully unaware of that.