ZFS on a desktop computer: boon or bane?

Nothing wrong with ZFS on a desktop. I thoroughly enjoy using ZFS for just about anything.
 
I agree with SirDice, there's nothing wrong with a desktop on ZFS. In fact I'm running one right now: an old laptop (~10 yrs old) with only 4GB RAM, an Intel P8400 (Core 2 Duo), an nVidia GeForce 8200M G (MCP79), and a 240GB SSD (Crucial M500), which works like a charm for web browsing and light things like movies, music, etc.

Code:
╼ gpart show                                                                                                            
=>       40  468862048  ada0  GPT  (224G)
         40       1024     1  freebsd-boot  (512K)
       1064        984        - free -  (492K)
       2048  468858880     2  freebsd-zfs  (224G)
  468860928       1160        - free -  (580K)
 
When I get into trouble, in most cases UFS is flexible enough to let me get out of it in one way or another.
That's definitely a good argument for UFS. Although I very much like ZFS, if you run into serious problems with it you're most likely going to need to restore from backups. If ZFS can't fix itself you're usually out of luck. In this respect UFS may prove easier to fix: it's been around for ages, so there's a ton of good information and tools available to help fix things.
 
After a crash/power failure, UFS needs fsck to be run.
It normally does this automatically on the next boot.
If fsck doesn't fix the UFS problems or recover your files into the 'lost+found' directory, it's time to go and find your backups.
 
I'm asking because after one power failure the program I often use failed to start because its main configuration file vanished. fsck was run during the boot, of course. I had to copy the config file from another computer which luckily had the program installed. Though I never looked in the lost+found, maybe the config file was there.

BTW I never had this problem on Linux with ext4, even though my parents sometimes kill our HT systems by long-pressing the power button, which bypasses the normal shutdown. FreeBSD with ZFS was fine too. Is UFS less resistant to abnormal shutdown compared to other FSs? I think I always had an SU+J setup; maybe something else (gjournal?) would be better?
 
Is UFS less resistant to abnormal shutdown compared to other FSs?

After 30 years of continuous use in enterprise settings, I should say not. I would suspect something else being the culprit; most likely human error, if I'm being honest. I've never heard of UFS spontaneously losing data that's been sitting on a disk untouched for long periods of time. Filesystem metadata corruption and write shear are still problems, sure, but those should only happen when pending and in-progress writes to the disk are interrupted. Having a single, untouched file disappear would be very unusual for any filesystem, but particularly for one that's been in continuous use for so long.

BTW I never had this problem on Linux with ext4...

I've had entire directories wind up in lost+found after power loss using ext4. Of course that's purely anecdotal, but then the Wikipedia page for ext4 has an entire section on the increased risk of data loss under certain circumstances. And while I'm not sure if this really means anything, the original ext2 was actually modeled after UFS.

To answer your question directly, ZFS works fine on a desktop with a sufficient amount of RAM and processing power. You might have to put a little more thought into how you set it up, but it will work fine.
 
I suspect the config file was held open (and being rewritten) by the running program, and that's why it vanished after the power cut: a file that is truncated and rewritten in place can be lost if the power dies before the new contents reach the disk.
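For what it's worth, the usual failure mode is exactly that: the program truncates the config file in place and rewrites it, and if power dies before the new contents are flushed, you're left with an empty or missing file. A crash-safer pattern is write-then-rename, since the rename is atomic. A minimal sketch (the path is hypothetical; fsync(1) is a small FreeBSD utility):

Code:
# unsafe: truncates the old config before the new bytes are durable
echo "$newcfg" > ~/.app/config

# safer: write a temp file, flush it to disk, then atomically rename it
echo "$newcfg" > ~/.app/config.tmp
fsync ~/.app/config.tmp
mv ~/.app/config.tmp ~/.app/config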

I tried ZFS on 2 machines (16GB and 32GB of RAM): too resource-hungry, with occasional slowdowns, and sluggish in general. I used an additional SSD drive for ZIL and L2ARC, which improved speed, but it was still slower than UFS or ext4. I don't run enterprise systems, just home workstations, and ZFS seemed like overkill. Though it never gave me lost files/directories, unlike UFS.
 
Maybe this can do the trick to prevent disasters (power loss, etc.).
It's not a 100% solution, but it's one approach:

Code:
kern.metadelay: 1
kern.dirdelay: 2
kern.filedelay: 3
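Those are the softupdates flush-delay sysctls; the stock FreeBSD defaults are much larger (around 28/29/30 seconds, if I remember right), so lowering them shrinks the window of dirty metadata a power cut can eat. To make them stick across reboots, put them in /etc/sysctl.conf:

Code:
# /etc/sysctl.conf -- shorten the softupdates write-delay windows
# (stock defaults are around 28/29/30 seconds)
kern.metadelay=1
kern.dirdelay=2
kern.filedelay=3

You can also apply them on the fly with sysctl(8), e.g. sysctl kern.filedelay=3.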
 
I tried ZFS on 2 machines (16GB and 32GB of RAM): too resource-hungry, with occasional slowdowns, and sluggish in general. I used an additional SSD drive for ZIL and L2ARC, which improved speed, but it was still slower than UFS or ext4.

I'm using ZFS on all of my desktops and laptops with 8-32GB RAM (and on my servers, of course), and on none of these systems does ZFS slow down because of RAM pressure. Yes, after a few days of uptime all available RAM is used for caching, but that's what it SHOULD be used for. Having multiple GB of unused RAM is completely pointless, and ZFS releases the RAM if any other process needs it.
The only system I've encountered that is unbearably slow with ZFS even under mild load is an old NAS with only 3GB of RAM and a horribly weak Atom D2700; but to be fair, that system is total crap with all filesystems, even at low loads...

I never compared ZFS performance directly with UFS or ext4 on my systems. The laptop uses SSDs, and the desktops use various combinations of HDD/SSD/NVMe, sometimes with multiple pools (e.g. my Steam library sits on NVMe), so they are easily fast enough for their everyday tasks, and I prefer data integrity over the last few MB/s of performance...
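If the ARC's appetite still bothers you, it can be capped with a loader tunable; a minimal sketch, with 4G as an example value (on recent OpenZFS-based FreeBSD the knob is spelled vfs.zfs.arc.max instead):

Code:
# /boot/loader.conf -- cap the ZFS ARC at 4 GB (example value)
vfs.zfs.arc_max="4G"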
 
About ZFS data integrity: it is recommended that a computer with ZFS use ECC RAM, which is expensive server-grade stuff. Do I really need it for a desktop?
 
About ZFS data integrity: it is recommended that a computer with ZFS use ECC RAM, which is expensive server-grade stuff. Do I really need it for a desktop?

To quote Allan Jude's show notes from last week's BSDNow:

As we talked about a few weeks ago, ECC is best, but it is not required. If you want your server to stay up for a long time, to be highly available, you’ll put ECC in it. Don’t let a lack of ECC stop you from using ZFS, you are just putting your data at more risk. The scrub of death is a myth.

The argument is basically: using ZFS without ECC memory is better for data integrity than not using ZFS at all.
 
First: If your desktop has multiple disks, ZFS is the easiest way to get RAID protection against disk failure. That can be achieved with other file systems too (usually by using separate software or hardware RAID), but ZFS makes it easy, and integrates it nicely into a single management infrastructure.
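For illustration, creating a two-disk mirror really is a one-liner (the pool name 'tank' and the device names here are made up):

Code:
# create a mirrored pool from two disks (hypothetical device names)
zpool create tank mirror ada1 ada2
# verify the layout and health
zpool status tank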

About ZFS data integrity: it is recommended that a computer with ZFS use ECC RAM, which is expensive server-grade stuff. Do I really need it for a desktop?

The argument is basically: using ZFS without ECC memory is better for data integrity than not using ZFS at all.

Do you need ECC memory? Well, that depends. How valuable is your data to you? Is (value of your data) * (risk of a corruption due to an error that ECC would have corrected) > (cost of ECC)? To evaluate that inequality, you need to know three unknowns. The first and last (value of your data, cost of ECC) only you can answer. The middle one (risk of corruption) is virtually impossible to figure out. Let's try anyhow.

ZFS is a really good file system. From a data reliability and availability viewpoint, that is mostly for three reasons: (a) it has RAID built in, which protects against failures of disks (complete failures, failures of individual sectors, and read errors); (b) it has checksums, which protect against silent corruption of the data on the storage path (from the buffer memory inside the host, through the write, then the read, and back into memory); and (c) it scrubs, so it can find latent disk errors early.

Let's talk about (b). There are well-known cases of the storage stack corrupting data. The most talked-about one is the "off-track write": you send a write request to the disk, but at the moment the write happens the head is not exactly on the track but next to it (which can happen due to mechanical vibration or bad servoing). The data actually gets written, but future reads follow the track's servo information and find the old data. This is a case where the drive returns wrong data (actually old data) without telling the host that it had a read error: the classic example of an uncorrected read error. What to do about that? The obvious answer: before writing, take a checksum of the new data to be written; after reading, verify the checksum. The checksum cannot simply be stored next to the data, otherwise the off-track write above would go undetected (the drive would return stale data with a matching stale checksum); it has to live elsewhere, so the mismatch between the expected checksum and the returned data can be caught. Together, RAID and checksums take care of a large fraction of all failure modes of hardware; let me jokingly say that they handle 90% and 9% of the problems.
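As a concrete illustration of (c): a scrub reads every allocated block and verifies it against the separately stored checksum, so latent errors surface before you actually need the data (the pool name 'tank' is hypothetical):

Code:
zpool scrub tank       # read and verify every allocated block in the pool
zpool status -v tank   # shows scrub progress and any checksum errors found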

But there is one problem that isn't solved yet: what if the user data gets corrupted while in memory, in the buffer pool? The file system itself cannot guard against that completely, because it doesn't actually have full control over the data buffer at all times (think mmap, for example). And this is where ECC hardware comes into play. With my joking numbers above, I could now say the following: since ZFS has already taken care of 99% of all data loss/corruption problems, ECC is the single most important thing to do next, covering the largest remaining source of bad things happening to good data. In reality, that statement is false; the largest sources of data loss/corruption are (x) user error, usually by a sysadmin, and (y) software defects in the OS and file system. But people tend to ignore those as unsolvable, since they involve humans, while investing effort to protect against hardware problems is considered good style.

This helps explain why many people say "if you use ZFS, you should have ECC": people who use ZFS are typically people who care about data integrity and availability, and invest time and money into it; having deployed RAID and checksums, using ECC makes good sense. But it also explains why ZFS makes sense even if you can't afford ECC (and even if you don't have multiple disks to use RAID): checksums and scrubs alone already help with some problems.
 
I've been using ZFS on all the consumer-grade hardware I can, and will soon be rebuilding my Ubuntu systems on ZFS root.

RAID of some sort for my personal data is important, having experienced more than enough physical drive failures over the years.

The ability of ZFS to easily expand a pool by adding a couple of drives, and to replace drives within a mirror, makes this kind of work "trivial" compared to other solutions, especially when you can't get an "identical" replacement.
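Roughly what that looks like in practice (pool and device names are made up):

Code:
# turn a single-disk vdev into a mirror by attaching a second disk
zpool attach tank ada1 ada2
# grow the pool by adding another mirrored pair
zpool add tank mirror ada3 ada4
# swap a failing disk for a new, not-necessarily-identical one
zpool replace tank ada2 ada5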

The snapshot/clone/rollback feature has saved me from my own mistakes. Perhaps one day it will save me from ransomware.
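A minimal sketch of that workflow, with a hypothetical dataset name:

Code:
zfs snapshot tank/home@before-upgrade   # cheap, instant point-in-time copy
# ...make a mess...
zfs rollback tank/home@before-upgrade   # undo everything since the snapshot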

It was well worth it even when I had to jump through hoops with hand-crafted scripts to set up the pools and datasets, and fought against 2 GB RAM in my low-power, "silent" machines. Yes, I had ZFS running on an Atom D330 with 2 GB of RAM. Now the FreeBSD installer will set it up for you, it's hard to buy a board that doesn't support at least 4 GB, and a 2x 2 GB pair of sticks from a name brand can be had for $35.
 