ZFS superior to UFS for desktop / workstation use?

I'm currently running FreeBSD 8.1-RELEASE as a graphical desktop-type system, with X11 and GNOME. If you want to get an idea of what I use this system for, the main programs I run are vim, Firefox and xchat.

My question is: would ZFS be a superior filesystem for my purposes? I get the impression that it was primarily designed for servers, and that a (l)user like myself might not notice much difference in terms of system performance.
 
By the way, it's amd64 with 2GB of memory. I heard that these were the prerequisites for using ZFS.
 
I'm using ZFS on my older laptop with 1GB of RAM. There were a couple of issues while fine-tuning the ZFS settings, during which the system would spontaneously reboot. After adjusting ZFS for a system with minimal RAM, there haven't been any ZFS problems.
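
For reference, the tuning in question is a handful of /boot/loader.conf entries; the values below are just an illustration of what a 1GB machine might use, not a recommendation:

Code:
# /boot/loader.conf -- example ZFS tuning for a low-memory (1GB) machine
vm.kmem_size="512M"
vm.kmem_size_max="512M"
vfs.zfs.arc_max="160M"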

I have had issues with the Xorg driver and suspend, where the system would lock up and require a manual power-off. In all cases the data was preserved and there was no filesystem corruption.

Despite the RAM requirements, the system feels more responsive with ZFS than with UFS. There's not much to choose between them in terms of speed, other than filesystem recovery. I used to watch the UFS background fsck keep the hard drive access light blinking constantly; I don't see any of that with ZFS.
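
(If you want to see or change that behaviour, background fsck is controlled from /etc/rc.conf; these are the stock knobs:)

Code:
# /etc/rc.conf -- background fsck settings (defaults shown)
background_fsck="YES"       # set to "NO" to force a full foreground fsck at boot
background_fsck_delay="60"  # seconds to wait after boot before it starts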
 
d_mon said:
u need at least 4 gb of ram...

Not so. 4GB is the recommended amount if you want to leave prefetching enabled; ZFS is still perfectly usable with less. It will just need some tuning, that's all.
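
For what it's worth, prefetching can be turned off explicitly on smaller machines with a single loader tunable:

Code:
# /boot/loader.conf -- disable ZFS file-level prefetching on low-RAM systems
vfs.zfs.prefetch_disable="1"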
 
d_mon said:
u need at least 4 gb of ram...

No you don't. You can run ZFS with as little as 512 MB of RAM, if you spend a lot of time tuning. 2 GB is the recommended minimum. 4 GB is just the sweet spot where things get more stable without needing too much manual tuning.
 
gordon@ said:
What features of ZFS are you planning on using that would make sense to switch to that instead of UFS2?

To be perfectly honest, I don't know. I've read what the features of ZFS are, but I'm not technical enough to understand how they might benefit me. I do know that many people seem to speak of ZFS excitedly and think of it as the "filesystem of the future," which OS X and Linux are currently attempting to integrate.
 
raid said:
To be perfectly honest, I don't know. I've read what the features of ZFS are, but I'm not technical enough to understand how they might benefit me. I do know that many people seem to speak of ZFS excitedly and think of it as the "filesystem of the future," which OS X and Linux are currently attempting to integrate.

Actually, Apple dropped ZFS support in the last release. It was supposed to be a killer feature and they silently removed it:

http://apple.slashdot.org/story/09/10/23/2210246/Apple-Discontinues-ZFS-Project

Also, Linux integrates it via FUSE. Its license doesn't sit well with the GPL, so there are some politics involved.

UFS is a tried and true filesystem.
ZFS is new and integrates more 'features'.

I personally don't feel the minimum specs for ZFS are realistic for all uses. Setting up a server (non-desktop) with jails (kernel-level virtualization) prompted me to upgrade my 4GB of RAM to 12GB and add an Intel SSD for L2ARC.

Under heavy load, ZFS performance may suffer in comparison to UFS.

The word `superior` might be inappropriate for this discussion. Both are just tools and will be used as such. ZFS is probably best suited to mass-storage solutions rather than a single desktop install. But then again, it's up to you to try before you buy =)
 
UFS is rock stable and really mature, heavily tested in countless environments over several decades. ZFS is not. Period. It's a nice filesystem, it's modern, it's sometimes faster, but it's also a resource hog and it has its known caveats. It's certainly a filesystem you should consider, but test it first on your hardware. Nobody can spare you this work...
 
tessio said:
2GB to use a filesystem!? WTF!?

To clarify things a bit.

First of all, ZFS doesn't need 2 GB to run. 1 GB will do fine, although 2 GB performs better. It can also run on i386. Sometimes a bit of tuning is required.

Second of all, ZFS doesn't use all the RAM you have. Usually, a few hundred MB is used. It's also possible to limit ZFS to 100 or 200 MB.
 
oliverh said:
UFS is rock stable and really mature, heavily tested in countless environments over several decades.
That's true. But don't forget to mention that UFS (or rather, the tools to manage it) can't handle more than 2TB. So in some cases people may be forced to use ZFS if they want to stay with FreeBSD.

oliverh said:
It's certainly a filesystem you should consider, but test it first on your hardware.
100% ack. I often run the system itself on a gmirror or hardware RAID1 and the user data on a ZFS pool. That way I can't run into trouble while updating the system and trying to boot from ZFS.
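
Roughly, that layout looks like this (gm0, ad4/ad6 and da1-da3 are placeholder device names, not my actual disks):

Code:
# System on a gmirror (UFS root stays on the mirror)
gmirror label -v -b round-robin gm0 /dev/ad4 /dev/ad6
echo 'geom_mirror_load="YES"' >> /boot/loader.conf

# User data on a separate ZFS pool
zpool create tank raidz /dev/da1 /dev/da2 /dev/da3
zfs create tank/home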
 
UFS is rock stable and really mature, heavily tested in countless environments over several decades. ZFS is not. Period.

Are you talking about the codebase, or about the stability of the filesystem per se?
I've had *tons* of problems with the latter :(

Second of all, ZFS doesn't use all the RAM you have. Usually, a few hundred MB is used. It's also possible to limit ZFS to 100 or 200 MB.

Isn't ARC supposed to use TotalRAM - 1GB by default? (Well, that's how it behaves in Solaris: source.)
But yeah, the memory usage is perfectly tunable.
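
On FreeBSD you can check what the ARC is actually using, and what the cap is, with sysctl:

Code:
# Current ARC size and its configured maximum, in bytes
sysctl kstat.zfs.misc.arcstats.size
sysctl vfs.zfs.arc_max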
 
NFS file server, amd64 with 32GB of RAM, without any ZFS tuning:

Code:
last pid: 61792;  load averages:  1.37,  1.23,  1.05  up 7+09:34:58  16:26:31
67 processes:  4 running, 63 sleeping
CPU:  0.0% user,  0.0% nice, 11.6% system,  0.1% interrupt, 88.2% idle
Mem: 138M Active, 22M Inact, 22G Wired, 228K Cache, 1596M Buf, 9202M Free
Swap: 32G Total, 32G Free

Before the last reboot, no more than 21G was used as Wired.

Code:
	NAME        STATE     READ WRITE CKSUM
	home        ONLINE       0     0     0
	  raidz2    ONLINE       0     0     0
	    da1     ONLINE       0     0     0
	    da2     ONLINE       0     0     0
	    da3     ONLINE       0     0     0
	    da4     ONLINE       0     0     0
	    da5     ONLINE       0     0     0
	  raidz2    ONLINE       0     0     0
	    da6     ONLINE       0     0     0
	    da7     ONLINE       0     0     0
	    da8     ONLINE       0     0     0
	    da9     ONLINE       0     0     0
	    da10    ONLINE       0     0     0
	logs        ONLINE       0     0     0
	  mirror    ONLINE       0     0     0
	    da12    ONLINE       0     0     0
	    da13    ONLINE       0     0     0
	cache
	  ad4       ONLINE       0     0     0
	  ad8       ONLINE       0     0     0
	spares
	  da11      AVAIL
 
User23 said:
That's true. But don't forget to mention that UFS (or rather, the tools to manage it) can't handle more than 2TB. So in some cases people may be forced to use ZFS if they want to stay with FreeBSD.
A lot has changed since that was true. GPT is here and working, as are 64-bit quotas. I haven't actually tested it, so it's possible there's something I missed, but I believe your information is out of date.
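
For the record, the way past the old 2TB limit is GPT partitioning; a minimal sketch on a placeholder disk da0:

Code:
# Create a GPT scheme and one large UFS partition, then newfs it with soft updates
gpart create -s gpt da0
gpart add -t freebsd-ufs da0
newfs -U /dev/da0p1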
 
Galactic_Dominator said:
A lot has changed since that was true. GPT is here and working, as are 64-bit quotas. I haven't actually tested it, so it's possible there's something I missed, but I believe your information is out of date.

Yes, I was wrong. GPT solves this.
 
Galactic_Dominator said:
A lot has changed since that was true. GPT is here and working, as are 64-bit quotas. I haven't actually tested it, so it's possible there's something I missed, but I believe your information is out of date.
fsck'ing a 2TB UFS partition takes quite some time. Even when [r]dump takes a snapshot, that takes a while too. That's something to consider when building large partitions.

I've tested zpools of up to 21TB usable storage (three raidz vdevs of five 2TB drives each) with 6TB of data in about 250000 files, and a scrub runs for around 10 hours, but it happens in the background.
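
The scrub itself is started and monitored with the usual commands (pool name taken from the zpool status output above):

Code:
# Start a scrub; it runs in the background
zpool scrub home
# Check progress at any time
zpool status home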

I'm running systems with a SuperMicro X8DTH-iF w/ 2 E5520 CPUs and 48GB of RAM. The disk controller is a 3Ware/LSI 9650SE exporting 16 single units (WD RE4's). The OS is on a gmirror'd pair of WD3200BEKT's. There is also an OCZ Z-Drive R2 P84 (256GB PCI-E RAID0 SSD) in each system. I have 3 of these systems that I've been stress-testing for several months now. I've tested pulling drives, unclean shutdowns, simulating a failed SSD (it is the ZFS log device) and the systems have handled everything I've thrown at them. The only issue was a system lockup with the RELENG_8 version of the twa driver (it spews loads of error messages and eventually stops responding). The prior version in CVS works fine, as does an un-committed update I got from 3Ware.
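
For anyone wanting to repeat that sort of drive-failure test, the basic commands are below (pool and device names are placeholders, not the actual layout above):

Code:
# Take a disk offline to simulate a failure, then bring it back
zpool offline tank da5
zpool online tank da5
# If the disk is really dead, replace it with the spare and resilver
zpool replace tank da5 da11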
 
Terry_Kennedy said:
The only issue was a system lockup with the RELENG_8 version of the twa driver (it spews loads of error messages and eventually stops responding). The prior version in CVS works fine, as does an un-committed update I got from 3Ware.

Do you remember the error messages? Was it something like "Micro Controller Error ... Unexpected status bit(s) ...."?
 
User23 said:
Do you remember the error messages? Was it something like "Micro Controller Error ... Unexpected status bit(s) ...."?
Nope. Here's a sample set (the system locked up after the last one):

Code:
Aug  2 19:18:02 new-rz1 kernel: twa0: ERROR: (0x05: 0x2018): Passthru request timed out!: request = 0xffffff8000bd3de0
Aug  2 19:18:02 new-rz1 kernel: twa0: INFO: (0x16: 0x1108): Resetting controller...:  
Aug  2 19:18:41 new-rz1 kernel: twa0: INFO: (0x04: 0x0063): Enclosure added: encl=0
Aug  2 19:18:41 new-rz1 kernel: twa0: INFO: (0x04: 0x0001): Controller reset occurred: resets=1
Aug  2 19:18:41 new-rz1 kernel: twa0: INFO: (0x16: 0x1107): Controller reset done!:  
Aug  2 19:18:41 new-rz1 kernel: twa0: ERROR: (0x05: 0x201A): Firmware passthru failed!: error = 60
Aug  2 21:01:24 new-rz1 kernel: twa0: ERROR: (0x05: 0x2018): Passthru request timed out!: request = 0xffffff8000bcb480
Aug  2 21:01:24 new-rz1 kernel: twa0: INFO: (0x16: 0x1108): Resetting controller...:  
Aug  2 21:02:03 new-rz1 kernel: twa0: INFO: (0x04: 0x0063): Enclosure added: encl=0
Aug  2 21:02:03 new-rz1 kernel: twa0: INFO: (0x04: 0x0001): Controller reset occurred: resets=2
Aug  2 21:02:03 new-rz1 kernel: twa0: INFO: (0x16: 0x1107): Controller reset done!:
 