ZFS or UFS: which one is better for an LGA775-based PC?

Hello, I'm new to the forum and to the FreeBSD OS. What I want to know is which file system, ZFS or UFS, is better for this computer:

Pentium E5700
Asrock G41TM LX v2.0
4GB DDR3 Kingston KVR1333
2 x 1TB HDD (one used for this OS, the other for Windows)
GTX260

So which system is better? Right now, while writing this, I'm trying it out in VMware.
 
Either will work on that hardware. "Better" is subjective. UFS is smaller and faster, ZFS provides data security and lots of features.
 
"Better" is subjective.

To add to this a bit: it's a matter of personal needs and intended application, and if your hardware can handle either one (looks like it can), then you need to choose a filesystem based on what you think you need and how you want to manage your data.

ZFS is awesome, and is often seen as the "wave of the future," but it isn't necessarily the best choice in every case. You can use ZFS on any kind of computer with sufficient resources, but each type of system (and the tasks you use it for) will present its own advantages and disadvantages. My own experience has led me to conclude that ZFS is best suited for dedicated storage servers, and using it on a laptop or desktop with a single, relatively small disk on which data frequently changes can add more complexity and maintenance concerns than it's worth. So I use UFS on my production laptop, and back up my data to two different ZFS pools---one on a spare HDD, and one on a file server.
 
Hello, sorry for the late reply. I would actually like to have something that won't lose my data and that loads the system fast enough that I won't be waiting eons for the OS to boot ;) So I think I will try ZFS and let you know if that's what I was looking for :)

Cheers, Horacy.
 
Since it's mostly a single-disk setup, you won't get the "self-healing" powers of ZFS. That only works when there's redundancy of the data, either by using mirrors or RAID, or by setting copies to 2 (or higher).
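
For illustration only (the pool name and disk names here are made-up examples, not your setup): a second disk would let you create a mirror, and zpool status shows whether a pool actually has that redundancy:

# create a mirrored pool from two disks (hypothetical names)
zpool create tank mirror ada1 ada2
# check the pool layout and health
zpool status tank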
 
So I won't have a chance to use that ZFS feature until I get another disk for a mirror or RAID, or unless I set copies to 2 (or higher), as far as I understood, yeah???
 
Setting copies=2 on a single-disk pool is not a bad solution, actually; if the disk gets only a few bad sectors, chances are that everything on the disk is still recoverable. Of course, in case of total failure it doesn't help. Remember to take full backups often; ZFS and RAID are not a substitute for proper backups.
 
For example: zfs set copies=2 zroot/mydata

Do note this will cause all data in zroot/mydata to be written twice, so data will use twice as much space. Also note this only applies to files written after the property has been set; it will not retroactively change existing data.
 
Note that if you set it after you have already filled the dataset with files, the existing files won't benefit from the setting unless they are written to and modified. New files will be stored twice, as expected.
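
If you want to double-check the property, and force an existing file to pick up the extra copy, something like this works (the dataset and file names are only examples):

zfs get copies zroot/mydata
# an existing file only gains the second copy once it is rewritten,
# for example by copying it out and moving it back over itself:
cp /mydata/somefile /mydata/somefile.new && mv /mydata/somefile.new /mydata/somefile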
 
^ All of this is just a taste of the learning curve and complexity I mentioned. If you want redundancy on a single disk, you need to increase the "copies" property. Doing that eats up more disk space. Now you need to monitor your disk space more closely, as all datasets---including snapshots and clones---will take up twice the space. (The only exception is zvols.) You could of course use compression, but that only helps with certain types of files. (To put it plainly, compression won't do much for the sorts of files that eat up the greatest amount of disk space, such as media files that are already compressed.)
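
If you do want to try compression, the commands themselves are simple enough (the dataset name is again just an example); compressratio tells you afterwards whether it was worth it:

# enable LZ4 compression on a dataset (affects newly written data)
zfs set compression=lz4 zroot/mydata
# check the compression ratio actually achieved on the dataset
zfs get compressratio zroot/mydata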

So now, at first, you're filling your disk twice as fast. How much data do you have on the disk now, and how much do you expect to write in the foreseeable future? If you set up automated snapshots, that disk will continuously fill on its own as you manipulate your data; the simple act of copying a directory could double or triple the amount of space it consumes. In some cases, you now have three (or four) copies instead of merely two (or three). If your disk is already over 50% full, you might have to pick and choose which data you protect with copies={2,3}. This means you have to figure out which directories you want multiple copies of and which you consider more "expendable," and then set up separate filesystems for each. Which in turn makes the list generated by zfs list longer and more difficult to parse, and also increases the number of filesystems you need to configure for automated snapshots, which in turn increases the number of snapshots on your system, making the list of snapshots you need to read through if you're looking for what's eating space even longer, and complicating backup/restore procedures... And so on.
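
To give an idea of the kind of monitoring I mean (assuming the usual zroot pool name), these are the commands you end up running regularly:

# per-dataset breakdown of space used by data, snapshots and children
zfs list -o space -r zroot
# list snapshots sorted by the space they hold on to
zfs list -t snapshot -o name,used -s used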

This isn't as horrible as I might make it sound. It is manageable. But if you use ZFS on a one-disk desktop/laptop or in a low-capacity mirror, and make use of even a small number of its great features, you absolutely will be spending at least a little more time managing your storage than you ever have before. With a traditional filesystem and a typical desktop workflow, a 1TB disk gives you enough space that you don't need to think about it. The amount of data you store and the amount of storage space consumed is 1:1; if your disk is half-full now and you add a few gigabytes, so what? ZFS changes that---each of its features eats up more space as time goes on, and predicting how much space will be consumed by different operations isn't really feasible. And all this for potentially no great benefit---are you "protecting" files that you'll probably end up deleting anyway? Does the total amount of data you want to protect with ZFS take up a mere 10% of your total disk space? How many times have you ever permanently lost a significant amount of important data to filesystem or disk failure, how frequently do you expect it to happen in the future, and how is using a more complex ZFS setup a better way to prevent such loss than routine backups?

It's not a trivial concern, and so the question "Should I use ZFS or UFS?" isn't a simple matter of taste or of one being objectively superior to the other. My personal opinion: If you don't know whether ZFS is useful in this case, it's probably not.
 
One additional point to be aware of: booting from ZFS (a so-called ZFS-on-root system) may not be a simple matter in some cases, especially on complex pools with many disks. The ZFS bootloader requires that the boot pool be completely accessible through the BIOS routines, and this often causes problems on systems with broken BIOSes or where some of the disks are connected through an additional SATA adapter. It's worth considering whether putting the base operating system on a UFS filesystem and the rest of the data on a ZFS pool would be a better idea.
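
A very rough sketch of that kind of split on a single disk, assuming GPT, a disk called ada0 and a 40G root (sizes and names are only placeholders, not a recommendation):

# boot code, UFS root, swap, and a ZFS data partition
gpart create -s gpt ada0
gpart add -t freebsd-boot -s 512k ada0
gpart bootcode -b /boot/pmbr -p /boot/gptboot -i 1 ada0
gpart add -t freebsd-ufs -s 40G ada0
gpart add -t freebsd-swap -s 4G ada0
gpart add -t freebsd-zfs ada0
newfs -U /dev/ada0p2
zpool create data /dev/ada0p4

The gptboot boot code only has to understand the small UFS root, which sidesteps the BIOS accessibility issues mentioned above.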
 
Thanks for all that stuff :) It's going to be my actual C++/Java and web dev setup, so I'm still not sure whether I should use ZFS or rather UFS, but I will also consider putting the base OS on UFS and the rest on ZFS. I'm just not sure what size would be best for each :(
And what partitions are actually needed for the OS to work properly??
kpa, so would it be good if I did e.g. 40GB for the OS on UFS and gave the rest of the HDD to ZFS???
Cheers.
 
I will also consider putting the base OS on UFS and the rest on ZFS. I'm just not sure what size would be best for each :(
And what partitions are actually needed for the OS to work properly??
kpa, so would it be good if I did e.g. 40GB for the OS on UFS and gave the rest of the HDD to ZFS???
It heavily depends on what you're going to install, but 40GB should be plenty for a common installation. I run a semi-complete FreeBSD desktop environment on my laptop (X11, Xfce, Opera, SeaMonkey, Netbeans, LibreOffice) and I got that easily set up within 40GB. Right now my laptop even has a checked-out copy of the FreeBSD 10.3 source tree, which I'm planning to compile and install later this week. Oh, and next to the source I also have the documentation project checked out, which I keep up to date using textproc/docproj. All of which is extra taxing on my disk space ;)
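
If you go with something like the 40GB split, you can always check afterwards how much of each you're actually using (the pool name data is just the example from earlier in the thread):

df -h /
zpool list data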
 