zfs - to be or not to be

I'm starting to gain interest in ZFS....
I've read some articles and my gut is saying that I should try it... (I know it's not finished, and I know I can't boot straight from ZFS)

Self healing and zfs snapshots are very appealing features....

Being a desktop user, I don't have too much important data on my HDD (plus I have backups).
At the moment I'm using my HDD space very inefficiently (multiple partitions, lots of free space, etc.).

I wonder how much space I will have available if I want self healing... (I have a total of 390GB of HDD space: 152GB on an IDE disk and 238GB on a SATA disk). Would it be half of all disk space (195 GB)?

I have 1.5GB of RAM (R.I.P. the 512MB stick, which I removed a few days ago....)


Instead of thinking about why I should migrate to ZFS, I would like to hear why I shouldn't migrate to ZFS :D


Thank you in advance

P.S.
links, info, experience, etc.... appreciated :D

EDIT:
what do you think about:
all files on one ZFS filesystem vs. the system on one ZFS filesystem and data on another?
 
Over the past few months or so, I've seen occasional instability and crashes. Most of them were in an ISP-style hosting setup with high load, and almost all of those servers were running ZFS. With UFS we have had no problems at all.

In initial testing it did give us some good results, but real-world experience has led us to conclude that ZFS in 7.x-RELEASE is not stable or reliable. I believe it will be stable enough by version 8.0.

HTH
 
I have nothing but praise for, and good experiences with, using ZFS on FreeBSD 7.x. I use it at home (3x 120 GB SATA using raidz1), keeping 2 months of daily snapshots. At work, we use it on our backup servers, where they do rsync backups of over 100 remote servers every night (see my howto thread on the setup).

Once you wrap your head around the concept of a single storage pool per server, and thinking in terms of vdevs instead of disks, then it makes so much sense that you wonder how we ever survived using slices and partitions. :)

If your drives are not all the same size, raidz isn't a good fit (you lose any space over and above the size of the smallest disk). But you can do mirroring. And you can set copies=2 or higher to keep redundant copies of data on a non-redundant set of disks. But if you don't do at least mirroring, then losing 1 disk will corrupt the entire pool.
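
For example, a two-disk mirror pool could be created like this (just a sketch; ad0 and ad4 are example device names, substitute your own):
Code:
# whole-disk mirror; usable space = size of the smaller disk
zpool create tank mirror ad0 ad4
zpool status tank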

It's possible to create vdevs using slices; however, ZFS (at least on Solaris; I don't know for sure about FreeBSD) will disable the onboard disk cache if the vdevs don't consist of entire drives. It's also possible to create vdevs using files.
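
File-backed vdevs are handy for experimenting before committing real disks. Something like this (paths and sizes are just examples):
Code:
# two sparse backing files and a throw-away mirror on top of them
truncate -s 512M /var/tmp/vdev1 /var/tmp/vdev2
zpool create testpool mirror /var/tmp/vdev1 /var/tmp/vdev2
zpool status testpool
zpool destroy testpool   # tear it down when done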

IOW, unless you have a collection of totally random, odd-ball sized drives, I say give ZFS a spin.
 
phoenix said:
But you can do mirroring. And you can set copies=2 or higher to keep redundant copies of data on a non-redundant set of disks.

As I understand it, if I, for example, set up 1 HDD to use ZFS, I can make it redundant, right?

I'm considering putting ZFS on my 250GB SATA HDD for a test drive....
What I want is to be able to keep using ZFS after power failures.... so I need self healing.... From what you write, I understand that I can enable this even with 1 disk.... (I'm not worried about losing the entire disk, which is very unlikely to happen in the next 2 years, and by then I will most likely have a new PC.)
 
OMG. I've had ZFS on my HDD for about 10 minutes and I'm so impressed.... that I can't think straight....

My brain is saying that I must use ZFS for everything (as much as possible).



EDIT:
I am shocked.......
I love ZFS, I love FreeBSD, I love Sun Microsystems
I love, I love, I love....
you can't even imagine how much time this will save me dealing with partitioning (I'm the kind of guy who likes optimizing everything.... Sometimes I hate myself, because I can't decide if I want 6 or 8 GB for /usr, etc.)
This rocks....
 
OK, I tried setting up ZFS on root, but failed (many times);
my PC stayed online a max of 15 min with ZFS on root.

I've read resources from
http://wiki.freebsd.org/ZFS

I followed the tuning guide, with little success.
Now I'm thinking of either trying FreeBSD-CURRENT or waiting for the 8.0 release (I'll probably try CURRENT).
 
killasmurf86 said:
As I understand it, if I, for example, set up 1 HDD to use ZFS, I can make it redundant, right?

The data can be made redundant, by setting the filesystem property copies to something higher than 1. Then ZFS will save multiple copies of each file in different places on the disk. If one copy is corrupted, another copy will be loaded instead. However, if the drive dies, everything on the drive is gone.

I'm considering putting ZFS on my 250GB SATA HDD for a test drive....
What I want is to be able to keep using ZFS after power failures.... so I need self healing.... From what you write, I understand that I can enable this even with 1 disk.... (I'm not worried about losing the entire disk, which is very unlikely to happen in the next 2 years, and by then I will most likely have a new PC.)

Yes, this will work, and can be good for testing. See the man page for zfs(8) for details on how to set properties on filesystems.
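
Roughly like this on a single-disk pool (pool and device names are just examples; note that copies only applies to data written after the property is set):
Code:
zpool create tank ad4    # single-disk pool, no vdev-level redundancy
zfs set copies=2 tank    # keep two copies of every block from now on
zfs get copies tank      # verify the property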
 
killasmurf86 said:
OK, I tried setting up ZFS on root, but failed (many times);
my PC stayed online a max of 15 min with ZFS on root.

I've read resources from
http://wiki.freebsd.org/ZFS

I followed the tuning guide, with little success.
Now I'm thinking of either trying FreeBSD-CURRENT or waiting for the 8.0 release (I'll probably try CURRENT).

I wouldn't bother trying to get /-on-ZFS working until FreeBSD 8.x is released with proper support for it in the loader, the kernel, the init system, etc.

There are a bunch of different ways to do it with FreeBSD 7.x, but most of them are hacks that don't always work.

For a single harddrive, I'd recommend creating 2 slices on the disk: the first slice will be used for / and /usr (2 GB is plenty), the second slice will be used for everything else and dedicated to ZFS.

In the first slice, create 2 partitions: / and swap (you could create a third for /usr if you really want, but I'd leave it on /).

After the install and initial boot, enable ZFS support, and add the second slice to the pool.

Then create filesystems for /var, /usr/ports, /usr/src, /usr/obj, /home, and /usr/local; but don't set the mountpoint.
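
Something along these lines (a sketch, assuming the second slice shows up as ad4s2 and the pool is simply called pool; same idea for the other filesystems):
Code:
# /etc/rc.conf
zfs_enable="YES"

# create the pool on the second slice; new filesystems will mount
# under /pool until their mountpoints are changed later
zpool create pool ad4s2
zfs create pool/var
zfs create pool/home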

Finally, boot into single user mode, and:
* mount -u /
* /etc/rc.d/hostid start
* /etc/rc.d/zfs start
* cp -Rp /path/* /pool/path/ for each of the above filesystems
* rm -rf /path/* for each of the above filesystems
* zfs set mountpoint=/path pool/path for each of the filesystems
* shutdown -r now

That will copy all the data for each of the filesystems off / and onto ZFS filesystems. Then reset the ZFS mountpoints to the correct locations. And finally, boot into the OS using the ZFS filesystems. After that, the only data under / will be the base FreeBSD OS. Just enough to boot into single-user mode and fix ZFS issues if needed. Everything else will be on ZFS.
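
Concretely, for /var the single-user steps look something like this (a sketch, reusing the pool/dataset names from above):
Code:
mount -u /                        # remount / read-write
/etc/rc.d/hostid start
/etc/rc.d/zfs start               # imports the pool; datasets mount under /pool
cp -Rp /var/* /pool/var/          # copy the data onto the ZFS dataset
rm -rf /var/*                     # empty the old UFS directory
zfs set mountpoint=/var pool/var  # move the dataset to its final mountpoint
shutdown -r now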
 
He he he, yeah, I was thinking about that.....
I just got tired today....
Now I've rested for a few hours and I'm ready to continue...
ZFS is really wonderful, and I can't wait till 8 is out. (I'm waiting for it even more than I waited for 7.0, which is when I became a regular FreeBSD user.)



Off topic:
BTW, I will do it a little differently, since my system is wiped (backups don't count, lol).
I will do everything from the Fixit environment (basically installing FreeBSD without sysinstall; I'm already used to this method, thanks to coray_james @ daemonforums.com for http://daemonforums.org/showthread.php?t=1538).
If not for that post I wouldn't be using GPT, fully encrypted disks, and lots of other stuff that I like.

P.S.
FreeBSD 8-CURRENT didn't even boot with ZFS; it panicked just before mounting the drives.
 
Man, when I was looking in the Handbook, I hit Ctrl+F (in Firefox) and typed zfs.
I was surprised that there was no info.... (I knew I had seen it once.)
I wouldn't even think to search for "Z file system".
This should be changed (I think), because everyone calls it ZFS.

However, I found the info elsewhere.
 
phoenix said:
The data can be made redundant, by setting the filesystem property copies to something higher than 1. Then ZFS will save multiple copies of each file in different places on the disk. If one copy is corrupted, another copy will be loaded instead. However, if the drive dies, everything on the drive is gone.



Yes, this will work, and can be good for testing. See the man page for zfs(8) for details on how to set properties on filesystems.

Already learned all that. It was fast and much simpler than it seemed at first.
 
OK, so far so good; the system booted...
We'll see how stable it is, but for now there's already 1 problem:
the system panics during shutdown/reboot.
Code:
Waiting (max 60 seconds) for system process 'buffdaemon' to stop...done
All buffers synced
panic: vput: negative ref cnt
cpuid=0
Physical memory: 1523 MB
....

Edit:
both disks are completely encrypted; I'm booting from a flash drive

EDIT
other than this, everything seems to be fine
the PC has been up and running for 50 min already :D
 
If you have an i386 system, that's bad. ZFS is better on amd64.
And CURRENT is newer, so it is best to wait for it.
CURRENT can boot with ZFS ;)
 
f-andrey said:
If you have an i386 system, that's bad. ZFS is better on amd64.

ZFS works just fine on 32-bit systems. You just need to be more aggressive in your kernel memory and ARC tuning. And you really should have more than 2 GB of memory (people have run ZFS on 32-bit systems with as little as 512 MB, but more is always better).

And CURRENT is newer, so it is best to wait for it.
CURRENT can boot with ZFS ;)

Kip Macy has made available a test branch of 7-STABLE that includes ZFSv13. Will be interesting to see if this makes it into 7.3. :)
 
phoenix said:
ZFS works just fine on 32-bit systems. You just need to be more aggressive in your kernel memory and ARC tuning. And you really should have more than 2 GB of memory (people have run ZFS on 32-bit systems with as little as 512 MB, but more is always better).

I can use ZFS (as long as I download torrents at less than 2MB/s).
I will try tuning more....
I had 2GB of RAM, but a 512MB stick died....
I'm going to buy 512MB or 1GB next week.


phoenix said:
Kip Macy has made available a test branch of 7-STABLE that includes ZFSv13. Will be interesting to see if this makes it into 7.3. :)

That's wonderful....
Perhaps I should try it....



Here's the disk I/O:
Code:
               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
data        58.1G   174G      2      4   245K   201K
sys         90.2G  50.8G     11     17   869K   585K
----------  -----  -----  -----  -----  -----  -----
data        58.1G   174G     80      0  10.0M      0
sys         90.2G  50.8G      0      0      0      0
----------  -----  -----  -----  -----  -----  -----
data        58.1G   174G    165     58  20.7M  4.64M
sys         90.2G  50.8G      2     38   184K   483K
----------  -----  -----  -----  -----  -----  -----
data        58.1G   174G      1      3   248K  15.5K
sys         90.2G  50.8G      0      0      0      0
----------  -----  -----  -----  -----  -----  -----
data        58.1G   174G      0      0      0      0
sys         90.2G  50.8G     11      0   859K      0
----------  -----  -----  -----  -----  -----  -----
data        58.1G   174G     35      0  4.37M      0
sys         90.2G  50.8G      0      0      0  3.89K
----------  -----  -----  -----  -----  -----  -----
data        58.1G   174G    105      0  13.2M      0
sys         90.2G  50.8G      1      0   198K      0
----------  -----  -----  -----  -----  -----  -----
data        58.1G   174G      0    119      0  9.64M
sys         90.2G  50.8G      0      0      0      0
----------  -----  -----  -----  -----  -----  -----
data        58.1G   174G    211      0  26.5M      0
sys         90.2G  50.8G      3     16   376K   149K
----------  -----  -----  -----  -----  -----  -----
data        58.1G   174G     56      0  7.06M      0
sys         90.2G  50.8G      0     41  62.4K   280K
----------  -----  -----  -----  -----  -----  -----
data        58.1G   174G     11      0  1.39M      0
sys         90.2G  50.8G      6      0   515K      0
----------  -----  -----  -----  -----  -----  -----
data        58.1G   174G     28     74  3.51M  9.30M
sys         90.2G  50.8G      0      0      0      0
----------  -----  -----  -----  -----  -----  -----
data        58.1G   174G     40     47  4.93M  3.88M
sys         90.2G  50.8G      0      0      0      0
----------  -----  -----  -----  -----  -----  -----
data        58.1G   174G    183      0  22.8M      0
sys         90.2G  50.8G      2     52   227K   516K
----------  -----  -----  -----  -----  -----  -----
data        58.1G   174G     46     70  5.84M   953K
sys         90.2G  50.8G      0      4      0  39.0K
----------  -----  -----  -----  -----  -----  -----
data        58.1G   174G      0      0      0      0
sys         90.2G  50.8G      4      0   310K  11.6K
----------  -----  -----  -----  -----  -----  -----
data        58.1G   174G      0      0   125K      0
sys         90.2G  50.8G     23      0  1.63M      0
----------  -----  -----  -----  -----  -----  -----
data        58.1G   174G    145      3  18.1M   443K
sys         90.2G  50.8G      5      0   411K      0
----------  -----  -----  -----  -----  -----  -----
data        58.1G   174G    141     76  17.6M  3.76M
sys         90.2G  50.8G      0     59      0   638K
----------  -----  -----  -----  -----  -----  -----
data        58.1G   174G      2      0   373K      0
sys         90.2G  50.8G      0      3      0  15.5K
----------  -----  -----  -----  -----  -----  -----
data        58.1G   174G      0      0      0      0
sys         90.2G  50.8G      0      0      0      0
----------  -----  -----  -----  -----  -----  -----
data        58.1G   174G      0      0   124K      0
sys         90.2G  50.8G      0      0      0      0
----------  -----  -----  -----  -----  -----  -----
data        58.1G   174G    141     29  17.6M  3.56M
sys         90.2G  50.8G      0      0      0      0
----------  -----  -----  -----  -----  -----  -----
data        58.1G   174G    165      8  20.6M   201K
sys         90.2G  50.8G      1     51   127K   416K
----------  -----  -----  -----  -----  -----  -----
data        58.1G   174G    195      0  24.5M      0
sys         90.2G  50.8G      2      3   281K  14.5K
----------  -----  -----  -----  -----  -----  -----
data        58.1G   174G    184      2  23.1M   114K
sys         90.2G  50.8G      0     46  46.7K   445K
----------  -----  -----  -----  -----  -----  -----
data        58.1G   174G     52      3  6.38M  15.7K
sys         90.2G  50.8G      0      0      0      0
----------  -----  -----  -----  -----  -----  -----
data        58.1G   174G      2     52   251K  2.01M
sys         90.2G  50.8G      0      0  95.1K      0
----------  -----  -----  -----  -----  -----  -----
data        58.1G   174G     20      0  2.57M      0
sys         90.2G  50.8G      0      0      0      0
----------  -----  -----  -----  -----  -----  -----
data        58.1G   174G      0      0      0      0
sys         90.2G  50.8G      0      0      0      0
----------  -----  -----  -----  -----  -----  -----
data        58.1G   174G      0      0      0      0
sys         90.2G  50.8G      1      0   196K      0
----------  -----  -----  -----  -----  -----  -----
data        58.1G   174G      0      0      0      0
sys         90.2G  50.8G      0      0      0      0
----------  -----  -----  -----  -----  -----  -----
data        58.1G   174G     76    144  9.52M  8.56M
sys         90.2G  50.8G      0      0      0      0
----------  -----  -----  -----  -----  -----  -----
data        58.1G   174G    159      0  19.9M      0
sys         90.2G  50.8G      6    114   660K  1.42M
----------  -----  -----  -----  -----  -----  -----
data        58.1G   174G     60      0  7.57M      0
sys         90.2G  50.8G      0      0      0  3.91K
----------  -----  -----  -----  -----  -----  -----
data        58.1G   174G    116      0  14.5M      0
sys         90.2G  50.8G      2      0   306K      0
----------  -----  -----  -----  -----  -----  -----
data        58.1G   174G    158     52  19.8M  6.32M
sys         90.2G  50.8G      0      0  96.7K      0
----------  -----  -----  -----  -----  -----  -----
data        58.1G   174G     36     35  4.42M   330K
sys         90.2G  50.8G     12      0  1.04M      0
----------  -----  -----  -----  -----  -----  -----
data        58.1G   174G      0      0      0      0
sys         90.2G  50.8G      1     82   124K   852K
----------  -----  -----  -----  -----  -----  -----
data        58.1G   174G     32      0  4.02M      0
sys         90.2G  50.8G      1      0   196K      0
----------  -----  -----  -----  -----  -----  -----
data        58.1G   174G      0     11      0  1.46M
sys         90.2G  50.8G      1      0   207K      0
----------  -----  -----  -----  -----  -----  -----
data        58.1G   174G     66     38  8.35M  4.79M
sys         90.2G  50.8G      3      0   412K      0
----------  -----  -----  -----  -----  -----  -----
data        58.1G   174G    206      0  25.8M      0
sys         90.2G  50.8G      0      0   105K      0
----------  -----  -----  -----  -----  -----  -----
data        58.1G   174G     54      0  6.75M      0
sys         90.2G  50.8G      4     60   398K   568K
----------  -----  -----  -----  -----  -----  -----
data        58.1G   174G      0      0  62.1K      0
sys         90.2G  50.8G      0      0      0      0
----------  -----  -----  -----  -----  -----  -----
data        58.1G   174G      0      0      0      0
sys         90.2G  50.8G      2      0   299K      0
----------  -----  -----  -----  -----  -----  -----
data        58.1G   174G      0      4  61.9K   281K
sys         90.2G  50.8G      0      0      0      0
----------  -----  -----  -----  -----  -----  -----
data        58.2G   174G     31    158  3.75M  15.3M
sys         90.2G  50.8G      0      0  63.0K      0
----------  -----  -----  -----  -----  -----  -----
and then it hung

I've been monitoring my RAM, and I had plenty of free RAM (a few hundred MB to 1GB free).
It's also worth mentioning that I'm running a very lightweight desktop.
 
Just curious, but why do you have two separate pools in the same system?

As for the hanging issue, have you done any VM/ARC tuning in /boot/loader.conf?
 
phoenix said:
Just curious, but why do you have two separate pools in the same system?
Because if I decide to move back to UFS, it'll be much easier to do. Transferring over 60GB to my laptop is a pain, because the laptop's wifi and built-in network card are terrible.


phoenix said:
As for the hanging issue, have you done any VM/ARC tuning in /boot/loader.conf?
Yup, I tried.... I will keep experimenting.
I will try again to compile the kernel with KVA_PAGES=512; last time it failed to compile.


Do you think using a single pool would help?


Also, the ATA disk is about 4-5 years old..... It might start failing soon.
 
With only 1.5 GB of RAM, you can't use KVA_PAGES=512. That will give you 2 GB of kernel memory space ... which means there's nothing left for the userland. :) You'll want to remove that setting from your kernel.

By default, 1/2 of your RAM is configured as kernel memory. In your case, that would be 768 MB.

My rule of thumb has been: 1/2 RAM for kernel, 1/2 kernel space for ARC. Set kmem_max to 768 MB. Then set zfs.arc_max to 384 MB. That should keep things stable.
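
For a 1.5 GB machine that works out to something like this in /boot/loader.conf (a sketch based on the rule of thumb above):
Code:
# /boot/loader.conf
vm.kmem_size="768M"
vm.kmem_size_max="768M"
vfs.zfs.arc_max="384M"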
 
phoenix said:
With only 1.5 GB of RAM, you can't use KVA_PAGES=512. That will give you 2 GB of kernel memory space ... which means there's nothing left for the userland. :) You'll want to remove that setting from your kernel.

By default, 1/2 of your RAM is configured as kernel memory. In your case, that would be 768 MB.

My rule of thumb has been: 1/2 RAM for kernel, 1/2 kernel space for ARC. Set kmem_max to 768 MB. Then set zfs.arc_max to 384 MB. That should keep things stable.

OK, I could actually increase kmem_max even more.
I monitored my memory usage, and there's about 700MB free (very stable, haven't seen less).

With UFS most of it was probably used as HDD cache.


EDIT:
currently on i386 the kmem limit seems to be 512M; the PC panicked


On i386 systems you will need to recompile your kernel with increased KVA_PAGES option to increase the size of the kernel address space before vm.kmem_size can be increased beyond 512M. Add the following line to your kernel configuration file to increase available space for vm.kmem_size to at least 1 GB:

options KVA_PAGES=512
http://wiki.freebsd.org/ZFSTuningGuide
 
Added options KVA_PAGES=512 to the kernel config and increased kernel memory to 1G and ARC to 512M.


vm.kmem_size_max: 1073741824 (1G)
vm.kmem_size: 1073741824 (1G)
vfs.zfs.arc_max: 536870912 (512M)


Still crashing.... could it be because I use 2 pools?
 
he he
I did an interesting test:
I downloaded a file from a (relatively) high-speed FTP server (100Mbps).
It was downloading at ~9MB/s (which is up to 5 times faster than when I download files from torrents). I had a few small lags, but nothing crashed.
I downloaded a file to each of the pools using elinks. Everything went fine.

Conclusion: it's probably Deluge causing all my problems (I never liked Python). However, I won't tag the thread SOLVED for now... (just to make sure).
 
KVA_PAGES gets multiplied by 4 MB to arrive at the amount of kernel address space, so KVA_PAGES=512 means 2 GB of kernel memory space (512 x 4 MB = 2048 MB). If you run with this setting, with only 1.5 GB of RAM, you will run into issues, unless you have a lot of non-ZFS disk space set up for swap.

Unless you have over 2 GB of memory, don't mess with KVA_PAGES.

I'll have to dig up where I read about this; I just went through this myself last summer. Setting this too high, and setting kmem_max too high relative to the amount of RAM you have, will panic the kernel.
 
Argh... ZFS, the filesystem I most love, and hate....

Killasmurf, I had the same feeling when I first learned of and used ZFS: I've got to use it! For everything!
And so far I am very impressed with its features, but then come the crashes... and crashes...

And tuning doesn't always solve your problems forever.
I've tried many different memory configurations; my system is also i386, a Pentium 4 2.6GHz with 2GB of DDR400 RAM.
Right now I am using
Code:
vfs.zfs.arc_max="512M"
#vfs.zfs.vdev.cache.size="5M"
#vfs.zfs.prefetch_disable=1
vm.kmem_size="1024M"
vm.kmem_size_max="1024M"
Once I decreased the ZFS memory so low that it never crashed, but running portmaster -a was so slow, and every other heavy disk I/O activity would take forever, so I am seriously considering buying another 2GB of RAM so I can feed RAM-hungry ZFS.


I am very interested in that testing FreeBSD 7-STABLE branch which has ZFS v13; I think it would be our best bet, mainly because I can't run FreeBSD 8 yet, since it doesn't recognize my SATA disk controller.
http://forums.freebsd.org/showthread.php?t=3682
but more on that later....
I've been too busy lately :-(

But even with all these problems, I think ZFS is worth the effort because it is so promising, powerful and simple.
 