zfs - to be or not to be

phoenix said:
KVA_PAGES gets multiplied by 4 to come to the number of MB to use for kernel memory. Using KVA_PAGES=512 means 2 GB of kernel memory space. If you run with this setting, with only 1.5 GB of RAM, you will run into issues, unless you have a lot of non-ZFS disk space set up for swap.

How can it be 2 GB when I have clearly set kmem_max to 1 GB?
And when I booted, it showed 1 GB.

I have 6 GB of swap, just in case.

phoenix said:
Unless you have over 2 GB of memory, don't mess with KVA_PAGES.
Without KVA_PAGES I can't have more than 512 MB of kernel memory.
I will try to set it to a lower number next time.
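For reference, the kernel option and the loader tunables have to line up; here is a rough sketch of the combination being discussed (the values are only an illustration, not a recommendation for any particular amount of RAM):
Code:
# i386 kernel config: reserve 2 GB of kernel virtual address space
options KVA_PAGES=512

# /boot/loader.conf: keep kmem and the ZFS ARC below the reserved KVA
vm.kmem_size="1024M"
vm.kmem_size_max="1024M"
vfs.zfs.arc_max="512M"
After booting, sysctl vm.kmem_size and sysctl vfs.zfs.arc_max show what actually took effect.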


Anyway, I somehow messed things up.
I didn't export the data pool. I simply destroyed it and added the SATA disk to the sys pool. In the end I got panics.
I managed to avoid the panics by booting from the FreeBSD 8-CURRENT CD and importing and exporting the ZFS pool. Then I restart and can import the sys pool.
However, sometimes I can still see the data pool (corrupted, lol), and when the PC crashes I have to boot from the FreeBSD 8-CURRENT CD again.

For a few minutes I thought I had lost my music collection...
Now I only need to transfer it to my laptop...
After that, tomorrow, I will try to install FreeBSD 8-CURRENT.

bigboss said:
But even with all these problems I think ZFS is worth the work, because it is so promising, powerful and simple.
Yup, it's so good I can't stop myself from going through all the mumbo jumbo to get it working for everything.
Best of all, it's always consistent, no matter how many times my PC crashes.
 
ZFSv13 on FreeBSD 7-STABLE!!!

Where is this Kip Macy FreeBSD 7-STABLE branch?
I tried to look for it but I got totally lost in the tree. Can someone help me out?

I really can't use CURRENT now because FreeBSD 8 doesn't recognize my hardware, and besides that I'd like to keep using -STABLE.
 
Search the mailing lists for -stable and -current; the link to the repo is in his e-mail message. It's not part of the official FreeBSD source tree.
 
FreeBSD 8-CURRENT Rocks!

Hi guys, I managed to install FreeBSD 8 for good. (I'm still figuring out the fix for this PR http://www.freebsd.org/cgi/query-pr.cgi?pr=121461, but more on that later.)
Phoenix, I searched the mailing lists a little and didn't find it. I needed to recompile everything anyway, so I thought upgrading to CURRENT now would be an opportunity to have a more stable ZFS. I "snapshotted" everything first and upgraded my root partition to 8-CURRENT, and surprisingly it worked; the last time I tried an 8-CURRENT snapshot it didn't.

Anyway, I upgraded from 7.2-STABLE to 8-CURRENT from source, and I got caught in a bad situation: if you run
Code:
make installkernel
and then reboot, you get an almost unusable ZFS, with a ZFSv13 module in the kernel and a ZFSv6 userland, which doesn't start correctly and leaves the system stuck in single-user mode (especially if you have /usr on a zpool like me). You can just type
Code:
mount -t zfs tank/usr /usr
for example, and you're good to go, but to avoid this entirely, run
Code:
make installworld
BEFORE rebooting the machine, I repeat, BEFORE rebooting the machine, so you'll have both the ZFS kernel module and the userland up to date.
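So the whole order ends up looking roughly like this (just a sketch; /usr/src/UPDATING is still the authoritative procedure):
Code:
cd /usr/src
make buildworld
make buildkernel KERNCONF=GENERIC
make installkernel KERNCONF=GENERIC
mergemaster -p
make installworld      # BEFORE rebooting, so the userland matches the ZFSv13 module
mergemaster
shutdown -r now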
There is also another gotcha: I had to run installworld with the following variable set
Code:
make NO_FSCHG=true installworld
as the old ZFS filesystem was version 1 and doesn't support file flags until you have upgraded the filesystem to version 3.
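Once the new world is installed, the filesystem (and pool) versions can be brought forward so flags work again. As far as I can tell these are the relevant commands; note that an upgraded pool can no longer be imported by the old v6 code:
Code:
zpool upgrade -v       # list the versions the new code supports
zpool upgrade -a       # upgrade all pools
zfs upgrade -a         # upgrade all filesystems (v1 -> v3, which adds flags support)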

Got it from the mailing list:
http://www.nabble.com/zfs-version-13-kernel-and-zfs-version-6-userland-tool--td20650216.html

There should be a note about this in /usr/src/UPDATING, shouldn't there?

By the way, ZFSv13 is FAR more stable. I've been running some stress tests without a reboot or freeze yet!
 
I bought 1 GB of RAM [now I have 2.5 GB of RAM] :)
Now I'm customizing the kernel.

I think I will even make a ZFS-bootable flash drive with a basic FreeBSD on it :D

Unfortunately I wasn't able to use compression on the ZFS boot partition, and couldn't boot off the 128 MB flash drive :(

But with compression the GENERIC kernel fit in it quite well; I even had 25 (gzip) to 28 (gzip-9) MB of free disk space.
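For anyone curious, the compression is just a per-filesystem property; something like this (the dataset name flash/base is made up for the example):
Code:
zfs set compression=gzip-9 flash/base
zfs get compression,compressratio flash/base
The compressratio property shows how much space you actually saved.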
 
Guys, do you really not have lags when writing to disk at high speed?
I have small, very annoying, periodic lags...

From here I'm thinking of trying a few things:
1) Try decreasing vfs.zfs.arc_max to just a few megabytes. I hope this would force ZFS to write to disk immediately, unlike now when it writes at very high speed for a few seconds and then waits for the cache to fill (see the sketch below).
2) Increase vfs.zfs.arc_max even more (currently it's 512 MB).
3) Rebuild the pool without geli (man, I really don't want to do this).
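For 1) and 2), as far as I know the tunable only takes effect from /boot/loader.conf at boot and can't be changed with sysctl at runtime. A sketch (the 64M value is arbitrary):
Code:
# /boot/loader.conf
vfs.zfs.arc_max="64M"

# after a reboot, verify it stuck and watch the ARC while copying
sysctl vfs.zfs.arc_max
sysctl kstat.zfs.misc.arcstats.size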
 
killasmurf86 said:
Guys, do you really not have lags when writing to disk at high speed?

At work, no, we don't have that, and we do heavy, sustained reading and writing for 5-hour periods twice a day.

At home, yes, I do experience this, but have never been able to track down exactly how to fix/minimise it.

I'm almost positive it has to do with the size of the ARC and how often it gets flushed, but haven't played around with the settings too much to confirm.
 
phoenix said:
Kip Macy has made available a test branch of 7-STABLE that includes ZFSv13. Will be interesting to see if this makes it into 7.3. :)

First off, thanks for all the practical pointers on getting ZFS to be functional. From what I read, it sounds as though, under light or moderate load, ZFS is "stable enough" for "non-life-support applications."

I'm going to be building up some Atom 330 iTX boxes with 2 GB of RAM and paired 500 GB notebook drives to replace my decade-old Intel Pentium III (733.13-MHz 686-class CPU) boxes, which supply small-scale external web services, mail, and (internal) file service, mainly for a couple of Macs.

Do you have any feeling on the relative stability of the "test branch" of 7-STABLE compared to what I've been used to in tracking -STABLE since the 4.x days?
 
For boxes like that, with just two drives, I'd just use gmirror. Less CPU/RAM required for gmirror compared to ZFS. ZFS really only gets useful/fun when you have lots of disks. :)
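If you do go with gmirror, the basic setup is small; a sketch with placeholder device names (ad4/ad6):
Code:
gmirror label -v -b round-robin gm0 /dev/ad4 /dev/ad6
echo 'geom_mirror_load="YES"' >> /boot/loader.conf
newfs -U /dev/mirror/gm0
mount /dev/mirror/gm0 /mnt
Mirroring the boot disk itself takes a bit more care; see gmirror(8) and the Handbook.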

-STABLE is usually usable, but one should subscribe to the -stable mailing list and watch for the various HEAD'S UP messages about big changes that are going in, and the various MFC messages detailing code coming in from -CURRENT.
 
Thanks -- I missed the second page that indicates that ZFS is in -STABLE now. I've dealt with occasional "bad times to buildworld" in the past, so I'm ok with that.

ZFS looks like it solves a few issues for me that I don't think GEOM will, including (see the sketch below):
  • Snapshots for rollback
  • Dealing with a "partition" per jail (potentially with multiple "sub-partitions")
  • Resizing "partitions"
It also becomes very interesting on the boxes where 500GB isn't enough (did I really say that?), such as the media and Time Machine file servers, which will probably have four (or six) 1 TB drives in addition to the pair of notebook drives.

I'll probably build the "critical services" machines on GEOM and try -STABLE on another box or two before making the decision about when to cut over.
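To illustrate the snapshot/per-jail/resize points, something along these lines (pool and jail names invented for the example):
Code:
zfs create tank/jails
zfs create tank/jails/www
zfs set quota=10G tank/jails/www            # "resize" later by setting a new quota
zfs snapshot tank/jails/www@pre-upgrade
zfs rollback tank/jails/www@pre-upgrade     # roll the jail back if an upgrade goes bad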
 
phoenix said:
I'm almost positive it has to do with the size of the ARC and how often it gets flushed, but haven't played around with the settings too much to confirm.

Yes, I think exactly the same.
I tried setting ARC to 50MB, but for some reason it wasn't applied.

Do you use i386 or amd64 at home?
 
danger@ said:
8.0 in the summer will be an interesting release :)

Very, very interesting... I'm running CURRENT right now... {too bad it has lags...}

This is my second day of googling and I still can't find anything...
I'm already starting to think about submitting a PR.
 
killasmurf86 said:
Yes, I think exactly the same.
I tried setting ARC to 50MB, but for some reason it wasn't applied.

Do you use i386 or amd64 at home?

32-bit FreeBSD 7.1, 3.0 GHz P4 CPU, 2 GB RAM, 3x 120 GB SATA drives in raidz1.
 
I have
32-bit FreeBSD 8-CURRENT, 3 GHz P4 (HTT enabled) CPU, 2.5 GB RAM, 1x 250 GB SATA + 1x 160 GB ATA HDD in raidz.


Let's make this clear (for me)...
$ zpool create poolname ad0 ad4
is that raidz (or just striping)? (I'm getting confused with all the RAID names)
 
If you don't specify raidz on the command-line, then it isn't using raidz. :) Same for mirroring.

What you have is a non-redundant pool comprised of two vdevs. The pool is striped across the two vdevs (RAID0).
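In other words, the layout has to be spelled out on the command line; roughly (same pool/device names as above, ad6 added just for the raidz example):
Code:
zpool create poolname ad0 ad4             # stripe across two vdevs (RAID0), no redundancy
zpool create poolname mirror ad0 ad4      # mirror (RAID1)
zpool create poolname raidz ad0 ad4 ad6   # raidz wants at least three disks to be worthwhile
zpool status poolname                     # shows the resulting vdev layout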
 
3x ZFS problems

1)

I'm not sure if this is directly related to zfs

2) After running zfs rollback I get a kernel panic.

3) zfs create problem
When I run zfs create, it creates a new fs and it's automatically mounted.
But you can't write to it unless you run
Code:
$ zfs umount -a
$ zfs mount -a
It seems the new fs gets mounted under the existing fs. For example:
if I have /home (a/home)
and I run $ zfs create a/home/killasmurf86,
it will be automatically mounted (if you don't change the default settings).
Then if I restore my home directory backup, everything is written to /home.
mount will show a/home/killasmurf86 as mounted, but I'm not able to restore backups until I remount it.

[uhh, explaining the 3rd one is really hard]
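If it helps to narrow it down, comparing what ZFS thinks is mounted with the actual mountpoints might show the overlap (just a guess at what's going on):
Code:
zfs list -o name,mountpoint,mounted
zfs get -r mountpoint a/home
mount | grep zfs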



I use compression=gzip and copies=2 on almost all filesystems.

I attached my kernel config; perhaps that has something to do with the 1st problem.
 

Attachments

  • ANTIGENERIC.txt
    13.7 KB