ZFS v28 in -STABLE

I had been waiting for this; it was supposed to be MFC'ed a month after it was added to -CURRENT. I kept an eye on it for a while but then forgot about it.

Just noticed it again. Updating my source tree as we speak.
 
I was always very conservative regarding ARC usage because my system runs a few other services besides being a file server. As the statistics below show, it is impressive that increasing the ARC has not adversely affected the system.

Currently with:
Code:
vfs.zfs.arc_max="2048M"
Before:
Code:
vfs.zfs.arc_max="1536M"
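Both values are loader tunables set in /boot/loader.conf, roughly like this (the sysctl line is just one way to double-check the value after a reboot):

Code:
# /boot/loader.conf
vfs.zfs.arc_max="2048M"    # cap the ZFS ARC at 2 GB

# after a reboot, verify with:
# sysctl vfs.zfs.arc_max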

[Attached graphs: memory-month.png and zfs_arc-month.png]
 
Matty said:
What did you think the 500 MB of extra ARC cache would do?

It is actually 512 MB. The problem I have faced since 8.0-RELEASE is unstable behavior with 4 GB of RAM and no tuning at all. During large file transfers with Samba (50 GB) the system would crash. Searching for solutions, I found that limiting the ARC and kmem size solved the problem. Since 8.2-RELEASE the kmem size no longer needs to be adjusted, but the ARC does. After ZFS v28 I also noticed that arc_max can safely be increased.
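In loader.conf terms, that older style of tuning looked roughly like this (the values here are illustrative, not the exact ones I used):

Code:
# /boot/loader.conf -- pre-8.2 style limits (values illustrative)
vm.kmem_size="6G"
vm.kmem_size_max="6G"
vfs.zfs.arc_max="1536M"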
 
gkontos said:
It is actually 512 MB. The problem I have faced since 8.0-RELEASE is unstable behavior with 4 GB of RAM and no tuning at all. During large file transfers with Samba (50 GB) the system would crash. Searching for solutions, I found that limiting the ARC and kmem size solved the problem. Since 8.2-RELEASE the kmem size no longer needs to be adjusted, but the ARC does. After ZFS v28 I also noticed that arc_max can safely be increased.

That kmem no longer needs any adjustment is news to me. I have been using 1.5x RAM without problems for quite some time now.
 
gkontos said:
It is actually 512 MB. The problem I have faced since 8.0-RELEASE is unstable behavior with 4 GB of RAM and no tuning at all. During large file transfers with Samba (50 GB) the system would crash. Searching for solutions, I found that limiting the ARC and kmem size solved the problem. Since 8.2-RELEASE the kmem size no longer needs to be adjusted, but the ARC does. After ZFS v28 I also noticed that arc_max can safely be increased.

There was a known issue with Samba on ZFS when using sendfile [memory leak].
http://lists.freebsd.org/pipermail/freebsd-fs/2011-February/010647.html

Compile Samba with AIO support and disable sendfile (see the sketch below), and it flies.
http://lists.freebsd.org/pipermail/freebsd-stable/2011-February/061642.html
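Roughly, the relevant bits look like this (paths and values are illustrative, adjust to your install):

Code:
# smb.conf (typically /usr/local/etc/smb.conf) -- illustrative values
[global]
   use sendfile = no
   aio read size = 16384
   aio write size = 16384

# FreeBSD also needs the aio(4) kernel module loaded:
# kldload aio                                    # right now
# echo 'aio_load="YES"' >> /boot/loader.conf     # at every boot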
 
Matty said:
That kmem no longer needs any adjustment is news to me. I have been using 1.5x RAM without problems for quite some time now.
The only thing that still needs tuning after 8.2-RELEASE, and only if your system crashes, is arc_max.
 
thuglife said:
There was a known issue with Samba on ZFS when using sendfile [memory leak].
http://lists.freebsd.org/pipermail/freebsd-fs/2011-February/010647.html

Compile Samba with AIO support and disable sendfile, and it flies.
http://lists.freebsd.org/pipermail/freebsd-stable/2011-February/061642.html

I think that bug appeared in -STABLE just before 8.2-RELEASE. Anyway, it did not affect me at that time.

I also read somewhere that samba35 had a problem with AIO support, but that could be nonsense. Are you using this setup? I would really like to test it, although the bottleneck in my case is my switch (100 Mbit/s).
 
thuglife said:
Yes, I use samba35 with AIO support and sendfile disabled; it works wonderfully.
Thanks, I will recompile after my WD drive gets replaced for the third time.

Code:
gkontos@hp>zpool status
  pool: tank
 state: DEGRADED
status: One or more devices has been taken offline by the administrator.
	Sufficient replicas exist for the pool to continue functioning in a
	degraded state.
action: Online the device using 'zpool online' or replace the device with
	'zpool replace'.
 scan: scrub canceled on Tue Jun 21 14:05:36 2011
config:

	NAME              STATE     READ WRITE CKSUM
	tank              DEGRADED     0     0     0
	  raidz1-0        DEGRADED     0     0     0
	    label/zdisk1  ONLINE       0     0     0
	    label/zdisk2  ONLINE       0     0     0
	    label/zdisk3  OFFLINE      0     0     0
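
Once the replacement drive is in, the pool can be healed as the status output suggests; roughly (the label name is taken from the output above, the exact procedure may vary):

Code:
# same disk coming back online:
zpool online tank label/zdisk3

# or replacing it with a new drive prepared under the same label:
zpool replace tank label/zdisk3
zpool status tank    # watch the resilver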
 