ZFS and tmpfs together

I have read in a few places that using tmpfs and ZFS on the same box is a bad idea. However, everything I've found predates 9.0-RELEASE, and now I see some people recommending the combination. I'd just like to know whether this was in fact an issue and, if so, whether it has been fixed.

Thanks!
 
Works just fine. Never heard of any issues with the combination. tmpfs(5) is still considered experimental though.

Code:
root@williscorto:~#df -h
Filesystem                         Size    Used   Avail Capacity  Mounted on
zroot                              211G    766M    210G     0%    /
devfs                              1.0k    1.0k      0B   100%    /dev
tmpfs                              3.5G     12k    3.5G     0%    /tmp
procfs                             4.0k    4.0k      0B   100%    /proc
fdescfs                            1.0k    1.0k      0B   100%    /dev/fd
linprocfs                          4.0k    4.0k      0B   100%    /compat/linux/proc
zroot/usr                          213G    3.2G    210G     2%    /usr
zroot/usr/home                     220G     10G    210G     5%    /usr/home
zroot/var                          210G     34M    210G     0%    /var
zroot/var/log                      210G     13M    210G     0%    /var/log
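
For reference, a tmpfs /tmp like the one above normally comes from a single fstab(5) line, something along these lines (the options here are just the usual ones, adjust to taste):

Code:
tmpfs   /tmp    tmpfs   rw,mode=1777    0       0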
 
SirDice said:
Works just fine. Never heard of any issues with the combination. tmpfs(5) is still considered experimental though.

I remember a quote from phoenix saying that there are issues when you run out of Inact/Cache/Free memory with both tmpfs and ZFS; as long as you have any of those available, there are no issues.
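
By the way, a quick way to keep an eye on those counters (top(1) also shows them in its header) should be something like:

Code:
# values are in pages; multiply by vm.stats.vm.v_page_size for bytes
sysctl vm.stats.vm.v_inactive_count vm.stats.vm.v_cache_count vm.stats.vm.v_free_count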
 
This system only has 2 GB of memory. It sometimes starts swapping. Never had any problems though.
 
Interesting... Never noticed this before...

Code:
# Just after a reboot
dice@williscorto:~>df -h /tmp
Filesystem                    Size    Used   Avail Capacity  Mounted on
tmpfs                         5.6G    8.0k    5.6G     0%    /tmp

# Generate a large random file
dice@williscorto:~>openssl rand 2000000000 > test.ran
dice@williscorto:~>ll -h test.ran
-rw-r--r--  1 dice  dice   1.9G Mar 14 11:32 test.ran

# Check again
dice@williscorto:~>df -h /tmp/
Filesystem    Size    Used   Avail Capacity  Mounted on
tmpfs         3.3G    8.0k    3.3G     0%    /tmp
# Notice the size difference?

dice@williscorto:~>swapinfo
Device          1K-blocks     Used    Avail Capacity
/dev/gpt/corto-swap   4194304        0  4194304     0%

dice@williscorto:~>cp test.ran /tmp/
dice@williscorto:~>df -h /tmp/
Filesystem    Size    Used   Avail Capacity  Mounted on
tmpfs         5.6G    1.9G    3.7G    33%    /tmp
# Size is back to 'normal'

dice@williscorto:~>swapinfo
Device          1K-blocks     Used    Avail Capacity
/dev/gpt/corto-swap   4194304   287784  3906520     7%

dice@williscorto:~>uname -a
FreeBSD williscorto.dicelan.home 9.0-STABLE FreeBSD 9.0-STABLE #0: Wed Jan 25 13:03:03 CET 2012     root@molly.dicelan.home:/usr/obj/usr/src/sys/CORTO  amd64

Besides the fluctuating size of /tmp/, it all seemed to work. I'll see if I can do the same test with an even larger file (bigger than my physical memory).
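
If I read tmpfs(5) correctly, the fluctuation is expected: without a size option, tmpfs reports its size as used space plus whatever memory and swap are currently free. Generating test.ran let the ARC eat free memory, shrinking the apparent size; copying it into /tmp pushed pages out to swap, so the total climbed back up. Giving it a fixed cap should keep df(1) stable, e.g. (the 4g is just a number picked for this box; older tmpfs versions may want the size in plain bytes):

Code:
tmpfs   /tmp    tmpfs   rw,mode=1777,size=4g    0       0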
 
@SirDice

Hmm, OK. Having messed around a lot with the ZFS ARC on Solaris, the drop in tmpfs size doesn't surprise me at all. ZFS will generally try to cache whatever it can (up to the limit you set on the ARC).
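
On FreeBSD that limit should be the vfs.zfs.arc_max tunable, set from loader.conf(5), e.g.:

Code:
# /boot/loader.conf -- cap the ARC; 1G is just an example for a small box
vfs.zfs.arc_max="1G"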

What surprises me a lot, though, is how it went back up after copying, not moving (if it were a move, I'd understand).

I'm thinking maybe I should just go with a tmpmfs (which is mdmfs(8)-based, IIRC) instead of tmpfs(5). I just need to find a good size.
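
If the rc.conf(5) knobs are what I think they are, that would be something like this (the size is just a guess):

Code:
# /etc/rc.conf -- mount a swap-backed md(4) filesystem on /tmp at boot
tmpmfs="YES"
tmpsize="512m"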
 
What if I run on ZFS and want to use tmpfs(5) for ports compiling and devel/ccache temporary work directories? Feasible? Advantageous?

Temporary workdirs in memory work very well on UFS, avoiding writes and speeding up compilation, but I am not so sure whether it will work as well on ZFS, given ZFS's own memory needs.
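
What I had in mind is roughly this (paths and the size are just first guesses on my part):

Code:
# /etc/fstab -- memory-backed scratch space for port builds
tmpfs   /tmp/ports      tmpfs   rw,size=6g      0       0

# /etc/make.conf -- build ports there instead of under /usr/ports
WRKDIRPREFIX=/tmp/ports

I would keep the ccache cache directory itself on disk, though, since its whole point is persisting between builds.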
 