ZFS SSD cache device in FreeBSD

Riddle me this:

I am about to build a new NAS (I already have most of the hardware) with the following disk configuration:

1 x 40 GB Intel SSD => AHCI driver => GPT => UFS2 + SoftUpdates => FreeBSD OS installation
2 x 2 TB SATA disks => AHCI driver => GPT => GELI => ZFS mirror made of the two GELI vdevs

Now, if I wanted to give a noticeable boost to read performance, I could utilise part of my system SSD by dedicating a partition on it (also GELI-encrypted), say 10 GB in size, as a ZFS cache vdev. When using a cache device, ZFS writes the cache data to the disk sequentially, on the assumption that the cache vdev is an SSD, so it is already designed to minimise wear. If I wanted to reduce the wear further, an obvious option would be to increase the cache vdev size, for example to 20 GB. But still, just how fast is that kind of use going to wear out and kill an SSD?
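Just to make the idea concrete, attaching such a cache partition would look roughly like this (the device name, GPT label, pool name "tank" and GELI flags are all placeholders, not my actual setup):

Code:
# carve a 10 GB cache partition out of the SSD (ada0 and the label are examples)
gpart add -t freebsd-zfs -s 10G -l zfscache ada0

# layer GELI on top of it, same as on the data disks (flags are an example)
geli init -s 4096 /dev/gpt/zfscache
geli attach /dev/gpt/zfscache

# hand the encrypted provider to the pool as an L2ARC cache device
zpool add tank cache /dev/gpt/zfscache.eli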

Another concern I have: what matters more for the disk serving as a cache vdev, read speed or write speed? The SSD model I have in my sights is a cheap new 40 GB Intel model that costs ~$120. Judging from early benchmarks of a Kingston 40 GB SSD that uses the same controller, it has very fast reads (180+ MB/s) and low latency (under 0.1 ms), but very slow writes, around 35-40 MB/s. Is the slow write performance going to kill its benefit as a cache device, or does ZFS caching work in such a way that read speed is vastly more important?
 
Found answers to some of my questions:

What about writes - isn't flash memory slow to write to?

The L2ARC is coded to write to the cache devices asynchronously, so write latency doesn't affect system performance. This allows us to use "read-bias" SSDs for the L2ARC, which have the best read latency (and slow write latency).

What's bad about the L2ARC?

It was designed to either improve performance or do nothing, so there isn't anything that should be bad. To explain what I mean by do nothing - if you use the L2ARC for a streaming or sequential workload, then the L2ARC will mostly ignore it and not cache it. This is because the default L2ARC settings assume you are using current SSD devices, where caching random read workloads is most favourable; with future SSDs (or other storage technology), we can use the L2ARC for streaming workloads as well.
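For reference, those default L2ARC settings appear to be exposed as vfs.zfs.* tunables on FreeBSD, so the feed rate into the cache device (and with it the write wear) can apparently be throttled as well. A rough sketch, with values chosen purely for illustration and names that may differ between versions:

Code:
# limit how many bytes the L2ARC feed thread writes per interval (8 MB here, just an example)
sysctl vfs.zfs.l2arc_write_max=8388608
sysctl vfs.zfs.l2arc_write_boost=8388608

# keep prefetched/streaming reads out of the cache device (1 = skip them)
sysctl vfs.zfs.l2arc_noprefetch=1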
 
While on the subject of FreeBSD and ZFS:

I am still not 100% sure whether I want to use UFS2 with soft updates on the 40 GB SSD or a simple, non-redundant ZFS root pool on it. My concern is this: I really, really like freebsd-update and want to continue using it. freebsd-update, however, assumes that no part of your base system has been compiled by hand; it is intended to update from official binaries to other official binaries. I also know that you HAVE to build a custom loader if you want to boot off a ZFS mirror or raidz... but what about a non-redundant ZFS pool as root in 8.0-RELEASE? Can I have a full ZFS FreeBSD installation on a non-redundant pool and have the system boot off it, without compiling anything manually, using only the binaries provided on the 8.0 install DVD?
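For context, here is the general shape of a ZFS-on-root setup on a GPT disk as I understand it; whether the boot blocks and loader shipped on the 8.0 DVD are enough for this without rebuilding anything is exactly the open question (pool name "zroot" and the device are placeholders):

Code:
# dedicated boot partition with the ZFS-aware boot code (size/device are examples)
gpart add -t freebsd-boot -s 128K -i 1 ada0
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada0

# point the pool and the loader at the root dataset
zpool set bootfs=zroot zroot
echo 'zfs_load="YES"' >> /boot/loader.conf
echo 'vfs.root.mountfrom="zfs:zroot"' >> /boot/loader.conf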
 
@Jago

I install FreeBSD this way:
Code:
slice 1: /    512MB (mounted read-only UFS without SoftUpdates)
         SWAP 2G

slice 2: zfs (which covers /usr /var)

         /tmp mounted on SWAP using mdmfs (in /etc/rc.conf)

That way you have a rock solid / with all of ZFS's potential, and you do not have to rebuild anything.
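For completeness, the swap-backed /tmp is just a couple of rc.conf knobs (the size is only an example):

Code:
# /etc/rc.conf -- mount /tmp as a swap-backed md via mdmfs
tmpmfs="YES"
tmpsize="512m"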
 
Can you use GPT labelling + partitioning + ZFS like that (without rebuilding anything) or does that require MBR?
 
I use plain MBR for that, with fdisk:

Code:
# cat > config << EOF
p    1    165     63    2560M
p    2      0      *        *
EOF

# fdisk -f config ad0

Then, use bsdlabel:
Code:
# bsdlabel -B -w ad0s1
# bsdlabel -e ad0s1
8 partitions:
#          size   offset    fstype   [fsize bsize bps/cpg]
  a:       512m        0    4.2BSD
  b:          *        *    swap
  c:    1173930        0    unused        0     0        # "raw" part, don't edit
:wq


# newfs ad0s1a
# zpool create pool ad0s2
# zfs create pool/usr
# zfs create pool/var
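If you want GPT instead, I believe the equivalent would be something like the following, though I have not tried it myself (device and sizes are illustrative, and I can't vouch that it avoids rebuilding anything on 8.0):

Code:
# GPT take on the same layout (ad0 and the sizes are examples)
gpart create -s gpt ad0
gpart add -t freebsd-boot -s 64K  ad0
gpart add -t freebsd-ufs  -s 512M ad0
gpart add -t freebsd-swap -s 2G   ad0
gpart add -t freebsd-zfs          ad0
gpart bootcode -b /boot/pmbr -p /boot/gptboot -i 1 ad0

newfs /dev/ad0p2
zpool create pool /dev/ad0p4
zfs create pool/usr
zfs create pool/var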
 
Ended up placing an order for an 80 GB Intel SSD and a second 2 TB WD Green disk (I already had one). Probably going to stick with UFS2 and soft updates for the root partition. 4 GB swap, 24 GB for the OS, 50 GB for the ZFS read cache, gonna be sweet :)
 