Use SSD Boot-disk as ZFS cache

Hi all,

I am working on my ZFS pool layout with 8 disks and I am thinking about two striped RAIDZ vdevs of 4 disks each. Speed is important to me, so I have doubts about the RAIDZ and may switch to four striped vdevs of 2 mirrored disks instead.
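For reference, this is roughly how the two layouts would be created; da0 through da7 and the pool name tank are just placeholders for my actual names:

Code:
# option 1: two striped RAIDZ1 vdevs of four disks each
zpool create tank raidz1 da0 da1 da2 da3 raidz1 da4 da5 da6 da7

# option 2: four striped two-way mirrors
zpool create tank mirror da0 da1 mirror da2 da3 mirror da4 da5 mirror da6 da7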

To further improve speed, I have been reading about read/write caches and I am wondering if I can use my SSD boot disk for caching purposes. It is a 60 GB disk and I guess it has plenty of space left after the FreeBSD installation.

I know ZFS allows files to be used as block devices. Is it possible/smart to also use my SSD boot disk as a read or write cache for a ZFS pool?
 
It depends on how much faster the SSD is compared to the standard disks. Also check how much write wear the SSD can take. I would use the SSD as a cache device if it's fast enough to make a difference. A file as a block device is not a good idea though; resize your FreeBSD installation so that you can add a new partition for the cache.
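Assuming the new partition shows up as something like ada0p3 (the actual name depends on your layout, and replace tank with your pool name), adding it as a read cache is a single command:

Code:
# attach the SSD partition to the pool as an L2ARC (read cache) device
zpool add tank cache ada0p3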
 
An SSD is by default much faster because of its extremely low latency and marginally higher throughput compared to a conventional drive. So can I assume that in any case using a partition (slice?) as a read cache is a smart thing to do?
 
M_Devil said:
An SSD is by default much faster because of its extremely low latency and marginally higher throughput compared to a conventional drive. So can I assume that in any case using a partition (slice?) as a read cache is a smart thing to do?

Although I have never tried to use the SSD for both the OS and cache, I often partition SSDs for swap, cache and logs.

In the example below, we are using two SSDs for that purpose:
Code:
	NAME           STATE     READ WRITE CKSUM
	zroot          ONLINE       0     0     0
	  mirror-0     ONLINE       0     0     0
	    gpt/disk0  ONLINE       0     0     0
	    gpt/disk1  ONLINE       0     0     0
	logs
	  mirror-1     ONLINE       0     0     0
	    gpt/log0   ONLINE       0     0     0
	    gpt/log1   ONLINE       0     0     0
	cache
	  gpt/cache0   ONLINE       0     0     0
	  gpt/cache1   ONLINE       0     0     0

You can achieve that by using gpart(8):

Code:
=>       34  156301421  ada2  GPT  (74G)
         34       2014        - free -  (1M)
       2048   16777216     1  freebsd-swap  (8.0G)
   16779264   33554432     2  freebsd-zfs  (16G)
   50333696  105967759     3  freebsd-zfs  (50G)

=>       34  156301421  ada3  GPT  (74G)
         34       2014        - free -  (1M)
       2048   16777216     1  freebsd-swap  (8.0G)
   16779264   33554432     2  freebsd-zfs  (16G)
   50333696  105967759     3  freebsd-zfs  (50G)

Don't forget to perform a 4K alignment.
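If it helps, the partitioning above can be created roughly like this; the labels match the pool output above, but adjust device names and sizes to your own hardware:

Code:
# partition the first SSD with 4K alignment (repeat for ada3 with swap1/log1/cache1)
gpart create -s gpt ada2
gpart add -a 4k -t freebsd-swap -s 8G  -l swap0  ada2
gpart add -a 4k -t freebsd-zfs  -s 16G -l log0   ada2
gpart add -a 4k -t freebsd-zfs         -l cache0 ada2

# attach the mirrored log and the two cache devices to the pool
zpool add zroot log mirror gpt/log0 gpt/log1
zpool add zroot cache gpt/cache0 gpt/cache1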
 
Do some benchmarks before and after adding a 'log' and/or 'cache' device.

An L2ARC (cache) device will almost always increase random read speeds, once data is in the L2ARC. Depending on the size of the ARC and L2ARC, and your 'normal' amount of 'hot' data, this can take a couple of hours to a couple of days to really take effect.
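One rough way to watch the L2ARC warming up on FreeBSD (sysctl names can differ between ZFS versions, so treat this as a sketch):

Code:
# L2ARC hit/miss counters and current size
sysctl kstat.zfs.misc.arcstats.l2_hits kstat.zfs.misc.arcstats.l2_misses kstat.zfs.misc.arcstats.l2_size

# per-vdev I/O, including the cache device, refreshed every 5 seconds
zpool iostat -v tank 5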

An SLOG (log) device may actually slow things down, depending on your mix of sync vs async writes, and the write IOPS of the SLOG compared to your pool.

For example, a 2GB USB2 flash stick sped up my raidz1+mirror pool, using mixed SATA and IDE drives. Rebuilding the pool using 2x mirrors of SATA disks, though, turned the SLOG into a bottleneck. Same for a USB-based L2ARC device in the same box.

So, be sure to benchmark things. :)
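A crude sequential before/after test could look like the sketch below (use a file larger than RAM so the ARC doesn't mask the disks, keep in mind /dev/zero compresses to nothing if compression is enabled, and use a proper tool like bonnie++ or iozone from ports for random I/O):

Code:
# sequential write
dd if=/dev/zero of=/tank/testfile bs=1m count=8192

# sequential read; export/import the pool first so the ARC is cold
dd if=/tank/testfile of=/dev/null bs=1m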
 
phoenix said:
An SLOG (log) device may actually slow things down, depending on your mix of sync vs async writes, and the write IOPS of the SLOG compared to your pool.

I have generally seen a great performance increase when it comes to InnoDB engine databases. Combined with a 16 KB recordsize for the datafiles, I max out at around 120 MB/s when restoring a 5 GB dump file.

Of course, like you said, I am using Intel SSDs with much higher IOPS.
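For reference, the recordsize tweak is just a dataset property; the dataset name here is only an example:

Code:
# match the dataset recordsize to InnoDB's 16 KB page size
zfs create -o recordsize=16k tank/mysql
# or on an existing dataset (only affects newly written files)
zfs set recordsize=16k tank/mysql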

I have yet to test this with NFS but I would really like to.
 