Riddle me this:
I am about to build a new NAS (I already have most of the hardware) with the following disk configuration:
1 x 40 GB Intel SSD => AHCI driver => GPT => UFS2 + soft updates => FreeBSD OS installation
2 x 2 TB SATA disks => AHCI driver => GPT => GELI => ZFS mirror built from the two GELI providers (rough sketch below)
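For concreteness, this is roughly how I picture the encrypted mirror being assembled; a minimal sketch only, where the device names (ada1/ada2), the GPT labels, and the pool name "tank" are all placeholders, not final choices:

    # Partition each data disk with GPT and a single freebsd-zfs partition.
    gpart create -s gpt ada1
    gpart add -t freebsd-zfs -l disk0 ada1
    gpart create -s gpt ada2
    gpart add -t freebsd-zfs -l disk1 ada2

    # Initialise GELI on each partition (prompts for a passphrase),
    # then attach, which creates the .eli providers.
    geli init -s 4096 /dev/gpt/disk0
    geli init -s 4096 /dev/gpt/disk1
    geli attach /dev/gpt/disk0
    geli attach /dev/gpt/disk1

    # Create the pool as a single mirror vdev on the GELI providers.
    zpool create tank mirror gpt/disk0.eli gpt/disk1.eli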
Now, if I wanted to give a noticeable boost to read performance, I could use part of the system SSD for a dedicated partition (also GELI-encrypted), say 10 GB in size, as a ZFS cache vdev (L2ARC). ZFS writes data to a cache device sequentially, on the assumption that the cache vdev is an SSD, so it is already designed to minimise wear. If I wanted to reduce wear further, an obvious step would be to enlarge the cache vdev, for example to 20 GB. But still, just how quickly is this kind of use going to wear out and kill an SSD?
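In case it helps to see it concretely, this is the procedure I have in mind; again a sketch under the same assumptions (SSD shows up as ada0, pool is named "tank", "zcache" is just a placeholder label):

    # Carve a 10 GB partition out of the SSD for the cache.
    gpart add -t freebsd-zfs -s 10G -l zcache ada0

    # GELI-encrypt it like the data disks, then attach.
    geli init -s 4096 /dev/gpt/zcache
    geli attach /dev/gpt/zcache

    # Add the encrypted provider to the pool as a cache (L2ARC) vdev.
    zpool add tank cache gpt/zcache.eli

A cache vdev can also be dropped again with zpool remove, so experimenting with different sizes should be fairly low-risk.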
Another concern I have: what matters more for the disk serving as a cache vdev, read speed or write speed? The SSD I have my sights on is a cheap new 40 GB Intel model that costs ~$120. Judging from early benchmarks of a 40 GB Kingston SSD that uses the same controller, it has very fast reads (180+ MB/s) and low latency (under 0.1 ms), but very slow writes, 35-40 MB/s. Is the slow write performance going to kill its benefit as a cache device, or does ZFS caching work in such a way that read speed is vastly more important?
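Once the cache is live, I suppose both questions could be answered empirically; a sketch, assuming these sysctl names are present on the FreeBSD version in question:

    # How much is actually being written to the L2ARC (wear estimate),
    # and how often reads are served from it (read benefit)?
    sysctl kstat.zfs.misc.arcstats.l2_write_bytes
    sysctl kstat.zfs.misc.arcstats.l2_hits
    sysctl kstat.zfs.misc.arcstats.l2_misses

    # Per-vdev throughput, refreshed every 10 seconds; the cache device
    # is listed in its own section of the output.
    zpool iostat -v tank 10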