ZFS Custom Parameters - Your Recommendation & Experience

Hi,

I know, there is this page.

From what I have read, the HDD cache also has an impact on performance, as does SATA vs. SAS (as a rule; NVMe is the fastest and SATA SSDs are fast too, but SSDs are not the best choice if you have databases using your pool). In fact, I use Broadcom 93xx and 94xx HBAs 😎

I have two types of SATA disks across the servers (plus SAS disks that are not in use yet):
1. SATA, 7200 RPM, 256 MB cache, CMR, 4K sectors with 512e emulation (each reaching a constant 160 MB/s reading and writing)
2. SATA, 5400 RPM, 64 MB cache, CMR, 4K sectors with 512e emulation (each reaching a constant 115 MB/s reading and writing)
3. SAS, 15000 RPM, 16 MB cache, CMR, 512-byte*** sectors (no benchmark yet) [not in use]

*** I guess

At the moment, I simply use ashift=9 (512 B) on all pools. The pools (12x type 2 in raidz3, 4x type 1 in raidz1)** jump between 1000-1600 IOPS at approx. 220 MB/s (a small sketch of how I watch these numbers follows after the list below).

** I know, I am about to move to striped mirrors (RAID 10) :rolleyes: and I know: the more vdevs you have, the better the performance ;) (IOPS):
Do not use raidz1 at all on > 750 GB disks.
Do not use less than 3 or more than 7 disks on a raidz1.
If thinking of using 3-disk raidz1 vdevs, seriously consider 3-way mirror vdevs instead.
Do not use less than 6 or more than 12 disks on a raidz2.
Do not use less than 7 or more than 15 disks on a raidz3.
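
If you want to see what your pools are actually doing, zpool iostat prints the per-vdev IOPS and bandwidth live (the pool name tank is just an example here):
Code:
zpool iostat -v tank 5   # per-vdev read/write operations and bandwidth, refreshed every 5 seconds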

I am thinking about vfs.zfs.zil_disable="1" because I have an APC UPS, but I also want to run SQL databases ... so ... :-/
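
As far as I know, newer OpenZFS/FreeBSD versions dropped the global vfs.zfs.zil_disable tunable; the equivalent today is the per-dataset sync property, so you could switch it off only where losing the last few seconds of writes is acceptable and leave the SQL datasets alone (dataset names below are only examples):
Code:
zfs set sync=disabled tank/scratch   # no synchronous writes for throwaway data
zfs set sync=standard tank/db        # keep the default so the database stays crash-safe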
I am also not sure about using ashift=12 on all pools. Will it improve performance? When I tested it at the beginning, I noticed a huge overhead that consumed a lot of free space; ashift=9 takes less away from the raw capacity than ashift=12 :-/, so I went back to ashift=9.
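
Before deciding, it may help to compare what the disks report with what the pools actually use (device and pool names are examples; FreeBSD commands):
Code:
diskinfo -v /dev/da0 | grep -E 'sectorsize|stripesize'   # 512e drives report sectorsize 512, stripesize 4096
zdb -C tank | grep ashift                                # ashift actually used by the existing pool
sysctl vfs.zfs.min_auto_ashift=12                        # make newly created vdevs allocate in 4K blocks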

What are your disk properties and your zfs pool parameters?
 
On modern large disks the overhead from ashift=12 shouldn't be an issue; also, a lower ashift increases metadata usage, so the grass isn't as green as you think.

The performance hit depends, but if you use an ashift smaller than the physical sector size of the disk, you may get misaligned data (the drive has to do read-modify-write cycles).

I have one server where I stupidly somehow had the pool created with ashift=9. One of the disks in the mirror was swapped and the replacement is a 4K disk; zpool status correctly informs me that I will take a performance hit. Reads are OK, but when I look at writes, on average the misaligned disk needs 3-4x as much time to write the same data.
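
For anyone who wants to check their own disks: one way on FreeBSD is gstat, where the ms/w column shows the average time per write request for each physical provider:
Code:
gstat -p   # only physical disks; compare the ms/w column between the mirror members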
 
I use a zpool cache device (L2ARC) and a zpool log device (SLOG). These have no negative impact on performance.
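In case it helps, adding them to an existing pool is straightforward (pool and device names are just examples; a dedicated SSD/NVMe partition in each case):
Code:
zpool add tank log nvd0p1     # separate log device (SLOG) for synchronous writes
zpool add tank cache nvd0p2   # L2ARC read cache device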
With my PC with 8 GB of memory I have:
Code:
vfs.zfs.min_auto_ashift=12
vfs.zfs.arc_min=1500000000  # 7.200.000.000
vfs.zfs.arc_max=2500000000  # 7.200.000.000
On the ZFS datasets you can configure the recordsize, logbias, checksum, compression, and atime options.
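
For example, a set of per-dataset values for a database dataset could look like this (the dataset name and the 16K recordsize for InnoDB are only examples, not a one-size-fits-all recommendation):
Code:
zfs set recordsize=16K  tank/db   # match the database page size (InnoDB uses 16K pages)
zfs set logbias=latency tank/db   # the default; 'throughput' bypasses the SLOG for large streams
zfs set compression=lz4 tank/db   # cheap CPU-wise and usually a net win
zfs set atime=off       tank/db   # avoid an extra write for every read
zfs set checksum=on     tank/db   # keep checksumming enabled (the default)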
 