bhyve VM-Bhyve ZFS properties

Hi everyone,

I'm running VM-Bhyve with some Windows VMs. Setup is like this article: https://klarasystems.com/articles/from-0-to-bhyve-on-freebsd-13-1/

Will there be a big improvement in performance if the ZFS properties atime=off and sync=disabled are set? The article leaves the properties at their defaults, so I guess it won't make a big difference?
I have hourly ZFS snapshots and an additional backup of the MS SQL databases. The server is powered from a UPS, so I'm not that afraid of power outages.
Also, recordsize is currently at the default 128K. I'm thinking of changing it to 64K as recommended in the article.

Greetings
Tim
 
Will there be a big improvement in performance if the ZFS properties atime=off and sync=disabled are set?
Only one way to find out. Test it. Measure the performance (in the VM), then turn one option off (or on) at a time and measure again.
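For example, something along these lines (zroot/vm is only a guess at the dataset name; substitute whatever dataset your VM files live on):
Code:
# check the current values
zfs get atime,sync,recordsize zroot/vm
# flip one property at a time, then re-run the benchmark inside the VM
zfs set atime=off zroot/vm
zfs set sync=disabled zroot/vm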

Also, recordsize is currently at the default 128K. I'm thinking of changing it to 64K as recommended in the article.
This article was really useful for me: https://shatteredsilicon.net/blog/2020/06/05/mysql-mariadb-innodb-on-zfs/
It's rather specific to MySQL/MariaDB and assumes you're running the database directly on ZFS. Note, though, that the Windows VMs don't 'see' ZFS: they get a zvol that is presented to them as a disk. The 'best' recordsize for MS SQL is going to depend on the I/O sizes MS SQL itself uses (I don't know them offhand, but you can probably look them up).
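If you do experiment with it, keep in mind that recordsize is a per-dataset property and only affects blocks written after the change; roughly like this (the dataset name is just an example):
Code:
zfs get recordsize zroot/vm/winguest
# only applies to data written after the change
zfs set recordsize=64K zroot/vm/winguest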
 
Maybe this is already known, but switching ZFS from synchronous to asynchronous writes where synchronous behaviour is the default, or is requested by the application, runs counter to ZFS principles. You're trading an important ZFS data-safety guarantee for extra speed, even with regular snapshots and a UPS in play.

ZFS sync/async + ZIL/SLOG, explained:
sync=disabled decreases latency at the expense of safety.
and from zfs set sync=disabled:
In technical terms, sync=disabled tells ZFS “when an application requests that you sync() before returning, lie to it.”

To keep ZFS write-safety guarantees while getting better throughput for synchronous writes, the usual solution is to add an appropriate SLOG device.
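Roughly like this (pool and device names are only examples; a SLOG should be a fast SSD with power-loss protection):
Code:
# single SLOG device
zpool add tank log nvd1
# or, better, a mirrored SLOG
zpool add tank log mirror nvd1 nvd2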

More information:
Additionally, for your DB's recordsize: Tuning Recordsize in OpenZFS
 
Note atime doesn’t exist for zvols.

As with so many things, “it depends” for your other questions. As suggested above, if you have the space and bandwidth, try different settings for your workload and see for yourself.

Smaller block sizes (matched to the size of the chunks the DB will be using) can reduce latency, but going overly small can lead to poor compression (if you're using it) and increased overhead for large data transfers. If you're using snapshots, the block size also defines the smallest unit of data that can be updated (equivalently, that must be retained), so that's another reason to stay no larger than your predominant I/O size.

Sync depends on what your application and risk tolerance are like. Even with the UPS, disabling sync on something backing a database is risky. Databases are built to be resilient to many things, but that resiliency is contingent upon bytes that the OS has said are sync-ed actually persisting in the event of a crash, which may not be the case with sync=disabled.

That said, if you are suffering for performance, and the database/VM can be rebuilt if it becomes corrupted (or you’re OK rolling back to the previous snapshot if needed), try it out to see if it helps.

You didn't mention (or I missed) what type of pool you've got (RAID-Z? Mirrors? No redundancy?). Small block sizes in particular can lead to surprising space consumption on RAID-Z vdevs; just another thing to be aware of.
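Something like this would show it (pool name is just a placeholder):
Code:
zpool status tank
zpool list -v tank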
 
Note atime doesn’t exist for zvols.
Thanks for letting me know. I don't use zvols; a file on a dataset is used.
Code:
disk0_type="nvme"
disk0_name="disk0.img"
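So the ZFS properties discussed above apply to the dataset that holds disk0.img; checking them would look roughly like this (the dataset name is only an example of the usual vm-bhyve layout):
Code:
zfs get recordsize,compression,atime,sync tank/vm/winguest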

Please note that the Klara Systems article does not mention setting compression. Compression was not enabled by default on my VM zpool.

Code:
zfs set compression=lz4 tank
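Note that compression only applies to data written after the property is set; afterwards you can check whether it actually pays off with:
Code:
zfs get compression,compressratio tank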

Update: the current setup with vm-bhyve has been running great for about a year now. Really love it. Great stuff!!

Current hardware is an HP ProLiant Gen8 server; this month we are upgrading to an HP ProLiant Gen9 server with two mirrored SSDs. Currently it's running on a single Samsung SSD 970 PRO. The mirrored setup in the new server will be an improvement.
 