UFS Can anyone suggest an ideal way to do fio benchmarking for NVMe devices?

Hi,

I am trying to run fio benchmark tests with NVMe devices to see how FreeBSD performs. There are a lot of variables and combinations, so can anyone suggest an ideal way to do fio benchmarking on FreeBSD? My intent is to check the maximum throughput and IOPS the device delivers.

A few questions regarding the same:
  1. Should we use "posixaio" as the ioengine, or something else?
  2. Should we use a single thread or multiple threads for the test? If multiple threads, how do we decide on the thread count?
  3. Should we use raw device files (e.g. the NVMe namespace device /dev/nvme0ns1) without a filesystem, or a regular file on a mounted filesystem (e.g. /mnt/nvme/test1)? Raw device files seem to give better numbers.
  4. Should we use a shared file or one file per thread?
  5. I believe one job should be fine for benchmarking, or should we try multiple jobs?
Let me know your suggestions. Also, please suggest performance tuning methods for NVMe and storage devices in general.
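For reference, here is a minimal sketch of the kind of fio job file I have in mind. The queue depth, block size, job count, and runtime are placeholder assumptions, not recommendations:

```ini
; Random-read IOPS test against the raw NVMe namespace device.
; /dev/nvme0ns1, iodepth, numjobs, and runtime are assumptions -- adjust to taste.
; WARNING: pointing fio at a raw device with write workloads destroys its contents.
[global]
ioengine=posixaio   ; POSIX AIO, available in the FreeBSD fio port
direct=1            ; bypass the buffer cache
runtime=60
time_based=1
group_reporting=1

[randread-iops]
filename=/dev/nvme0ns1
rw=randread
bs=4k
iodepth=32
numjobs=4
```

Run with `fio jobfile.fio`; fio reports IOPS and bandwidth per group at the end.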
 
I think it is important to know why you are trying to benchmark those disks. Do you need to find out if a certain device is capable of handling your workload? Or are you trying to compare the performance of different devices, in order to find out which one suits your needs better? Or something else?

In any case, I think it is better to test the devices with your actual real-life workload. For example, if you're in the database business, then install a database on such a drive and run some serious transactions through it. That will give you more reliable results than any synthetic benchmark.

Numbers from synthetic benchmarks are mainly useful for bragging, and for marketing people (for advertising). They might give a rough idea of the raw performance of the device, but they aren't necessarily helpful for deciding how well it will perform when doing real work.
 
Hi olli,
You say: "That will give you more reliable results."

OK, but do you know of any tools to measure IOPS on storage that handles a lot of transactions?
 
SirDice, I haven't tried bonnie++ before. I will give it a try.

olli@, my intent is to check how the device performs on FreeBSD compared to its specification, understand the reasons for any differences, and see how they can be worked around.
 
I am trying to use something like CrystalDiskMark on FreeBSD. There is a Linux project based on fio: https://github.com/JonMagon/KDiskMark I found that the current fio port does not support the libaio engine, which this project uses. There is a FreeBSD port for libaio, so I don't understand why. In theory it should be possible to compile it for FreeBSD, since C++ is supported and there is a port for Qt5 too, but I currently lack the knowledge to do it. CDM is what most people on Windows systems use for quickly testing hard drives. I am not saying it is optimal, since I don't understand how this should be done in theory; it is just what most people I know use, and for my current problem it would be useful, because I want to run the exact same test on an SSD that crashes on Windows systems.
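Since the FreeBSD fio port lacks libaio, one option is to approximate CrystalDiskMark's default sequential test directly with fio using the posixaio engine. This is only a sketch: the test file path and size are assumptions, and the profile is my reading of CDM's "SEQ1M Q8T1" preset (1 MiB blocks, queue depth 8, one thread):

```ini
; Rough approximation of CrystalDiskMark's default sequential read test,
; using posixaio because the FreeBSD fio port does not build libaio.
; filename and size are assumptions -- fio creates the file if it is missing.
[global]
ioengine=posixaio
direct=1            ; try to bypass caching, as CDM does
size=1g
runtime=30
time_based=1

[seq1m-q8t1-read]
filename=/mnt/nvme/cdm-testfile
rw=read
bs=1m
iodepth=8
numjobs=1
```

A matching write test would use `rw=write` in a second job section; the bandwidth fio reports should be roughly comparable to CDM's sequential numbers, though not identical, since the two tools measure and average differently.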
 