UFS: Is KVM virtio really that slow on FreeBSD?

rainer_d


Hi,

I have FreeBSD 12.0 running as a KVM guest (the host is, AFAIK, running Ubuntu 18) in an OpenStack setup (Rocky, IIRC, if that matters).

I went with this tutorial to create the image:


(I made the disk bigger and added some swap.)
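For reference, this is how I confirmed inside the FreeBSD guest that the disk really is attached via virtio-blk and not an emulated controller (just standard diagnostic commands, nothing from the tutorial):

```shell
# Inside the FreeBSD guest: list PCI devices and show the virtio entries.
pciconf -lv | grep -i -B 3 virtio
# virtio block devices attach as vtbd* (driver vtblk); check the boot messages.
dmesg | grep -i vtbd
```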

On a CentOS 7.6 guest (with XFS), I get:

Code:
[root@centos ~]# fio -filename=/mnt/test.fio_test_file -direct=1 -iodepth 4 -thread -rw=randrw -ioengine=psync -bs=4k -size 8G -numjobs=4 -runtime=60 -group_reporting -name=pleasehelpme
pleasehelpme: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=4
...
fio-3.1
Starting 4 threads
Jobs: 4 (f=4): [m(4)][100.0%][r=3827KiB/s,w=4180KiB/s][r=956,w=1045 IOPS][eta 00m:00s]
pleasehelpme: (groupid=0, jobs=4): err= 0: pid=24144: Wed Jun 19 16:15:59 2019
   read: IOPS=997, BW=3991KiB/s (4087kB/s)(234MiB/60001msec)
    clat (usec): min=79, max=116484, avg=2295.23, stdev=1618.01
     lat (usec): min=79, max=116485, avg=2296.29, stdev=1618.00
    clat percentiles (usec):
     |  1.00th=[  578],  5.00th=[ 1045], 10.00th=[ 1336], 20.00th=[ 1745],
     | 30.00th=[ 1876], 40.00th=[ 2114], 50.00th=[ 2245], 60.00th=[ 2343],
     | 70.00th=[ 2474], 80.00th=[ 2737], 90.00th=[ 3064], 95.00th=[ 3458],
     | 99.00th=[ 4228], 99.50th=[ 5669], 99.90th=[30540], 99.95th=[36963],
     | 99.99th=[56886]
   bw (  KiB/s): min=  784, max= 1208, per=25.01%, avg=997.89, stdev=63.93, samples=480
   iops        : min=  196, max=  302, avg=249.43, stdev=15.98, samples=480
  write: IOPS=1004, BW=4016KiB/s (4113kB/s)(235MiB/60001msec)
    clat (usec): min=50, max=98580, avg=1691.83, stdev=1299.06
     lat (usec): min=51, max=98581, avg=1693.03, stdev=1299.05
    clat percentiles (usec):
     |  1.00th=[  101],  5.00th=[  506], 10.00th=[  693], 20.00th=[ 1188],
     | 30.00th=[ 1303], 40.00th=[ 1598], 50.00th=[ 1745], 60.00th=[ 1827],
     | 70.00th=[ 1926], 80.00th=[ 2212], 90.00th=[ 2507], 95.00th=[ 2868],
     | 99.00th=[ 3163], 99.50th=[ 3326], 99.90th=[19792], 99.95th=[27395],
     | 99.99th=[38536]
   bw (  KiB/s): min=  704, max= 1416, per=25.01%, avg=1004.26, stdev=81.10, samples=480
   iops        : min=  176, max=  354, avg=251.01, stdev=20.26, samples=480
  lat (usec)   : 100=0.47%, 250=1.67%, 500=0.41%, 750=5.26%, 1000=2.29%
  lat (msec)   : 2=45.59%, 4=43.52%, 10=0.60%, 20=0.05%, 50=0.12%
  lat (msec)   : 100=0.01%, 250=0.01%
  cpu          : usr=0.27%, sys=1.12%, ctx=120397, majf=0, minf=5
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwt: total=59864,60247,0, short=0,0,0, dropped=0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=4

Run status group 0 (all jobs):
   READ: bw=3991KiB/s (4087kB/s), 3991KiB/s-3991KiB/s (4087kB/s-4087kB/s), io=234MiB (245MB), run=60001-60001msec
  WRITE: bw=4016KiB/s (4113kB/s), 4016KiB/s-4016KiB/s (4113kB/s-4113kB/s), io=235MiB (247MB), run=60001-60001msec

Disk stats (read/write):
  sda: ios=59760/60214, merge=0/3, ticks=136218/100971, in_queue=237163, util=99.89%

On the same volume type and the same hardware, with FreeBSD 12.0 (and UFS), I get:

Code:
root@freebsd:~ # fio -filename=/srv/test2.fio_test_file -direct=1 -iodepth 4 -thread -rw=randrw -ioengine=psync -bs=4k -size 8G -numjobs=4 -runtime=60 -group_reporting -name=pleasehelpme
pleasehelpme: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=4
...
fio-3.13
Starting 4 threads
Jobs: 4 (f=4): [m(4)][100.0%][r=1405KiB/s,w=1397KiB/s][r=351,w=349 IOPS][eta 00m:00s]
pleasehelpme: (groupid=0, jobs=4): err= 0: pid=100411: Wed Jun 19 18:27:20 2019
  read: IOPS=265, BW=1061KiB/s (1086kB/s)(62.2MiB/60006msec)
    clat (usec): min=8, max=195897, avg=8442.92, stdev=14584.38
     lat (usec): min=14, max=195903, avg=8450.28, stdev=14584.30
    clat percentiles (usec):
     |  1.00th=[  1188],  5.00th=[  1319], 10.00th=[  1401], 20.00th=[  1565],
     | 30.00th=[  2802], 40.00th=[  3359], 50.00th=[  4555], 60.00th=[  6063],
     | 70.00th=[  7832], 80.00th=[ 10552], 90.00th=[ 15270], 95.00th=[ 23725],
     | 99.00th=[ 88605], 99.50th=[109577], 99.90th=[145753], 99.95th=[164627],
     | 99.99th=[193987]
   bw (  KiB/s): min=  220, max= 1671, per=97.12%, avg=1029.49, stdev=70.41, samples=476
   iops        : min=   52, max=  416, avg=255.74, stdev=17.63, samples=476
  write: IOPS=272, BW=1092KiB/s (1118kB/s)(63.0MiB/60006msec)
    clat (usec): min=14, max=205868, avg=6382.93, stdev=13040.75
     lat (usec): min=20, max=205875, avg=6390.29, stdev=13040.80
    clat percentiles (usec):
     |  1.00th=[  1401],  5.00th=[  1778], 10.00th=[  2638], 20.00th=[  2835],
     | 30.00th=[  2966], 40.00th=[  3097], 50.00th=[  3294], 60.00th=[  3687],
     | 70.00th=[  4424], 80.00th=[  5604], 90.00th=[  8586], 95.00th=[ 15270],
     | 99.00th=[ 81265], 99.50th=[103285], 99.90th=[139461], 99.95th=[156238],
     | 99.99th=[183501]
   bw (  KiB/s): min=  291, max= 1980, per=97.24%, avg=1060.91, stdev=77.77, samples=476
   iops        : min=   70, max=  493, avg=263.70, stdev=19.46, samples=476
  lat (usec)   : 10=0.03%, 20=0.02%, 50=0.01%, 100=0.01%, 250=0.07%
  lat (usec)   : 500=0.01%, 750=0.01%, 1000=0.01%
  lat (msec)   : 2=15.74%, 4=38.50%, 10=30.85%, 20=9.64%, 50=2.94%
  lat (msec)   : 100=1.60%, 250=0.59%
  cpu          : usr=0.06%, sys=1.55%, ctx=74180, majf=0, minf=0
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=15911,16377,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=4

Run status group 0 (all jobs):
   READ: bw=1061KiB/s (1086kB/s), 1061KiB/s-1061KiB/s (1086kB/s-1086kB/s), io=62.2MiB (65.2MB), run=60006-60006msec
  WRITE: bw=1092KiB/s (1118kB/s), 1092KiB/s-1092KiB/s (1118kB/s-1118kB/s), io=63.0MiB (67.1MB), run=60006-60006msec
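To narrow down where the slowdown comes from, here is what I plan to try next (a sketch; `/dev/vtbd2` is the spare volume from the dc3dd test below, and `posixaio` is just an alternative fio ioengine, not something I have measured yet). Running against the raw device takes UFS out of the picture, and swapping the ioengine rules the syscall path in or out:

```shell
# Same random-rw job, but directly against the raw virtio disk.
# WARNING: this destroys any data on /dev/vtbd2.
fio -filename=/dev/vtbd2 -direct=1 -iodepth 4 -thread -rw=randrw \
    -ioengine=psync -bs=4k -size 8G -numjobs=4 -runtime=60 \
    -group_reporting -name=rawdisk

# Same job on the filesystem, but with POSIX AIO instead of psync.
fio -filename=/srv/test2.fio_test_file -direct=1 -iodepth 4 -thread \
    -rw=randrw -ioengine=posixaio -bs=4k -size 8G -numjobs=4 \
    -runtime=60 -group_reporting -name=aiotest
```

If the raw-device run is as slow as the UFS run, the problem is below the filesystem (virtio/host side); if it is much faster, UFS or its mount options are the suspect.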

Another example: writing to a raw disk with dc3dd, first on CentOS 7.6:

Code:
[root@centos ~]# dc3dd wipe=/dev/sdb

dc3dd 7.1.614 started at 2019-06-20 07:54:22 +0000
compiled options:
command line: dc3dd wipe=/dev/sdb
device size: 83886080 sectors (probed)
sector size: 512 bytes (probed)
42949672960 bytes (40 G) copied (100%), 342.37 s, 120 M/s                     

input results for pattern `00':
   83886080 sectors in

output results for device `/dev/sdb':
   83886080 sectors out

dc3dd completed at 2019-06-20 08:00:05 +0000

On FreeBSD 12.0:

Code:
root@freebsd:~ # dc3dd wipe=/dev/vtbd2

dc3dd 7.2.646 started at 2019-06-20 09:37:10 +0200
compiled options:
command line: dc3dd wipe=/dev/vtbd2
device size: 83886080 sectors (probed),   42,949,672,960 bytes
sector size: 512 bytes (probed)
 42949672960 bytes ( 40 G ) copied ( 100% ), 4585 s, 8.9 M/s                  

input results for pattern `00':
   83886080 sectors in

output results for device `/dev/vtbd2':
   83886080 sectors out

dc3dd completed at 2019-06-20 10:53:35 +0200
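Taken together, the two dc3dd runs show roughly a 13x gap in sequential write throughput (120 MB/s on the CentOS guest vs. 8.9 MB/s on the FreeBSD guest). A quick sanity check of that ratio from the reported numbers:

```shell
# Ratio of the two dc3dd throughputs reported above (MB/s).
awk 'BEGIN { printf "ratio: %.1fx\n", 120 / 8.9 }'
# prints: ratio: 13.5x
```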


What can I do about this?
Is this normal?