FreeBSD 11.2 vs 12.1 - vast difference in performance profile.

I am stopping short of calling it a performance degradation, but that is also quite probable.

I have a large set of FreeBSD nodes that I am migrating from 11.2 to 12.1, while also moving them from AWS t2.medium instances to comparable t3.medium instances. I am trying to account for the FreeBSD side of the performance delta.

Reference System:
  • FreeBSD 11.2
  • Running on an AWS t2.medium instance (2 vCPUs, 4 GB RAM)
  • Running MariaDB 10.2 as a slave
  • Average CPU usage: 24% user, 9% system, 1% interrupt (see graph)
  • vm.pmap.pti=1
t2medium_FreeBSD11-2.png

System under Test:
  • FreeBSD 12.1
  • Running on an AWS t2.medium instance (2 vCPUs, 4 GB RAM)
  • Running MariaDB 10.2 as a slave, in parallel with the reference system
  • Average CPU usage: 54% user, 50% system, 1.5% interrupt (see graph)
  • First half of the graph is with the default vm.pmap.pti; the latter part with vm.pmap.pti=0
t2medium_FreeBSD12-1.png

Other system of interest (ultimate target):
  • FreeBSD 12.1
  • Running on an AWS t3.medium instance (2 vCPUs, 4 GB RAM)
  • Running MariaDB 10.2 as a slave, in parallel with the reference system
  • Average CPU usage: 28% user, 51% system, 4% interrupt (see graph)
  • First half of the graph is with the default vm.pmap.pti; the latter part with vm.pmap.pti=0
t3medium_FreeBSD12-1.png
Per the graphs:
  1. System CPU activity increases dramatically from 11.2 to 12.1.
  2. The difference is partly mitigated on a t2.medium using vm.pmap.pti=0, but remains substantial.
  3. Importantly, it still does the work. Metrics at the app level (internal MySQL replication metrics) show that the node does keep up with the master, even slightly better than the reference system; a quick check is sketched after this list. Maybe it just looks busy?
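A minimal sketch of the check I mean, assuming a local MariaDB with client credentials already configured; Seconds_Behind_Master and the running flags are standard replication status fields:

Code:
# Confirm the slave is actually keeping up rather than just looking busy
mysql -e 'SHOW SLAVE STATUS\G' | \
    egrep 'Seconds_Behind_Master|Slave_(IO|SQL)_Running|Exec_Master_Log_Pos'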
Ultimately I care most about the last case (t3/12.1), but since it changes two variables at once, I am going to focus on the first two systems and what could account for a dramatic jump in system CPU activity from 11.2 to 12.1.
The servers are all slaves off the same master; their entire purpose is to read binlogs and write changes to disk.

My suspicion is something related to the Meltdown/Spectre mitigations (hence the vm.pmap.pti change). Note that this is running on an isolated server with no external access, so I can disable the Meltdown/Spectre mitigations without too much concern.

On all nodes:
Code:
hw.ibrs_disable=1
hw.mds_disable=0
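For reference, this is how I check the mitigation state (these sysctls exist on 12.1; note that vm.pmap.pti is a boot-time tunable, so changing it means editing /boot/loader.conf and rebooting, unlike the runtime hw.* knobs):

Code:
# Inspect the current mitigation knobs
sysctl vm.pmap.pti hw.ibrs_disable hw.ibrs_active hw.mds_disable hw.mds_disable_state
# vm.pmap.pti can only be set at boot:
echo 'vm.pmap.pti=0' >> /boot/loader.conf
# hw.ibrs_disable is a runtime sysctl; persist it across reboots:
echo 'hw.ibrs_disable=1' >> /etc/sysctl.conf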



My questions are:
  1. Why is system CPU time so much higher on 12.1 than on 11.2?
  2. Is there a security mitigation I can disable to recover that CPU time?
  3. What is the best way to track this down? Has anyone else experienced something similar?
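For #3, one approach (a sketch using stock FreeBSD DTrace; the sampling rate and duration are arbitrary) is to profile kernel stacks while the slave is under load and see whether the time lands in PTI trampolines, hypervisor block I/O, or elsewhere:

Code:
# First pass: top -SH shows kernel threads alongside userland
top -SH
# Sample on-CPU kernel stacks (~997 Hz) for 10 seconds
dtrace -n 'profile-997 /arg0/ { @[stack()] = count(); } tick-10s { exit(0); }'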
 
Update #1

Currently testing the theory that hyperthreading (which I think accounts for the CPU vs vCPU distinction on AWS) carries a large overhead for this kind of workload, and that the cost is more pronounced on 12.1 than on 11.2.

I may not have answers to the above, but perhaps disabling hyperthreading is the resolution regardless.
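In case it helps anyone, disabling SMT on FreeBSD is a boot-time tunable (a sketch; note that on a 2-vCPU instance, where I believe the two vCPUs are two threads of one core, this leaves a single usable CPU):

Code:
# How many logical CPUs and threads per core the kernel sees
sysctl hw.ncpu kern.smp.threads_per_core
# Keep the scheduler off hyperthread siblings; takes effect at next boot
echo 'machdep.hyperthreading_allowed=0' >> /boot/loader.conf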
 
Update #2
Based on diskinfo, it looks like 12.1 delivers higher disk throughput at the cost of higher latency. An ACID-compliant DB slave does a large number of small disk writes and syncs, and the higher latency is a pain point (one possible mitigation is sketched after the summary below).

From the output below, going from 11.2 to 12.1:
calculated command overhead = 0.294 => 0.453 msec/sector
Seek times ~50% higher
Small writes are slower; large block writes are faster.
(diskinfo derives the command overhead as the per-sector time for 20480 single-sector reads minus the per-sector time for one 10MB read: 0.296 - 0.002 = 0.294 on 11.2 vs 0.456 - 0.002 ≈ 0.453 on 12.1.)
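Since the pain is sync-write latency, a possible mitigation on a slave that can always be rebuilt from the master (a sketch, assuming InnoDB; both are standard, dynamic MariaDB variables that trade durability for fewer syncs):

Code:
# Flush the InnoDB log ~once per second instead of on every commit,
# and let the OS schedule binlog syncs. A crash can lose up to ~1s
# of applied transactions, which a re-syncable slave can tolerate.
mysql -e 'SET GLOBAL innodb_flush_log_at_trx_commit = 2;'
mysql -e 'SET GLOBAL sync_binlog = 0;'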


Note that the disks are labeled differently: nvd (NVMe, on the Nitro-based t3) vs xbd (Xen blkfront, on the t2).



t2 / 11.2
Code:
# uname -a && diskinfo -c -t -i -S -w /dev/xbd6
FreeBSD backsnapshot-main-pr.m2msuite.com 11.2-RELEASE-p3 FreeBSD 11.2-RELEASE-p3 #0: Thu Sep  6 07:14:16 UTC 2018     root@amd64-builder.daemonology.net:/usr/obj/usr/src/sys/GENERIC  amd64
/dev/xbd6
    512             # sectorsize
    215822106624    # mediasize in bytes (201G)
    421527552       # mediasize in sectors
    0               # stripesize
    0               # stripeoffset
                    # Disk descr.
                    # Disk ident.
    No              # TRIM/UNMAP support
    Unknown         # Rotation rate in RPM

I/O command overhead:
    time to read 10MB block      0.045218 sec    =    0.002 msec/sector
    time to read 20480 sectors   6.071804 sec    =    0.296 msec/sector
    calculated command overhead            =    0.294 msec/sector

Seek times:
    Full stroke:      250 iter in   0.082659 sec =    0.331 msec
    Half stroke:      250 iter in   0.072609 sec =    0.290 msec
    Quarter stroke:      500 iter in   0.156660 sec =    0.313 msec
    Short forward:      400 iter in   0.120875 sec =    0.302 msec
    Short backward:      400 iter in   0.131271 sec =    0.328 msec
    Seq outer:     2048 iter in   0.582768 sec =    0.285 msec
    Seq inner:     2048 iter in   0.583594 sec =    0.285 msec

Transfer rates:
    outside:       102400 kbytes in   0.651904 sec =   157078 kbytes/sec
    middle:        102400 kbytes in   0.638761 sec =   160310 kbytes/sec
    inside:        102400 kbytes in   0.638390 sec =   160404 kbytes/sec

Asynchronous random reads:
    sectorsize:     12366 ops in    3.041485 sec =     4066 IOPS
    4 kbytes:       12366 ops in    3.041514 sec =     4066 IOPS
    32 kbytes:       7942 ops in    3.065260 sec =     2591 IOPS
    128 kbytes:      2081 ops in    3.261250 sec =      638 IOPS

Synchronous random writes:
     0.5 kbytes:    676.9 usec/IO =      0.7 Mbytes/s
       1 kbytes:    723.7 usec/IO =      1.3 Mbytes/s
       2 kbytes:    854.8 usec/IO =      2.3 Mbytes/s
       4 kbytes:    971.8 usec/IO =      4.0 Mbytes/s
       8 kbytes:   1049.1 usec/IO =      7.4 Mbytes/s
      16 kbytes:   1037.1 usec/IO =     15.1 Mbytes/s
      32 kbytes:   1081.1 usec/IO =     28.9 Mbytes/s
      64 kbytes:   1428.6 usec/IO =     43.7 Mbytes/s
     128 kbytes:   1590.2 usec/IO =     78.6 Mbytes/s
     256 kbytes:   2277.5 usec/IO =    109.8 Mbytes/s
     512 kbytes:   4257.3 usec/IO =    117.4 Mbytes/s
    1024 kbytes:  12437.6 usec/IO =     80.4 Mbytes/s
    2048 kbytes:  28811.7 usec/IO =     69.4 Mbytes/s
    4096 kbytes:  61584.8 usec/IO =     65.0 Mbytes/s
    8192 kbytes: 127024.9 usec/IO =     63.0 Mbytes/s

t3 / 12.1

Code:
# uname -a && diskinfo -c -t -i -S -w /dev/nvd1
FreeBSD backsnapshot2-main-pr.m2msuite.com 12.1-RELEASE-p2 FreeBSD 12.1-RELEASE-p2 GENERIC  amd64
/dev/nvd1
    512             # sectorsize
    215822106624    # mediasize in bytes (201G)
    421527552       # mediasize in sectors
    0               # stripesize
    0               # stripeoffset
    Amazon Elastic Block Store    # Disk descr.
    vol0e0c439daf825d391    # Disk ident.
    No              # TRIM/UNMAP support
    0               # Rotation rate in RPM

I/O command overhead:
    time to read 10MB block      0.046433 sec    =    0.002 msec/sector
    time to read 20480 sectors   9.333365 sec    =    0.456 msec/sector
    calculated command overhead            =    0.453 msec/sector

Seek times:
    Full stroke:      250 iter in   0.114019 sec =    0.456 msec
    Half stroke:      250 iter in   0.114488 sec =    0.458 msec
    Quarter stroke:      500 iter in   0.221293 sec =    0.443 msec
    Short forward:      400 iter in   0.192122 sec =    0.480 msec
    Short backward:      400 iter in   0.183169 sec =    0.458 msec
    Seq outer:     2048 iter in   0.882630 sec =    0.431 msec
    Seq inner:     2048 iter in   0.947622 sec =    0.463 msec

Transfer rates:
    outside:       102400 kbytes in   0.530318 sec =   193092 kbytes/sec
    middle:        102400 kbytes in   0.482080 sec =   212413 kbytes/sec
    inside:        102400 kbytes in   0.509949 sec =   200804 kbytes/sec

Asynchronous random reads:
    sectorsize:     12126 ops in    3.042575 sec =     3985 IOPS
    4 kbytes:       12126 ops in    3.042559 sec =     3985 IOPS
    32 kbytes:      12126 ops in    3.042187 sec =     3986 IOPS
    128 kbytes:      8101 ops in    3.063518 sec =     2644 IOPS

Synchronous random writes:
     0.5 kbytes:   1030.0 usec/IO =      0.5 Mbytes/s
       1 kbytes:   1388.1 usec/IO =      0.7 Mbytes/s
       2 kbytes:   1003.9 usec/IO =      1.9 Mbytes/s
       4 kbytes:   1024.1 usec/IO =      3.8 Mbytes/s
       8 kbytes:   1016.2 usec/IO =      7.7 Mbytes/s
      16 kbytes:   1058.5 usec/IO =     14.8 Mbytes/s
      32 kbytes:   1163.6 usec/IO =     26.9 Mbytes/s
      64 kbytes:   1319.0 usec/IO =     47.4 Mbytes/s
     128 kbytes:   1652.6 usec/IO =     75.6 Mbytes/s
     256 kbytes:   2163.0 usec/IO =    115.6 Mbytes/s
     512 kbytes:   2584.8 usec/IO =    193.4 Mbytes/s
    1024 kbytes:   3482.0 usec/IO =    287.2 Mbytes/s
    2048 kbytes:   5520.5 usec/IO =    362.3 Mbytes/s
    4096 kbytes:  12061.1 usec/IO =    331.6 Mbytes/s
    8192 kbytes:  28095.8 usec/IO =    284.7 Mbytes/s
 
Update #3
I need to restart this thread with an AWS-specific context and title. Is this the right forum for it?
 