Application hangs on disk usage

I have been using a FreeBSD 11 installation for several months and have issues with some applications: for example Firefox, Thunderbird, or Qt Creator can hang and not respond to any action for ~10 s. I wanted to investigate this, but my BSD knowledge has run out and I need help.

I checked with top(1) whether the CPU usage was high, but it was only around 0-5%. Next I checked top with the I/O statistics, and it looks like this when the issue occurs:
Code:
$ top
1010 processes: 1 running, 1007 sleeping, 2 stopped
CPU:  2.5% user,  0.0% nice,  1.4% system,  0.4% interrupt, 95.7% idle
Mem: 728M Active, 316M Inact, 6650M Wired, 127K Buf, 100M Free
ARC: 846M Total, 127M MFU, 142M MRU, 5392K Anon, 73M Header, 498M Other
Swap: 8192M Total, 3252M Used, 4940M Free, 39% Inuse, 100K In

  PID USERNAME      THR PRI NICE   SIZE    RES STATE   C   TIME    WCPU COMMAND
 5103 user           26  20    0  4346M  4095M select  2  40.9H  15.08% VBoxHeadless
83673 user            1  22    0  3300M 18460K select  1   9:22  14.62% Xorg
86025 user            1  20    0 28312K  7956K CPU7    7   0:00   1.61% top
83758 user            4  20    0   295M 15576K select  2   0:27   1.39% xfce4-terminal
85178 root            2  20    0   222M 13556K select  6   0:14   1.01% gsmartcontrol
83848 user           62  20    0  2050M   910M swread  1  36:48   0.37% firefox
Code:
$ top -m io
  PID USERNAME       VCSW  IVCSW   READ  WRITE  FAULT  TOTAL PERCENT COMMAND
 5103 user            688      0      0      0      0      0   0.00% VBoxHeadless
83673 user             52      5      0      0      0      0   0.00% Xorg
83848 user            544      0      2      0    241    243  99.18% firefox
I didn't find any information about what the FAULT column is, but it could be the number of faulting disk operations. So my first question is: what does the FAULT column mean?
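One way to check whether those faults line up with paging activity (just a sketch, assuming the hang can be reproduced while watching) is to run vmstat alongside top:
Code:
# Refresh once per second; the 'pi'/'po' columns under "page" count
# pages paged in/out.  A burst of page-ins while firefox sits in the
# swread state would point at swap rather than the pool itself.
$ vmstat -w 1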

Next I checked my disks. I have ZFS on two mirrored disks plus one SSD used as cache and log device (the machine has 8 GB of RAM):
Code:
$ zpool list -v
NAME           SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
zmain          460G   155G   305G         -    23%    33%  1.00x  ONLINE  -
  mirror       460G   155G   305G         -    23%    33%
    ada2p3        -      -      -         -      -      -
    ada1p3        -      -      -         -      -      -
log               -      -      -         -      -      -
  gpt/log0    7.94G   680K  7.94G         -    10%     0%
cache             -      -      -         -      -      -
  gpt/cache0   111G  33.5G  77.7G         -     0%    30%
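For reference, zpool status -v also reports per-device read/write/checksum error counters, which would show up if one of the disks were failing:
Code:
# Show pool health and per-device error counters for the pool above
$ zpool status -v zmain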
Code:
$ sudo camcontrol devlist | grep ada
<SAMSUNG SSD PM810 2.5" 7mm 128GB AXM08D1Q>  at scbus0 target 0 lun 0 (ada0,pass0)
<ST9500420AS D005SDM1>             at scbus1 target 0 lun 0 (ada1,pass1)
<WDC WD5000AAKS-22A7B2 01.03B01>   at scbus3 target 0 lun 0 (ada2,pass3)
I started the gsmartcontrol app and performed full self-tests of all 3 disks, but there were no errors.
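The same long self-tests can also be started from the command line with smartmontools (assuming the sysutils/smartmontools package is installed; device names taken from the camcontrol output above):
Code:
# Start the extended (long) self-test on one of the mirror disks
$ sudo smartctl -t long /dev/ada1
# ...and later read back the results and the error log
$ sudo smartctl -a /dev/ada1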

Code:
$ uname -a
FreeBSD AMDC727.local 11.0-RELEASE-p2 FreeBSD 11.0-RELEASE-p2 #0: Mon Oct 24 06:55:27 UTC 2016     root@amd64-builder.daemonology.net:/usr/obj/usr/src/sys/GENERIC  amd64
I know there is a similar thread, Thread SSD-hangs-on-heavy-writing.58266, but I don't know if it is the same issue, because my swap is on the mirrored disks, not on the SSD. Currently I don't know what could be causing this.
Where and how can I find the cause of those hangs?
 
Perhaps you need more RAM. ZFS needs a lot of memory, and your VBoxHeadless process is eating 4095M of resident memory, so there is swapping.
I think the lack of responsiveness is due to pages being swapped in from the swap area.
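A quick way to confirm it (just a sketch, the numbers will obviously differ on your machine):
Code:
# Show swap usage per device in human-readable form
$ swapinfo -h
# Your top output already shows it: 3252M of 8192M swap used (39% Inuse)
# while only 100M of RAM is free.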
 
You can use gstat(8) to find out what is happening on your swap partition.
As dlegrand said:
You either need more RAM, or you tune ZFS so it consumes only 2 GB of RAM in order to avoid that swapping.
I would prefer more RAM.
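Something like this is what I mean for gstat (a sketch; adjust the filter to whichever devices actually hold your swap partition):
Code:
# Refresh every second, showing only the mirror disks; a high %busy and
# large ms/r on the swap partition during a hang would confirm that
# swap-in is the bottleneck.
$ gstat -I 1s -f 'ada[12]'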
 
You're right, I forgot about the VM running in the background. I will try to upgrade my machine to 16 GB of RAM and test it then.
 
After the memory upgrade, you could limit the memory used by the ZFS ARC by adding vfs.zfs.arc_max="4G" to /boot/loader.conf.
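Roughly like this (a sketch; pick a limit that fits your workload):
Code:
# /boot/loader.conf -- cap the ZFS ARC at 4 GB (takes effect after a reboot)
vfs.zfs.arc_max="4G"
# Verify the active limit afterwards (value is reported in bytes):
$ sysctl vfs.zfs.arc_max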
 