System slow after update 12.2 -> 13.0

Hello everyone :)

I upgraded my FreeBSD 12.2 to 13.0 last week, and everything went well, as always.
But for the first time, since the update the system feels quite slow...

Facts:
- I'm using it remotely over SSH, and sometimes it hangs for a few milliseconds before coming back;
- In MySQL, very simple queries can take up to 1 second (instead of 0.001 before);
- When restarting Apache with "apachectl restart", stopping and then starting it again seems slow;
- The server hosts websites: Apache seems to answer more slowly than before...

Question: what kind of tool or action would help me understand why my system lacks responsiveness?
I'm quite out of ideas right now; I just ran top and nothing unusual came out:

Code:
last pid: 30955;  load averages:  0.09,  0.20,  0.82
62 processes:  1 running, 61 sleeping
CPU:  1.5% user,  0.0% nice,  0.1% system,  0.0% interrupt, 98.3% idle
Mem: 713M Active, 11G Inact, 1241M Laundry, 2116M Wired, 1337M Buf, 729M Free
Swap: 4096M Total, 616M Used, 3480M Free, 15% Inuse

Thanks :)

--
Léo.
 
You can use systat(1) to check the tps on the disks. Type systat, then :help to see the list of displays, and switch between them with :vmstat or :iostat.

P.S.
Also check your network load using :ifstat.

P.S.2
There's some swap in use, so at some point a process required more memory than your server had free. In an ideal world your swap usage should be 0%. You can run top and press "w" to see per-process swap usage, to identify which process is swapping.
 
I had a somewhat similar problem and it turned out to be a bad SSD: not reporting any errors, just being extremely slow at times (not always).
It was a pain in the arse to trace (the disk was part of a hardware RAID (ciss) and never reported any kind of error or timeout).
 
A bit of swap in use doesn't seem like a problem, and the load is very low too. So I very much doubt the apparent slowness is caused by high load or heavy swapping.
 
Is there a lot of (pending) disk I/O, and does the system use ZFS? If so, do you see processes in state zfs te when this happens?

I don't have a solution, just curious, because I've only very rarely seen such behavior (the CPU going idle and the system "stuck")…
 
Hey there,

Thanks everyone. I'm using UFS; this system was installed as 10.3 and has been upgraded cleanly ever since, without any problem like the one I'm encountering now.
It's a dedicated server at OVH, with a SATA drive:
Code:
root@vendome# egrep 'da[0-9]|cd[0-9]' /var/run/dmesg.boot
[...]
ada0: <HGST HUS724020ALA640 MF6OABY0> ATA8-ACS SATA 3.x device

Here are some answers to what you asked:

systat:
Code:
                    /0   /1   /2   /3   /4   /5   /6   /7   /8   /9   /10
     Load Average   ||

                    /0%  /10  /20  /30  /40  /50  /60  /70  /80  /90  /100
root           idle XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
root           idle XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
root           idle XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
root           idle XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
root           idle XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
root           idle XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
root           idle XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
root           idle XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
root           intr XXXXXXXX
www           httpd XXXX
www           httpd XXXX
www           httpd XXXX
mysql        mysqld XXX
mysql        mysqld XXX
mysql        mysqld XXX
www           httpd XX
mysql        mysqld XX
mysql        mysqld X
mysql        mysqld X
mysql        mysqld X
www           httpd X
www           httpd X

vmstat:
Code:
1 users    Load  0.28  0.25  0.18                  Apr 23 14:11
   Mem usage:  94%Phy  4%Kmem                           VN PAGER   SWAP PAGER
Mem:      REAL           VIRTUAL                        in   out     in   out
       Tot   Share     Tot    Share     Free   count
Act  2447M    152M   8339M     163M     936M   pages
All  2661M    365M   8942M     669M                       ioflt  Interrupts
Proc:                                                     cow    2631 total
  r   p   d    s   w   Csw  Trp  Sys  Int  Sof  Flt   205 zfod        ehci0 uhci
             123       12K  208   8K   3K   1K  205       ozfod       uhci2 uhci
                                                         %ozfod  1710 hpet0 20
 0.3%Sys   0.0%Intr  4.0%User  0.0%Nice 95.7%Idle         daefr    20 em0:rxq0
|    |    |    |    |    |    |    |    |    |    |       prcfr     2 em0:rxq1
>>                                                    353 totfr       em0:aq 26
                                        37 dtbuf          react   899 ahci0:ch0
Namei     Name-cache   Dir-cache    350006 maxvn          pdwak
   Calls    hits   %    hits   %    324229 numvn      731 pdpgs
   27487   27480 100                259908 frevn          intrn
                                                    2021M wire
Disks  ada0 pass0 pass1                             1715M act
KB/t  33.40  0.00  0.00                            11179M inact
tps     899     0     0                              197M laund
MB/s  29.32  0.00  0.00                              936M free
%busy    74     0     0                             1277M buf

iostat:
Code:
/0   /1   /2   /3   /4   /5   /6   /7   /8   /9   /10
     Load Average   |

          /0%  /10  /20  /30  /40  /50  /60  /70  /80  /90  /100
cpu  user|
     nice|
   system|
interrupt|
     idle|XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX

          /0%  /10  /20  /30  /40  /50  /60  /70  /80  /90  /100
ada0  MB/s
      tps|
pass0 MB/s
      tps|
pass1 MB/s
      tps|

ifstat:
Code:
/0   /1   /2   /3   /4   /5   /6   /7   /8   /9   /10
     Load Average   |

      Interface           Traffic               Peak                Total
            lo0  in     25.351 KB/s         25.351 KB/s          584.012 MB
                 out    25.351 KB/s         25.351 KB/s          584.012 MB

            em0  in      3.074 KB/s          3.074 KB/s          859.061 MB
                 out     2.060 KB/s          2.060 KB/s            4.149 GB

swap:
Running top and pressing "w" shows "0B" in every swap column.
htop shows:
Code:
Swp[||||||||||||||||||||||||                                                                                                                  638M/4.00G]

I'm aware this is a tricky one, because nothing is actually broken; everything just runs slowly compared to before...
Is there any kind of test I could run to actually measure this... "slowness"? :)
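For a rough number, a sequential-write test with dd gives a baseline you can compare against a healthy machine (just a sketch; the file path and size here are arbitrary, and you should run it on the filesystem you suspect):

```shell
# Crude sequential-write check: dd prints elapsed time and bytes/sec
# on stderr when it finishes. conv=fsync forces the data to disk so
# the number reflects the drive, not just the buffer cache.
dd if=/dev/zero of=/tmp/ddtest.bin bs=1048576 count=64 conv=fsync 2>&1
rm /tmp/ddtest.bin
```

For reads, something like `dd if=/some/large/file of=/dev/null bs=1048576` works the same way; remember to delete the test file afterwards.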

Thanks for all your ideas and help here.

--
Léo.
 
Doh. Got it.
I tried this one:
# top -m io -o total -b
Code:
last pid: 95536;  load averages:  0.14,  0.17,  0.13  up 4+07:31:26    15:23:00
59 processes:  1 running, 58 sleeping
CPU:  2.1% user,  0.2% nice,  0.2% system,  0.0% interrupt, 97.5% idle
Mem: 1658M Active, 12G Inact, 201M Laundry, 2031M Wired, 1297M Buf, 350M Free
Swap: 4096M Total, 638M Used, 3458M Free, 15% Inuse

  PID USERNAME     VCSW  IVCSW   READ  WRITE  FAULT  TOTAL PERCENT COMMAND
 5449 mysql     1112922976 1010608 12193600 163513410  10620 175717630  99.98% mysqld
70186 clamav     22018    575  21375     17     30  21422   0.01% clamd
60077 clamav     10016      5   9912    895     26  10833   0.01% freshclam
49498 root      916016  29798   1384      1   3951   5336   0.00% goaccess
[...]

I didn't paste the other PIDs because they were all at 0.00%.
From what I see here, it looks like mysqld is eating all of my disk resources; am I correct?
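(As a sanity check, the same ranking can be recomputed from a captured top -m io table with a little awk. The sample rows below are copied from the output above, and the column positions assume that exact format, TOTAL in the 8th field and COMMAND last:)

```shell
# Sum the TOTAL I/O column per command name from a saved
# "top -m io -o total -b" process table (header skipped with NR > 1).
awk 'NR > 1 { total[$NF] += $8 } END { for (p in total) printf "%-12s %d\n", p, total[p] }' <<'EOF'
  PID USERNAME     VCSW  IVCSW   READ  WRITE  FAULT  TOTAL PERCENT COMMAND
 5449 mysql     1112922976 1010608 12193600 163513410  10620 175717630  99.98% mysqld
70186 clamav     22018    575  21375     17     30  21422   0.01% clamd
EOF
```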

What are your recommendations?
- portmaster -f mysql-server to rebuild it?
- try to trace mysqld's activity to understand exactly what's going on?
- reboot? :)
- something else?

Thanks !

--
Léo.
 
0. Make a mysqldump --all-databases.
1. Stop the mysql service and restart it.

2. Run top, then press "m":
Code:
last pid: 65578;  load averages:  0.69,  0.31,  0.17             up 140+01:56:14 15:30:33
43 processes:  1 running, 42 sleeping
CPU:  1.3% user,  0.0% nice,  1.3% system,  1.5% interrupt, 95.9% idle
Mem: 30M Active, 20G Inact, 148M Laundry, 41G Wired, 696M Buf, 1070M Free
ARC: 24G Total, 5822M MFU, 17G MRU, 2912K Anon, 200M Header, 1281M Other
     19G Compressed, 65G Uncompressed, 3.39:1 Ratio
Swap: 16G Total, 324M Used, 16G Free, 1% Inuse

  PID USERNAME        VCSW  IVCSW   READ  WRITE  FAULT  TOTAL PERCENT COMMAND
65576 root            4580      2    473      0      0    473 100.00% rsync
 1950 root            1205      7      0      0      0      0   0.00% VBoxHeadless
 1542 root             491      2      0      0      0      0   0.00% VBoxHeadless
65569 root               2      0      0      0      0      0   0.00% top
 1507 root              41      0      0      0      0      0   0.00% VBoxSVC
  871 mysql             12      0      0      0      0      0   0.00% mysqld
 1505 root              11      0      0      0      0      0   0.00% VBoxXPCOMIPCD
64923 root               1      0      0      0      0      0   0.00% smbd
  714 root               2      0      0      0      0      0   0.00% ntpd
65525 root               2      0      0      0      0      0   0.00% sshd
  910 root               1      0      0      0      0      0   0.00% sendmail
Cumulative figures are not so useful.
 
Hey fcorbelli,

Here is the output:

Bash:
root@vendome # vmstat 1 100
procs    memory    page                      disks     faults       cpu
r  b  w  avm  fre  flt  re  pi  po   fr   sr ad0 pa0   in   sy   cs us sy id
0  0  0 7.9G 339M 1.4K 114   0   0 1.9K 1.6K   0   0 1814 5.5K 8.1K  2  0 97
0  0  0 7.9G 339M    0   0   0   0   72  709 175   0 1254  151 2.7K  0  0 100
0  0  0 7.9G 339M    0   0   0   0    0  782   0   0 1088  175 2.4K  0  0 100
0  0  0 7.9G 339M    0   0   0   0    0  709   1   0 1148  150 2.4K  0  0 100
0  0  0 7.9G 339M    0   0   0   0    0  781   0   0 1155  173 2.4K  0  0 100
0  0  0 7.9G 339M    3   0   0   0    0  781   0   0 1119  184 2.4K  0  0 100
0  0  0 7.9G 339M   22   0   0   0 1.0K  725   0   0 1205 5.4K 3.0K  2  0 98
0  0  0 7.9G 339M    0   0   0   0    0  745   4   0 1162  243 2.4K  0  0 100
0  0  0 7.9G 339M    0   0   0   0    0  744   0   0 1145  173 2.4K  0  0 100
2  0  0 7.9G 338M 3.7K   0   0   0 1.4K 1.7K   8   0 1183 4.8K 3.9K  6  0 93
1  0  0 7.9G 341M   41   0   0   0 2.1K 2.1K 1398   0 3284 9.1K  16K  2  1 97
0  0  0 7.9G 340M    0   0   0   0  520  714 856   0 2663 5.9K  11K  1  1 98
1  0  0 7.9G 367M  16K  97   0   0  26K  26K 1299   0 3279  12K  16K  9  1 90
1  0  0 7.9G 367M    0   0   0   0  192  727 1661   0 3630  11K  19K  3  0 97
1  0  0 7.9G 367M   34   0   0   0  224  716 632   0 2462 5.1K 9.8K  1  0 99
0  0  0 7.9G 367M    0   0   0   0  232  721 984   0 2770 7.3K  13K  2  0 98
2  0  0 7.9G 367M    5   0   0   0  192  743 230   0 1667 2.8K 5.4K 11  0 88
2  0  0 7.9G 365M 1.1K   0   0   0  748  723 1250   0 3156  13K  17K  5  1 95
0  0  0 7.9G 365M    5   0   0   0  352  666 916   0 2625 6.5K  12K  1  0 98
0  0  0 7.9G 364M    0   0   0   0  224  731 826   0 2747 6.5K  12K  2  0 98
0  0  0 7.9G 363M    0   0   0   0  208  722 1028   0 2855 7.6K  13K  2  0 97
0  0  0 7.9G 362M    0   0   0   0  200  726 640   0 2560 5.4K  11K  1  0 99
0  0  0 7.9G 360M    0   0   0   0  232  715 1463   0 3319  10K  17K  2  0 98
0  0  0 7.9G 359M    0   0   0   0 2.2K  740 701   0 2472 5.6K  10K  1  0 98
0  0  0 7.9G 359M    0   0   0   0  192  750  66   0 1202  443 2.9K  0  0 100
0  0  0 7.9G 359M    0   0   0   0  192  745  59   0 1198  236 2.7K  0  0 100
1  0  0 7.9G 358M  491   0   0   0 1.3K  743 357   0 1499 2.4K 4.1K  1  0 99
0  0  0 7.9G 358M    0   0   0   0 1.2K  749 313   0 1375  572 3.5K  0  0 100
procs    memory    page                      disks     faults       cpu
r  b  w  avm  fre  flt  re  pi  po   fr   sr ad0 pa0   in   sy   cs us sy id
0  0  0 7.9G 358M    0   0   0   0 1.6K  662 434   0 1492  688 4.0K  0  0 100
1  0  0 7.9G 358M  499   0   0   0 2.0K  746 421   0 1633 2.7K 4.6K  6  0 93
0  0  0 7.9G 358M    0   0   0   0  112  750  29   0 1102  258 2.5K  0  0 100
0  0  0 7.9G 358M    0   0   0   0    0  714   0   0 1097  180 2.3K  0  0 100
0  0  0 7.9G 358M    0   0   0   0 1.3K  710 315   0 1451  582 3.7K  0  0 100
0  0  0 7.9G 358M    0   0   0   0 1.2K  710 308   0 1378  530 3.5K  0  0 100
0  0  0 7.9G 358M    0   0   0   0  961  741 268   0 1354  495 3.4K  0  0 100
1  0  0 7.9G 358M    1   0   0   0  976  747 233   0 1415 3.6K 3.6K  1  0 99
0  0  0 7.9G 358M    0   0   0   0  480  745 165   0 1278  452 3.1K  0  0 100
1  0  0 7.9G 360M  741   0   0   0 1.7K  741 101   0 1340 4.3K 3.9K  2  0 98
0  0  0 7.9G 361M    0   0   0   0  240  710  67   0 1185  444 2.8K  0  0 100
0  0  0 7.9G 361M    0   0   0   0   16  781   7   0 1138  194 2.4K  0  0 100
0  0  0 7.9G 361M    0   0   0   0  200  781  44   0 1180  206 2.6K  0  0 100
0  0  0 7.9G 361M    7   0   0   0  120  710  35   0 1155 3.3K 2.7K  1  0 99
0  0  0 7.9G 361M    0   0   0   0    0  781   2   0 1130  182 2.4K  0  0 100
0  0  0 7.9G 361M    0   0   0   0    0  710   0   0 1046  151 2.2K  0  0 100
0  0  0 7.9G 361M    0   0   0   0    0  710   1   0 1092  159 2.3K  0  0 100
0  0  0 7.9G 361M    0   0   0   0    0  781   0   0 1133  166 2.4K  0  0 100
0  0  0 7.9G 361M    0   0   0   0    0  781   0   0 1132 2.0K 2.6K  1  0 99
0  0  0 7.9G 361M    0   0   0   0    0  710   1   0 1132  171 2.4K  0  0 100
0  0  0 7.9G 362M    0   0   0   0  216  710  10   0 1084  140 2.3K  0  0 100
0  0  0 7.9G 362M    0   0   0   0    0  781   0   0 1073  171 2.3K  0  0 100
0  0  0 7.9G 362M    0   0   0   0    0  710   0   0 1103  145 2.4K  0  0 100
0  0  0 7.9G 362M    0   0   0   0    0  710   0   0 1069  170 2.3K  0  0 100
0  0  0 7.9G 362M    0   0   0   0    0  782   0   0 1073  167 2.3K  0  0 100
0  0  0 7.9G 362M    0   0   0   0    0  709   1   0 1085  145 2.3K  0  0 100
0  0  0 7.9G 364M    6   0   0   0  756  711   0   0 1084  186 2.3K  0  0 100
0  0  0 7.9G 364M    0   0   0   0    1  780   2   0 1079  146 2.3K  0  0 100
procs    memory    page                      disks     faults       cpu
r  b  w  avm  fre  flt  re  pi  po   fr   sr ad0 pa0   in   sy   cs us sy id
1  0  0 7.9G 364M 1.6K   0   0   0  162  710   0   0 1115 7.4K 2.9K  2  0 98
0  0  0 7.9G 369M  852   0   1   0 2.5K  728 192   0 7472  71K  70K  3  1 96
0  0  0 7.9G 369M    0   0   0   0   40  739  19   0 1072  198 2.4K  0  0 100
1  0  0 7.9G 369M   29   0   0   0   32  670  11   0 1089 2.5K 2.5K  1  0 99
0  0  0 7.9G 369M    0   0   0   0   32  747  13   0 1087  366 2.3K  0  0 100
0  0  0 7.9G 369M    1   0   0   0    0  669   0   0 1061  143 2.2K  0  0 100
0  0  0 7.9G 369M    0   0   0   0    0  747   0   0 1099  175 2.3K  0  0 100
0  0  0 7.9G 369M    0   0   0   0    0  701   0   0 1070  129 2.3K  0  0 100
0  0  0 7.9G 369M  495   0   0   0    0  707   0   0 1100 3.3K 2.5K  1  0 99
0  0  0 7.9G 369M    0   0   0   0    0  781   0   0 1094  157 2.4K  0  0 100
1  0  0 7.9G 324M  14K   0   0   0 1.5K 2.2K   1   0 1102 1.2K 2.5K  2  0 98
0  0  0 7.9G 351M 6.1K 185   0   0  14K  14K 874   0 2518 8.6K  13K  7  1 92
0  0  0 7.9G 351M    6   0   0   0  744  710 653   0 2532 8.6K  10K  3  0 97
1  0  0 7.8G 358M    5   0   0   0 3.2K  726 398   0 2017 2.9K 6.9K  1  0 99
0  0  0 7.8G 358M  456   0   0   0 1.0K  721 1287   0 3173  11K  16K  3  1 96
2  0  0 7.8G 358M    0   0   0   0   96  716 1200   0 3096 9.0K  15K  3  0 97
0  0  0 7.8G 358M    2   0   0   0  578  709 576   0 2356 4.7K 9.4K  1  1 98
0  0  0 7.9G 357M 1.7K   0   0   0  347  710 1428   0 3292  13K  17K  3  1 96
1  0  0 7.8G 356M   20   0   0   0  584  716 612   0 2402 5.3K 9.9K  4  0 96
0  0  0 7.8G 353M 1.0K   0   0   0   99  740 541   0 2020 8.0K 9.6K 10  1 89
0  0  0 7.8G 352M    5   0   0   0   96  734 751   0 2679 6.7K  12K  1  0 98
1  0  0 7.8G 351M    5   0   0   0  120  648 1101   0 2952 8.1K  14K  3  1 97
0  0  0 7.9G 348M  574   0   0   0  190  781 538   0 2392 6.1K  10K  1  0 99
0  0  0 7.9G 346M    2   0   0   0  128  715 1320   0 3185 9.7K  16K  3  1 97
0  0  0 7.9G 344M    0   0   0   0  104  715 1222   0 3240 9.3K  16K  2  0 98
1  0  0 7.9G 343M    2   0   0   0  112  729 450   0 2259 4.5K 8.9K  1  0 99
0  0  0 7.9G 351M    3   0   0   0 2.1K  727 607   0 1945 4.4K 8.2K  2  0 98
0  0  0 7.9G 350M    0   0   0   0   96  750  34   0 1133  286 2.6K  0  0 100
procs    memory    page                      disks     faults       cpu
r  b  w  avm  fre  flt  re  pi  po   fr   sr ad0 pa0   in   sy   cs us sy id
0  0  0 7.9G 349M    0   0   0   0 1.1K  667 323   0 1453  655 3.8K  0  0 100
0  0  0 7.9G 349M    0   0   0   0 1.0K  751 269   0 1361  550 3.5K  0  0 100
0  0  0 7.9G 349M    0   0   0   0 1.0K  742 304   0 1456  518 3.6K  0  0 100
0  0  0 7.9G 349M    0   0   0   0 1.0K  744 239   0 1308  395 3.2K  0  0 100
0  0  0 7.9G 349M   81   0   0   0 1.5K  729 373   0 1529 3.7K 4.3K  1  0 99
0  0  0 7.9G 349M    0   0   0   0 1.5K  708 365   0 1468  698 3.9K  0  0 99
0  0  0 7.9G 349M    0   0   0   0 1.2K  708 364   0 1451  685 3.9K  0  0 100
0  0  0 7.9G 351M    0   0   0   0 1.3K  775 227   0 1488 3.3K 4.3K  1  0 99
0  0  0 7.9G 351M    0   0   0   0   40  702  12   0 1133  279 2.4K  0  0 100
1  0  0 7.9G 350M  143   0   0   0  513  772   2   0 1106 2.0K 2.5K  1  0 99
0  0  0 7.9G 350M    0   0   0   0  984  703 244   0 1337  556 3.3K  0  0 99
3  0  0 7.9G 349M 1.1K   0   0   0 1.0K  729 244   0 1462 4.0K 4.4K  1  0 99
1  0  0 7.9G 341M 7.3K   0   0   5 1.1K  741  25   0 1209 3.1K 2.8K  4  0 95
0  0  0 7.9G 341M   81   0   0   0   64  673  17   0 1107  406 2.5K  0  0 100
0  0  0 7.9G 340M 8.9K   0   0   0 7.6K 4.2K 230   0 1360 7.2K 3.4K  3  1 97
0  0  0 7.9G 340M    2   0   0   0  591  986  77   0 1178  302 2.6K  0  0 100
 
This is an example from a very fast nginx server, with short I/O "bursts" on mirrored NVMe.
In your case you seem to have long data-transfer "strips" from ada0 (many seconds).
Maybe a WordPress site with numerous images that are read from the drive rather than served from the disk cache?
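One rough way to quantify those strips (a sketch, assuming vmstat's column layout as shown above, where ad0 is the 12th field): count the samples where the disk exceeds some tps threshold. The rows below are copied from the vmstat output above; in practice you would pipe `vmstat 1 100` through the same awk.

```shell
# Count one-second vmstat samples where ad0 tps exceeds 500,
# i.e. how long the disk stays busy during a "strip".
awk '$12+0 > 500 { busy++ } END { print busy+0, "samples over 500 tps" }' <<'EOF'
0  0  0 7.9G 339M    0   0   0   0   72  709 175   0 1254  151 2.7K  0  0 100
1  0  0 7.9G 341M   41   0   0   0 2.1K 2.1K 1398   0 3284 9.1K  16K  2  1 97
1  0  0 7.9G 367M    0   0   0   0  192  727 1661   0 3630  11K  19K  3  0 97
EOF
```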

EDIT: don't forget top followed by pressing m; that's the first thing to look at.
Code:
procs  memory       page                    disks     faults         cpu
r b w  avm   fre   flt  re  pi  po    fr   sr nv0 nv1   in    sy    cs us sy id
8 0 2  11G  1.4G     0   0   0   0     0    6   0   0  191  1806  5371  0  0 100
0 0 2  11G  1.4G    44   0   0   0     0    6   3   9  467  5924  5407  0  1 99
2 0 2  11G  1.4G     1   0   0   0     0    6   0   0  565  5524  5267  0  1 99
0 0 2  11G  1.4G     6   0   0   0     0    6   0   0  989  6575  9137  0  2 98
0 0 2  11G  1.4G     4   0   0   0     0    6   0   0   57  2124  3141  0  0 100
0 0 2  11G  1.4G     0   0   0   0     0    6   0   0   67  2763  3620  0  0 100
1 0 2  11G  1.4G     2   0   0   0     0    6  56  56  260  2441  3942  0  1 99
1 0 2  11G  1.4G     0   0   0   0     0    6   0   0  332  3247  4658  0  1 99
9 0 2  11G  1.4G     2   0   0   0     0    6   0   0   44  1972  2936  0  0 100
1 0 2  11G  1.4G     0   0   0   0     0    6   0   0  402  4650  4160  0  0 99
1 0 2  11G  1.4G     0   0   0   0     0    6   0   0   45  2542  3317  0  0 100
0 0 2  11G  1.4G     0   0   0   0     0    6   0   0   29  1802  2676  0  0 100
0 0 2  11G  1.4G     0   0   0   0     0    6  43  43  359 58566 54444  1 13 87
0 0 2  11G  1.4G     0   0   0   0     0    6   0   0   44  1962  2777  0  0 100
0 0 2  11G  1.4G     0   0   0   0     0    6   0   0   40  1924  2803  0  0 100
0 0 2  11G  1.4G     2   0   0   0     0    6   1   1  233  3806  4932  0  1 99
 
Thanks fcorbelli,

You guessed right: there are two WordPress sites on the server, serving not that many images.
I'm not sure whether the disk cache is used here, but I'm certain I didn't set up any caching manually, as I wouldn't know how (I'm using Apache).
You could say I didn't really look, but until now I never ran into such slowness...

Anyway.

I dumped the whole database and did:
Code:
/usr/local/etc/rc.d/mysql-server stop
/usr/local/etc/rc.d/mysql-server start

Then I ran top and pressed "m" inside it:
Code:
last pid: 57747;  load averages:  0.12,  0.10,  0.12    up 4+08:08:01  15:59:35
61 processes:  2 running, 59 sleeping
CPU:  2.8% user,  0.0% nice,  0.3% system,  0.0% interrupt, 96.8% idle
Mem: 987M Active, 12G Inact, 202M Laundry, 1910M Wired, 1172M Buf, 777M Free
Swap: 4096M Total, 154M Used, 3942M Free, 3% Inuse

  PID USERNAME     VCSW  IVCSW   READ  WRITE  FAULT  TOTAL PERCENT COMMAND
93104 mysql      13361      3     51   3251      0   3302 100.00% mysqld
20526 www          489      0      0      0      0      0   0.00% httpd
98309 root           2      0      0      0      0      0   0.00% top
26098 root           2      0      0      0      0      0   0.00% sshd
50527 root           1      0      0      0      0      0   0.00% perl
42143 gorby          2      0      0      0      0      0   0.00% sshd
42566 cyrus          2      0      0      0      0      0   0.00% idled
76724 vscan          1      0      0      0      0      0   0.00% perl
90617 root           2      0      0      0      0      0   0.00% httpd
49498 root           2      0      0      0      0      0   0.00% goaccess
70186 clamav         0      0      0      0      0      0   0.00% clamd
13126 www            0      0      0      0      0      0   0.00% httpd
99001 vscan          0      0      0      0      0      0   0.00% perl
   30 vscan          0      0      0      0      0      0   0.00% perl
 
Sorry VladiBG, I hate to ask, but can you tell me how to answer that question? :)
Do I have to dig into configuration files, or is there a command inside mysql that gives the answer?
Google wasn't my friend there...
 

I'm not saying this will resolve your issue, but you can try it.

It's also a good idea to check the SMART status of the disk. It may simply have problems, and that could be the cause of the slow responses.
 
Log in to mysql and run the show processlist; command. I'm wondering if you have a bunch of queries running. Someone may be hammering your site, which would cause a constant stream of queries against your database. As you're running WordPress, it's quite a popular target for bots to scan.
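To check for that from the Apache side, a small awk pipeline over the access log counts requests per client IP (a sketch; the three log lines below are made-up samples, and the real log path depends on your vhost configuration, e.g. /var/log/httpd-access.log):

```shell
# Count requests per client IP to spot a bot hammering the site,
# most active IP first ($1 is the client address in combined log format).
awk '{ hits[$1]++ } END { for (ip in hits) print hits[ip], ip }' <<'EOF' | sort -rn
203.0.113.7 - - [23/Apr/2021:14:00:01 +0200] "GET /wp-login.php HTTP/1.1" 200 512
203.0.113.7 - - [23/Apr/2021:14:00:02 +0200] "POST /wp-login.php HTTP/1.1" 200 512
198.51.100.2 - - [23/Apr/2021:14:00:03 +0200] "GET /index.php HTTP/1.1" 200 2048
EOF
```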
 
Hey SirDice :)

show processlist;
Code:
root@localhost [(none)]> show processlist;
+------+-----------------+-----------+------+---------+------+------------------------+------------------+
| Id   | User            | Host      | db   | Command | Time | State                  | Info             |
+------+-----------------+-----------+------+---------+------+------------------------+------------------+
|    5 | event_scheduler | localhost | NULL | Daemon  | 8119 | Waiting on empty queue | NULL             |
| 2182 | root            | localhost | NULL | Query   |    0 | init                   | show processlist |
+------+-----------------+-----------+------+---------+------+------------------------+------------------+
2 rows in set (0.00 sec)

(I tried it several times; same output.)

I'm hosting two very small websites, but I get your point.
I ran "tail -f /var/log/httpd-access.log" on my Apache logs, but found nothing unusual.
Most accesses come from my Nextcloud instance and my Roundcube webmail instance.

Could stopping the Apache server help me track down the problem?
What would a "normal" vmstat output (or top output with "m") look like then?

Also, how could this be related to the 13.0 upgrade?
Maybe it's a coincidence after all...

VladiBG: I'll try that one too; I'm ready to investigate everything here :)

Thanks for your help,

--
Léo.
 
The plot thickens. If there are no queries then what's mysql constantly doing with its disk access? It should just idle like everything else and barely touch the disk (most or all of the database should be in memory). Anything interesting in the MySQL error log? Maybe a corrupt table?
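One way to test that: sample InnoDB's cumulative write counter twice and take the difference; if it keeps climbing with no queries, the writes come from InnoDB background activity. The two values below are made-up placeholders; each would really come from `mysql -N -e "SHOW GLOBAL STATUS LIKE 'Innodb_data_written'"`.

```shell
# Placeholder samples of Innodb_data_written, taken one second apart;
# a steadily growing delta with an idle processlist points at
# InnoDB background work rather than client queries.
before=1048576000
sleep 1
after=1048706000
echo "$((after - before)) bytes written by InnoDB in the interval"
```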
 

Love the "plot" thickening idea :)

For now:

root@vendome # tail -f slow-query.log
Code:
/usr/local/libexec/mysqld, Version: 8.0.23 (Source distribution). started with:
Tcp port: 3306  Unix socket: /tmp/mysql.sock
Time                 Id Command    Argument
/usr/local/libexec/mysqld, Version: 8.0.23 (Source distribution). started with:
Tcp port: 3306  Unix socket: /tmp/mysql.sock
Time                 Id Command    Argument

Nothing in the last 5 minutes now...

root@vendome # tail -f /var/log/mysql/mysqld.log
Code:
2021-04-23T16:37:32.514999Z 0 [Warning] [MY-011068] [Server] The syntax 'expire-logs-days' is deprecated and will be removed in a future release. Please use binlog_expire_logs_seconds instead.
2021-04-23T16:37:32.516963Z 0 [System] [MY-010116] [Server] /usr/local/libexec/mysqld (mysqld 8.0.23) starting as process 70872
2021-04-23T16:37:32.519753Z 0 [Warning] [MY-010156] [Server] Although a path was specified for the --slow-query-log-file option, log tables are used. To enable logging to files use the --log-output=file option.
2021-04-23T16:37:32.555301Z 1 [System] [MY-013576] [InnoDB] InnoDB initialization has started.
2021-04-23T16:37:37.543098Z 1 [System] [MY-013577] [InnoDB] InnoDB initialization has ended.
2021-04-23T16:37:38.548875Z 0 [System] [MY-011323] [Server] X Plugin ready for connections. Bind-address: '::' port: 33060, socket: /tmp/mysqlx.sock
2021-04-23T16:37:38.977140Z 0 [Warning] [MY-010068] [Server] CA certificate ca.pem is self signed.
2021-04-23T16:37:38.977454Z 0 [System] [MY-013602] [Server] Channel mysql_main configured to support TLS. Encrypted connections are now supported for this channel.
2021-04-23T16:37:39.147514Z 0 [System] [MY-010931] [Server] /usr/local/libexec/mysqld: ready for connections. Version: '8.0.23'  socket: '/tmp/mysql.sock'  port: 3306  Source distribution.

Nothing in the last 5 minutes there either...
 