ZFS slow performance

The zpool shows a constant, high read rate with no apparent reason for it to be at this level.
Network usage is about 700 KB/s, and there is no other process running except MySQL, which has the default configuration with the query cache turned on.
It seems that this is what makes the system very slow.
After a reboot the read rate is about 1 MB/s or lower and everything works fine. Then, after a while, the read rate climbs and server performance becomes very, very slow (almost a freeze).
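A steady read load with no matching application activity is also the signature of background pool work, so it is worth ruling that out first. A minimal diagnostic sketch (assuming the pool is named zroot, as in the output below):

```shell
# A running scrub or resilver produces exactly this pattern:
# constant reads with no corresponding network or CPU load.
zpool status zroot

# Per-device breakdown of the reads, refreshed every second,
# to see whether one disk in the mirror is doing all the work:
zpool iostat -v zroot 1
```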


system:
FreeBSD 8.0 amd64, 4 GB RAM, 2x 1 TB Seagate Barracuda as a mirrored ZFS pool

zpool iostat 1:
Code:
               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
zroot       98.5G   821G    622      0  76.2M      0
zroot       98.5G   821G    802      0  98.0M      0
zroot       98.5G   821G    560      0  67.0M      0
zroot       98.5G   821G    521      0  62.9M      0
zroot       98.5G   821G    468      0  48.7M      0
zroot       98.5G   821G    751      0  91.6M      0


top:
Code:
last pid:  6044;  load averages:  0.07,  0.23,  0.38    up 0+04:02:39  17:34:24
1554 processes:1 running, 1553 sleeping
CPU:  1.1% user,  0.0% nice,  3.7% system,  0.2% interrupt, 95.0% idle
Mem: 2499M Active, 817M Inact, 472M Wired, 1704K Cache, 153M Free
Swap: 8192M Total, 8192M Free


  PID USERNAME     THR PRI NICE   SIZE    RES STATE   C   TIME   WCPU COMMAND
 6044 site1    1  46    0   105M 12332K accept  0   0:00  0.29% php-cgi
 6041 site1    1  46    0   105M 13312K sbwait  0   0:00  0.20% php-cgi
 6042 site1    1  45    0   105M 13036K accept  0   0:00  0.10% php-cgi
 4997 mysql        160  44    0   313M   216M umtxn   0   1:17  0.00% mysqld
 4798 www           66  44    0   107M 31276K lockf   0   0:41  0.00% httpd
 4803 www           66  44    0   108M 31680K lockf   3   0:21  0.00% httpd
 4809 www           66  44    0   112M 34008K lockf   3   0:13  0.00% httpd
 4801 www           66  44    0   109M 32400K lockf   2   0:13  0.00% httpd
 4800 www           66  44    0   106M 29860K kqread  3   0:12  0.00% httpd
 4806 www           66  44    0   108M 31728K lockf   2   0:11  0.00% httpd
 4805 www           66  44    0   107M 30852K lockf   2   0:10  0.00% httpd
 
I solved my problems by migrating back to UFS.
Today I had the same problem on another server that I had migrated from UFS to ZFS.
The symptom is the same: unrealistically high reads in "zpool iostat" while there is no network or CPU load.
Is this some kind of ZFS bug?
 
miks said:
I solved my problems by migrating back to UFS.
Today I had the same problem on another server that I had migrated from UFS to ZFS.
The symptom is the same: unrealistically high reads in "zpool iostat" while there is no network or CPU load.
Is this some kind of ZFS bug?

Did you use ZFS prefetching?
 
The server that I migrated back to UFS has 4 GB of RAM, so prefetch was disabled by default. I enabled it, but nothing changed.
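For anyone wanting to check this on their own machine, a sketch of how prefetch state is inspected and toggled on FreeBSD (the tunable takes effect at boot, so it belongs in /boot/loader.conf):

```shell
# 1 means prefetch is disabled, 0 means it is enabled:
sysctl vfs.zfs.prefetch_disable

# To force it off, add this line to /boot/loader.conf and reboot:
# vfs.zfs.prefetch_disable="1"
```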
 
So, I now have the same problem on a 3rd server with ZFS.
It's FreeBSD 8.0 with 8 GB of RAM and a zpool mirror of 2x 10k RPM HDDs.

top shows the following:
Code:
last pid: 54473;  load averages:  0.79,  1.14,  0.81   up 13+22:29:49  19:00:53
1599 processes:1 running, 1598 sleeping
CPU:  2.7% user,  0.0% nice,  5.5% system,  0.2% interrupt, 91.6% idle
Mem: 6208M Active, 18M Inact, 1172M Wired, 251M Cache, 128K Buf, 235M Free
Swap: 1024M Total, 12K Used, 1024M Free

  PID USERNAME       THR PRI NICE   SIZE    RES STATE   C   TIME   WCPU COMMAND
99769 mysql              132  56    0  1525M  1055M sbwait  5  40:37 17.19% mysqld
54179 user2              1  44    0   369M 28032K sbwait  3   0:02  1.46% php-cgi
54193 user2              1  45    0   367M 26796K sbwait  2   0:02  1.46% php-cgi
54397 user3              1  45    0   373M   140M tx->tx  1   0:02  0.98% php-cgi
54459 user1              1  45    0   104M 14932K tx->tx  2   0:00  0.98% php-cgi
54458 user1              1  45    0   360M 15152K zfs     0   0:00  0.98% php-cgi
54346 user1              1  45    0   373M   100M tx->tx  5   0:02  0.88% php-cgi
54456 user1              1  45    0   104M 14924K zfs     6   0:00  0.78% php-cgi
54403 user3              1  45    0   373M 71684K tx->tx  2   0:02  0.59% php-cgi
54450 user1              1  44    0   360M 15152K zfs     6   0:00  0.59% php-cgi
53771 user4              1  45    0   374M   264M tx->tx  0   0:13  0.49% php-cgi
54451 user1              1  46    0   360M 15152K zfs     2   0:00  0.49% php-cgi
54453 user1              1  46    0   360M 15152K zfs     5   0:00  0.49% php-cgi
54447 user1              1  45    0   360M 15152K zfs     0   0:00  0.49% php-cgi
54454 user1              1  47    0   360M 15152K zfs     4   0:00  0.49% php-cgi
54455 user1              1  45    0   360M 15152K zfs     2   0:00  0.49% php-cgi

MySQL has over 100 threads waiting.
I wonder about the "zfs" and "tx->tx" entries in the process STATE column, because they only show up like this when the system is slow.
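The "tx->tx" state suggests those processes are blocked waiting on a ZFS transaction group to commit. A kernel stack trace can confirm where a stuck process is actually waiting; a sketch, using one of the php-cgi PIDs from the top output above:

```shell
# Dump the kernel stack of a blocked php-cgi process; frames such as
# txg_wait_open() or zio_wait() point at the ZFS write pipeline:
procstat -kk 54397

# Watch per-disk latency (ms/r, ms/w columns); consistently high values
# here mean the disks themselves, not ZFS, are the bottleneck:
gstat
```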

"zpool iostat 1" while server is slow:
Code:
               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
zroot        108G  31.3G      9     37   557K  1.04M
zroot        108G  31.3G    156    475  9.06M  10.4M
zroot        108G  31.3G    181    477  3.86M  13.8M
zroot        108G  31.3G     14    652   635K  52.9M
zroot        108G  31.3G     81    267  4.09M  13.7M

and "zpool iostat 1" after I killed and restarted all httpd/php-cgi and mysql processes:
Code:
               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
zroot       96.1G  42.9G      9     37   559K  1.04M
zroot       96.1G  42.9G     17      6  2.18M   270K
zroot       96.1G  42.9G     92      8  9.97M   250K
zroot       96.1G  42.9G     63      6  5.62M   306K

How can it be that zroot "used" and "avail" show different values after I restarted httpd/mysql?
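One plausible explanation is that killing mysqld and the PHP processes released large temporary or deleted-but-still-open files, so the dataset genuinely shrank. A sketch of how to break the usage down (assuming the space-accounting properties are available in this ZFS version):

```shell
# Show where the space goes per dataset: the dataset itself,
# its snapshots, and its children:
zfs list -o space -r zroot

# List snapshots separately; deleting old ones also frees "used" space:
zfs list -t snapshot -r zroot
```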

All in all, I strongly recommend against using ZFS on web hosting servers. With UFS there were no problems at all; it just worked.
 
>All in all, I strongly recommend against using ZFS on web hosting servers. With UFS there were no problems at all; it just worked.

Well, it depends on your hardware. ZFS doesn't forgive anything.

Apart from that, there are lots of ZFS related fixes in 8-stable.
 
I don't think there is a problem with the hardware.
All the servers are manufactured by Intel, and as far as I know Intel hardware generally works very well with FreeBSD.
I will look out for the changes in FreeBSD 8-stable.
 