What is going on with my drives?

Things are incredibly slow..

systat -iostat

Code:
                    /0   /1   /2   /3   /4   /5   /6   /7   /8   /9   /10
     Load Average   |

          /0%  /10  /20  /30  /40  /50  /60  /70  /80  /90  /100
cpu  user|
     nice|
   system|
interrupt|
     idle|XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX

          /0%  /10  /20  /30  /40  /50  /60  /70  /80  /90  /100
ada0  MB/s
      tps|XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX218.16
ada1  MB/s
      tps|XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX129.77
ada2  MB/s
      tps|XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX241.75
ada3  MB/s
      tps|XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX131.57
cd0   MB/s
      tps|
pass0 MB/s
      tps|
pass1 MB/s
      tps|
pass2 MB/s
      tps|
pass3 MB/s
      tps|
pass4 MB/s
      tps|

Code:
 lsof -n | awk '{print $2 "\t" $1}' | sort | uniq -c | sort | wc -l
167

Code:
 > zpool status
  pool: basefs
 state: ONLINE
  scan: scrub repaired 0 in 11h38m with 0 errors on Wed Jan 22 20:50:05 2014
config:

        NAME        STATE     READ WRITE CKSUM
        basefs      ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            ada0s3  ONLINE       0     0     0
            ada2s3  ONLINE       0     0     0
          mirror-1  ONLINE       0     0     0
            ada1s3  ONLINE       0     0     0
            ada3s3  ONLINE       0     0     0

errors: No known data errors
 
mystique said:
Things are incredibly slow..
Slow doing what? Random reads/writes? Latency? Continuous reads/writes?

Just showing some numbers doesn't exactly tell us what's going on. Run a proper benchmarking tool like benchmarks/bonnie++ to measure performance.
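For example, something along these lines (the directory and test size here are only placeholders; pick a directory on the pool you want to test, and make the size at least twice your RAM so the ARC can't cache the whole run):

Code:
# hypothetical invocation of benchmarks/bonnie++
bonnie++ -d /basefs/bench -s 16g -u root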

Also tell us about the hardware; 10,000 RPM drives perform quite differently from 5400 RPM ones. The type of controller used can also have a big impact.
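Rough ways to identify the drives and the controller, for example:

Code:
camcontrol devlist     # attached disks and their model strings
dmesg | grep -i ahci   # which AHCI/SATA controller driver attached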
 
And more questions ...
  • What version of FreeBSD is this? Show the output of uname -a.
  • How large are the disks (slices) and how much data is on them? ZFS performance degrades considerably as utilization in a zpool approaches 80%.
  • Do you have dedup enabled on the basefs zpool?
  • How much memory is in the system? (The commands sketched just below gather all of this.)
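Roughly, all of that can be collected with something like this (pool name taken from the zpool status above):

Code:
uname -a              # FreeBSD version and patch level
zpool list basefs     # pool size, allocated space and capacity
zfs get dedup basefs  # whether dedup is enabled on the pool
sysctl hw.physmem     # installed memory in bytes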
 
Code:
ada0: <GB0250EAFJF HPGB> ATA-7 SATA 2.x device
ada0: 300.000MB/s transfers (SATA 2.x, UDMA6, PIO 8192bytes)
ada1: <GB0250EAFJF HPGB> ATA-7 SATA 2.x device
ada1: 300.000MB/s transfers (SATA 2.x, UDMA6, PIO 8192bytes)
ada2: <GB0250EAFYK HPG2> ATA-7 SATA 2.x device
ada2: 300.000MB/s transfers (SATA 2.x, UDMA6, PIO 8192bytes)
ada3: <GB0250EAFYK HPG2> ATA-7 SATA 2.x device
ada3: 300.000MB/s transfers (SATA 2.x, UDMA6, PIO 8192bytes)

Code:
> zfs get all | grep dedup
basefs       dedup                 off                    default
basefs/home  dedup                 off                    default
basefs/usr   dedup                 off                    default
basefs/var   dedup                 off                    default

Code:
 df -h | grep basefs
basefs                 104G     89G     15G    85%    /basefs
basefs/home             15G     74M     15G     0%    /home
basefs/usr             335G    319G     15G    95%    /usr
basefs/var              39G     24G     15G    61%    /var

80% really? *sigh*

Code:
    1 users    Load  0.32  0.20  0.24                  Feb 24 11:57

Mem:KB    REAL            VIRTUAL                       VN PAGER   SWAP PAGER
        Tot   Share      Tot    Share    Free           in   out     in   out
Act  803936   16292  5547184    27284 5634324  count
All 1631124   22208 1079841k    90348          pages     1
Proc:                                                            Interrupts
  r   p   d   s   w   Csw  Trp  Sys  Int  Sof  Flt    252 cow    1384 total
            215      6696  713 1796 1385  104  656    331 zfod      1 ehci0 16
                                                          ozfod   105 em0 17
 0.9%Sys   0.0%Intr  0.2%User  0.1%Nice 98.8%Idle        %ozfod     2 ehci1 23
|    |    |    |    |    |    |    |    |    |    |       daefr   102 hpet0:t0
>                                                     440 prcfr    70 hpet0:t1
                                        22 dtbuf     1612 totfr    64 hpet0:t2
Namei     Name-cache   Dir-cache    206098 desvn          react    40 hpet0:t3
   Calls    hits   %    hits   %     51419 numvn          pdwak    73 hpet0:t4
    7442    7203  97                 38532 frevn          pdpgs    69 hpet0:t5
                                                          intrn    43 hpet0:t6
Disks  ada0  ada1  ada2  ada3   cd0 pass0 pass1   1398864 wire    132 hpet0:t7
KB/t   3.60  5.41  4.11  5.26  0.00  0.00  0.00    694248 act     683 ahci0 265
tps     180   150   195   148     0     0     0    348792 inact
MB/s   0.63  0.79  0.78  0.76  0.00  0.00  0.00     30864 cache
%busy    69    84    49    46     0     0     0   5603460 free
                                                   378688 buf

%busy..

I will see about making more space..
 
I'd use zpool get capacity basefs to get the overall percent utilization (capacity in ZFS-speak) of your basefs zpool. df(1) is not the appropriate command for this.
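Something along these lines shows both the pool-level utilization and which datasets (and snapshots) are actually using the space:

Code:
zpool get capacity basefs    # overall pool utilization as ZFS accounts it
zfs list -o space -r basefs  # per-dataset usage, including space held by snapshots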
 
With gstat or zpool iostat -v 1 you'll see whether it is reading or writing.
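For example (one-second interval; the gstat filter is just a convenience to hide the pass devices):

Code:
gstat -f ada              # live per-disk busy%, read/write ops and bandwidth
zpool iostat -v basefs 1  # per-vdev read/write statistics every second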

If you want to know which processes are doing the I/O, use top -m io or press "m" while top is running.
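For example, sorted by total I/O operations:

Code:
top -m io -o total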
 