ZFS FreeBSD 10.1, ZFS zvol, ctld -> high LA

Hello. I use FreeBSD as a SAN for a vSphere cluster of several ESXi hosts. Here is my pool:
Code:
zpool status
  pool: zroot
state: ONLINE
  scan: scrub repaired 0 in 0h9m with 0 errors on Fri Feb 13 11:41:15 2015
config:

        NAME          STATE     READ WRITE CKSUM
        zroot         ONLINE       0     0     0
          mirror-0    ONLINE       0     0     0
            gpt/sys0  ONLINE       0     0     0
            gpt/sys1  ONLINE       0     0     0
          mirror-1    ONLINE       0     0     0
            gpt/sys2  ONLINE       0     0     0
            gpt/sys3  ONLINE       0     0     0

errors: No known data errors
I created a zvol on this pool and set up an iSCSI target on it. Here is my ctld.conf:
Code:
auth-group ag0 {
chap login password
}

portal-group pg0 {
discovery-auth-group no-authentication
listen 0.0.0.0
}

target iqn.2014-12.fish.prorator:prorator0 {
auth-group ag0
portal-group pg0
lun 0 {
  device-id prorator0
  serial 550428
  path /dev/zvol/zroot/prorator0
}
}
Everything was fine, but this morning the target stopped working and ended up in a strange state. Here is the top -SHP output from that state:
Code:
last pid: 13926;  load averages: 37.31, 37.46, 36.01                              up 4+15:58:06  03:28:20
333 processes: 46 running, 250 sleeping, 2 zombie, 35 waiting
CPU 0:  0.0% user,  0.0% nice,  0.0% system,  0.0% interrupt,  100% idle
CPU 1:  0.0% user,  0.0% nice,  0.0% system,  0.0% interrupt,  100% idle
CPU 2:  0.0% user,  0.0% nice,  3.7% system,  0.0% interrupt, 96.3% idle
CPU 3:  0.0% user,  0.0% nice,  0.0% system,  0.0% interrupt,  100% idle
CPU 4:  0.0% user,  0.0% nice,  0.0% system,  0.0% interrupt,  100% idle
CPU 5:  0.0% user,  0.0% nice,  0.0% system,  0.0% interrupt,  100% idle
CPU 6:  0.0% user,  0.0% nice,  0.0% system,  0.0% interrupt,  100% idle
CPU 7:  0.0% user,  0.0% nice,  0.0% system,  0.0% interrupt,  100% idle
Mem: 12M Active, 47M Inact, 13G Wired, 3116K Cache, 2233M Free
ARC: 12G Total, 1235M MFU, 10G MRU, 1848K Anon, 692M Header, 339M Other
Swap: 4096M Total, 4096M Free

  PID USERNAME   PRI NICE   SIZE    RES STATE   C   TIME    WCPU COMMAND
   11 root       155 ki31     0K   128K CPU1    1 109.2H 100.00% idle{idle: cpu1}
   11 root       155 ki31     0K   128K CPU7    7 109.1H 100.00% idle{idle: cpu7}
   11 root       155 ki31     0K   128K CPU3    3 109.1H 100.00% idle{idle: cpu3}
   11 root       155 ki31     0K   128K CPU2    2 109.0H 100.00% idle{idle: cpu2}
   11 root       155 ki31     0K   128K CPU4    4 108.9H 100.00% idle{idle: cpu4}
   11 root       155 ki31     0K   128K CPU5    5 108.7H 100.00% idle{idle: cpu5}
   11 root       155 ki31     0K   128K CPU0    0 107.3H 100.00% idle{idle: cpu0}
   11 root       155 ki31     0K   128K RUN     6 108.6H  97.17% idle{idle: cpu6}
13915 raven       23    0 21912K  3412K CPU6    6   0:03   3.66% top
   12 root       -88    -     0K   560K WAIT    0  82:32   0.00% intr{irq275: ahci0}
   12 root       -92    -     0K   560K WAIT    0  48:44   0.00% intr{irq265: igb0:que}
    0 root       -16    0     0K  3504K -       0  45:42   0.00% kernel{zio_read_intr_1}
    0 root       -16    0     0K  3504K -       0  45:37   0.00% kernel{zio_read_intr_5}
    0 root       -16    0     0K  3504K -       2  45:31   0.00% kernel{zio_read_intr_0}
    0 root       -16    0     0K  3504K -       1  45:30   0.00% kernel{zio_read_intr_3}
    0 root       -16    0     0K  3504K -       0  45:19   0.00% kernel{zio_read_intr_2}
    0 root       -16    0     0K  3504K -       7  45:04   0.00% kernel{zio_read_intr_4}
   12 root       -92    -     0K   560K WAIT    6  42:07   0.00% intr{irq272: igb1:que}
Look at the load averages. None of the processes or kernel threads is under load, yet the load average is very high. The iSCSI target stopped working with timeout errors on the initiators. My system logs everything to all.log, but there are no errors there. A reboot solved it, but what happened? How can I fix it?
 
I don't know exactly what happened in your case, but to reduce CPU usage I would recommend checking that your zvol is configured for device ("dev") mode rather than the default GEOM mode.
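If I understand correctly, this is controlled by the zvol's volmode property (I'm using the zvol name zroot/prorator0 from your ctld.conf above; check the zfs(8) man page on your 10.1 system, since I'm not certain the property is available in every 10.x release). A rough sketch:

```shell
# Show the current mode: "geom" (the default) runs the zvol through the
# GEOM layer, which tastes it for partition tables and labels on every
# change; "dev" exposes it as a plain character device with less overhead.
zfs get volmode zroot/prorator0

# Switch the zvol to dev mode.
zfs set volmode=dev zroot/prorator0

# The property only takes effect when the device node is re-created, so
# force a re-attach, e.g. by renaming the zvol back and forth (make sure
# ctld is not serving it at that moment):
zfs rename zroot/prorator0 zroot/prorator0-tmp
zfs rename zroot/prorator0-tmp zroot/prorator0
```

There is also a vfs.zfs.vol.mode sysctl that sets the default for zvols whose volmode is left at "default", if you prefer to change it system-wide.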
 