Hello,
I know a lot of BSD users who experience disks dropping off and coming back a few seconds later because of the disks' internal power management timers, which causes ZFS pools to become degraded and sometimes makes the system completely unresponsive.
The same issue occurs with removable USB backup drives.
Here are the sample log entries; the time between the disk going offline and coming back online is ~15 seconds:
Code:
08:11:27 vmhost kernel: (da1:mrsas0:1:1:0): WRITE(10). CDB: 2a 00 05 49 48 d8 00 00 10 00
08:11:27 vmhost kernel: (da1:mrsas0:1:1:0): CAM status: SCSI Status Error
08:11:27 vmhost kernel: (da1:mrsas0:1:1:0): SCSI status: OK
08:11:27 vmhost kernel: (da1:mrsas0:1:1:0): Invalidating pack
08:11:27 vmhost kernel: da1 at mrsas0 bus 1 scbus17 target 1 lun 0
08:11:27 vmhost kernel: da1: <ATA KINGSTON SA400S3 B1E2> s/n 50026B7683B6A18D detached
08:11:27 vmhost ZFS[22210]: vdev I/O failure, zpool=$zhost path=$/dev/da1p2 offset=$270336 size=$8192 error=$6
08:11:27 vmhost ZFS[22211]: vdev I/O failure, zpool=$zhost path=$/dev/da1p2 offset=$120032862208 size=$8192 error=$6
08:11:27 vmhost ZFS[22212]: vdev I/O failure, zpool=$zhost path=$/dev/da1p2 offset=$120033124352 size=$8192 error=$6
08:11:27 vmhost ZFS[22213]: vdev probe failure, zpool=$zhost path=$/dev/da1p2
08:11:27 vmhost kernel: mrsas0: System PD deleted target ID: 0x1
08:11:27 vmhost ZFS[22214]: vdev state changed, pool_guid=$8743077180665994084 vdev_guid=$3959867686622359320
08:11:27 vmhost ZFS[22215]: vdev I/O failure, zpool=$zhost path=$/dev/da1p2 offset=$270336 size=$8192 error=$6
08:11:27 vmhost ZFS[22216]: vdev I/O failure, zpool=$zhost path=$/dev/da1p2 offset=$120032862208 size=$8192 error=$6
08:11:27 vmhost ZFS[22217]: vdev I/O failure, zpool=$zhost path=$/dev/da1p2 offset=$120033124352 size=$8192 error=$6
08:11:27 vmhost ZFS[22218]: vdev probe failure, zpool=$zhost path=$/dev/da1p2
08:11:27 vmhost ZFS[22219]: vdev state changed, pool_guid=$8743077180665994084 vdev_guid=$3959867686622359320
08:11:27 vmhost kernel: (da1:mrsas0:1:1:0): Periph destroyed
08:11:27 vmhost ZFS[22220]: vdev state changed, pool_guid=$8743077180665994084 vdev_guid=$3959867686622359320
08:11:27 vmhost ZFS[22221]: vdev is removed, pool_guid=$8743077180665994084 vdev_guid=$3959867686622359320
08:11:41 vmhost kernel: mrsas0: System PD created target ID: 0x1
08:11:41 vmhost kernel: da1 at mrsas0 bus 1 scbus17 target 1 lun 0
08:11:41 vmhost kernel: da1: <ATA KINGSTON SA400S3 B1E2> Fixed Direct Access SPC-4 SCSI device
08:11:41 vmhost kernel: da1: Serial Number 50026B7683B6A18D
08:11:41 vmhost kernel: da1: 150.000MB/s transfers
08:11:41 vmhost kernel: da1: 114473MB (234441648 512 byte sectors)
08:11:41 vmhost kernel: ses2: pass3,da1 in 'Drive Slot 1', SAS Slot: 2 phys at slot 1
08:11:41 vmhost kernel: ses2: phy 0: SATA device
08:11:41 vmhost kernel: ses2: phy 0: parent 500056b36d81e5ff addr 500056b36d81e5c1
08:11:41 vmhost kernel: ses2: phy 1: SAS device type 0 phy 0
08:11:41 vmhost kernel: ses2: phy 1: parent 0 addr 0
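For completeness, this is roughly how I bring the pool back after the disk reattaches. The pool name zhost and the vdev da1p2 are taken from the log above; the commands are from memory, so check zpool(8) before copying:
Code:
zpool status -x                 # list unhealthy pools; da1p2 shows up as REMOVED/FAULTED
zpool online zhost da1p2        # bring the vdev back once da1 has reattached
zpool clear zhost               # clear the logged I/O error counters
zpool status zhost              # confirm the pool resilvers back to ONLINE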
In most cases, disabling or modifying the disk's internal APM (Advanced Power Management) and/or EPC (Extended Power Conditions) settings solves the issue. Sometimes...
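This is roughly what I use to turn those timers off. The device name da1 and the exact flags are only an example (please double-check camcontrol(8) and smartctl(8) first), and a SATA disk behind an mrsas controller may not accept the ATA pass-through at all:
Code:
# Check whether the drive reports APM/EPC support (output format may differ per FreeBSD version)
camcontrol identify da1 | grep -i -e 'power management' -e 'power conditions'

# Disable/relax APM; -l 254 is the highest level that avoids standby,
# omitting -l should disable APM entirely (from memory, see camcontrol(8))
camcontrol apm da1 -l 254

# List the EPC power condition timers, then disable the idle/standby conditions
# (flags from memory, verify against camcontrol(8) on your release)
camcontrol epc da1 -c list
camcontrol epc da1 -c state -p Idle_b -d
camcontrol epc da1 -c state -p Standby_z -d

# Alternative via smartmontools; a disk behind a MegaRAID/mrsas controller
# may need -d megaraid,<N> to reach the physical drive
smartctl -s apm,off /dev/da1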
My question: why does a disk going idle have such a big impact on the system, while on other systems it is only a minor OS freeze caused by the disk waking up?
Is there a general solution for this issue?