camcontrol standby weirdness

frijsdijk

Active Member

Reaction score: 18
Messages: 249

I'm trying to spin down the disks in a zpool to save on energy costs. This concerns a home NAS setup. I've broken it down to a single disk.

Here I want the disk to go into standby mode after 30 minutes (1800 seconds) of idle time:

camcontrol standby ada2 -t 1800

What actually happens is that the disk spins down immediately, even if it was just accessed.

Whenever I access the zpool/disk, it will spin up again. Ah well, I could live with that. But then, when the disk has been idle for more than 1800 seconds (monitored with "zpool iostat <zpool> 60"), it will not spin down. Huh?

The same goes for da0-da7, SATA disks connected to a SAS1068E flashed as a pure HBA controller.

It's FreeBSD 11-RELEASE on a Fujitsu D3417-B board with 16GB ECC DDR4. Everything else works like a charm.

I've found that "camcontrol standby" works with ZFS. "camcontrol idle" will also spin down the disks (immediately as well), but this breaks the zpool (ZFS considers the disk broken/disconnected).

ada2 (a local backup disk on a separate zpool) is a 4TB WDC disk connected to the motherboard:

Code:
pass10: <WDC WD4000FYYZ-01UL1B1 01.01K02> ATA8-ACS SATA 3.x device
pass10: 600.000MB/s transfers (SATA 3.x, UDMA6, PIO 8192bytes)

protocol              ATA/ATAPI-8 SATA 3.x
device model          WDC WD4000FYYZ-01UL1B1
firmware revision     01.01K02
serial number         WD-WCC132283575
WWN                   50014ee2b4fc5ba2
cylinders             16383
heads                 16
sectors/track         63
sector size           logical 512, physical 512, offset 0
LBA supported         268435455 sectors
LBA48 supported       7814037168 sectors
PIO supported         PIO4
DMA supported         WDMA2 UDMA6
media RPM             7200

Feature                      Support  Enabled   Value           Vendor
read ahead                     yes      yes
write cache                    yes      yes
flush cache                    yes      yes
overlap                        no
Tagged Command Queuing (TCQ)   no       no
Native Command Queuing (NCQ)   yes              32 tags
NCQ Queue Management           no
NCQ Streaming                  no
Receive & Send FPDMA Queued    no
SMART                          yes      yes
microcode download             yes      yes
security                       yes      no
power management               yes      yes
advanced power management      yes      yes     128/0x80
automatic acoustic management  no       no
media status notification      no       no
power-up in Standby            yes      no
write-read-verify              no       no
unload                         yes      yes
general purpose logging        yes      yes
free-fall                      no       no
Data Set Management (DSM/TRIM) no
Host Protected Area (HPA)      yes      no      7814037168/7814037168
HPA - Security                 no

... and then we have da0..da7 (the actual 14TB raidz1 zpool), which are 2TB WDC disks:

Code:
pass0: <WDC WD2000FYYZ-01UL1B0 01.01K01> ATA8-ACS SATA 3.x device
pass0: 300.000MB/s transfers, Command Queueing Enabled

protocol              ATA/ATAPI-8 SATA 3.x
device model          WDC WD2000FYYZ-01UL1B0
firmware revision     01.01K01
serial number         WD-WCC1P0104881
WWN                   50014ee25d3e644d
cylinders             16383
heads                 16
sectors/track         63
sector size           logical 512, physical 512, offset 0
LBA supported         268435455 sectors
LBA48 supported       3907029168 sectors
PIO supported         PIO4
DMA supported         WDMA2 UDMA6
media RPM             7200

Feature                      Support  Enabled   Value           Vendor
read ahead                     yes      yes
write cache                    yes      yes
flush cache                    yes      yes
overlap                        no
Tagged Command Queuing (TCQ)   no       no
Native Command Queuing (NCQ)   yes              32 tags
NCQ Queue Management           no
NCQ Streaming                  no
Receive & Send FPDMA Queued    no
SMART                          yes      yes
microcode download             yes      yes
security                       yes      no
power management               yes      yes
advanced power management      yes      yes     128/0x80
automatic acoustic management  no       no
media status notification      no       no
power-up in Standby            yes      no
write-read-verify              no       no
unload                         yes      yes
general purpose logging        yes      yes
free-fall                      no       no
Data Set Management (DSM/TRIM) no
Host Protected Area (HPA)      yes      no      3907029168/3907029168
HPA - Security                 no
What am I doing wrong?
 

aribi

Member

Reaction score: 24
Messages: 69

(monitored with "zpool iostat <zpool> 60"), it will not spin down. Huh?
I don't think zpool iostat would create activity on the pool, but if there is no r/w activity on the pool, does that mean there are no disk accesses by ZFS itself?
Also, I don't think ZFS will be happy with disks taking a nap.
If you really want "go to sleep" functionality, I would suggest using the automounter in combination with some scripting, combining camcontrol for sleep/idle/wake with zpool import once the disks are spinning, or zpool export before the disks are put to sleep.
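A minimal sketch of the export-before-sleep half of that idea, assuming a pool named tank on da0..da7 (names are only illustrative; the automounter/wake side is left out):

Code:
#!/bin/sh
# Sketch: stop ZFS from touching the disks, then spin them down.
POOL=tank
DISKS="da0 da1 da2 da3 da4 da5 da6 da7"

# Export the pool so ZFS no longer issues I/O to its members.
zpool export "$POOL" || exit 1

# Put every member disk into standby (spun down).
for d in $DISKS; do
    camcontrol standby "$d"
done

Once the disks are spinning again, "zpool import tank" would bring the pool back.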
 
OP

frijsdijk

Active Member

Reaction score: 18
Messages: 249

I don't think zpool iostat would create activity on the pool, but if there is no r/w activity on the pool, does that mean there are no disk accesses by ZFS itself?
Also, I don't think ZFS will be happy with disks taking a nap.
If you really want "go to sleep" functionality, I would suggest using the automounter in combination with some scripting, combining camcontrol for sleep/idle/wake with zpool import once the disks are spinning, or zpool export before the disks are put to sleep.
I've found a working way to do it, but it needs some scripting.

First of all: ZFS is fine with disks that go into STANDBY (accessing a zpool with disks in STANDBY will wake them up, and ZFS will wait for that). ZFS cannot handle disks that are in IDLE mode: it considers the disks lost/defective/detached, and the zpool will go degraded or even become unavailable.

So I log 'zpool iostat tank 60' to a file, and if tail -20 /var/log/io.log | grep -c '0 0 0 0$' returns "20", I consider the pool to have been idle for 20 minutes and simply run camcontrol standby <disk> for all disks in the pool. Accessing the pool (via the network/Samba) will wake them up, and this causes no unexpected behaviour in FreeBSD or ZFS. I only had to take special care of the situation where a client on the network (or a cron job) has just woken the disks up and the spindown cron job comes along and spins them down again, but that is solved.
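A minimal sketch of such a cron job, assuming the iostat output is being logged to /var/log/io.log and the pool members are da0..da7 (paths and disk names are only illustrative; the grep pattern is the one from the post and may need adjusting to the exact column spacing of your iostat output):

Code:
#!/bin/sh
# Spin the pool's disks down after 20 consecutive idle one-minute samples.
# Assumes something like "zpool iostat tank 60 >> /var/log/io.log"
# is already running in the background.
LOG=/var/log/io.log
DISKS="da0 da1 da2 da3 da4 da5 da6 da7"

# Count how many of the last 20 samples show zero ops and zero bandwidth.
idle=$(tail -20 "$LOG" | grep -c '0 0 0 0$')

if [ "$idle" -eq 20 ]; then
    for d in $DISKS; do
        camcontrol standby "$d"    # spin down each member disk
    done
fi

The race mentioned above (a client or cron job waking the disks right before this runs) is not handled in this sketch.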

In a NAS with 9 disks (a backup disk and 8 in a raidz1), this makes the wattage go down from 85-90W to about 37W, so that's really worth it. Especially if you consider that during an average day (so far), these disks are actually spun down for 60-70% of the time.
 