PCI energy saving features (D0->D3, L0s/L1)

A while ago I pondered this one:
This is the 82571 (manufactured Dec 2009), and it runs hot (needs moving air). I didn't find a way to switch off unused ports.
Back then I came across this thread https://forums.freebsd.org/threads/how-to-change-power-state-of-a-pci-e-device.38417/post-212957 - and the approach suggested there is not useful at all.

Furthermore, it is now outdated: the power state can nowadays be changed in a simple way:
Code:
# pciconf -lbvBc igb2 | grep D0
    cap 01[40] = powerspec 3  supports D0 D3  current D0
# devctl suspend igb2
# pciconf -lbvBc igb2 | grep D0
    cap 01[40] = powerspec 3  supports D0 D3  current D3
# devctl resume igb2
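
For scripting, the current power state shown in that pciconf output can be extracted with a small parser. A sketch, assuming the "powerspec ... current Dn" line format shown above; pci_power_state is a made-up helper name:

```shell
# Extract the current power state (D0..D3) from pciconf -lbvc output.
# Reads pciconf output on stdin and prints e.g. "D0" or "D3".
pci_power_state() {
  sed -n 's/.*powerspec.*current \(D[0-3]\).*/\1/p'
}

# Usage (igb2 is the example device from above):
#   pciconf -lbvc igb2 | pci_power_state
```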

But that doesn't seem to be of much use.
Some of my devices will go into D3, yet the chip temperature doesn't change at all - so it's effectively a no-op.
With others it just doesn't work, and yet others change temperature by a marginal amount.

Specifically, with my em gigabit card the suspend has no effect (except for hosing the network). As shown above, it works with the igb. But then I tried to measure the temperature of a dual-port card: with both ports up and running it was near 40°C, and with both ports suspended it was near 35°C. Not exactly impressive.
An explanation might be that in D3 the device must still support wake-on-LAN - so much of the circuitry must be kept running. There might be other devices where the benefit is larger.
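
Whether WOL is actually armed on an interface can be checked from userland and, if desired, cleared with ifconfig's -wol option. A sketch that assumes FreeBSD's ifconfig output format; has_wol is a made-up helper name, and whether clearing WOL actually lowers D3 power draw would need to be measured:

```shell
# Check whether any WOL_* option is set in FreeBSD ifconfig output.
# Reads "ifconfig <if>" output on stdin; exits 0 if WOL is armed.
has_wol() {
  grep -Eq 'options=[0-9a-f]+<[^>]*WOL'
}

# Usage (em0 is an example interface):
#   if ifconfig em0 | has_wol; then
#       ifconfig em0 -wol    # clears WOL_UCAST/WOL_MCAST/WOL_MAGIC
#   fi
```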

I came across this in passing. I had changed some settings concerning ASPM in the BIOS (not really knowing what I was doing, because it's impossible to figure that out from the documentation), and was rewarded with a couple of these
Code:
MCA: Bank 7, Status 0x8c00004000010090
MCA: Global Cap 0x0000000007000c16, Status 0x0000000000000000
MCA: Vendor "GenuineIntel", ID 0x306f2, APIC ID 0
MCA: CPU 0 COR (1) RD channel 0 memory error
MCA: Address 0x18d5b3b40 (Mode: Physical Address, LSB: 6)
MCA: Misc 0x150443686
plus an unexplained coredump while building ports.
Only after that did I notice that pciconf -lvc actually tells us something about ASPM - and the devices concerned appear not to support it. This will need further analysis - but for now I don't expect a noticeable benefit at all.
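
The ASPM information in pciconf -lvc output can be pulled out with a small parser. A sketch, assuming the "ASPM current(supported)" format that pciconf prints on the PCI-Express capability line; aspm_info is a made-up helper name:

```shell
# Extract current and supported ASPM states from pciconf -lvc output.
# Reads pciconf output on stdin; prints e.g. "current=disabled supported=L0s/L1".
# Prints nothing if the device reports no ASPM at all.
aspm_info() {
  sed -n 's/.*ASPM \([^ (]*\)(\([^)]*\)).*/current=\1 supported=\2/p'
}

# Usage (igb2 is an example device):
#   pciconf -lvc igb2 | aspm_info
```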

Corollary:
Generally, from what I see, interest in energy saving seems to be driven almost solely by extending battery runtime on laptops. I, for my part, take the climate issue seriously, and I think that every Wh saved anywhere is a good thing. And there are lots of subsystems that could be switched off entirely during extended periods of non-use. The climate propaganda (read: the Guardian etc.), however, seems not at all interested in switching things off, but rather in just making people worried and making them buy lots of expensive new things of marginal benefit - except for piling the heap of waste ever higher.
 
Fair point.

I wasn't aware of devctl suspend and resume for individual devices, but have watched verbose logs of system suspend and resume for years.

Of course system s/r leaves only RAM refresh running, but WoL on e.g. em devices must use power, and would reduce suspend-on-battery time ... you'd need to measure by how much.
 
I wasn't aware of devctl until recently, but it is very handy for experimentation. For instance, when manually hotplugging (i.e. re-plugging the power live) a SCSI disk, it would always come back with camcontrol rescan, but that doesn't work for (S)ATA. Then somebody told me about devctl, and devctl detach/attach ahcichX does the job.

Then I thought: as the PCI D0/D3 state is so obviously visible, somebody must already have thought about a way to switch it.
The rest is a design decision: I didn't want a separate router/WLAN-AP plus a separate NAS storage box plus a separate workhorse that can build world+ports in decent time. Having it all in one box has some advantages, and with IPv6 it is no longer an additional security risk (one needs precise firewalling in any case) - but then most of it should power up on demand only. This works for the cores from Haswell onwards and for RAM from DDR4 onwards; it can be made to work with disks; and PCIe is in theory hotpluggable, so it should work there as well - but there is not much general focus on that.
 