"Energy Efficiency" - I'm pissed...

There is an article here published today, and it does not even allow responses.

Reading this article, there is nothing new in it, nothing really relevant to the subject title, no differentiation, just advertising propaganda.
It is exactly the kind of article that ChatGPT could have written as well, because it only collects information that is widely available on the web and rewords it to match the question asked.

So after reading the article, one is frustrated that nothing new came out of it. But now, concerning the topic: I was always motivated to reduce the power consumption of my machines as far as feasible, because much of the time they are just idling along, and then there is no use having them run fully engaged. I am not at all interested in eco-dogmatism and such; I would not buy some new and "eco friendly" stuff because I love my old crap, but I focus on power savings because it is technically possible, it doesn't cost anything, and it makes sense to me.

But I found myself largely alone in that field. The only concern regarding power savings seems to be the battery capacity of laptops, with nothing concerning stationary systems. And usually when I buy a used mainboard, I find all the CMOS settings tuned to maximum performance, i.e. maximum power consumption.

What is worse, I even get laughed at and repelled. For instance, PostgreSQL has the habit of reading the database disk every five minutes, even when there is no activity on the database. The explanation is that the developers need to make sure the user hasn't accidentally swapped the disk.
When I asked if this feature could be turned off, because it effectively means the disks never spin down (e.g. for databases that are not used during nighttime), I was briskly rejected with the argument that the choice between user safety and letting some disks spin down is obvious.
Apparently those guys are not at all interested in saving the rainforest (or probably they are, and they also consider that only a matter of politically correct newspeak wording, with no practical consequence).

And we have another example, right here, right now. Two days ago I reported a bug because zpool import repeatedly failed and did not return on my machine. There was a surprisingly fast reaction: within a few hours there were three or four parties suggesting how to acquire more details.
Then, when it became apparent that the bug is caused by my energy saving configuration and wouldn't appear otherwise, interest ceased and there has been no further reaction as of now. (bug #270340)
 
I agree that those "link posts" on this forum should be open to replies. On other forums there is useful discussion coming out of it.

Spinning disks down - some are convinced that it would kill spinning rust disks quickly. And as you say, there is a lot of software nowadays that prevents it anyway. Including ZFS, which stirs the pot on a regular basis.
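For reference, manually spinning down a SATA disk on FreeBSD, or arming its inactivity timer, can be done with camcontrol(8); a minimal sketch, assuming the disk shows up as ada1:

```shell
# Spin the disk down right now (ATA STANDBY IMMEDIATE)
camcontrol standby ada1

# Or arm the drive's own inactivity timer so it spins down by itself
# after roughly 10 minutes without I/O (-t takes seconds)
camcontrol standby ada1 -t 600
```

Of course, any software that touches the pool periodically will wake the disk right back up, which is exactly the complaint here.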

Genuinely power-saving platforms such as Intel Atom with ECC support and Xeon-D combos are very expensive; it will be hard to make the money back. I'm still running full-power systems 24/7 and just pay the power bill (and A/C bill).
 
Sometimes I think power-down features on computers (I'm writing this as a user of desktops, not laptops or mobiles) are very much like new cars that turn the engine off if you are stopped long enough. I recognize that "yes, it does improve the corporate fleet fuel economy", but "what if someone is trying to carjack me? It takes a finite amount of time for the engine to restart and get up to power to accelerate me out of danger". That's also ignoring the fact that where I live, winter temps are typically -15°C to 0°C, but we've had stretches of -20°C or less. I always disable it as soon as I get in, because I find it detrimental to me.

Power stuff on computers is similar: I'm using a desktop for a reason, I need the power so I set it to max performance. When not in use, it gets powered off.

I have zero issues with others being concerned with power consumption; spending less is never a bad thing. I also wonder whether the old myths about powering down and spinning disks down are actually still true. Certainly spinning a disk down adds latency to the next operation, because the disk has to spin up first.

Power states, spinning things down, hibernate, suspend, all raise interesting corner cases for software, much like the difference between a power cycle and "reset" in devices.

As for the OP's zpool import issue: BIOS settings were causing the import to fail during boot? Interesting. My opinion on that is "a system should always boot with everything at max performance, then software should control any power saving options (like with powerd)". But that's my opinion; it costs you nothing, so feel free to ignore or disagree.
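For completeness, the powerd setup referred to is just a couple of rc.conf lines; a minimal sketch (the mode choices are common defaults, not a recommendation from this thread):

```shell
# /etc/rc.conf
powerd_enable="YES"
# -a: CPU frequency mode on AC power, -b: mode on battery
powerd_flags="-a hiadaptive -b adaptive"
# then: service powerd start
```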
 
I agree that those "link posts" on this forum should be open to replies. On other forums there is useful discussion coming out of it.
They're automatically posted by a script from various blogs and other sites. It's not a 'person' that's posting these, so it could therefore never respond. And what's stopping anyone from opening a new thread, referring to the article in question, and discussing it there? Just like PMc is doing right here?
 
Spinning disks down - some are convinced that it would kill spinning rust disks quickly.
Yes, according to what we know about mechanical engineering, it should. Either mechanical stress or temperature fluctuation stress (or both) should reduce the lifetime.
OTOH, stopping a mechanical disk is something that makes a real difference, somewhere near $20 per year per disk. If the disk survives that for five years, it is almost amortized.
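That ~$20/year figure is easy to sanity-check. A back-of-the-envelope sketch, where the idle wattage and the electricity price are my assumptions, not numbers from the post:

```python
# Rough check of the "~$20/year per disk" figure for a spinning disk
# that is never spun down. Assumed: ~6 W idle draw, $0.35/kWh.
IDLE_WATTS = 6.0
PRICE_PER_KWH = 0.35
HOURS_PER_YEAR = 24 * 365

kwh_per_year = IDLE_WATTS * HOURS_PER_YEAR / 1000.0
cost_per_year = kwh_per_year * PRICE_PER_KWH
print(f"{kwh_per_year:.1f} kWh/year -> ${cost_per_year:.2f}/year per disk")
```

With those assumptions this lands around $18/year, so the order of magnitude checks out.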

And, from practical experience: I have been doing this for more than 15 years now, and I have a bunch of disks nearing 100k service hours with no visible problems.

And as you say, there is a lot of software nowadays that prevents it anyway. Including ZFS, which stirs the pot on a regular basis.
Not necessarily. I have 17 disks installed, 13 of them mechanical, and in normal operation all of them are stopped; only two SSDs remain active.
I need a strong machine to rebuild sources with my patches, so this is a Xeon 2660v3 - but I have no job, I literally do not have enough to eat anymore, and I have no other choice than to power down everything possible. Last time I measured, it drew some 56 W from the mains.

Really power saving platforms such as Intel Atom with ECC support and Xeon-D combos are very expensive, it will be hard to make the money back
Exactly. These things are not exactly cheap, are difficult to obtain used, and they do not have the interfaces for all my hardware, so I would need to keep the big machine alongside anyway and power it up when needed. I don't think that would save any money.
 
As for the OP zpool import issue: BIOS settings were causing the import to fail during boot?
No. A multitude of stopped disks causes zpool import to not return during normal operation, after sysutils/gstopd has engaged, and it requires a push-button reset to get things back to normal.

From what I could find out so far, it seems the aio subsystem can cope with a single stopped disk, whereas when it encounters a whole bunch of them, it correctly starts the disks and then apparently runs into some lock order reversal or similar.
 
Looks like a good argument for SSDs over HDDs. Yeah, SSDs are more expensive. I switched to SSDs back in 2012 and never looked back. The very first laptop with an SSD that I bought is still quite usable for basic research and Skype/Zoom, and runs Win10 without complaints or slowdowns. I did have a scare in 2016, but TBF, that was because I pushed the limits of the hardware with bittorrent and video transcoding work.
 
Looks like a good argument for SSDs over HDDs.
For my last mirror of spinning disks, I keep looking at this question: SSD vs HDD. Currently 3 TB drives; I get more TB per dollar using HDDs over SSDs: roughly $800 US for SSDs vs roughly $150 for HDDs. Even with the cost of power increasing, it's a non-trivial length of time for me to recover that delta.
Now, I've always kept my OS separate from my data, so OS drives of 500 GB to 1 TB SSDs make a lot of sense (at least to me). Those were quick price searches, so they are probably for consumer/prosumer grade devices vs enterprise grade.
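A rough sketch of that delta recovery time. Only the two prices come from the post; the per-drive power difference and the electricity rate are my guesses:

```python
# How long until power savings recover the SSD-vs-HDD price gap?
# From the post: ~$800 for SSDs vs ~$150 for HDDs (3 TB mirror).
# Assumed: each HDD draws ~6 W more than an SSD, 2 drives, $0.30/kWh.
PRICE_DELTA = 800 - 150
EXTRA_WATTS = 6.0 * 2
PRICE_PER_KWH = 0.30

extra_kwh_per_year = EXTRA_WATTS * 24 * 365 / 1000.0
extra_cost_per_year = extra_kwh_per_year * PRICE_PER_KWH
years_to_break_even = PRICE_DELTA / extra_cost_per_year
print(f"~{years_to_break_even:.1f} years to recover the price delta")
```

Around two decades under these assumptions, which is indeed a non-trivial length of time, and longer than either kind of drive is likely to live.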
 
How about this? I got it on a VPS provider in the UK when trying to circumvent Prime Video region BS.

(screenshot attached)
 
I never got over the trust issues that early consumer SSDs built with me. I have many spinning disks around. I partially compensate with lots of RAM, but of course that doesn't do any good for power consumption either.

In one case I'm gonna convert, though: my second PXE server runs on an 8-set of 10k rpm SAS disks. Performance is just not there over NFS for random seeks like in `make world`. Those will be SSD'ed pretty soon. Should also reduce power consumption by a lot. Right now there are 10-packs of Intel Enterprise SSDs on Ebay for auction, I plan to snipe one of those sets.

If I really wanted to reduce power consumption on my primary fileserver, I would use a smaller number of larger disks. But I'm also not ready to go down to 2-way RAID1 from raidz3, so the savings are limited and the power bill is not.
 
...
If the disk survives that for five years, it is almost amortized.
...
Your calculation does not take production resources into account. If you do, the best thing for the environment is to keep old stuff running as long as possible (in general, not with everything).

For my disks I use an external 4x USB case, which is switched on/off via a Raspberry Pi. So my firewall/storage system (PC Engines APU2) consumes roughly 5 W, plus 7 W for the USB case when in use and 4-8 W per disk.

I do suspend-to-disk whenever I can on my AMD Ryzen powered workstation, with wake-on-lan via a script I can trigger from my phone. Wake-on-lan does not work on my HP workstations, which act as test servers, but they have the Intel Management Engine activated (which I would never do normally), so a script can power them on via a web UI. Previously I used the Raspberry Pi connected to the power switch on the mainboard.

I often use old notebooks with broken displays as test servers - putting them to good use is the best thing one can do, plus those energy-efficient devices are really nice and generally capable enough for my work.
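A phone-triggered wake-on-lan script like the one described boils down to sending a "magic packet". A minimal sketch in Python; the MAC address is a placeholder, not from the thread:

```python
import socket

def build_magic_packet(mac: str) -> bytes:
    """A WoL magic packet is 6 bytes of 0xFF followed by the target
    MAC address repeated 16 times (102 bytes total)."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6:
        raise ValueError("MAC address must be 6 bytes")
    return b"\xff" * 6 + mac_bytes * 16

def send_wol(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
    """Broadcast the magic packet as a UDP datagram on the local network."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(build_magic_packet(mac), (broadcast, port))

packet = build_magic_packet("00:11:22:33:44:55")
print(len(packet))  # 102
```

The target machine's BIOS/NIC must have wake-on-lan enabled for this to actually wake anything, which is exactly what fails on the HP workstations mentioned above.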
 
Fact is that tried and proven conventional hardware has its advantages. New power-saving stuff often has some drawbacks, especially in the I/O department.

Meanwhile new high-speed CPUs and graphics cards break records in power consumption.

P.S. why is 10 Gbit ethernet a power hog, but 40 Gbit Thunderbolt is not? I know transmission range plays a role, but why do I have to go full power on the interface regardless of whether I actually use a long cable or not?
 
P.S. why is 10 Gbit ethernet a power hog, but 40 Gbit Thunderbolt is not? I know transmission range plays a role, but why do I have to go full power on the interface regardless of whether I actually use a long cable or not?
Because ethernet has longer distances to travel? Most Thunderbolt cables are like 12 inches (30 cm), while Cat6 runs are usually closer to 100 ft (30 m) or more... I'd think that you gotta spend some energy pumping all that data down the long-ass wire.
 
Because ethernet has longer distances to travel? Most Thunderbolt cables are like 12 inches (30 cm), while Cat6 runs are usually closer to 100 ft (30 m) or more... I'd think that you gotta spend some energy pumping all that data down the long-ass wire.

Yeah, but they make me pay that power bill even if my actual Ethernet cable is in fact 0.3m...
 
P.S. why is 10 Gbit ethernet a power hog, but 40 Gbit Thunderbolt is not? I know transmission range plays a role, but why do I have to go full power on the interface regardless of whether I actually use a long cable or not?
Isn't that obvious? Because then people will buy new gear when something with a "power-saving" option comes out.
 
P.S. why is 10 Gbit ethernet a power hog, but 40 Gbit Thunderbolt is not? I know transmission range plays a role, but why do I have to go full power on the interface regardless of whether I actually use a long cable or not?

The physical medium. Dirt cheap cable plant with jacks and much longer distances vs. super-controlled and shielded short distance cables that are frequently active (have power and circuits in them) themselves.
 