Mozilla downplaying Firefox, moving into A.I.

Shhhhhh astyle ..... ;) It's an open secret that datacenters, especially those used by HPC labs or that house supercomputers, eat 25-50% of a nearby city's clean water YEARLY.
That statement is way over generalized. Some data centers use very little cooling water, some use none. Some are cooled using river water which they release back into the river. But some indeed use a large amount of fresh water which they evaporate. In general, the amount of energy used to cool data centers has decreased significantly in the last 25 years: it used to be that for every 1W of compute "power" you needed to spend 1.5 to 2W of cooling energy; today that factor is down by roughly an order of magnitude. I have a personal story about that, but it is sad, so I won't share the details: this great improvement killed a startup that I helped found.
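
To put that "order of magnitude" in concrete terms, here is a minimal sketch mapping the cooling-to-compute ratio onto the PUE metric that comes up later in this thread. It assumes cooling is the only overhead (ignoring power conversion losses and the like), so take it as a rough sanity check, not an exact model:

```python
# PUE = total facility power / IT power. If cooling is the only
# overhead, PUE is just 1 + (cooling watts per watt of IT load).

def pue_from_cooling_ratio(cooling_watts_per_it_watt: float) -> float:
    """PUE assuming cooling is the only overhead on top of the IT load."""
    return 1.0 + cooling_watts_per_it_watt

for label, ratio in [("25 years ago (low end)", 1.5),
                     ("25 years ago (high end)", 2.0),
                     ("today, ~10x better", 0.175)]:
    pue = pue_from_cooling_ratio(ratio)
    print(f"{label}: {ratio:g} W cooling per W of IT -> PUE ~{pue:.2f}")
```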

Furthermore, the bulk of data centers today are not HPC or "supercomputers", nor the intelligence and national security community. The bulk of them are commercial: the FAANG companies (Facebook, Amazon, Apple, Netflix, Google) and their Chinese counterparts (Alibaba, Tencent, Baidu ...) and their ilk, plus their data center vendors. For example, this very forum where we're having this discussion is part of that industry.

It is however true that data centers use a significant amount of energy. I read somewhere that 1% of all electricity (or energy?) worldwide is going into data centers. And then there is always crypto mining ...
 
That statement is way over generalized. Some data centers use very little cooling water, some use none. ...
I am not disagreeing with anything you said, BUT while it has gone down substantially (what the industry uses to measure datacenter efficiency is PUE = Power Usage Effectiveness), PUE has leveled off at around 1.6, and the majority of the inefficient datacenters still average a PUE of 2.0-2.5; that includes the rack your business has in your office, in that cool room with so many BTUs of AC.

So what does it mean to have an average PUE of 1.6? (Facebook and some other private companies have a PUE of 1.2; they're known in the industry as hyperscalers.) It means that an extra 60% of the server power is spent on cooling and other overhead. So how do Facebook, Microsoft, OpenAI, _insert_hyperscaler_ go from a PUE of 1.6 down to 1.2? By using a load of water: the water is cheaper than the extra 40% in energy it would take to run the AC instead.
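
Here is a back-of-envelope sketch of that water-vs-electricity trade-off. The latent heat of vaporization (~2.26 MJ per liter of water evaporated) is real physics; the chiller COP and the water and electricity prices are assumed, illustrative values, not industry figures:

```python
# Why evaporating water can beat running the AC, in rough numbers.
# Physics: evaporating 1 liter of water absorbs ~2.26 MJ ~= 0.63 kWh of heat.
# The COP and prices below are ASSUMPTIONS for illustration only.

HEAT_PER_LITER_KWH = 2.26e6 / 3.6e6   # ~0.63 kWh of heat removed per liter evaporated
CHILLER_COP = 4.0                     # assumed: kWh of heat removed per kWh of electricity
WATER_PRICE_PER_LITER = 0.003         # assumed: ~$3 per cubic meter of municipal water
ELECTRICITY_PRICE_PER_KWH = 0.10      # assumed: industrial rate, $/kWh

cost_evaporative = WATER_PRICE_PER_LITER
cost_chiller = (HEAT_PER_LITER_KWH / CHILLER_COP) * ELECTRICITY_PRICE_PER_KWH

print(f"Removing {HEAT_PER_LITER_KWH:.2f} kWh of heat:")
print(f"  by evaporating 1 L of water: ~${cost_evaporative:.4f}")
print(f"  by running a chiller:        ~${cost_chiller:.4f}")
```

With these assumed prices the water route comes out several times cheaper per unit of heat removed, which is the whole economic argument.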

Anyway, AI and the GPUs/FPGAs behind it are going to make this extremely expensive (this will be the bottleneck for AI), and the second bottleneck, after someone fixes this one, will be the electric grid: we will need so much more energy it will be crazy for the grid.

FYI, from what I read, the electricity use for these datacenters is already higher than 1%; by 2030 it is actually expected to be closer to 8-21%, depending on whether you include the networks, consumer devices, UPSes and other IoT gear used to access these datacenters' tools/data. Either way, crazy numbers.
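
To make those percentages concrete, here is a rough conversion; the ~27,000 TWh/yr figure for worldwide electricity generation is a round estimate used for scale only, not a citation:

```python
# What the quoted shares of worldwide electricity mean in absolute terms.
# WORLD_ELECTRICITY_TWH is a round recent estimate, for scale only.

WORLD_ELECTRICITY_TWH = 27_000

for share in (0.01, 0.08, 0.21):
    print(f"{share:.0%} of world electricity ~= {share * WORLD_ELECTRICITY_TWH:,.0f} TWh/yr")
```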
 
PUE has leveled off at around 1.6, and the majority of the inefficient datacenters still average a PUE of 2.0-2.5 ...

Facebook and some other private companies have a PUE of 1.2; they're known in the industry as hyperscalers ...
I think that's the important point here. A large fraction of all servers today are installed in hyperscaler locations. In particular, a large fraction of all worldwide compute power is used by a small group of internet companies (those that provide services to individual users, for example Amazon (*), Facebook, Google, Netflix), and by the big cloud service companies where businesses of all sizes (from mom and pop stores to the world's largest) do their computing, dominated by Amazon, Google and Microsoft. That's the sloppily-named FAANG (plus their Chinese counterparts).

The hyperscaler data centers were running at a PUE of ~1.2 in about 2008. They have since gotten lower; both Google and Facebook report an average of 1.1 (not for small experimental systems, but for their fleet with many data centers). That means that cooling has become a relatively small energy overhead, and at this point reducing the energy usage of "the computer" itself is 10x more important than making the cooling better. How does one go from the historical PUE of 1.5 ... 2 down to 1.1? The use of evaporative cooling (turning water into steam) is one ingredient, but not the major one, nor is it always used. Other tools are reducing wasteful air movement, not mixing hot and cold air, running electronics at efficient temperatures (which is much warmer than commonly expected), moving cooling water to where it really helps, and an enormous amount of attention to detail. Data center cooling is a large and important field of engineering.
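
A worked example of why, at PUE 1.1, the IT load is now the bigger lever; the 10 MW figure is just an arbitrary example size:

```python
# Overhead power at various PUE values, for an arbitrary 10 MW IT load,
# then a comparison of two possible savings at PUE 1.1.

IT_LOAD_MW = 10.0

for pue in (2.0, 1.6, 1.2, 1.1):
    overhead = IT_LOAD_MW * (pue - 1.0)
    print(f"PUE {pue}: {overhead:.1f} MW of overhead on {IT_LOAD_MW:.0f} MW of IT")

# At PUE 1.1 the overhead is 1 MW. Halving it saves 0.5 MW; trimming the
# IT load itself by 10% saves 1 MW of IT power plus its share of overhead.
print(f"halve the cooling overhead: save {IT_LOAD_MW * 0.1 / 2:.1f} MW")
print(f"cut the IT load by 10%:     save {IT_LOAD_MW * 0.1 * 1.1:.1f} MW (incl. overhead)")
```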

The other approach is to reduce the energy usage of the computing itself. Examples include changing the instruction set of the computer from x86 to Arm or RISC-V, which is happening right now. Another example is moving AI workloads away from general-purpose CPUs (very inefficient) to GPUs (better) and then to dedicated AI chips (best). But as you said, AI training uses a lot of compute cycles. There are many other things being done to reduce power usage: Amazon turns disk drives off for hours at a time, other companies use drives that spin slowly. On the opposite side, IO-intensive workloads are being moved from generic enterprise nearline disks to fast multi-actuator disks (where the cost of running the spindle motor is amortized over multiple sets of heads that move independently; see the sketch below), and to SSDs (which are more energy efficient per IO, even if their total energy usage can be high). Similar things are happening in networking, by not overprovisioning networks, making hops shorter (fewer routers touched), and avoiding media conversion (fiber to the chip). Even the power delivery mechanism within the computer is being optimized, with large-scale adoption of low-voltage DC distribution, coupled with superconducting cables; and UPS batteries deployed in optimal places and with optimal capacities (just large enough to handle the starting delay of the diesel generator).
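
To illustrate the multi-actuator amortization argument, here is a sketch with made-up but plausible drive numbers; every constant below is an assumption for illustration, not a datasheet value:

```python
# The spindle motor draws power no matter how many independent actuators
# share the platters, so watts per IOPS fall as actuators are added.
# All numbers are ILLUSTRATIVE assumptions, not datasheet values.

SPINDLE_WATTS = 5.0        # assumed: power to keep the platters spinning
ACTUATOR_WATTS = 2.0       # assumed: extra power per independent head assembly
IOPS_PER_ACTUATOR = 150    # assumed: random IOPS one actuator can deliver

for actuators in (1, 2):
    watts = SPINDLE_WATTS + actuators * ACTUATOR_WATTS
    iops = actuators * IOPS_PER_ACTUATOR
    print(f"{actuators} actuator(s): {watts:.0f} W / {iops} IOPS = "
          f"{1000 * watts / iops:.1f} mW per IOPS")
```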

It is true that a lot of data center cooling is done by evaporating water. And this is where activists start screaming "these evil computer companies are using water that could be used for millions of people to drink". Those statements are vastly exaggerated, to the point of being partly nonsense. To begin with, some of the evaporated water is not treated, drinkable city water, but existing surface water. There is a reason a lot of data centers are built along the Mississippi and Columbia rivers, where large amounts of fresh water are running into the ocean anyway. And a lot of data centers do not use evaporative cooling at all, in particular in colder climates. There is a reason many data centers are way up north (Canada, Scandinavia).

(*) Amazon is in the list of internet companies because their public-facing "we sell everything" side has become one of the most used search engines when people search for products, and because it delivers a significant fraction of all advertising. In theory, Microsoft should also be in the list of internet companies, with Bing being the 2nd largest search engine and delivering a reasonable fraction of advertising.

Having said that: the PUE for hyperscalers is excellent, so much so that cooling computers has become a minor inefficiency when deployed like that. And as you said, this does not apply to all data centers, nor to all modes of computing. The gamer with his 750W tower with blinking lights, who has to crank up the AC in the summer: that is ridiculously inefficient. As is the typical small business that converted an old cleaning closet to a "server room" by sticking a cooling air duct in there, with a single rack holding their DSL gear, network and phone switch, and a few servers, kept at 12 °C ≈ 54 °F because someone told them that computers like it cool: that's also insanely inefficient.
 