[ale] Saving electricity with Linux
Jim Kinney
jim.kinney at gmail.com
Sat Jun 1 21:20:34 EDT 2024
The rarest resource I've seen in large HPC clusters is a willingness to
totally refactor code so it is optimized for the new cluster.
From a sysadmin viewpoint, getting enough data to find all the
performance problems of old code on new hardware requires logging at
intervals short enough to bog performance down to unacceptable levels,
plus a small HPC cluster of its own just to analyze the data in real
time.
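
To make that tradeoff concrete, here is a minimal sampling sketch in
Python; the 0.1-second interval and the /proc/stat source are my own
illustrative choices, not anything from a particular cluster. Halving
the interval doubles both the visibility and the volume of data to
analyze:

    # Sample the aggregate CPU counters at a short interval. Finer
    # sampling gives better visibility into old-code hot spots, but
    # generates proportionally more data to chew through.
    import time

    def read_cpu_ticks():
        """Return the aggregate CPU tick counters from /proc/stat."""
        with open("/proc/stat") as f:
            fields = f.readline().split()  # "cpu user nice system idle ..."
        return [int(v) for v in fields[1:]]

    INTERVAL = 0.1  # seconds; an illustrative choice

    prev = read_cpu_ticks()
    for _ in range(50):  # roughly five seconds of samples
        time.sleep(INTERVAL)
        cur = read_cpu_ticks()
        deltas = [c - p for c, p in zip(cur, prev)]
        busy = sum(deltas) - deltas[3]  # field 3 is the idle counter
        print(f"cpu busy: {100.0 * busy / max(sum(deltas), 1):5.1f}%")
        prev = cur
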
On Fri, May 31, 2024, 5:34 PM jon.maddog.hall at gmail.com <
jonhall80 at comcast.net> wrote:
> From my perspective the reasons why Linux was selected for data centers
> had little to do with efficiency of electric power usage and more to do
> with:
>
> o licensing cost
> o license management
> o management of systems
> o Mean time to fix security problems
> o Availability of needed application functionality
>
> and other factors.
>
> Of course saving money on electricity is important, as is its
> counterpart, saving money on the electricity used for cooling, but I
> think this is mostly driven by the actual application load.
>
> In large farms that simply store data, the electrical power consumed
> is mostly a function of the hardware used (hard disk vs. SSD). But if
> the application is Bitcoin mining or AI, which lean heavily on GPUs,
> then the amount of electricity (and cooling) needed will be much
> higher.
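>
> To make that concrete, here is a rough back-of-the-envelope sketch in
> Python; every wattage figure below is an illustrative round number I
> am assuming for the comparison, not a measured spec:
>
>     # All wattages are illustrative guesses, not measurements.
>     HDD_WATTS = 8    # a typical 3.5" hard disk under load
>     SSD_WATTS = 4    # a typical SATA SSD under load
>     GPU_WATTS = 400  # a typical datacenter GPU at full tilt
>
>     print(f"1000 HDDs: {1000 * HDD_WATTS / 1000:.0f} kW")
>     print(f"1000 SSDs: {1000 * SSD_WATTS / 1000:.0f} kW")
>     # One rack of 20 GPUs rivals the whole thousand-drive farm:
>     print(f"  20 GPUs: {20 * GPU_WATTS / 1000:.0f} kW")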
>
> "Back in the day" when supercomputers like Cray, Control Data and others
> ruled the market their use of electricity was legendary. A Cray-1 was
> rumored to produce enough hot water to heat 10 homes in Toronto, Canada.
> The cost of a supercomputer was so high that very few agencies could afford
> one.
>
> Then the original "Beowulf" system was designed at NASA. Built from
> commodity boxes and relying on decomposition of the problem, it
> reduced the cost of the hardware to about 1/40th of the price of a
> comparably powered supercomputer.
>
> For a while, if you looked at the list of the 500 fastest computers
> in the world (i.e. the greatest number of nodes, the greatest number
> of CPUs, the greatest number of cores), there were one or two that
> ran Windows as the operating system. But as larger and faster
> computers were commissioned, those two machines slid further down the
> list until they disappeared. Did that mean that Linux was "more
> efficient", or did it mean that more and more people knew how to
> build HPC systems using Linux and it was simply too expensive to find
> people who knew how to use Windows in that environment?
>
> Of course HPC computers typically are oriented to "compute" and modern
> cloud systems are more oriented to a mixture of data storage and compute.
>
> In the early days of Linux I met with the head systems administrator of
> Deutsche Bank in Germany. He had 3000 Linux servers at the time. I asked
> him why he used Linux and not Windows NT. He told me that he had 4
> systems administrators to manage his 3000 servers, and if he were to use
> Windows NT he would need 2999 systems administrators.
>
> Of course we now have great systems administration applications, both
> closed source and Open Source, but per-system license pricing is
> still an issue.
>
> In the end, if server farms really do use 9% of the electricity, we
> should make sure that those servers and applications are as efficient
> as possible. Unfortunately, when I try to discuss this with people I
> often hear "memory is cheap", "get a faster CPU", and "networking is
> fast", and most do not spend time on optimization after the
> application works.
>
> I noticed that someone mentioned desktop computers. These days they
> often use 350-450 watts of power, with some gaming computers using
> 1000-2000 watts (and water cooling!). If you leave these on all the
> time, particularly in hot climates, you not only use power to run the
> systems but also power to cool them.
>
> I am working on a project that uses thin clients (drawing less than
> 20 watts) as display modules, with the heavy computing done on a
> local server shared by many people. These servers can be tuned to
> power off units that are not being used, with redundancy providing
> high availability.
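>
> As a minimal sketch of the "power off idle units" idea (the load
> threshold and the systemctl call are assumptions for illustration,
> not the mechanism the project has settled on):
>
>     import os
>     import subprocess
>
>     IDLE_LOAD = 0.05  # 1-minute load average below this counts as idle
>
>     def logged_in_users():
>         """Count active login sessions via the coreutils `who` command."""
>         out = subprocess.run(["who"], capture_output=True, text=True).stdout
>         return len(out.splitlines())
>
>     load1, _, _ = os.getloadavg()
>     if load1 < IDLE_LOAD and logged_in_users() == 0:
>         # A wake-on-LAN packet (not shown) brings the unit back later.
>         subprocess.run(["systemctl", "suspend"], check=True)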
>
> This has been a lot of writing and typing to try to push the point
> that datacenter managers will try to use the best, most efficient
> systems, whether measured in electricity, licenses, management or
> whatever.
>
> Peace out.
>
> maddog
>
> On 05/31/2024 3:55 PM EDT Jim Kinney via Ale <ale at ale.org> wrote:
>
>
> So far the giant cloud systems and supercomputers run Linux. And
> those monsters suck down power like it's free. In fact, that's why
> datacenters get built where they do - cheaper power.
>
> In terms of efficiency, think FLOPS/watt: the compute style matters.
> Is it raw number crunching or I/O heavy? Those GPUs are very amperage
> hungry, but in FLOPS/watt they are more efficient than a
> general-purpose CPU. AMD still outpaces Intel in the CPU FLOPS/watt
> race.
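>
> A quick worked example of that metric (the peak-FLOPS and wattage
> numbers are illustrative round figures, not vendor specs):
>
>     def gflops_per_watt(peak_tflops, watts):
>         return peak_tflops * 1000 / watts
>
>     # A datacenter GPU: ~60 TFLOPS peak at ~400 W
>     print(f"GPU: {gflops_per_watt(60, 400):.0f} GFLOPS/W")  # 150
>     # A many-core server CPU: ~3 TFLOPS peak at ~300 W
>     print(f"CPU: {gflops_per_watt(3, 300):.0f} GFLOPS/W")   # 10
>     # Sustained rates on real workloads are lower for both, but the
>     # order-of-magnitude gap is why GPU boxes win on FLOPS/watt.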
>
> I don't see Microsoft servers being the beast behind the datacenter
> power hogs. I do see almost all aspects of datacenter power, cooling,
> and access management being run by Microsoft servers. But that's 4-5
> servers per datacenter.
>
> Linux won. I got that car tag a long time ago.
>
> LINUX1
>
>
>
> On Fri, May 31, 2024, 1:07 PM Solomon Peachy via Ale <ale at ale.org> wrote:
>
> On Fri, May 31, 2024 at 12:43:39PM -0400, Bob Toxen via Ale wrote:
> > Datacenters may use as much as 9% of total U.S. electricity in six
> > years. Home computers use some too.
> >
> > Since Linux uses roughly half the electricity of Windows, switching
> > to Linux in the data centers will save a lot of electricity. Get the
> > treehuggers to champion this.
>
> The overwhelming majority of the systems in datacenters are already
> running Linux. So what you're saying is that Linux is, by itself,
> responsible for nearly 9% of total US electrical consumption. [1]
>
> ...And I should point out that if you put Linux vs Windows on modern
> laptop hardware, the odds are high that Windows actually uses less
> power, both when on-but-otherwise-idle and in a suspended state.
>
> (This is even more true when comparing Linux vs MacOS on Apple hardware)
>
> [1] Excluding cooling, which can be up to half of that energy usage.
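>
> One way to test such claims on Intel laptop hardware is the kernel's
> RAPL powercap interface; a minimal sketch (the path exists only on
> supported Intel CPUs, and recent kernels may require root to read it):
>
>     import time
>
>     # Cumulative package energy in microjoules:
>     RAPL = "/sys/class/powercap/intel-rapl:0/energy_uj"
>
>     def read_uj():
>         with open(RAPL) as f:
>             return int(f.read())
>
>     start = read_uj()
>     time.sleep(10)             # ten seconds of (ideally idle) time
>     used = read_uj() - start   # NB: wraps at max_energy_range_uj
>     print(f"average package power: {used / 10 / 1e6:.2f} W")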
>
> - Solomon
> --
> Solomon Peachy pizza at shaftnet dot org
> (email&xmpp)
> @pizza:shaftnet dot org (matrix)
> Dowling Park, FL speachy (libera.chat)
> _______________________________________________
> Ale mailing list
> Ale at ale.org
> https://mail.ale.org/mailman/listinfo/ale
> See JOBS, ANNOUNCE and SCHOOLS lists at
> http://mail.ale.org/mailman/listinfo
>