[LINK] Data Centers Waste Vast Amounts of Energy, Belying Industry Image - NYTimes.com

grove at zeta.org.au grove at zeta.org.au
Mon Sep 24 10:59:21 AEST 2012


On Mon, 24 Sep 2012, Jim Birch wrote:

> This sounds a bit sensational to me - 30 nuclear power plants, blah, blah,
> blah.
>
> If we are all going to use computers flat out there's going to be an
> impact, just like there is for driving cars, heating houses, eating and
> drinking water.  We could ask whether we are likely to stop using the
> Internet any time soon.  A more useful question is whether the net impact
> of large data centres compares to everyone having their own servers and
> cold room.  My guess is that the data centres would be way more efficient
> and I didn't see anything in the article that contradicts that.  I would
> expect that data centres would have big cost drivers to minimise waste heat
> production and you can bet that there are a lot of people working on this.

This goes against what I have been hearing in the sysadmin world
regarding low-power CPUs, DC power in the datacentre and, of course,
virtualisation.

All of the above are spun as ways to "slash datacentre power costs", "reduce
cooling requirements" and "maximise processes per rack space".

On the one hand, you have virtualisation, which is supposed to concentrate
all those individual fiddly servers into just a few rack spaces, with
just a handful of physical systems to attend to.  But that doesn't deal
with things like switches and other infrastructure that don't virtualise
well.  There are of course now "blade" switches that concentrate more
ports into a smaller space, but these still require large redundant
power supplies.
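
Back of the envelope, the consolidation win looks something like this
(the figures below are my own illustrative assumptions, not measurements):

    # Illustrative only: assumed draw and consolidation figures.
    standalone_servers = 20        # lightly loaded physical boxes
    watts_per_standalone = 250     # assumed average draw per box
    vm_hosts = 3                   # consolidated virtualisation hosts
    watts_per_host = 600           # assumed draw per well-loaded host

    before = standalone_servers * watts_per_standalone   # 5000 W
    after = vm_hosts * watts_per_host                     # 1800 W
    print(f"before: {before} W, after: {after} W, saved: {before - after} W")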

DC power is debated around the place.  The idea is to replace the huge
240V AC power supplies in traditional server gear with 48V or 12-15V DC
distribution, cutting out conversion stages that waste power and
generate heat.  Low-power CPUs are also mentioned as a natural fit for
this.
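
Roughly where the saving is supposed to come from (again, assumed
efficiency figures, just to show the shape of it):

    # Illustrative only: assumed conversion efficiencies at each stage.
    load_w = 300                       # watts the server itself consumes

    # Conventional chain: UPS double-conversion, then a per-server AC PSU.
    ac_chain_w = load_w / (0.90 * 0.85)    # ~392 W drawn from the mains
    # DC distribution: one big rectifier feeding 48V straight to the gear.
    dc_chain_w = load_w / 0.92             # ~326 W drawn from the mains

    print(f"AC chain: {ac_chain_w:.0f} W, DC bus: {dc_chain_w:.0f} W")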

But we still seem to be stuffing the racks with huge boxen.   Those
redundant or 3-way power supplies are great too.  They give you high
resilience against an interruption to power caused by a fault.  But they are
redundant - usually they run energised alongside the in-use power supply,
all the time, so their losses add to the waste.  Perhaps these power supplies
should have a standby mode (one that you don't have to visit the DC physically
to enable)....   Also, the UPS systems generate their own heat and power waste.
These are also on all the time, but generally only filtering the line
and waiting for an interruption.
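
That standing overhead adds up across a room.  A rough tally with
assumed figures (illustrative only):

    # Illustrative only: assumed idle losses for spare PSUs and UPS units.
    servers = 200
    idle_w_per_spare_psu = 20      # a spare supply sitting energised
    ups_units = 4
    ups_idle_w = 500               # a UPS just filtering the line

    standing_w = servers * idle_w_per_spare_psu + ups_units * ups_idle_w
    kwh_per_year = standing_w * 24 * 365 / 1000
    print(f"{standing_w} W standing loss, ~{kwh_per_year:.0f} kWh/year")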

I have seen and heard of some crazy and also innovative cooling schemes.
Most standard DC racks are designed to manage airflow as well.  But none
of it is enough.  Personally, I think a lot of the waste comes from
poorly written or poorly managed code and systems that sit underutilised;
the waste heat and energy they put out is a side effect of putting too
much system power in the hands of application developers, who rely on a
big beefy box to mask the deficiencies of whatever is actually running
on it.   An example of this is a $30,000 computer with dual 400W power
supplies running a constant high load because of the poorly built
PHP/MySQL appserver on it, used to serve 15 or so clients.  Multiply
that by 90.   Some of these should be virtualised and are, but others
are slow to take up the process....   Good datacentre management of
energy starts with the server's application.... :)
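
To put a rough number on that example (assumed average draw and an
assumed facility overhead, purely illustrative):

    # Illustrative only: the 90-box example with an assumed average draw.
    boxes = 90
    avg_draw_w = 400               # assumed sustained draw per loaded box
    pue = 1.8                      # assumed facility overhead (cooling etc.)

    it_load_kw = boxes * avg_draw_w / 1000     # 36 kW of IT load
    facility_kw = it_load_kw * pue             # ~65 kW at the meter
    kwh_per_year = facility_kw * 24 * 365
    print(f"{it_load_kw:.0f} kW IT load, ~{kwh_per_year:,.0f} kWh/year")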


rachel

-- 
Rachel Polanskis                 Kingswood, Greater Western Sydney, Australia
grove at zeta.org.au                http://www.zeta.org.au/~grove/grove.html
    "The perversity of the Universe tends towards a maximum." - Finagle's Law


