[LINK] Greening ICT
Carl Makin
carl at stagecraft.cx
Wed Feb 11 16:06:31 AEDT 2009
On 11/02/2009, at 9:24 AM, Stilgherrian wrote:
> On 11/02/2009, at 9:15 AM, Richard Chirgwin wrote:
>> I'm not so sure it's "very little". Each rack needs front and
>> (preferably) rear access, and that access has to be usable. So for a
>>
> Or, each rack can be mounted on a track so that you slide open the
> 1.5m gap between the rack you want to work on and its friend like,
> showing my age, a Compactus.
Front and rear access is important, but cooling capacity actually
becomes the limiting factor in most data centres. Most old-style data
centres, with air-conditioning units against the walls, cold air
delivered via an underfloor plenum and hot air extracted from the roof
space, typically max out at around 5 kW average per rack.
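For a rough sense of why that limit exists, here's a quick back-of-envelope
sketch in Python (my own numbers, not anyone's published figures): the airflow
you have to push through the floor tiles to carry away a given rack load,
assuming standard air properties and a 10 C supply-to-return temperature rise.

    # Rough back-of-envelope: airflow needed to remove a given rack heat load.
    # Assumptions (illustrative only): air density ~1.2 kg/m^3, specific heat
    # ~1005 J/(kg.K), supply-to-return temperature rise of 10 C.

    AIR_DENSITY = 1.2           # kg/m^3
    AIR_SPECIFIC_HEAT = 1005.0  # J/(kg.K)

    def airflow_m3_per_s(heat_load_watts, delta_t_kelvin=10.0):
        """Volumetric airflow required to carry away heat_load_watts."""
        return heat_load_watts / (AIR_DENSITY * AIR_SPECIFIC_HEAT * delta_t_kelvin)

    for kw in (5, 7.5, 12, 25):
        m3s = airflow_m3_per_s(kw * 1000)
        cfm = m3s * 2118.88  # convert m^3/s to cubic feet per minute
        print(f"{kw:>5} kW rack needs ~{m3s:.2f} m^3/s (~{cfm:.0f} CFM)")

At 5 kW you're already moving roughly 0.4 m^3/s (about 900 CFM) per rack,
which is about as much as an underfloor plenum and perforated tiles can
realistically deliver to a single rack position.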
One local commercial data centre (Canberra) uses the APC hot-aisle
containment system (www.apc.com), where the cooling units sit between
the racks, drawing hot air from an enclosed space at the rear. That
facility supports 7.5 kW average per rack, though it can be upgraded
to 12 kW average per rack (at a suitably upgraded price, of course).
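The density figures matter mostly for floor space. A rough comparison (the
150 kW total load is purely hypothetical, for illustration):

    import math

    # Racks needed to house a fixed IT load at different per-rack densities.
    TOTAL_IT_LOAD_KW = 150.0  # hypothetical total load, illustration only

    for kw_per_rack in (5.0, 7.5, 12.0):
        racks = math.ceil(TOTAL_IT_LOAD_KW / kw_per_rack)
        print(f"{kw_per_rack:>5} kW/rack -> {racks} racks for {TOTAL_IT_LOAD_KW:.0f} kW")

Same load, half the racks (and half the raised floor) once you can run
12 kW per rack instead of 5 kW.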
Closer to Stil's idea is the "data centre in a 20 ft shipping
container", such as Sun's Project Blackbox
(http://www.sun.com/products/sunmd/s20/index.jsp), which fits 8 racks
at either 12 kW or 25 kW average per rack. The racks run down both
sides facing sideways, with a cooling unit between each rack drawing
the hot air from the rack in front and blowing cold air into the rack
behind. The racks slide into the centre aisle for servicing. These
seem like a great idea until you realise you need another container
full of support gear, such as a UPS and water chillers (plus the
cooling tower on the roof), to support it.
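To put a number on that support overhead, here's a rough estimate (my own
assumptions, not Sun's figures) of what a fully loaded container draws once
the UPS, chillers and cooling tower are included, using an assumed PUE:

    # Rough total-power estimate for a fully loaded container.
    # ASSUMED_PUE is illustrative only; the real figure depends on the chillers.
    RACKS = 8
    ASSUMED_PUE = 1.5

    for kw_per_rack in (12, 25):
        it_load_kw = RACKS * kw_per_rack
        total_kw = it_load_kw * ASSUMED_PUE
        print(f"{kw_per_rack} kW/rack: {it_load_kw} kW IT load, "
              f"~{total_kw:.0f} kW total at PUE {ASSUMED_PUE}")

At the 25 kW density that's a 200 kW IT load and something in the order of
300 kW at the wall, which is why the second container of support gear isn't
optional.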
Where I work we've just been looking at all of this in the context of
implementing a disaster recovery facility.
Carl.