If you were wondering what the data center has to do with the central station business, look around. Central stations have become data centers in their own right, and their capacities will expand as monitored services and remote technologies continue on the upswing. Cooling and air distribution have become crucial for central stations.
When legacy cooling systems were designed, air delivery methods relied entirely on a “chaos air distribution strategy.” Massive amounts of cool air were supplied in a jet stream to stir up stagnant or warm air in the data center. This supply of air cooled the IT equipment and pushed the warm air mass toward the A/C return, away from the IT inlets. The hope was that the newly supplied, jet-streamed air would reach all of the data center’s IT equipment and that, relying on the same chaos strategy, the air conditioning system would extract the warm air the equipment generated. The strategy didn’t work because increasing data loads kept driving up the amount of warm air present.
The vendor community responded to this challenge. Many concluded that best-in-class data centers should employ a hot aisle/cold aisle arrangement of the IT racks. A short time later, as IT loads continued to grow, the problem became how to make the recently adopted hot aisle/cold aisle system perform better.
Hot aisle versus cold aisle
This “tale” of data center cooling was built on false assumptions and poor problem analysis. The hot aisle/cold aisle arrangement sounded like a good idea, but it proved less than ideal for keeping pace with the cooling demands of high-density equipment, and it imposed unwanted constraints and reduced flexibility for the data center manager.
A wider group on the supply side of the data center industry was quick to join the hot aisle/cold aisle movement. (After all, who doesn’t want to be part of the next great thing?) That movement produced a new class of cooling products that depended on the hot aisle/cold aisle arrangement and spawned a new industry segment called “supplemental cooling.”
Some of these products consumed additional floor space or made it impossible to run data cables from rack to rack, forcing the use of longer cables and additional rack-based entry/exit holes. Other supplemental cooling products created environmental health hazards.
The root of the problem is the chaos model of air distribution. What today’s data center needs is a simple, scalable and organized air flow system.
The typical embodiment of this heat containment design strategy is a rack exhaust system that connects to a return plenum, with cold air supplied either from a flooded room or from an under-floor supply system. Another benefit of this type of system is its independence from the physical arrangement of the enclosures. Equipment can be organized in any row configuration, and high-density loads can be spread across the data center instead of confined to a dedicated location.
Remember, the legacy data center’s row arrangement, with the front of each rack facing the back of the next, was how the problem started. Now the center can benefit from a heat containment strategy.
In a well-designed heat containment system, with an open cold air supply and contained exhaust airflow, both tiles in front of every rack can be dedicated to supplying cold air to that one rack, rather than splitting it between front-facing racks. This design allows the legacy floor plan—the method considered inefficient—to become the most efficient method for meeting the cold air demands of today’s data center.
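The tile arithmetic behind that claim can be illustrated with a rough back-of-envelope comparison. All figures below (airflow per tile, tiles per rack) are hypothetical assumptions for illustration, not vendor specifications:

```python
# Rough comparison of cold-air supply per rack under two floor layouts.
# All numbers are hypothetical assumptions for illustration only.

TILE_CFM = 500       # assumed airflow per perforated tile, cubic feet/minute
TILES_PER_RACK = 2   # two floor tiles sit in front of each rack

# Hot aisle/cold aisle: racks face each other across the cold aisle,
# so each tile's output is split between two front-facing racks.
shared_cfm_per_rack = TILE_CFM * TILES_PER_RACK / 2

# Heat containment with contained exhaust: rows can all face the same
# direction, so both tiles serve a single rack.
dedicated_cfm_per_rack = TILE_CFM * TILES_PER_RACK

print(shared_cfm_per_rack)     # 500.0
print(dedicated_cfm_per_rack)  # 1000
```

Under these assumptions, the legacy same-direction floor plan doubles the cold air available to each rack, which is the point the paragraph above is making.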
A key benefit of heat containment, which includes a rear-plenum implementation and the aggregation of all the heat into a single location, is that it allows the data center to take best advantage of an integrated air-side economizer. An air-side economizer simply introduces cool outside air into the data center, reducing energy use accordingly.
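The energy reduction from an air-side economizer scales with the number of hours per year that outside air is cool enough to use directly. A minimal sketch of that relationship, using entirely hypothetical input values (chiller draw and economizer hours are assumptions, not measurements):

```python
# Hypothetical illustration of air-side economizer savings.
# CHILLER_KW and ECONOMIZER_HOURS are assumed example values.

CHILLER_KW = 120.0        # assumed chiller draw during mechanical cooling
HOURS_PER_YEAR = 8760
ECONOMIZER_HOURS = 4000   # assumed hours/year outside air is cool enough

mechanical_hours = HOURS_PER_YEAR - ECONOMIZER_HOURS
baseline_kwh = CHILLER_KW * HOURS_PER_YEAR
with_economizer_kwh = CHILLER_KW * mechanical_hours

savings_kwh = baseline_kwh - with_economizer_kwh
savings_pct = 100 * savings_kwh / baseline_kwh

print(round(savings_kwh))     # 480000
print(round(savings_pct, 1))  # 45.7
```

In this simplified model the percentage saved equals the fraction of the year the economizer runs; a real analysis would also account for fan power, humidity limits, and partial-economizer hours.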