Data centers are under attack. There are concerns about the energy they use and the cost of that energy, both in dollars and in its effect on the environment. Data center operators are expected to use resources efficiently and are being held accountable for how those resources are generated.

For a new data center, issues such as the source and cost of electricity, the availability of renewable energy, and environmental conditions that allow increased use of water-side or air-side economization (free cooling) can all be evaluated during site selection. Building and data center design can be optimized with adequate paths for air flow and a building orientation that minimizes heat gain and maximizes natural air flow. Before implementing changes to an existing data center, you must first measure its energy use.

Economizer systems are designed to minimize energy use while maximizing energy transfer. They typically fall into two major classes: air and water. Water economizer systems use heat exchangers, with water cooled by the outdoor air, to transfer heat from indoors to outdoors without running refrigeration equipment. Air economizers condition the data center with outside air whose temperature and humidity characteristics (enthalpy) are appropriate for the data center operating parameters. Data centers that allow wide humidity ranges and high equipment entering air temperatures can maximize the number of hours economizer systems can be used for cooling. Air contamination (e.g., smoke, pollution) must be monitored and outdoor air minimized if air quality is not acceptable.
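The decision an economizer controller makes can be sketched roughly as follows. This is an illustrative simplification, not a real control sequence: the function name and thresholds are assumptions, with the defaults taken from the ASHRAE recommended envelope cited later in this article, and real controllers also evaluate enthalpy and relative humidity.

```python
def can_use_air_economizer(outdoor_temp_c: float,
                           outdoor_dew_point_c: float,
                           air_quality_ok: bool = True,
                           max_intake_c: float = 27.0,
                           min_dew_point_c: float = 5.5,
                           max_dew_point_c: float = 15.0) -> bool:
    """Rough check: can outside air alone condition the data center?

    Defaults reflect the ASHRAE recommended envelope (up to 27 C intake,
    5.5-15 C dew point). Real controllers also weigh enthalpy, relative
    humidity and partial (mixed-air) economization.
    """
    if not air_quality_ok:                 # smoke, pollution, etc.
        return False
    if outdoor_temp_c > max_intake_c:      # too warm to cool the room
        return False
    return min_dew_point_c <= outdoor_dew_point_c <= max_dew_point_c

print(can_use_air_economizer(15.0, 10.0))   # True: mild, dry-enough day
print(can_use_air_economizer(32.0, 10.0))   # False: too warm outside
```

The air-quality lockout mirrors the article's point that outdoor air must be minimized when contamination makes it unacceptable.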

Alternative renewable electrical generating systems include hydroelectric, fuel cell (using hydrogen generated from other alternative energy, methane gas from landfills or other renewable sources), wind and photovoltaic. These power sources can be obtained through the utility or generated on a site with the appropriate characteristics. It is also possible to buy renewable electricity that is generated remotely but allocated to your facility through the electrical transmission system. The farther away the electrical source, the greater the losses in the transmission system; losses can exceed 20 percent of the original generated electricity.

External bodies of water can be used to reject heat from the data center. Nearby lake or river water can cool the data center directly or through heat pumps, though environmental effects must be taken into consideration. Existing facilities may also be able to take advantage of the site-specific features described above where the opportunities exist.

Rating Your PUE

Power Usage Effectiveness, or PUE, is an energy metric developed by The Green Grid, a non-profit data center efficiency organization. It is the ratio of the total energy used by the data center to the energy used by the IT equipment. A PUE of 1.0 indicates that all energy used by the data center goes to the IT equipment, with none required for cooling, lighting or any other use.
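As a minimal sketch, the metric itself is a one-line ratio (the function name and the example figures are illustrative, not from The Green Grid's specification):

```python
def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power Usage Effectiveness: total facility energy / IT energy."""
    if it_equipment_kwh <= 0:
        raise ValueError("IT energy must be positive")
    return total_facility_kwh / it_equipment_kwh

# Hypothetical facility: 1,500 MWh/yr total, of which 1,000 MWh/yr is IT load.
print(pue(1_500_000, 1_000_000))  # 1.5
```

A result of exactly 1.0 corresponds to the ideal case described above, where no energy goes to cooling, lighting or other overhead.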

The Department of Energy has adopted the PUE as a measure of the efficiency of a data center. The lowest 25 percent of reported PUEs are eligible for an Energy Star rating as an efficient data center. Presently that value is approximately a PUE of 1.6. Data centers that measure their energy use typically have PUE values ranging from 1.3 to 2.5. It is possible to have lower PUE values but this typically requires specially designed facilities and IT equipment.

Since the PUE is a ratio, any increase in the IT power requirement with the total energy remaining constant will result in a lower PUE. By moving load to the IT side of the equation the PUE can be lowered but the energy use will actually be higher. Raising the computer room average temperature can be an example of this depending on the specific IT equipment. ASHRAE’s recommended temperature limit is 80.6 degrees Fahrenheit at the intake of the IT equipment. However, as the temperature exceeds approximately 76 degrees the internal fans of the IT equipment speed up to provide internal cooling. This increase in room temperature results in energy savings on the cooling side of the equation, but the server fans increase power usage on the IT side of the equation, resulting in a lower PUE but potentially a higher energy use.
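The arithmetic behind that trade-off is easy to demonstrate. The kW figures below are hypothetical, chosen only to illustrate the effect described above:

```python
def pue(total_kw: float, it_kw: float) -> float:
    return total_kw / it_kw

# Baseline: 1,000 kW IT load plus 500 kW of cooling and other facility load.
baseline = pue(1000 + 500, 1000)   # 1.50

# Raise the room temperature: cooling drops by 40 kW, but server fans
# speed up and add 60 kW on the IT side of the equation.
warmer = pue(1060 + 460, 1060)     # ~1.43

print(f"baseline PUE: {baseline:.2f}, total: 1500 kW")
print(f"warmer PUE:   {warmer:.2f}, total: {1060 + 460} kW")
```

The PUE improves from 1.50 to about 1.43, yet total draw rises from 1,500 kW to 1,520 kW, which is exactly why PUE alone cannot prove an energy saving.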

PUE is not intended to be a way to compare efficiency of different data centers. It will vary widely based on the reliability and redundancy level of the facility, the ambient environment and the IT equipment. It can be used as one of the benchmarks to monitor the improvement of a facility as changes are made.

High-Efficiency Design Features

There are design features that new and existing data centers can incorporate to improve the energy use in a data center, including:

  1. High efficiency uninterruptible power supply systems. UPS systems are available with various levels of efficiency and energy efficient modes. UPS system efficiency varies by loading, so select systems with the highest efficiencies at the typical loading level anticipated. Improving the efficiency of a 1,000 kW UPS system by 5 percentage points will save 438,000 kWh per year, a savings of $43,800 annually at an electrical cost of $0.10/kWh, without counting the additional cooling savings from the reduced heat generated by the losses of the UPS system.
    • Economical mode – A double conversion type UPS system is operated in a line interactive mode. Double conversion mode converts AC power to DC and then regenerates the AC output to provide isolated power to the load. Line interactive mode allows the AC input to pass through to the load and provides some power conditioning. This mode operates at a much higher efficiency (up to 99 percent). If the incoming AC power moves outside acceptable limits, the system reverts to double conversion mode to protect the load.
    • Variable module mode – A multimodule UPS system typically operates at a low percentage of its rated capacity. When the actual load is less than the number of UPS modules available the system puts excess modules into an idle mode. In the idle mode the module uses less power and the operating modules perform more efficiently since they are now operating at a higher load.
  2. High efficiency transformers. The efficiency rating of transformers complying with NEMA TP1 or DOE level CSL-3 is based on a partially loaded unit to reflect typical operating conditions. Transformers including PDUs can have various efficiencies affected by load. Transformer no-load loss is a measure of the energy used to create the magnetic field in the transformer. No-load loss is always there, independent of the load on the transformer. Select high efficiency transformers at the anticipated typical load level with the lowest no-load loss to maximize efficiencies.
  3. Maximizing air flow management. Separating equipment intake air (cold air) from equipment exhaust air (hot air) eliminates the need to overcool the data center to compensate for hot spots. Increasing return air temperatures improves the efficiency of heat transfer through cooling coils and allows higher outdoor air temperatures to be used for cooling the data center, thereby increasing the number of hours of economizer operation available. Higher return air temperatures also increase the cooling capacity of the equipment, allowing fewer units to cool the same space, or allowing higher temperature cooling water to be used, increasing the efficiency of the chilled water system. Some methods of air flow management are the following:
    • Hot aisle and cold aisle separation. Less cold air leaking into the hot aisle will increase the average temperature of the hot aisle, thereby increasing coil efficiency and the number of hours of outdoor air economizer use. Keeping hot air out of the cold aisle will allow a higher cold aisle temperature because there will be fewer hot spots caused by hot air leaking into the cold aisle.
    • Blanking plates in cabinets. Addition of blanking plates to cabinets to fill open areas in the cabinet or on the side of the cabinet will reduce the flow of cold aisle air to the hot aisle.
    • Return air plenums. Using the space above the ceiling to concentrate and direct the hot air back to the cooling system will improve cooling system efficiencies.
    • Cabinets with hot air ducting. Cabinets designed with direct exhaust to the return air plenum will improve air flow management and reduce the hot air in the computer room by directing it into the return air plenum and back to the cooling equipment.
    • Hot or cold aisle containment. Containment restricts the possibility of unwanted air flow between the hot and cold aisles, assuming the open spaces in the cabinets have been blanked off. Hot aisle containment allows the hot aisle to be very warm (could be 100 degrees Fahrenheit or higher) and the remainder of the computer room to be at a temperature approximately equal to the desired equipment air intake temperature (approximately 80F per ASHRAE). Cold aisle containment controls the cold air distribution to the cabinets but allows the remainder of the room to be at the very warm temperatures of a hot aisle.
    • Match air flow to cabinet load. For raised floor systems each additional perforated tile or grate added to the system reduces the static pressure under the floor and therefore reduces the amount of air that will pass through that tile. Also, there is a finite amount of air produced by the cooling units that will be divided among the number of perforated tiles or grates installed. Locate tiles only where required by the load. A typical 25 percent free area perforated 2x2 floor tile can provide enough cooling for four to six kW depending on the floor static pressure and the heat rise across the equipment in the cabinet.
    • Computational fluid dynamics. CFD will indicate hot spots and can be used to predict the effect of equipment or operational changes. The addition of IT equipment can be guided by determining the most advantageous location for providing adequate cooling, or where changes will be required when the location of the IT equipment is predetermined.
  4. Localized cooling. Water or refrigerant based cooling systems can be located close to the racks providing cooling to the local hot spots. This results in less fan power being required since the air flow is local, and not across a room. The use of water or refrigerant as a cooling medium is much more efficient than using air. The use of a passive rear door heat exchanger provides the benefit of localized cooling without the need for additional fans. The unit uses the fans in the IT equipment to push air through the heat exchanger. When appropriately coordinated for load, the air exiting the IT equipment rear door heat exchanger is the same temperature as the air entering the IT equipment.
  5. Putting redundant equipment to work. Data centers typically have excess capacity both in heat rejection equipment and computer room air handling (CRAH) units. This redundant equipment can be used to increase energy efficiency or reduce operating costs.
    • Chillers can be used to produce excess cooling during off hours when electrical rates are reduced. This cooling can be stored as chilled water or ice and used during the day when electrical rates and demand are high.
    • Redundant chilled water CRAH units, which are typically left operating for on line redundancy, can be used to provide adequate cooling with higher chilled water temperatures. Increasing the chilled water temperature increases the efficiency of the chillers. It also reduces the efficiency of cooling coils and their ability to dehumidify air. The excess CRAH units make up for the reduced cooling coil capacity. By varying the temperature of the chilled water system based on data center internal conditions the efficiency of the cooling systems can be increased.
  6. ASHRAE recommended temperature and humidity levels. Energy use will be reduced if the data center is allowed to operate across the full ASHRAE recommended envelope: equipment intake air temperatures of 18 to 27 degrees Celsius (64.4 to 80.6 degrees Fahrenheit), with humidity between a 5.5C (41.9F) dew point minimum and a maximum of 60 percent relative humidity and a 15C (59F) dew point.
  7. Higher service voltage for IT equipment. For a given amount of power, increasing the voltage reduces the current. Power supplies typically accept a range of input voltages, from 100 to 250 volts. Equipment operating at 120V can be converted to a higher voltage by changing the power cord and the source circuit. Energy savings result from lower current flowing through the wires and from the increased efficiency of a power supply operating at a higher voltage.
  8. Virtualization, Consolidation, Decommissioning. Improve the efficiency of the IT equipment by combining services and eliminating redundant or partially used servers. The power supplies on IT equipment are generally oversized by the vendor and then configured into a redundant design of N+1 or 2N. This results in power supplies operating well below their capacity. The power savings come from the fact that the losses in power supplies are mostly realized when the power supply is energized regardless of actual loading or use. By increasing the load factor on the equipment and therefore the power supplies and decommissioning the remaining equipment power supplies a substantial reduction in energy losses can be obtained.
  9. Metering and monitoring. Power use metering prevents unintended overloading of circuits and cabinets. Monitoring of IT equipment intake temperatures allows the supplying cooling equipment set points to be raised to increase energy efficiency in the data center. Metering and monitoring can be provided by adding capabilities to the existing power system equipment. Intelligent power strips can be installed to monitor power down to the individual outlet and also to monitor local temperature and humidity. This information can be brought back to the building automation system (BAS/BMS) or can be part of a KVM network or other IT monitoring system.
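The UPS savings figure in item 1 above can be verified with quick arithmetic. The system size and electric rate come from the article; the helper function is an illustration, using the simplified model that a percentage-point efficiency gain avoids load-times-gain in continuous losses:

```python
HOURS_PER_YEAR = 8760

def ups_savings(load_kw: float, efficiency_gain: float,
                rate_per_kwh: float) -> tuple[float, float]:
    """Annual kWh and dollar savings from a UPS efficiency improvement.

    Simplified model: a gain of `efficiency_gain` (as a fraction, e.g.
    0.05 for 5 percentage points) on `load_kw` of load avoids roughly
    load_kw * efficiency_gain kW of continuous losses.
    """
    kwh_saved = load_kw * efficiency_gain * HOURS_PER_YEAR
    return kwh_saved, kwh_saved * rate_per_kwh

# 1,000 kW system, 5-percentage-point gain, $0.10/kWh (article's figures)
kwh, dollars = ups_savings(1000, 0.05, 0.10)
print(f"{kwh:,.0f} kWh/yr, ${dollars:,.0f}/yr")  # 438,000 kWh/yr, $43,800/yr
```

As the article notes, this understates the true benefit, since every kWh of loss avoided also reduces the cooling load.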

Although data centers are energy intensive, data center designers and facility managers can significantly reduce the draw on traditional energy sources by employing measures that are easy to put in place. Many of those described above provide concurrent improvements in efficiency. Both new and existing data centers can improve their energy use and efficiency, thereby improving the facility PUE, lowering the facility carbon footprint and saving operating costs.
