Keeping Cool is Mission Critical

DQI Bureau

Most businesses, and even the average person, rely on the functioning and availability of data centers. Almost every aspect of typical daily activity, from energy, lighting and telecommunications to transport, healthcare and security systems, is controlled by data centers.

Today's data centers are evolving quickly to keep pace with growing demand, and server density is rising as a result. Companies increasingly rely on information technology, web-based applications, communication services and outsourced business processes.

Rising energy consumption, and the need to limit its economic and environmental impact, make it important for data centers to implement strategies that reduce energy use without hampering their mission-critical functions. An interruption in these data centers could have significant consequences.

DATA CENTERS

Data centers are either standalone buildings or integrated information technology (IT) rooms within commercial buildings. The servers they house represent the computer hardware used for data processing, computing, and networking. Data centers need a stable and significant power supply to run and require specialized cooling in order to handle the heat loads associated with computational processes. Hot spots and inadequate cooling can cause costly disruptions. Therefore, power and cooling are the two key aspects of a data center infrastructure.

Compared to other commercial facilities, data centers put increased demands on thermal management, airflow, cooling, and humidity control. Reliable, efficient, and capable heating, ventilation and air conditioning (HVAC) systems are critical for them to achieve optimal performance.

That cooling is mission critical is especially clear when looking at three key concerns for data center facilities:

  • Uptime: Cooling systems account for about 30% of a data center's downtime
  • Efficiency: Cooling in a conventional data center accounts for 35% to as much as 50% of energy consumption
  • Sustainability: Cooling represents 33% of the global data center carbon emissions footprint

To meet the surging demand for web-based business applications, cloud computing, and internet and data storage services, server density in existing and newly constructed data centers is increasing. With power densities in server rooms so high, meeting cooling capacity requirements and managing temperature are vital tasks for the data center cooling infrastructure.


According to the analyst firm Gartner, the intense power required to run and cool data centers currently accounts for almost a quarter of the global carbon dioxide emissions attributed to the information and communications technology sector. Meeting the increasing heat loads requires specialized, scalable and reliable temperature management strategies.

DATA CENTER REQUIREMENTS

Every data center is unique and operates to meet the specific demands of the businesses and customers it supports. Therefore, the supporting infrastructure of the data center also has to be designed and built with those unique operational needs in mind.

Unlike in commercial office buildings, hot spots and inadequate cooling in data centers can lead to server failures, and these are costly disruptions for businesses, owners and facility operators. The reliability of cooling systems in a data center environment is therefore of paramount importance.

Ensuring reliability begins with the quality of the cooling system components and the design of the overall HVAC system, and continues through to establishing and maintaining operational efficiency. Three necessary components of a data center cooling infrastructure are chillers, computer room air conditioning (CRAC) units and humidifiers. The airtightness of the data center facility determines the heat load and the need for humidification.

With today's technological advancements, server technologies are evolving rapidly. Server lifetime today is between three and five years. However, the rest of the facility's non-IT infrastructure has a lifetime estimated to be at least 15 to 20 years.

The indoor air temperature that allows servers to operate properly is typically maintained at 23°C, which requires a chilled water temperature of 12°C. Tests conducted by the American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) have demonstrated server resilience at ambient temperatures of up to 30°C.

Considering that new servers can operate at a 26°C ambient temperature, the chilled water temperature can be set to 15°C. Although old servers are replaced by newer, often more efficient ones during facility upgrades, most data centers do not change their cooling settings to match the new server infrastructure. By raising the air temperature from 23°C to 26°C, and the chilled water temperature by the corresponding 3°C, chiller energy consumption can be lowered by about 5% without affecting the capacity of the CRAC unit.
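The arithmetic behind that saving can be sketched in a few lines of Python. The ratio used below (roughly 1.7% saving per degree Celsius of chilled water setpoint increase) is simply derived from the 5%-for-3°C example above; actual savings depend on the specific chiller, climate and load profile, so treat this as an illustration only.

    # Back-of-the-envelope estimate of chiller energy savings from raising the
    # chilled water setpoint. The ratio below (about 1.7% per degree C) is
    # derived from the 5%-for-3°C example above and is illustrative only.

    SAVINGS_PER_DEG_C = 0.05 / 3.0

    def estimated_chiller_savings(old_chw_temp_c, new_chw_temp_c):
        """Return the estimated fractional energy saving for a setpoint change."""
        delta = new_chw_temp_c - old_chw_temp_c
        return max(0.0, delta * SAVINGS_PER_DEG_C)

    # Raising chilled water from 12°C to 15°C, matching the 23°C -> 26°C air change:
    print(f"Estimated chiller saving: {estimated_chiller_savings(12.0, 15.0):.1%}")  # ~5.0%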


ENSURING FACILITY AVAILABILITY

When choosing cooling equipment such as chillers and CRAC units for data centers, it is important to consider the quality and reliability of the systems. How stringently is the equipment design performance tested for extreme ambient temperature conditions? How much time does it take to restart the system after a power interruption? These are just a few key considerations to make when choosing systems for a data center cooling infrastructure.

Careful design of the overall non-IT infrastructure is the next step to ensuring reliability. Factoring redundancy within the system is one of the highest priorities for data center designers.

The Uptime Institute, founded in 1993, developed and introduced the well-defined Tier Classification system: I, II, III and IV, with Tier IV representing the highest level of projected availability. Today, its Tier Certification system is globally recognized and adopted as an industry standard to assess and benchmark the reliability of power and cooling system infrastructure.

Tier I and Tier II levels use a single distribution path for power and cooling. Tier III has multiple, independent distribution paths, of which only one is required to provide power and cooling at any time; this means that planned maintenance can be done without disruption. Tier IV data centers have multiple, independent, physically isolated systems that provide redundant capacity components, and multiple active distribution paths serve the data center simultaneously. This avoids disruption during maintenance and provides backup against unplanned downtime. The piping has two active paths to avoid overheating, and both Tier III and Tier IV facilities provide dual power for the IT and cooling systems.
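For readers who prefer to see it laid out, the minimal Python sketch below summarizes the Tier characteristics as described above. The field names are illustrative and are not part of the Uptime Institute's formal definitions.

    # Summary of the Tier characteristics described above, as a simple lookup
    # table. Field names are illustrative, not Uptime Institute terminology.

    TIER_CHARACTERISTICS = {
        "I":   {"distribution_paths": "single",               "concurrently_maintainable": False, "dual_power": False},
        "II":  {"distribution_paths": "single",               "concurrently_maintainable": False, "dual_power": False},
        "III": {"distribution_paths": "multiple, one active", "concurrently_maintainable": True,  "dual_power": True},
        "IV":  {"distribution_paths": "multiple, all active", "concurrently_maintainable": True,  "dual_power": True},
    }

    for tier, traits in TIER_CHARACTERISTICS.items():
        print(f"Tier {tier}: {traits}")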

ENERGY EFFICIENCY STRATEGIES

Energy consumption and server availability are the top concerns for data center operators. According to recent research by Gartner, servers consume only about 15% of the direct energy in data centers, while cooling accounts for an average of 40 to 45% of total operating costs for many data centers.

Choosing the most energy-efficient equipment is the first step towards controlling lifecycle cost. In most cases, data centers have to operate day and night, 365 days a year. Using free cooling when the outside temperature allows, and thereby limiting the use of mechanical cooling, is a smart energy-efficiency strategy to pursue.


Infrastructure Improvements at Intesa Sanpaolo Data Center in Parma, Italy, Increase Operational and Energy Efficiency

Intesa Sanpaolo, one of the top banks in the Euro Zone, upgraded the infrastructure at its data center in Parma, Italy to increase energy and operational efficiency. Recent energy-saving improvements provide the reliable, efficient and capable cooling critical to the data center environment. The upgrades also meet sustainability standards and are part of Intesa Sanpaolo's commitment to environmental stewardship and its journey towards a high-performance building.

Trane experts started with a two-month study of the data center buildings, monitoring temperatures and chiller load. The audit revealed that cooling system operation was not optimized for efficiency, leaving the chillers to operate at 50% load. Based on these results, bank leaders selected energy conservation measures that met their needs while improving energy efficiency and sustainability.

Upgrades included replacing two low-efficiency 400 kilowatt (kW) air-cooled chillers with one high-efficiency 1,000 kW water-cooled system. A Tracer Summit chiller plant management system was installed to control 7,500 kW of cooling production and distribution. The control system reduced the number of chillers required to operate the chiller plant efficiently.

Intesa Sanpaolo leaders also introduced an ongoing maintenance contract in addition to predictive and preventative services provided by an offering called Care. The implemented maintenance and service programs help ensure that the energy efficient and environmentally responsible systems keep running at optimum performance.

The improvements, completed in 2011, are expected to have a three-year payback period, and the upgrades to the facility's chilled water plant are expected to increase energy efficiency by 16 percent.


Mechanical cooling can be minimized through a system design that maximizes the free cooling capacity. This is best achieved through an HVAC system design that incorporates dry coolers as part of the chiller plant system.

Free cooling is effective when the outside temperature is lower than the return temperature, which is often the case because data centers operate around the clock, all year round. Depending on the geographic location of the data center, a high-efficiency free cooling system can deliver an annual energy efficiency improvement of 50 to 60%.

Data centers can also use a combination of a chiller and an air handling unit as an airside system design strategy. This type of system, also known as an economizer, can minimize mechanical cooling. A specialized control minimizes the use of fresh air when the outside temperature is high and brings in adequate amounts of outside air during low-temperature seasons. Such a system can help achieve a cooling efficiency improvement of 60 to 70%.
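A simplified sketch of this kind of economizer decision, written in Python, is shown below. The 2°C approach value, the return temperature and the mode names are illustrative assumptions rather than vendor settings.

    # Simplified economizer logic: use outside air (free cooling) when ambient
    # is comfortably below the return temperature, blend outside air with
    # mechanical cooling in between, and rely on the chiller when it is warm.
    # Thresholds and mode names are illustrative assumptions only.

    def select_cooling_mode(outside_temp_c, return_temp_c, approach_c=2.0):
        if outside_temp_c <= return_temp_c - approach_c:
            return "free cooling"          # outside air alone can absorb the load
        if outside_temp_c < return_temp_c:
            return "partial free cooling"  # pre-cool with outside air, trim with the chiller
        return "mechanical cooling"        # ambient too warm; the chiller carries the load

    for ambient in (8.0, 17.0, 30.0):
        print(f"{ambient}°C outside -> {select_cooling_mode(ambient, return_temp_c=18.0)}")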

Because dust particles in incoming outside air could cause major issues for data centers, a plate heat exchanger or heat wheel can be placed between the return air and the outside air. This keeps dust particles out while still recovering energy, depending on the ambient temperature.

ACHIEVING OPERATIONAL EFFICIENCY

Designing the cooling system with advanced control technologies is another important step in driving the energy and operational efficiency of the infrastructure right from the start. Chiller plant controls can sequence and schedule chiller plant operation, and they also allow facility managers to proactively adapt the chilled water supply to the facility's cooling demand during high and low business cycles.
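As an illustration of what such sequencing can mean in practice, the short Python sketch below stages on only as many equally sized chillers as the current demand requires, rather than running the whole plant at partial load. The plant size and demand figures are hypothetical.

    # Illustrative chiller sequencing: stage on only as many equally sized
    # chillers as the current cooling demand requires. Capacities and demand
    # figures are hypothetical.

    import math

    def chillers_required(cooling_demand_kw, chiller_capacity_kw):
        if cooling_demand_kw <= 0:
            return 0
        return math.ceil(cooling_demand_kw / chiller_capacity_kw)

    # A hypothetical plant of 1,000 kW chillers facing different demand levels:
    for demand in (800, 1900, 3500):
        print(f"{demand} kW demand -> {chillers_required(demand, 1000)} chiller(s)")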

Adopting a building management system (BMS) is the natural next step for managing multiple facilities and building infrastructure; it helps establish operational benchmarks and ensures repeatability, flexibility and easier management. Data centers use metrics such as power usage effectiveness (PUE) and HVAC system effectiveness to measure the energy efficiency of the infrastructure. A BMS allows facility owners and operators to measure and manage their facility against established industry performance metrics.
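PUE itself is a simple ratio: total facility energy divided by the energy delivered to the IT equipment, with 1.0 representing the ideal case in which every kilowatt-hour reaches the servers. The sample figures in the Python sketch below are hypothetical.

    # Power usage effectiveness (PUE) = total facility energy / IT equipment
    # energy. The figures below are hypothetical.

    def pue(total_facility_kwh, it_equipment_kwh):
        if it_equipment_kwh <= 0:
            raise ValueError("IT equipment energy must be positive")
        return total_facility_kwh / it_equipment_kwh

    print(f"PUE = {pue(total_facility_kwh=1800.0, it_equipment_kwh=1000.0):.2f}")  # 1.80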

Software applications programmed to interface with hardware such as chilled water systems are complex, even when pre-engineered. It is therefore advisable to use a single supplier with a thorough design and working understanding of the system components, the chiller plant controls and the BMS. Single-source responsibility for critical systems can bring peace of mind to facility managers and owners.

While efficient design approaches and reliable cooling systems will keep servers up and running, effective system lifecycle management strategies are essential to minimize unplanned downtime, to continuously improve performance, to achieve optimal energy efficiency and to uncover potential cost savings.

A study conducted by the Ponemon Institute reveals that cooling systems account for around 30% of data center downtime (heat-related failures).

Proactive maintenance, combining a service plan, site monitoring and predictive maintenance, can reduce this high failure rate during the operational cycle. Service plans for the HVAC or cooling system infrastructure provide for regular maintenance and systematic ways to extend the equipment lifecycle. Site monitoring by technical experts tracks equipment status, manages critical alarms and flags faults in operation. Predictive maintenance incorporates alarm diagnosis, event mitigation and corrective actions to eliminate downtime.