
Data center cooling

The global transition to liquid cooling and the demand for high-performing data centers that do not compromise on energy efficiency call for smart, forward-thinking solutions. SWEP offers compact heat transfer solutions for low-PUE, high-density data centers.

Optimized data center cooling solutions

Efficient data center cooling and energy recovery solutions are vital in the fast-paced world of digital storage. SWEP brazed plate heat exchangers excel at providing highly effective yet compact solutions that do not compromise on space. We offer optimized data center cooling in a small package.

Top tips for data center energy managers

Sustainable energy
Use renewable sources, typically wind, hydropower and solar.

Liquid cooling
Invest in the most efficient system possible. Air-cooled systems will be phased out in the short to medium term.

Efficient chiller system with modern low GWP refrigerants
SWEP has a wide range of compact, efficient brazed plate heat exchangers that can serve as a chiller condenser, evaporator or economizer. Lower pressure drop in the system allows a smaller pump and reduces energy consumption. Compact brazed plate heat exchangers mean a lower carbon footprint and a low refrigerant charge, and natural and low-GWP refrigerants reduce ozone depletion and the greenhouse effect.

Free cooling
Take advantage of ‘free cooling’, which lowers the temperature in a data center by using naturally cool water instead of mechanical refrigeration. With brazed plate heat exchangers from SWEP you get a tight temperature approach and can exploit free cooling even at small temperature differences, and therefore for a longer period of the year. Large capacities are often needed; with SWEP's modular system, several brazed plate heat exchangers can be mounted together to provide the required capacity while also adding redundancy.
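
To illustrate why a tight temperature approach extends the free-cooling season, here is a minimal Python sketch; the function name, the 2 K and 5 K approach values, and the temperatures are all invented for illustration:

    # Hypothetical free-cooling feasibility check (illustrative values only).
    # A heat exchanger with approach A can deliver supply water at
    # (source temperature + A); free cooling works whenever that is at or
    # below the required coolant supply temperature.
    def free_cooling_possible(source_temp_c, required_supply_c, approach_k):
        return source_temp_c + approach_k <= required_supply_c

    # With a tight 2 K approach, an 18 degC source can still serve a 20 degC loop;
    # a looser 5 K approach would already need mechanical cooling.
    print(free_cooling_possible(18.0, 20.0, approach_k=2.0))  # True
    print(free_cooling_possible(18.0, 20.0, approach_k=5.0))  # False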

Excess heat
Data center excess heat recovered from cooling via brazed plate heat exchangers can be supplied directly to a district energy network, where available, or to a nearby office building. The income from this can offset your data center's energy costs.

White space and the machine room
Optimize your white space by going for a compact design with efficient cooling. With brazed plate heat exchangers, the Coolant Distribution Unit (CDU) can be designed smaller and can even fit an in-rack, chassis-level CDU. By saving space, you save money. Chassis-level cooling and SWEP's brazed plate technology are well placed to manage increasing heat loads, providing ultra-compact, high-efficiency cooling.

FAQ data center cooling

We have collected the most common questions and answers about data center cooling.

A traditional data center cooling approach
A traditional data center cooling approach relies on Computer Room Air Conditioners (CRAC) to keep the room and its IT racks cool. Similarly, Computer Room Air Handlers (CRAH) centralize cooling water production for multiple units and/or rooms. The cooling water may be supplied by an adiabatic cooling tower, a dry cooler (which counts as free cooling), or a dedicated chiller when the climate is too warm.

Various improvements have been developed
Because air is a poor heat carrier, various improvements have been developed to increase cooling efficiency. Raised floors, hot-aisle and/or cold-aisle containment, and in-row up to in-rack cooling have consistently decreased the losses.

Water usage has been growing year after year
While CRAH units and cooling towers have become legacy technology, water usage has grown year after year and has become a challenge. Water is sprayed into the air to dissipate heat more effectively than a dry cooler can. With growing water scarcity, Water Usage Effectiveness (WUE) is now an important factor for the data center industry.

Liquid cooling
Liquid cooling is the most recent and advanced improvement. It includes hybrid systems with an integral coil or Rear Door Heat Exchanger (RDHX) and Direct-to-Chip (DTC) cooling, while immersion systems offer the best possible Power Usage Effectiveness (PUE) together with the highest energy density and unequaled WUE.

Depends on the type of data center
The cost of data center cooling depends on the type of data center, the Tier level, the location, design choices including cooling technology, and more. Total Cost of Ownership (TCO) and Return on Investment (ROI) are probably a better approach to getting a full view of cost.

TCO comprises three critical components:

  1. CAPEX (Capital Expenditure): the initial investment, which takes Tier level, expected lifetime and design choices into consideration – the cost to build.
  2. OPEX (Operational Expenditure): the operating and maintenance costs, shaped by location and design choices including PUE and cooling technology.
  3. Energy costs: with water scarcity and climate warming increasing and fossil energy stocks decreasing, increased attention should be given to Leadership in Energy and Environmental Design (LEED) certification.

These considerations lead to a more holistic view and better evaluation of ROI and strategic choices.
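
As a purely illustrative sketch of this holistic view (all figures and the helper function below are invented, not SWEP data), lifetime TCO can be framed as CAPEX plus the accumulated OPEX and energy costs:

    # Hypothetical TCO framing (all numbers invented for illustration).
    def total_cost_of_ownership(capex, annual_opex, annual_energy_cost, years):
        return capex + years * (annual_opex + annual_energy_cost)

    # A lower-PUE design may raise CAPEX slightly but cuts the energy bill
    # every year, improving the lifetime picture.
    print(total_cost_of_ownership(10e6, 0.8e6, 1.2e6, years=10))  # 30000000.0
    print(total_cost_of_ownership(11e6, 0.8e6, 0.9e6, years=10))  # 28000000.0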

Comparing the annual data center water consumption with the IT equipment energy consumption
Water Usage Effectiveness (WUE) is a simple rating in L/kWh, comparing the annual data center water consumption (in liters) with the IT equipment energy consumption (in kilowatt-hours).
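
In code form the rating is simply the ratio of the two annual totals; the example values below are invented:

    # WUE = annual water use [L] / annual IT energy [kWh] (illustrative values).
    def wue(annual_water_liters, annual_it_energy_kwh):
        return annual_water_liters / annual_it_energy_kwh

    # e.g. 200 million liters of water against 110 GWh of IT load:
    print(round(wue(200e6, 110e6), 2))  # 1.82 L/kWh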

Water usage includes cooling, regulating humidity and producing electricity onsite. The Uptime Institute claims that a medium-sized data center (15 MW) uses as much water as three average-sized hospitals or more than two 18-hole golf courses.

As demand for data centers grows and water scarcity becomes more and more common, WUE becomes crucial. As a result, data centers must rely on more sustainable cooling methods. Ramping up renewable energy (solar and wind) also allows data centers to indirectly curb their water consumption while lowering carbon emissions.

A metric for the energy-efficiency of data centers
Power Usage Effectiveness (PUE) is a metric for the energy efficiency of data centers: the ratio of the total energy entering the facility to the energy used by the computing equipment, with the difference going to cooling and other overhead that supports the equipment.

PUE is the inverse of Data Center Infrastructure Efficiency (DCIE). An ideal PUE is 1.0. Anything that is not a computing device in a data center (e.g. lighting, cooling, etc.) falls into the category of facility energy consumption. Traditional data centers score a PUE of around 1.7-1.8 or more, aisle containment lowers PUE down to about 1.2, and liquid cooling technologies allow values down to 1.05-1.1.
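
A minimal sketch of both metrics (the energy figures below are invented):

    # PUE = total facility energy / IT equipment energy; DCIE = 1 / PUE.
    # Illustrative numbers only.
    def pue(total_facility_kwh, it_equipment_kwh):
        return total_facility_kwh / it_equipment_kwh

    annual_it = 8_000_000       # kWh used by servers, storage, network
    annual_total = 9_200_000    # kWh entering the whole facility
    p = pue(annual_total, annual_it)
    print(round(p, 2))          # 1.15 -> in the range of a liquid-cooled site
    print(round(1 / p, 2))      # 0.87 -> the corresponding DCIE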

A system that enables smaller, more efficient and more precise liquid cooling
A Coolant Distribution Unit (CDU) is a system that enables smaller, more efficient and more precise liquid cooling in a data center, often integrating facility water. The CDU circulates the coolant in a closed loop on the secondary side (the cooling application) and uses facility water on the primary side (heat rejection).

A CDU has a pump, a reservoir, a power supply, a control board and a brazed plate heat exchanger as its key components. Filters, flow meters, pressure transducers and other devices are also used to manage the operation of the CDU optimally.

In-Rack CDUs are designed to integrate into a server chassis and distribute coolant to a series of servers or heat sources. An In-Rack CDU offers up to 60-80 kW of cooling capacity. These units can feature a redundant pump design, dynamic condensation-free control, automatic coolant replenishment, a bypass loop for stand-by operation, and automatic leak detection.

Freestanding In-Row CDUs are larger and designed to manage high heat loads across a series of server chassis in a data center. These full liquid cooling systems distribute coolant in and out of server chassis and can integrate into existing facility cooling systems or be designed to be fully self-contained. In-Row CDU capacity typically ranges around 300 kW, with models up to 700 kW.
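
To give a feel for CDU sizing, the sketch below estimates the coolant mass flow needed for a given heat load from Q = m_dot x cp x dT; the water-like specific heat and the 10 K temperature rise are assumptions for illustration:

    # Required coolant mass flow for a heat load Q: m_dot = Q / (cp * dT).
    # Water-like coolant assumed (cp ~ 4.18 kJ/(kg*K)); numbers are illustrative.
    def coolant_flow_kg_per_s(load_kw, delta_t_k, cp_kj_per_kg_k=4.18):
        return load_kw / (cp_kj_per_kg_k * delta_t_k)

    # An 80 kW in-rack CDU running a 10 K temperature rise:
    print(round(coolant_flow_kg_per_s(80, 10), 2))  # ~1.91 kg/s (~115 L/min)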

Direct-to-Chip (DTC) cooling uses cold plates in contact with hot components and removes heat by running a cooling fluid through the cold plates. The cooling fluid can be a refrigerant (direct expansion DX or 2-phase systems) or chilled water (single phase), fed directly or via a CDU. In practice, liquid-cooled systems often have one or more loops for each server; a GPU (Graphics Processing Unit) server can have five loops, so a CDU is needed for the rack. DTC extends cooling to the CPU (Central Processing Unit), GPU, RAM (Random Access Memory) and NIC (Network Interface Card) for high-frequency trading, hyperscale computing, rendering and gaming, supercomputing, telecommunications, etc.

Involve submerging the hardware
Immersion systems involve submerging the hardware itself into a bath of non-conductive, non-flammable liquid. Both the fluid and the hardware are contained within a leak-proof case. The dielectric fluid absorbs heat far more efficiently than air and is circulated to a brazed plate heat exchanger, where the heat is transferred to the chilled facility water.

In a 2-phase system, the dielectric liquid evaporates and is re-condensed into the liquid phase at the top of the casing. Heat is captured by the fluid's evaporation and dissipated in the condenser to the chilled facility water. Because latent heat (phase change) is far greater than sensible heat (temperature change), data center density can reach unequaled levels. Temperature stability is also excellent, since the phase change occurs at constant temperature. Finally, peak loads are shaved by the thermal mass of the dielectric fluid volume.
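
A back-of-the-envelope comparison shows why latent heat dominates; the fluid properties below are generic assumptions, not data for any specific dielectric fluid:

    # Latent vs sensible heat per kg of a generic dielectric fluid
    # (illustrative property values, not a specific product).
    h_fg_kj_per_kg = 90.0    # assumed latent heat of vaporization
    cp_kj_per_kg_k = 1.1     # assumed liquid specific heat
    delta_t_k = 5.0          # a realistic single-phase temperature rise

    latent = h_fg_kj_per_kg                  # kJ absorbed by boiling 1 kg
    sensible = cp_kj_per_kg_k * delta_t_k    # kJ absorbed by a 5 K rise
    print(round(latent / sensible, 1))       # ~16x more heat per kg via boiling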

An alternative system circulates the dielectric fluid inside the racks, where the IT equipment is enclosed in leakproof casings. Usually single phase, the dielectric fluid actively absorbs heat and is then cooled again in the CDU. As such, immersion cooling is the best data center cooling method, enabling future applications such as High Performance Computing (HPC), machine learning and Artificial Intelligence (AI), cryptocurrency mining, big data analytics, the Internet of Things (IoT) with 5G, cloud computing deployment, etc.

Copper-free brazed plate heat exchangers are not a must
In immersion cooling there is already a significant quantity of copper in direct contact with the dielectric coolant, which is therefore necessarily non-corrosive to copper. Hence, copper-free brazed plate heat exchangers are not a must. Printed circuit boards (PCB) are used in nearly all electronic products to connect electronic components to one another in a controlled manner. A PCB takes the form of a laminated sandwich of conductive and insulating layers: each conductive layer carries an artwork pattern of traces, planes and other features (similar to wires on a flat surface) etched from one or more sheets of copper laminated onto and/or between sheets of a non-conductive substrate.

All-SS or copper-free brazed plate heat exchangers should be considered
In Direct-to-Chip (DTC) cooling, there is no direct contact between the electronics and the cooling fluid. It is crucial that the fluid is non-conductive in order to avoid disturbing the operation of the electronics, so deionized water may be used. At high purity and low electrical conductivity (typically < 10 µS/cm), pure water becomes corrosive to copper.

When the data center uses evaporative or adiabatic cooling towers to reject heat, water is sprayed into the cooling air for better efficiency, resulting in a lower temperature than with a dry cooler. Unfortunately, as the water evaporates, the salt concentration increases until the water becomes fouling and corrosive. Water treatment then becomes necessary, including make-up water for compensation, but the associated operational costs rise. To limit this extra cost, systems might be operated close to the minimum water quality, which can result in copper-corrosive water. Under these conditions, all-stainless-steel (All-SS) or copper-free brazed plate heat exchangers should be considered, assessed case by case.
