Fans | Air handling units | Fire safety | Air distribution | Air conditioning | Heating products

Data Center Cooling Solutions




Systemair
Your reliable partner in data centre cooling

© Systemair 2014. Systemair reserves the right to make technical changes. For updated documentation, please refer to www.systemair.com.




Content

Systemair across the world  4
Product range  6
General advice  8
Planning tools  9
Introduction  10
The evolution of data centres in recent years  12
Certification and accreditation bodies in data centre design  14
Factors that influence energy efficiency in a data centre  16
Efficiency Indicators  16
Environmental Parameters  18
Temperature differential between cold and hot aisle  19
Density of Electrical Power and Distribution  21
Redundancy (TIER and RATING Levels)  22
Geographical Location  24
Appropriate Maintenance  28
Monitoring and Control  29
Most common data center design and cooling systems  30
IT equipment room  30
UPS and Battery Room  40
Control Room  42
Electric switches and transformer room  43
Diesel Power Generator area  44
Chilled Water generation area  45
Accessories  46
References – Green IT  48

Over the years the product portfolio has grown considerably and today comprises a wide range of energy efficient fans, air handling units, air distribution products, chillers, air curtains and heating products. Our business idea is, with simplicity and reliability as core values, to develop, manufacture and market ventilation products of high quality.


With this business idea as a base and our customers in focus, we aim to be perceived as a company to trust, with a focus on delivery reliability, availability and quality. Our focus is to develop innovative and energy efficient products that are easy to select, install and maintain. With over 4,000 employees in 45 countries, we are always close to our customers.




Systemair across the world

Skinnskatteberg, Sweden
The Group headquarters, distribution centre and largest production site. Production of compact air handling units and a wide range of fans and accessories. Production of air curtains and fan heaters for Frico, a company within the Systemair Group.

Windischbuch, Germany
Production of an extensive range of axial and roof fans, plus tunnel and garage ventilation.

Ukmergé, Lithuania
Production of residential units and large air handling units.

Langenfeld, Germany
Production of air curtains.

Maribor, Slovenia
Production of high-temperature fans for smoke extract ventilation.

Hässleholm, Sweden
Production of heating products for air handling units, mobile and fixed fan heaters, plus dehumidifiers.

Mühlheim an der Ruhr, Germany
Production of air handling units for swimming pool halls and comfort ventilation with extra high efficiency.

Aarhus, Denmark
Production of large air handling units ("central units").

Tillières, France
Production of air conditioning products.

Bratislava, Slovakia
Production of air distribution products and fire dampers.




Quality: Systemair is certified in accordance with ISO 9001, ISO 14001, ATEX and the European fire safety standard EN 12101-3. Our research and development laboratories are among the most modern in Europe; measurements are made in accordance with international standards such as AMCA and ISO.


Save energy, lower running costs! Our label "Green Ventilation" identifies products with a high energy saving potential. All products labelled with "Green Ventilation" combine energy economy with energy efficiency.


Hyderabad, India
Production of air distribution products.

Milan, Italy
Production of a wide range of liquid- and air-cooled chillers and heat pumps for comfort cooling.

Tillsonburg, Canada
Production of air handling units for classroom ventilation in the North American market.

Greater Noida, New Delhi, India
Production of duct, axial and box fans, air handling units and air distribution products.

Kuala Lumpur, Malaysia
Production of duct and axial fans.

Lenexa, USA
Production of duct, axial and roof fans, chiefly for the North American market. Distribution centre for the USA market.

Eidsvoll, Norway
Production of air handling units.

Madrid, Spain
Production of large air handling units and box fans for markets in southern Europe, the Middle East and North Africa.

Bouctouche, Canada
Production of air handling units for residential use in North America, plus dehumidifiers.

Istanbul, Turkey
Production of a wide range of air handling units and fan coils.

Waalwijk, The Netherlands
Production of air handling units.




Product range

Systemair has an extensive range of ventilation products, the majority of which are fans and air handling units. Other products include a wide range of air terminal devices for various applications. These products are installed in a variety of locations, including homes, offices, healthcare premises, shops, industrial buildings, tunnels, parking garages, training facilities and sports centres. The most common use is comfort ventilation, but safety ventilation in its various forms is also an important market; smoke gas ventilation and tunnel ventilation are two examples.

Fans

Systemair is one of the world's largest suppliers of fans for use in various types of property. Our range includes everything from duct fans with a round connection – the company's original product – to rectangular duct fans, roof fans, axial fans, explosion-proof fans and smoke gas fans. These fans can be supplied in sizes suitable for everything from ducts with a diameter of just 100 mm to large road-tunnel fans. All our fans have been developed to comply with stringent requirements and are characterised by user-friendliness, a high level of quality and a long service life.

Circular duct fans: Duct fans with a circular connection.

Rectangular duct fans: Duct fans with a rectangular connection.

Axial fans: Axial fans for duct connection or wall mounting.

Roof fans: Roof fans with a circular or square connection.

Air handling units

Systemair produces a wide range of air handling units.

Horizontal units: A broad range of horizontal air handling units with or without heat recovery. Usable everywhere from smaller premises to schools, stores and larger offices. Air flow: 20-1500 l/s.

Vertical units: A broad range of vertical air handling units with or without heat recovery. Usable everywhere from smaller premises to schools, stores and larger offices. Air flow: 20-1500 l/s.




Fire safety ventilation

Systemair produces fans, dampers and control equipment for protection against smoke and fire, certified for use during normal operation and in the event of a fire. The axial fans are certified for installation inside or outside fire risk areas.

Smoke gas fans: High-capacity fans for evacuation of smoke gases.

Fire dampers: Dampers that reduce the spread of smoke and fire.

Chillers & Heat pumps

Our wide range of chillers and heat pumps covers a huge variety of applications. Our production is equipped with high-tech machinery and has one of the most modern research centres in Europe.

Air cooled and water cooled chillers: Scroll compressors, with or without heat recovery.

Water terminals and close control: Fan coils, cassettes and chilled beams.

Air terminal devices

Systemair's range also includes a wide selection of air terminal devices for all possible environments and positions. Development and manufacture take place at a modern factory in Slovakia.

Supply, extract & transfer air terminal devices: Optimum air distribution for rooms.

Nozzle air devices: For mounting in ceilings or walls.

Supply & extract air ventilators: For mounting in ceilings and walls.

Duct products: Dampers, plenum boxes and duct accessories.




General advice

A good indoor climate is vital
It goes without saying that everyone prefers fresh air. We are also aware that we must be frugal with the resources we take from Mother Earth. That is why there is sometimes a conflict between supplying ventilation systems with energy on the one hand and saving the earth's resources and protecting the environment on the other. Does it have to be this way? No. Today, there are energy-efficient solutions that create a good indoor climate. Systemair has products that have been specially adapted to protect the environment through well-thought-out material consumption and production methods. These products are also designed to be economical in terms of energy consumption. The best of Systemair's ventilation products are labelled "Green Ventilation".

Heat recovery
In areas with a relatively low average annual temperature, ventilation systems employ effective heat recovery that returns energy from extract air to the supply air. A good rotary heat exchanger can recover up to 90% of the energy present.

Night cooling
In warmer parts of the world, energy savings may be possible by drawing cool night-time supply air into premises, thus cooling the building structure.

Energy-efficient fans
Today, a new generation of fan motors contributes to a dramatic reduction in energy consumption, as much as 50% in some cases. The new EC motors are better suited to speed control, which is where considerable energy savings can be made. A further bonus is quieter operation.

Pressure
The design of the duct system and the unit has an impact on the required system pressure. There are often tens, sometimes hundreds, of pascals to be saved here.

Quality-certified products
How can you choose the right solution and product when there are so many alternatives? Nowadays, most major suppliers are ISO-certified and have CE-marked products, but is that enough? At Systemair we go one step further and work hard to ensure that our products maintain a high standard and are approved by various bodies. For units, this may mean Eurovent certification or other local certification for the country in question. Achieving this requires resources and expertise. Within the Group you will find, among other things, one of Europe's most modern development centres, which is AMCA-certified.

One of Europe's most modern development centres
A room so quiet that the only thing you'll hear is your heartbeat. The development centre in Skinnskatteberg, recently accredited by AMCA, represents an investment of EUR 700,000 and is fitted with measurement and testing equipment that makes it one of the most modern facilities of its kind in Europe. The quiet room, a "reverberation chamber", is one of the test stations and produces a background sound level of less than 10 dB(A). When measuring supply air terminals, a green laser is used to show how the air is expelled from wall-mounted or ceiling-mounted devices. There is also a climate chamber that cools the air to -20°C, which means we can use it all year round to develop our recovery units. As well as the test centre in Skinnskatteberg, there are also test facilities in Germany and Denmark.




Planning tools
We have developed this unit overview to make it easier for you to get an idea of which product best suits your specific needs. More detailed analysis or planning usually requires additional information, which is where the following tools come in.

Product catalogue and specification data
More detailed technical information, sufficient to carry out complete planning, is available in separate catalogues and specification data. These describe all incorporated functions, available accessories, and additional technical data.

Online catalogue and computer software
For those who prefer to work online, it is possible to select and dimension most products using Systemair's online catalogue. In addition to complete product information, there is also a selection function that suggests alternative products to suit actual needs. For certain products, such as Topvex and DV, there is computer software that you can download and install locally.

Personal support
Systemair aims to have local expertise close to the customer. We do our utmost to ensure that we have our own representatives in the markets where we operate. In some markets, contact is via distributors. You can find up-to-date information and contact details for each country on our website, www.systemair.com.


1.0 Introduction

Nowadays, 'Data Centre' is a familiar term for many people. However, it is important to have a clear understanding of the concept in order to appreciate the essential role that energy savings play in this type of facility.

A data centre is a specially conditioned space in which temperature and humidity are controlled and the electricity supply is stabilised and uninterrupted. It includes structured cabling, access control, security camera systems and fire detection and extinguishing systems, among other things, and it houses all of a company's IT systems and equipment.



The huge growth in the number of users connected to the internet and the enormous increase in data have led companies to invest in Data Centres that are increasingly large in size and have ever greater energy density per m². Nowadays, it is essential to have an energy efficient cooling system, as savings in energy costs contribute greatly to the profitability of the Data Centre.


This document outlines the important aspects to be considered when designing a Data Centre that achieves high levels of energy efficiency, is economically sustainable and has a low environmental impact by using solutions from the SYSTEMAIR group, the world leader in cooling and ventilation systems.


2.0 The evolution of data centres in recent years

Over the first decade of the 21st century, countless new technologies, unprecedented business demands and increased IT budgets created the modern world of the Data Centre. Now, in the second decade of this century, the Data Centre must balance efficiency and availability while technology demand and energy costs increase and IT budgets decrease. Over the next 10 years, the leading companies will be those capable of maintaining or improving availability while implementing technologies or services that cut costs by improving design, management and operational efficiency.

The Data Centre as we know it today started to take shape with the dot-com bubble and grew rapidly at the end of the 1990s. Growth stalled when the bubble burst, but the rate of change began to accelerate again in 2003. Sales of servers in the fourth quarter of 2003 were 25% higher than in the fourth quarter of 2002, and sales continued to grow at double-digit rates over the two years that followed, while IT organisations strove to cover an almost insatiable demand for equipment and expectations of 24x7 availability. In the absence of management tools for predicting future capacity, Data Centres were routinely built with capacities two or three times greater than the initially established requirements.

It was not just the number of servers that increased, but also their electricity consumption. Server density rose rapidly between 2000 and 2005, allowing greater computing power in smaller chassis. In 2004, the industry was anticipating power densities of up to 100 kW/m²; however, this heat load assumption proved greatly mistaken, as the following charts show (see diagrams "Foreseen evolution" and "Real power density"). The new generation of 1U servers (44.45 mm in height) meant that racks which until then could barely hold 10 servers could now house more than 40. The solutions for this new environment came through new-generation UPS, cooling systems designed for greater electrical power density, and infrastructure monitoring and management systems for resolving the reliability problems arising in these new Data Centres. However, the rate of change and the inability to foresee future demand would continue to be a challenge.

Foreseen evolution of power densities in the Data Center (2004).

This challenge was met thanks to new infrastructure solutions that adapted more efficiently to short- and long-term changes. At the same time, a new problem arose: energy consumption. According to a survey by Digital Realty Trust, energy use in Data Centres (average kW per rack) leaped up by 12% between 2007 and 2008. Looking further back, the Uptime Institute stated that energy use in a Data Centre had doubled between 2000 and 2006 and predicted it would double again by 2012. With this information on the table, the industry directed its efforts towards reducing the levels of energy consumed in Data Centres. See diagram "Energy consumption in Data Centres".


These efforts intensified in the second half of 2008, when the US went into a deep recession and companies were obliged to look for savings. IT organisations began to take energy efficiency seriously, both in terms of cost savings and environmental responsibility. Although energy savings were not among the top five concerns in 2005, they were by 2009, ranking as the second most important concern for the Data Center Users' Group (DCUG). As a result of a wave of cuts in electricity supply and the consequent increase in downtime, concern over availability went from fourth position to first in just six months. It remained one of the most important concerns between 2010 and 2011, while energy efficiency dropped to fourth position, again because of the economy: a significant interruption in a Data Centre is so costly that it can wipe out the savings achieved over years. Now, the challenge for administrators of Data Centres in which load densities, and therefore the energy required, are increasingly high is to maintain or improve availability and increase energy efficiency while reducing costs.

Real power density in the Data Center (2016)

Energy consumption in Data Centres

The cooling solutions from the SYSTEMAIR group are aimed squarely at meeting the current needs of Data Centres: cooling systems with better performance, greater reliability and improved energy efficiency.


3.0 Certification and accreditation bodies in data centre design

3.1 Standard Regulations for Data Centers

All the support infrastructure in a Data Centre is important. Interruptions to service owing to bad design, the use of inadequate components, badly performed installations, deficient administration or inadequate support put the Data Centre's operations at risk, and with them the continuity of the businesses linked to the data processed or stored at that centre. Various organisations produce standards for correctly designing a Data Centre. This glossary offers an overview of which organisation produces which standard and what each acronym means.

ASHRAE: The American Society of Heating, Refrigerating and Air-Conditioning Engineers produces standards and recommendations for heating, cooling and ventilation facilities in Data Centres. Its technical committee develops standards for the configuration, design, maintenance and energy efficiency of a Data Centre. Data Centre designers should consult all the technical documents of ASHRAE TC 9.9. http://www.ashrae.org

BICSI: The Building Industry Consulting Service International Inc. is a global association that covers cabling design and installation (ANSI/BICSI 002-2014). Its objective is the correct design and implementation of good practices in the electrical, mechanical and telecommunications structure. Its considerations range from fire protection to infrastructure management. http://www.bicsi.org

BREEAM: The BRE Environmental Assessment Method (BREEAM) is an environmental standard for buildings in the United Kingdom and countries under British influence. It rates the design, construction and operation of buildings. The standard is part of a framework for sustainable buildings that considers financial, social and environmental factors. http://www.breeam.org

GB 50174-2008 (China): The National Standard Code for Design of Electronic Information System Room in China. It includes three tiers, from the strictest to the least strict: A, B and C. This standard establishes a classification for the design and renovation of IT and communications equipment rooms.

The Green Grid Association: The Green Grid Association is known for its PUE ratio, a measure of energy efficiency in a Data Centre. PUE is the total amount of energy used in a Data Centre divided by the energy used by the IT equipment. The lower the ratio, the more efficient the energy consumption linked to the infrastructure supporting the Data Centre; the opposite is the case when the ratio is higher. This association also has ratios for water (WUE) and carbon (CUE), based on the same principle of energy efficiency. www.thegreengrid.org

IDCA: The International Data Center Authority is mainly known as a training institute, but it also publishes a system for classifying the overall design of Data Centres and how they operate. The classifications cover seven levels of the Data Centre, from the location and the facilities to the data infrastructure and applications. http://www.idc-a.org

IEEE: The Institute of Electrical and Electronics Engineers offers more than 1300 standards and projects across a range of technological fields. Data Centre designers and operators base their activities on the IEEE 802.3ba cabling standard, on the IEEE 802 standards for local area networks and on IEEE 802.11 for wireless LANs. http://www.ieee.org



ISO: The International Organization for Standardization offers a broad spectrum of standards relevant to Data Centres, many of which are in use in these facilities. The ISO 9001 standard measures a company's quality control capabilities. The ISO 27001 standard certifies best security practices in an operation, both physical and in terms of data, protecting the business and its continuity. Other ISO standards that data centre designers can call on include the environmental and energy standards ISO 14001 and ISO 50001. www.iso.org

JDCC: The Japan Data Center Council is a coalition of Japanese government, academic and industry entities that covers construction, security, electrical and cooling systems, communications equipment and maintenance in Data Centres. It also covers seismic safety regulations. http://www.jdcc.or.jp

LEED: Leadership in Energy and Environmental Design is an international certification for environmentally sensitive buildings, managed by the US Green Building Council. It has five rating systems (building design, operations, neighbourhood development and other areas), with silver, gold or platinum certification levels awarded on the basis of accumulated credits. The organisation offers a checklist for each project, and its rules include adjustments for the unique needs of Data Centres. www.usgbc.org

NFPA: The National Fire Protection Association publishes codes and standards intended to minimise and avoid fire damage. No matter the level of cloud virtualisation or integration in its IT infrastructure, a Data Centre's working loads are still governed by fire safety regulation. The NFPA 75 and 76 standards dictate how Data Centres contain cold and hot aisles with obstructions such as curtains or walls, and NFPA 70 requires an emergency power-off button so that the Data Centre can be protected in the event of an emergency. www.nfpa.org

NIST: The National Institute of Standards and Technology supervises measurement in the US. The institute's tasks include research into nanotechnology for electronics, construction integrity and other industries. NIST offers recommendations on access and authorisation for Data Centres. www.nist.gov

OCP: The Open Compute Project is known for its design ideas for servers and networks. The OCP initiative started with the internet giant Facebook and its desire to promote open-source hardware. In the Open Rack and optical interconnection projects, the OCP specifies racks with 21-inch equipment bays and intra-rack photonic connections. The OCP data centre design optimises thermal efficiency with 277 VAC power distribution and specially adapted electrical and cooling components. www.opencompute.org


The OIX Association: OIX focuses on overseeing internet exchange, connecting Data Centre and network operator performance with content creators, distribution networks and consumers. It publishes technical requirements for internet exchange points and the Data Centres that support them. The requirements cover the resilience and security of Data Centres as well as connectivity and congestion management. http://www.open-ix.org

Telcordia: Telcordia is part of Ericsson, a communications technology company. Its generic requirements for telecommunications equipment and spaces in the Data Centre (Telcordia GR-3160) refer particularly to telecommunications operators. However, its best practices for network reliability and organisational simplicity can benefit any Data Centre offering applications to users or hosting applications for telecommunications operators. The standard also covers environmental protection and classified environmental tests, from earthquakes to surges caused by lightning. http://www.ericsson.com

TIA: The Telecommunications Industry Association produces communication standards aimed at reliability and interoperability. The group's main standard, ANSI/TIA-942-A, covers network architecture and access security, facility design and location, backups and redundancy, energy administration and more. TIA certifies Data Centres at classification levels within TIA-942, based on redundancy in the cabling system. http://www.tiaonline.org

EUROPEAN COMMISSION (Institute for Energy – Renewable Energies Unit): Code of Conduct for Energy Efficiency in Data Centres. Created in response to the continuous increase in energy consumption in data processing centres and the need to reduce the related environmental and financial impact and to secure the supply of energy. The objective is to inform and encourage the operators and owners of Data Centres to reduce energy consumption cost-effectively, without affecting the mission-critical function of the Data Centres themselves. Compliance with the code is voluntary.




4.0 Factors that influence energy efficiency in a data centre

4.1 Efficiency Indicators in a Data Centre: PUE / DCiE / CUE / WUE

The Green Grid (TGG) is an international consortium of companies, government agencies and educational institutions dedicated to promoting energy efficiency in Data Centres and business IT ecosystems. TGG has developed a new metric to supplement the series of metrics introduced in recent years, which, among others, include:

PUE: Power Usage Effectiveness

DCiE: Data Centre Infrastructure Efficiency. DCiE = 1/PUE; the DCiE ratio is defined as the reciprocal of the PUE.

Calculating the PUE is relatively simple: it is the ratio of the total amount of energy used in the Data Centre (including the energy dedicated to the IT devices) to the energy used by the IT equipment alone. The measurement is carried out on an annual basis, because seasonal changes in climate affect energy consumption. For example, in the summer months, with a greater impact at some latitudes than at others, the cooling machines work less efficiently and require more energy to provide the same cooling capacity.

Example calculation of the PUE:

PUE = 2
Total energy of the Data Centre: 1,800,000 kWh
Infrastructure: 900,000 kWh + IT equipment: 900,000 kWh
Total IT equipment energy: 900,000 kWh

As can be seen in this PUE = 2 example, the energy used by the support infrastructure as a whole (including the cooling system), not counting the IT devices, is the same as that used by the IT equipment. See the tables below. The cooling systems represent the largest part of the energy consumption in the support infrastructure of a Data Centre, and this is where the best improvement opportunity exists.
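As a minimal illustration of how these indicators follow from two metered figures, the sketch below (Python, with function names chosen for illustration and not taken from The Green Grid's documents) reuses the PUE = 2 example above.

    def pue(total_facility_kwh, it_equipment_kwh):
        # Power Usage Effectiveness: total facility energy divided by IT energy.
        return total_facility_kwh / it_equipment_kwh

    def dcie(total_facility_kwh, it_equipment_kwh):
        # Data Centre infrastructure Efficiency: the reciprocal of the PUE.
        return it_equipment_kwh / total_facility_kwh

    # Figures from the example above: 1,800,000 kWh total, 900,000 kWh for IT.
    print(pue(1_800_000, 900_000))    # 2.0
    print(dcie(1_800_000, 900_000))   # 0.5, i.e. 50% operational efficiency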

CUE: Carbon Usage Effectiveness

The newest metric proposed by TGG, addressing water consumption in Data Centres, which is becoming an important aspect both for the design and for the location and operation of future Data Centres, is the WUE (Water Usage Effectiveness). The WUE, combined with the PUE and the CUE, allows Data Centre operators to quickly evaluate water usage effectiveness, energy use and environmental sustainability through carbon emissions. It also allows them to quickly assess the energy efficiency of their facilities, to compare the results with other Data Centres and to identify possible errors. The PUE determines the amount of electrical power dedicated to the IT systems relative to ancillary systems such as the cooling system, UPS losses or the monitoring system.

Years ago, the basic parameter, and almost the only one, in Data Centre design was reliability; energy efficiency was an entirely secondary aspect. In Data Centres of previous generations it was quite normal for the support infrastructure to need as much energy as the IT equipment housed in the Data Centre (this represents a PUE of 2.0, or 50% operational efficiency). It was even common to find centres with a PUE of 2.5 to 3.0, meaning that far more energy was being used to support the infrastructure (cooling, electrical distribution, monitoring, etc.) than by the IT equipment itself. Currently, new Data Centres are being designed and constructed with a substantial improvement in operational efficiency, with a PUE between 1.3 and 1.6. However, the new trend of using high performance, high efficiency air handling units (AHU) from SYSTEMAIR allows PUE values as low as 1.06 to be reached in given latitudes.




4.2 Power Consumption distribution in legacy Data Centers

Water Usage Effectiveness = WUE

WUE is used to evaluate the water usage effectiveness of the cooling equipment in relation to the energy consumed. It is defined as the annual water usage divided by the amount of energy used by the IT equipment; its units are litres per kWh of IT energy, calculated annually:

WUE = Annual Water Usage / IT Equipment Energy    (1)
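A short, hypothetical worked example of formula (1); the water and energy figures below are invented purely for illustration.

    # Hypothetical site: 1,500,000 litres of water per year used for
    # adiabatic/evaporative cooling, and 900,000 kWh of IT energy.
    annual_water_litres = 1_500_000
    it_energy_kwh = 900_000
    wue = annual_water_litres / it_energy_kwh   # litres per kWh of IT energy
    print(f"WUE = {wue:.2f} L/kWh")             # about 1.67 L/kWh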

The use of water in a Data Centre is a complex issue in many respects. With WUE, the amount of external water (source-based) needed to generate the power must be considered, as must the amount of water (site-based) used to cool the Data Centre. The main problem is that changes in the water usage strategy can, and usually do, affect other supplies in the Data Centre. Water usage in the Data Centre can be reduced in various ways. The most attractive is simply to use an optimum design, then to increase operational efficiency and adjust the existing systems; when commissioning a new facility, it is simpler to achieve these parameters. In the industry it is common to see CRAC units in the same room dehumidifying while, somewhere else in the same room, others are humidifying, with the resulting waste of energy and water. Additionally, many Data Centres have yet to take advantage of the extended environmental recommendation of ASHRAE 2008, in which the recommended minimum humidity level was reduced to a dew point of 5.5°C (42°F). Likewise, in Air Handling Units (AHU) or external condensing units incorporating an adiabatic system to cool the air being exchanged, it is common to find no water recovery system, with the resulting consumption of huge quantities of water. There are designs in the industry that are incredibly efficient in this respect.


4.3 Environmental Parameters (Temperature and Relative Humidity / Dew Point)

Humidity is the least visible threat to the equipment within a Data Centre. Even some IT Managers neglect to monitor it.

The temperature and humidity limits must be maintained in accordance with the energy consumption limits.

Environmental humidity is the amount of water vapour present in the air. It can be expressed in two ways: as absolute humidity or as relative humidity (also known by the acronym RH). Relative humidity is the percentage relationship between the amount of water vapour actually contained in the air and the amount it would need to contain to be saturated at the same temperature. For example, a relative humidity of 60% means that, of the total amount of water vapour (100%) that could be contained in the air at this temperature, the air actually contains only 60%.

There are two possible threats related to relative humidity within a Data Centre.

Another important term is the condensation point, or dew point: the temperature at which the water vapour in the air changes state from gas to liquid, that is, when RH = 100% and the air is considered saturated. As the air temperature increases, so does its capacity to retain water, which is another good reason to keep the temperature under control. This is a secondary effect of the IT equipment's consumption of cold air: as the cold air passes through the servers, it leaves at a higher temperature and with a greater capacity to retain water.
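For reference, the dew point can be estimated from dry bulb temperature and relative humidity with the Magnus approximation; the sketch below uses one common pair of constants, which is an assumption on our part and not taken from this document or from ASHRAE.

    import math

    # Magnus approximation constants (one common choice).
    A, B = 17.62, 243.12  # B in degrees Celsius

    def dew_point_c(temp_c, rh_percent):
        gamma = math.log(rh_percent / 100.0) + A * temp_c / (B + temp_c)
        return B * gamma / (A - gamma)

    # Example: cold-aisle air at 24 C and 50% relative humidity.
    print(round(dew_point_c(24.0, 50.0), 1))  # roughly 12.9 C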

ASHRAE 2011 Thermal Guidelines

1. Electrostatic discharges: electrostatic discharges, also known as ESD, occur when humidity is low, and the likelihood increases further if the temperature is also low. Electrostatic discharges can be barely noticeable to people and typically do not cause injuries; however, a discharge of 10 volts is capable of damaging equipment.

2. Corrosion: this occurs when a metallic element is exposed to water, whether because it gets wet or because small droplets form as a result of water condensing out of the air, for example in an environment with high humidity. The components within the servers can be damaged, with a resulting loss of data.

The key is to find the right balance so that humidity is kept in an optimum range where both condensation and electrostatic discharges are avoided. For this, the most suitable humidity range is between 40% and 55% (as also recommended by the TIA/EIA-942 standard).




4.4 Temperature differential between the cold aisle and the hot aisle – airflow

In the scope of cooling, one of the defining characteristics of Data Centres is that attention is focused on the temperature of the air entering the IT equipment. In this respect, closing off the aisles, and the resulting physical separation of hot and cold areas, has become the standard approach to increasing energy efficiency. The concept of separating the aisles follows an obvious logic: the most obvious advantage is that the possibility of hot and cold airflows mixing diminishes enormously. Some of the typical problems found in Data Centres linked to the positioning of the racks and aisles are as follows:

• Racks of different heights and formats.
• Open and unoccupied spaces between and within racks.
• Mixed areas of high and low density IT equipment.
• High differences in temperature between the lower and upper parts of the racks.
• Inadequate distribution of cooling vents (in the case of a false floor).
• Openings for cabling outlets inadequately sealed.
• Accumulations of cabling in the false floor.
• Quick growth and inadequate maintenance of the Data Centre.
• Increases in power and thermal load density: virtualisation/consolidation.
• Increases in energy costs due to an inefficient cooling system.

4.4.1 Historical development

Racks in the same direction: The aisle configuration in the oldest Data Centres consisted of a series of hot and cold areas facing the same direction, so that the hot air leaving the IT equipment in one row was aimed directly at the front of the equipment in the following row, as shown in fig. XX. The problem is that this distribution of racks causes the hot and cold airflows to become almost entirely mixed, which means that the set-point temperature of the cooling machines must be lowered enormously in order for this mixed airflow to enter the IT equipment within the recommended temperature range.

Opposing racks: More recently, the aisles came to be configured as a series of hot and cold areas, with the fronts of the racks facing each other in one row and the backs facing each other in the next. This layout has become the industry standard and provides a certain level of separation between the hot and cold airflows, thanks to the barrier formed by the racks filled with IT equipment. However, it is not enough to achieve high levels of efficiency. The problem lies in the fact that the airflows can still mix at the ends of the aisles or above the racks. If the volume of air supplied to the cold aisle is not enough, the IT equipment will draw air from the hot aisle, with the resulting bypass of air and possible "hot spots", as shown on the right-hand side of the drawing above.

The usual countermeasure consists of increasing the cold airflow, as shown on the left-hand side of fig. XX, by increasing the fan speed so as to move a greater volume of air. This increases energy consumption and reduces the efficiency of the cooling equipment, as the temperature differential across that equipment falls.
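As a rough illustration of the airflow penalty described above, the sketch below estimates the supply airflow needed to remove a given IT load at two temperature differentials; the air properties and the load figures are generic assumptions, not Systemair design data.

    AIR_DENSITY = 1.2   # kg/m^3, assumed
    AIR_CP = 1005.0     # J/(kg*K), assumed

    def required_airflow_m3h(it_load_kw, delta_t_k):
        # Supply airflow needed so the air leaves the racks delta_t_k warmer.
        mass_flow_kg_s = (it_load_kw * 1000.0) / (AIR_CP * delta_t_k)
        return mass_flow_kg_s / AIR_DENSITY * 3600.0

    # Example: 100 kW of IT load with poor vs. good aisle separation.
    for dt in (8.0, 12.0):
        print(f"dT = {dt} K -> {required_airflow_m3h(100.0, dt):,.0f} m3/h")
    # Roughly 37,300 m3/h at 8 K versus 24,900 m3/h at 12 K.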


Aisle sealing: As a result of the problems described, sealing of the hot and cold aisles has been introduced to eliminate the possibility of the hot and cold airflows mixing. This brings a series of advantages:

• Greater efficiency in the cooling machines (greater temperature differential) and, as a result, energy savings.
• Better predictability in the thermal behaviour of the Data Centre.
• Consistent airflow over the whole height of the rack and in all the rows.
• Minimal differences in temperature between the units at the lower part of the rack and the upper part.
• Elimination or minimisation of possible hot spots.
• Prolongation of the Data Centre's life: a greater power density can be supported with the same cooling infrastructure.

It is possible to seal either the hot aisle or the cold aisle, with certain advantages and disadvantages in each case.

Advantages of sealing the cold aisle:
• Smaller area to cool.
• Lower airflow requirements.
• A more direct airflow to the front part of the servers.
• Positive air pressure in the cold area, preventing hot air from mixing with cold air.
• Simpler to adapt to existing Data Centres, especially those with a technical floor or low ceilings. Ducts are not required for returning hot air.

Advantages of sealing the hot aisle:
• More comfortable temperatures in the Data Centre.
• A good solution for in-row cooling units.
• A little more time before the IT equipment stops functioning in the event of a fault in the cooling machines.
• More appropriate if work stations are located within the Data Centre: compliance with OSHA requirements for working environments.




4.5 Density of Electrical Power and Distribution of the Load in the Room

The power density gives an indication of the amount of IT equipment that can be housed in each rack. A Data Centre with a lower electrical power density needs more racks and, therefore, more m² of room space to house the same amount of equipment as a Data Centre with greater density.

A Stronger Focus on Energy Efficiency and Sustainability: An increase in the price of energy, stricter and stricter environmental regulations and growing concerns over pollution and global warming have made efficiency and sustainability among the most important priorities for any company. To help measure the efficiency of a Data Centre and to establish realistic objectives, the majority of Data Centre operators place their trust in the PUE value described above.

The power density is expressed in two ways: watts per m² or kilowatts (kW) per rack; at times, both are used. Many of the older Data Centres cannot efficiently cool more than 5 kW per rack (some even less), and the figure frequently falls below 3 kW per rack. Even today, a large number of Data Centres cannot house more than 5-10 kW per rack (average density), let alone 10 kW or more per rack (high density).

Growing Power Densities: Virtualisation, blade servers and cloud computing tend to pack a greater number of devices into smaller spaces with higher processor utilisation, thus generating a greater thermal load. As a result, these technologies radically increase the power densities in the racks. In fact, while a rack equipped with conventional servers may carry a load of 4 to 6 kW, a typical rack full of blade servers can reach up to 30 kW.
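A small illustration of how rack power density drives the number of racks (and hence floor area) needed for a given IT load; the figures are assumptions chosen for illustration only.

    import math

    it_load_kw = 600  # assumed total IT load
    for density_kw_per_rack in (4, 10, 30):
        racks = math.ceil(it_load_kw / density_kw_per_rack)
        print(f"{density_kw_per_rack} kW/rack -> {racks} racks")
    # 4 kW/rack -> 150 racks, 10 kW/rack -> 60 racks, 30 kW/rack -> 20 racks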


4.6 Redundancy (TIER and RATING Levels)

To guarantee reliability and safety in the rooms containing IT equipment, the systems need suitable redundancy to allow adequate maintenance and continuous, uninterrupted operation. The Uptime Institute has defined four levels of redundancy: Tier I, Tier II, Tier III and Tier IV.

IMPORTANT NOTE: A Tier 4 or Rating 4 Data Center without proper Operations (including Maintenance) performs as a Tier 1 or Rating 1 Data Center.

4.6.1 TIER Level

Tier 1: Basic Data Centre: 99.671% availability.
The service might be interrupted by planned or unplanned activities. There are no redundant components in the cooling or electrical distribution. There may or may not be raised floors, ancillary generators or UPS. Average implementation time: 3 months. The Data Centre's infrastructure will be out of service at least once a year for maintenance and repair.

Tier 2: Redundant Data Centre: 99.749% availability.
Less susceptible to interruptions from planned or unplanned activities. Redundant components (N+1). There are raised floors, ancillary generators or UPS, connected to a single cooling and electrical distribution line. From 3 to 6 months to implement. Maintenance of this distribution line or of other parts of the infrastructure requires an interruption of service.

Tier 3: Concurrently Maintainable Data Centre: 99.982% availability.
This allows maintenance activities to be planned without affecting the computing service, but unplanned events can still cause stoppages. Redundant components (N+1). Multiple cooling and electrical distribution lines, but only one is active. From 15 to 20 months to implement. The distribution and capacity are sufficient to carry out maintenance on one line while service is provided via another.

Tier 4: Fault tolerant Data Centre: 99.995% availability.
This allows maintenance activities to be planned without affecting critical computing services, and it is capable of supporting at least one unplanned "worst case scenario" event without critically impacting the load. Multiple cooling and electrical distribution lines with multiple redundant components (2(N+1) means 2 UPS, each with N+1 redundancy). From 15 to 20 months to implement.

4.6.2 RATING Level

ANSI/TIA-942, in the latest version of the standard from March 2014, "Telecommunications Infrastructure Standard for Data Centers", defines four ratings in Annex F: Rating 1, Rating 2, Rating 3 and Rating 4.

The Telecommunications Industry Association's TIA-942 Telecommunications Infrastructure Standard for Data Centers specifies the minimum requirements for the telecommunications infrastructure of data centers and computer rooms, including single-tenant enterprise data centers and multi-tenant Internet hosting data centers. The typology proposed in this document is intended to be applicable to data centers of any size.

Rating Chart summary:

Rating 1: Basic Data Center Availability
• No redundancy
• Single path for power, telecom and cooling

Rating 2: Data Center Redundant in Components
• Improved availability over Rating 1 due to the addition of redundant components (excluding Fire Suppression, Security, Monitoring, …)
• Single path for power, telecom and cooling
• Allows some level of maintenance

Rating 3: Data Center Concurrently Maintainable
• The Data Center allows scheduled maintenance without interruption of the critical load
• Supports 24x7 operations
• Multiple paths for power, telecom and cooling; one path is active and the second is passive
• Multiple feeds
• Compartmentalization is a must for any critical room (separated rooms for I.T., UPS, Batteries, SOC, NOC, Entrance Room, Holding Area, Staging Area, …)
• One utility allowed for Feed A and Feed B
• Cooling N+1 with dual feed, with an ATS from feed A and feed B
• Manual switch-over from the active to the passive path
• Redundant in paths and components

Rating 4: Data Center Fault Tolerant
• The Data Center allows concurrent maintainability and testing, and one (1) fault anywhere in the installation, without causing downtime and/or interruption of the critical load
• Supports 24x7 operations
• Multiple paths for power, telecom and cooling; all paths are active
• Multiple feeds
• Compartmentalization is a must for any critical room (separated rooms for I.T., UPS, Batteries, SOC, NOC, Entrance Room, Holding Area, Staging Area, MDA, …)
• Two utilities
• Cooling N+1 with dual feed, with an ATS from feed A and feed B
• Automatic switch-over
• Redundant in paths and components
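To make the availability percentages above more tangible, the sketch below converts them into approximate downtime per year (8,760 hours); this helper is purely illustrative and is not part of the Uptime Institute or TIA documents.

    HOURS_PER_YEAR = 8760
    tiers = {"Tier 1": 99.671, "Tier 2": 99.749, "Tier 3": 99.982, "Tier 4": 99.995}
    for name, availability in tiers.items():
        downtime_h = (1 - availability / 100) * HOURS_PER_YEAR
        print(f"{name}: {availability}% -> about {downtime_h:.1f} h downtime/year")
    # Roughly 28.8 h, 22.0 h, 1.6 h and 0.4 h per year respectively.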


4.7 Geographical Location of the Data Centre

The first important step is to select a site that is physically safe and that has a reliable supply of energy and water as well as reliable communication systems. Additionally, with the growing focus on optimising energy consumption through the use of "Free Cooling" or "Natural Cooling", the temperature range and annual humidity at the Data Centre site, as well as the frequency distribution of these variables, affect the energy efficiency of the cooling systems and determine their choice.

Of course, although not directly linked to the energy efficiency of the facilities themselves, energy costs and the fuel source should not be overlooked. Energy costs are highly dependent on location and are based on local tariffs or on on-site energy generation (hydraulic, wind or solar), as well as on state and local taxes (or financial incentives), which can offer lower costs, energy efficiency rebates or tax benefits.

4.7.1 Cooling Solutions

If the external temperature conditions allow it, and the layout of the Data Centre does not impose limitations on the cooling system to be implemented, the solution with the lowest total cost of ownership (TCO) is "Free Cooling". Depending on the climate conditions, natural cooling will need to be supported, for a large proportion of the time, either by evaporative panels that take advantage of the adiabatic effect or by traditional means such as chilled water chillers or direct expansion condensers. We are going to focus on "Free Cooling", as it is the current design trend and it allows a greater return on investment in most mid-latitudes. Within the scope of natural cooling we can distinguish DIRECT FREE COOLING (DFC) and INDIRECT FREE COOLING (IFC or NFC).

4.7.1.1 Direct Free Cooling (DFC)

Direct free cooling (DFC) consists of taking air from outside the Data Centre, filtering it in accordance with the specifications established in the design, and introducing it into the space designated for the IT equipment. In given geographical locations it is possible to use free cooling for a large proportion of the time without needing support from other mechanical means (whether chilled water or direct expansion exchangers).

It is not uncommon for Data Centre operators to extend the dry bulb temperature limits recommended by ASHRAE in 2011 (Thermal Guidelines 9.9), from 19 to 27°C, up to the ranges this body defines as permitted (15 to 32°C).

Relative humidity, however, presents a limitation for this type of cooling, as the ranges established by ASHRAE run from a minimum dew point of 5.5°C to a maximum of 15°C, with a relative humidity limit of 60%. Again, with the aim of obtaining a greater number of hours of free cooling, operators can opt to exceed these values and go to the maximum permitted range of 20% to 80% relative humidity. However, the possibility of condensation must be avoided in all parts of the Data Centre. If relative humidity in the IT room falls below the desired range, it is necessary to use humidification systems, whether in the cooling unit itself or independent of it. In situations with higher than desired relative humidity, dehumidification must be handled by the equipment itself, with the resulting energy cost. When designing these cooling systems, the possible overpressure generated, and how it is normalised, must be carefully considered.

Expected PUE and power savings in different European cities by using SYSTEMAIR's Direct Free Cooling units (considering RH at 50%).
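A minimal sketch of the decision behind direct free cooling, using the temperature and dew point windows quoted above; the supply set-point, the 2 K margin and the function itself are simplified assumptions, not Systemair control logic.

    def direct_free_cooling_available(outdoor_c, outdoor_dew_point_c,
                                      supply_setpoint_c=24.0):
        # Outdoor air must sit within the humidity window (dew point 5.5-15 C)
        # and be cool enough to reach the supply set-point with a small margin.
        humidity_ok = 5.5 <= outdoor_dew_point_c <= 15.0
        temperature_ok = outdoor_c <= supply_setpoint_c - 2.0  # assumed margin
        return humidity_ok and temperature_ok

    print(direct_free_cooling_available(14.0, 9.0))    # True: free cooling usable
    print(direct_free_cooling_available(30.0, 18.0))   # False: mechanical support needed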

4.7.1.2 Indirect Free Cooling (IFC)

Indirect free cooling (IFC) consists of taking advantage of the low temperature of the external air through a heat exchanger. In this case, the air in the Data Centre circulates constantly, going from the hot aisle through the exchanger and returning to the cold aisle. As with direct free cooling, the external temperatures at many geographical locations can be exploited, allowing free cooling to be used without support from other mechanical cooling means. It has various advantages over DFC, some of which are highlighted below:

• In the case of sites with high relative humidity, free cooling can be taken advantage of for a greater proportion of the time, as the external airflow and the airflow in internal recirculation (Data Centre) are practically isolated.

• There is no mixing with external air and problems with pollutants or corrosive agents are therefore avoided.

• Adiabatic panels can be used to cool the external air, achieving a greater range of usable temperatures and thus reducing the inefficiency introduced by the thermal exchange between the external air and the air circulating in the Data Centre.

Expected PUE and power savings in different European cities by using SYSTEMAIR's Indirect Free Cooling units (without adiabatic support).

OPEX savings table for 100 kW of cooling, comparing SYSTEMAIR's Air Handling Units (AHU) with cooling systems based on chilled water. We have assumed a cost for power of 0.12 €/kWh, which is the EU-28 average.




City              Country          Savings (kWh/Year)   Savings (Euros/Year)
Amsterdam         Holland          184,262              € 22,111
Athens            Greece           109,939              € 13,193
Berlin            Germany          176,826              € 21,219
Bern              Switzerland      174,227              € 20,907
Bratislava        Slovakia         164,102              € 19,692
Bucharest         Romania          149,766              € 17,972
Copenhagen        Denmark          187,891              € 22,547
Helsinki          Finland          188,070              € 22,568
Istanbul          Turkey           138,790              € 16,655
Kiev              Ukraine          172,973              € 20,757
Lisbon            Portugal         133,280              € 15,994
Ljubljana         Slovenia         164,954              € 19,794
London            U.K.             178,438              € 21,413
Madrid            Spain            145,286              € 17,434
Minsk             Belarus          181,216              € 21,746
Moscow            Russia           180,678              € 21,681
Oslo              Norway           183,098              € 21,972
Paris             France           173,645              € 20,837
Prague            Czech Republic   181,395              € 21,767
Reykjavik         Iceland          196,762              € 23,611
Rome              Italy            136,909              € 16,429
Saint Petersburg  Russia           187,891              € 22,547
Stockholm         Sweden           184,173              € 22,101
Tirana            Albania          126,381              € 15,166
Vilnius           Lithuania        181,978              € 21,837
Warsaw            Poland           176,691              € 21,203
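The euro figures in the table follow directly from the kWh savings at the assumed EU-28 average price of 0.12 EUR/kWh, as the short check below shows for three of the cities.

    PRICE_EUR_PER_KWH = 0.12  # EU-28 average assumed in this document
    savings_kwh = {"Amsterdam": 184_262, "Madrid": 145_286, "Reykjavik": 196_762}
    for city, kwh in savings_kwh.items():
        print(f"{city}: {kwh * PRICE_EUR_PER_KWH:,.0f} EUR/year")
    # Amsterdam: 22,111   Madrid: 17,434   Reykjavik: 23,611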



4.7.2 Occupation of the Data Centre and Real Power Load versus the Design Conditions

Energy efficiency in any Data Centre is directly affected by the percentage of actual electrical load relative to the electrical power established in the Data Centre's design. The lower the load used in relation to the designed maximum, the lower the efficiency will be; this is directly linked to the occupation ratio. If the Data Centre will not be at its maximum occupation level in the initial years, a modular design should be considered to mitigate the impact of under-utilisation. Additionally, most Data Centres never operate at 100% of their design power, essentially to ensure the reliability of the equipment and the ability to maintain it without interruption. Depending on the culture of the organisation, the systems generally do not operate at more than 80% to 85% of the design power, at which point the facility is considered "full". This is a necessary step, and it involves a prudent trade-off between reliability and energy efficiency.


4.7.3 Oversizing in the Design Impacts Energy Efficiency

When the design power of a Data Centre is decided, many relevant factors influence the decision. The fear of making it too small, or of running out of space or energy within a few years, is a concerning but very realistic scenario. In recent years, the growth in demand for computation, energy and power density has made many Data Centres constructed less than 10 years ago functionally obsolete, a problem that is not easily solved within a single dedicated facility. Oversizing avoids this problem, but it causes energy efficiency to drop considerably.


4.8 Appropriate and Continued Maintenance of the Data Center

Regardless of how the Data Centre is designed and built, the equipment must receive maintenance throughout its operation. In the past, maintenance was performed in order to avoid system failures. Nowadays, while this is still an essential requirement, guaranteeing optimum energy efficiency is also part of the maintenance objectives. This is particularly the case for the cooling systems, whose efficiency and effectiveness quickly diminish if filters become blocked or cooling towers are not rigorously cleaned.




4.9 Remote Monitoring System (RMS) and Control (SCADA SYSTEMS)

Data Centre monitoring systems provide real-time information on environmental, quality and energy parameters, on communications and even on the physical security of the infrastructure. Fully automating this process reduces the time and cost of managing the IT assets. While remote monitoring systems only allow the viewing and management of alarms from the monitored parameters (temperature, relative humidity, pressure differentials, presence of water, etc.), Supervisory Control and Data Acquisition (SCADA) systems additionally allow control over different elements of the infrastructure, such as set-point changes in the cooling machines or the opening and closing of valves, as well as the automation of responses to different situations: for example, the activation of redundant systems in the event of a fault in a unit.

On the cooling level, the data acquisition systems allow:

• An optimum temperature to be maintained in the Data Centre: using the data obtained from monitoring in order to optimise the temperature in the Data Centre and achieve energy savings.
• Hot spots and excessively cold spots to be identified, in order to adjust the airflow adequately and reduce electricity consumption.
• Relative humidity (RH) to be controlled: maintaining the relative humidity within the established parameters in order to avoid condensation due to excessive RH and electrostatic discharges due to a lack of RH.
• Airflow to be managed: monitoring both the supply airflow and the return airflow to the air conditioning machines in order to guarantee that the cooling systems, as well as the hot and cold aisle partitioning, are functioning correctly. Excessively fast air leaving the floor tiles can prevent the adequate cooling of the equipment and cause excessive energy consumption.
• Differential pressure to be controlled: maintaining the optimum differential pressure and avoiding leaks between the hot and cold aisle partitions, thus reducing the airflow necessary for cooling and achieving energy savings.

SYSTEMAIR's cooling systems can be monitored independently or they can be integrated into the existing Building Management System (BMS), Data Centre Infrastructure Management platform or Supervisory Control and Data Acquisition system (SCADA). A simple, purely illustrative control-loop sketch based on such monitored parameters is shown below.
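As a purely illustrative example of how monitored parameters can drive control decisions (sensor names, thresholds and adjustment steps below are assumptions, not SYSTEMAIR control logic), consider the following simplified control cycle.

```python
# Hypothetical sketch of a monitoring-driven control step.
# Sensor readings, thresholds and adjustments are illustrative assumptions,
# not part of any real SCADA/BMS product.
from dataclasses import dataclass

@dataclass
class Readings:
    cold_aisle_temp_c: float       # averaged rack-inlet temperature
    aisle_diff_pressure_pa: float  # cold aisle minus hot aisle

def control_step(r: Readings, supply_setpoint_c: float, fan_speed_pct: float):
    """Return adjusted (supply set point, fan speed %) for one control cycle."""
    # Raise the supply set point while the cold aisle stays comfortably cool,
    # lower it if the racks run warm (an ASHRAE-style 18-27 degC band is assumed).
    if r.cold_aisle_temp_c < 22.0:
        supply_setpoint_c = min(supply_setpoint_c + 0.5, 25.0)
    elif r.cold_aisle_temp_c > 26.0:
        supply_setpoint_c = max(supply_setpoint_c - 0.5, 16.0)

    # Keep a slight positive pressure in the cold aisle to avoid hot-air
    # recirculation without over-driving the fans.
    if r.aisle_diff_pressure_pa < 2.0:
        fan_speed_pct = min(fan_speed_pct + 5.0, 100.0)
    elif r.aisle_diff_pressure_pa > 10.0:
        fan_speed_pct = max(fan_speed_pct - 5.0, 30.0)

    return supply_setpoint_c, fan_speed_pct

print(control_step(Readings(27.5, 1.5), supply_setpoint_c=22.0, fan_speed_pct=60.0))
```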


5.0 Most common data center design and cooling systems

Within a Data Centre, various clearly delimited areas can be distinguished for which the cooling or ventilation requirements differ and where SYSTEMAIR can provide solutions to optimise energy consumption. Depending on the design, the following areas are usually found:
1. IT equipment room: servers, data storage and communications equipment.
2. UPS and battery room.
3. Control Room.
4. Electric switches and transformer room.
5. Diesel generator area.
6. Cold generation area: Chiller Units, Direct Expansion Condensers, Dry Coolers, Air Handling Units, etc.

5.1 IT equipment room

5.1.1 Integrating cooling equipment in the room itself: CRAC (Computer Room Air Conditioning)

The current trend is to locate these units around the perimeter of the room so as to maximise the space for the IT equipment and to facilitate maintenance tasks. The following configurations can be distinguished:

• Propelling cold air through the false floor (down flow): output of cold air from the lower part of the unit and return through the upper part.

• Propelling cold air from the upper part of the unit directly into the cold aisle and returning from behind the unit (up flow).

• Special designs.

5.1.1.1 Down Flow CRAC Units: Propelling air through the false floor.

5.1.1.2 Up Flow CRAC Units: Propelling air to the cold aisle from the upper part of the unit.
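As a rough illustration of how the required airflow for any of these configurations relates to the IT load and the hot/cold aisle temperature differential, the following sketch applies the sensible-heat balance; the 35 kW load and the delta-T values are assumed, illustrative figures.

```python
# Rough airflow sizing sketch: the load and temperature differentials
# below are assumed, illustrative values.
RHO_AIR = 1.2    # kg/m3, approximate air density
CP_AIR = 1.006   # kJ/(kg*K), specific heat of air

def required_airflow_m3h(sensible_load_kw: float, delta_t_k: float) -> float:
    """Airflow needed to remove a sensible load at a given supply/return delta-T."""
    mass_flow_kg_s = sensible_load_kw / (CP_AIR * delta_t_k)
    return mass_flow_kg_s / RHO_AIR * 3600.0

for delta_t in (8.0, 10.0, 12.0):   # typical hot/cold aisle differentials
    flow = required_airflow_m3h(35.0, delta_t)
    print(f"35 kW at dT = {delta_t:.0f} K -> ~{flow:,.0f} m3/h")
```

The wider the temperature differential that the aisle containment allows, the lower the airflow (and fan energy) the CRAC units have to deliver.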



5.1.1.3 SYSTEMAIR CRAC Units

Technical features:
• ±1°C temperature control
• ±5% relative humidity control
• Microprocessor control for unit functions
• BMS communication (Modbus, BACnet, LonWorks, web/Ethernet)
• Air filtration EU4…EU7
• High sensible load / low latent load
• EC fans with backward-curved blades
• Easy maintenance access
• Full range of accessories: plenum/frames, pressostatic valves, water leakage detectors, smoke & fire sensors, humidifiers, reheating stages, fresh air intake, etc.

Versions:
• Downflow
• Upflow: the air intake is from the front as standard; as an option it is available from the rear side.

Configurations:
• Water cooled (CD & MD)
• Air cooled with remote condenser (CD & MD)
• Closed-circuit water/glycol with dry cooler (CD & MD)
• Chilled water (CW)

Cooling range: from 10 kW to 153 kW. For technical data, refer to the separate HYDRONIC CATALOG on the SYSTEMAIR web site: www.systemair.com

5.1.1.4 Chiller Units for SYSTEMAIR CRAC

SYSTEMAIR manufactures a complete range of chiller units.

Range: from 5 kW to 1700 kW
• Air cooled chillers and heat pumps
• Water cooled chillers, heat pumps and condenser units
• Free Cooling modules
• Roof top units

For technical data, refer to the separate HYDRONIC CATALOG on the SYSTEMAIR web site: www.systemair.com




Computer Room Air Cooling unit

5.1.1.5 Condenser Units for SYSTEMAIR CRAC (Direct Expansion)

SYSTEMAIR manufactures a dedicated range of Direct Expansion condenser units for its line of Computer Room Air Conditioning units.

Range: from 9.4 kW to 65 kW
• Horizontal and vertical air flow.

For technical data, refer to the separate HYDRONIC CATALOG on the SYSTEMAIR web site: www.systemair.com

5.1.1.6 Special CRAC unit design

SYSTEMAIR can design CRAC units according to the specific requirements of the customer's Data Center project. The units are made to order to the technical specifications established by the client. For technical data, refer to the separate HYDRONIC CATALOG on the SYSTEMAIR web site: www.systemair.com


5.1.2 Room cooling via Air Handling Units (AHU) – Indirect Free Cooling

The IT room shall be provided with a separation between the hot aisle and the cold aisle. The air will be propelled through the false floor or directly into the cold aisle, and the room's air return will be through a plenum in the upper part of the room. There are multiple possible designs that can be adapted to the existing machine models and other common options; as mentioned in chapter 4.3, these include sealing the hot aisle while keeping the cold area of the Data Centre open. We distinguish the modules in our range according to the type of exchanger and the maximum level of energy efficiency:

• DCC IFC: Highly energy-efficient aluminium exchanger. Free cooling temperature range up to 29ºC (at 40% RH).

• DCC+ IFC: Maximum-efficiency polyethylene exchanger. Free cooling temperature range up to 40ºC.

5.1.2.1 DCC IFC Indirect Air Handling Unit Series

The design of the DCC IFC Series, made up of modules in a standard chassis, allows the unit to be perfectly adapted to the needs of each Data Centre, in accordance with the geographical location and the design requirements. The levels of energy efficiency, the extremely low operating costs, the ease of maintenance and a minimum investment cost per kW of cooling allow the Data Centre to run extremely economically.

These cooling units use a cross-flow or counter-flow aluminium plate exchanger (depending on the model). External air enters from the top right part, passes through a filter and an optional adiabatic panel for additional cooling of the air (when necessary), and is propelled by high-efficiency EC fans that regulate their speed in accordance with the cooling needs. This airflow passes through the plate exchanger, absorbing thermal energy from the air returning from the Data Centre.

The air that returns from the hot aisle in the Data Centre is channelled through the top left part of the machine. It passes through a filter and then through the plate exchanger. Optionally, a humidification system can be incorporated, as can a support cooling coil fed with chilled water or Direct Expansion (for when the external temperature does not permit free cooling) and a reheating stage for when the unit is used for dehumidification. The air is then propelled back into the cold aisle by a high-efficiency EC fan in order to achieve the airflow necessary for cooling the IT equipment.

For greater energy performance and a higher level of efficiency, the unit allows a second plate exchanger and a second adiabatic layer to be incorporated. It is important to emphasise that the water from the adiabatic system is recovered in a collection tray, thus minimising losses and reducing the impact on the WUE.

The mode of operation distinguishes two stages, depending on the external temperature (a simplified decision sketch follows after the list):

• The thermal exchange between the internal and external air achieves a temperature of 27ºC or lower for the air propelled into the cold aisle (or lower still, depending on the Data Centre operator's criteria). In this case there is no mechanical cooling support and the only consumption is that of the centrifugal fans for the internal and external air streams. Both airflows are regulated by the unit's control system so as to achieve the desired temperature.

• The thermal exchange between the internal and external air cannot achieve a temperature of 27ºC or lower for the air propelled into the cold aisle (or the operator's chosen set point). In this case, if the external air temperature is below the temperature of the return air from the Data Centre, the mechanical cooling support (either chilled water or a DX module) will be partial. If the external temperature is above the Data Centre return air temperature, the mechanical support will have to provide the full cooling duty.
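The decision between these two stages can be sketched as follows; this is a simplified illustration with assumed thresholds and an assumed exchanger approach temperature, not the unit's actual control algorithm.

```python
# Simplified decision sketch for the two stages above.
# Thresholds and the approach temperature are illustrative assumptions.

def ifc_cooling_mode(external_c: float, return_air_c: float,
                     supply_setpoint_c: float = 27.0,
                     exchanger_approach_k: float = 3.0) -> str:
    """Classify the cooling mode of an indirect free-cooling AHU.

    exchanger_approach_k is an assumed approach temperature: the supply air
    cannot get closer than this to the external air temperature."""
    achievable_supply_c = external_c + exchanger_approach_k
    if achievable_supply_c <= supply_setpoint_c:
        return "full free cooling (fans only)"
    if external_c < return_air_c:
        return "free cooling + partial mechanical support (CW or DX)"
    return "full mechanical cooling"

for t_ext in (10, 26, 40):
    print(f"{t_ext:>2} degC outside -> {ifc_cooling_mode(t_ext, return_air_c=37.0)}")
```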

Finally, it is important to highlight the optional adiabatic panel system: water circulates through a permeable membrane that allows air to pass through, achieving additional cooling of the external air and thus improving the efficiency and the PUE.
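To give an idea of the additional cooling such an adiabatic panel can provide, the following rough psychrometric sketch uses Stull's (2011) empirical wet-bulb approximation; the panel's saturation effectiveness of 0.85 is an assumed, illustrative value.

```python
# Rough estimate of adiabatic (evaporative) pre-cooling of the external air.
# The wet-bulb formula is Stull's (2011) empirical approximation (valid
# roughly for 5-99 % RH); the panel effectiveness is an assumed value.
import math

def wet_bulb_c(t_c: float, rh_pct: float) -> float:
    return (t_c * math.atan(0.151977 * math.sqrt(rh_pct + 8.313659))
            + math.atan(t_c + rh_pct) - math.atan(rh_pct - 1.676331)
            + 0.00391838 * rh_pct ** 1.5 * math.atan(0.023101 * rh_pct)
            - 4.686035)

def after_adiabatic_panel(t_c: float, rh_pct: float,
                          effectiveness: float = 0.85) -> float:
    """Dry-bulb temperature after the panel, given its saturation effectiveness."""
    return t_c - effectiveness * (t_c - wet_bulb_c(t_c, rh_pct))

print(f"35 degC, 30 % RH -> ~{after_adiabatic_panel(35, 30):.1f} degC entering the exchanger")
```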

Competitive advantages of the DCC IFC AHU Series:
• Available with one or two high-performance cross-flow exchangers.
• Adiabatic panel to increase the power efficiency.
• Recovery coil for increased efficiency (ERE).
• Cutting-edge EC (electronically commutated) fans.
• MERV 8 up to MERV 13 filters.
• Easy maintenance with access on each side of the unit.
• Design by modules and simple configuration.
• Panels with thermal insulation and treated with zinc-aluminium.
• Possibility of a humidification module.
• Exchanger coil as a free cooling backup with Direct Expansion or Water.
• Exchanger coil ready to work with incoming water at 14ºC and 19ºC at return. Maximum efficiency.
• Connection to BMS with easily accessible protocol and signal mapping.
• CAREL control system.

Optional accessories:
• Adiabatic humidifier module.
• Support cooling coil for hot weather (DX or CW).
• Integrated DX cooling module (only certain models).
• Resistive steam lance humidifiers.
• Reheating stage.
• Double power feed (enables UPTIME TIER-3 compliance).
• Uninterruptible Power Supply (UPS) for control unit.

Range: cooling power from 56 kW to 736 kW. For technical data, refer to the separate DATA CENTER CATALOG on the SYSTEMAIR web site: www.systemair.com




Photos/renderings of the units: Air Handling Units (AHU) with plate heat exchanger, and a diagram of the unit components.

Psychrometric diagram for the DV IFC 150 (230 kW) with two plate heat exchangers and adiabatic panel.

The map below shows the percentage of annual hours with temperatures below 18.5 ºC, i.e. the hours in which no mechanical support is needed to provide 230 kW using free cooling. Blue = annual hours; red = savings in kEUR; green = mechanical PUE.
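The percentage shown on such a map can be derived from an hourly weather series for the site; the sketch below assumes a hypothetical CSV file with one temperature value per hour (file name and column name are assumptions).

```python
# Sketch of how the "annual hours below 18.5 degC" figure can be computed
# from an hourly temperature series (hypothetical CSV, one value per hour).
import csv

def free_cooling_hours(csv_path: str, threshold_c: float = 18.5):
    """Return (hours below threshold, share of the year) from an hourly series."""
    with open(csv_path, newline="") as f:
        temps = [float(row["temp_c"]) for row in csv.DictReader(f)]
    below = sum(1 for t in temps if t < threshold_c)
    return below, below / len(temps)

# Example usage (file name is hypothetical):
# hours, share = free_cooling_hours("hourly_temperatures_2023.csv")
# print(f"{hours} h/year ({share:.0%}) available for full free cooling")
```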

5.1.2.2 Support cooling systems (chilled water and direct expansion)

SYSTEMAIR manufactures a complete range of chiller and condensing units for this support cooling duty. For technical data, refer to the separate HYDRONIC CATALOG on the SYSTEMAIR web site: www.systemair.com




5.1.2.3 DCC+ IFC Air Handling Unit Series

The units from the DCC+ IFC series are characterised by an extraordinary level of efficiency that allows free cooling to be used at practically any latitude. The energy efficiency levels are up to 30% greater than other free cooling systems on the market, resulting in extremely low operational costs. These cooling units use a polyethylene exchanger based on SYSTEMAIR's own technology, with a greater thermal exchange capacity than traditional aluminium exchangers.

External air enters from the top right part, passes through a filter and is propelled by high-efficiency EC fans that regulate their speed in accordance with the cooling needs. The air is then sprayed with water for additional cooling (when necessary) and passes through a double polyethylene cross-flow exchanger, where it is sprayed with water a second time to reduce its temperature and absorb the maximum possible amount of thermal energy from the Data Centre return air. Before being expelled from the upper part of the unit, this air can optionally pass through a recovery device so that the excess energy can be used for other services within the building. As with the IFC Series, the water from the adiabatic system is recovered in a collection tray, thus minimising losses and reducing the impact on the WUE.

The air that returns from the hot aisle in the Data Centre is channelled through the right side of the machine. It passes through a filter and then through the plate exchanger. A support cooling coil is incorporated (for when the external temperature does not allow free cooling). The air is then propelled back into the cold aisle by a high-efficiency EC fan that regulates its speed through the control system in order to achieve the airflow necessary for cooling the IT equipment.

Four operating modes can be distinguished (a simplified selection sketch follows after the list):

• High external temperature: The required cooling is obtained by means of adiabatic evaporative cooling. The external air is humidified through nozzles that turn the water into a mist of small droplets, considerably reducing its temperature. The thermal exchange between the Data Centre return air and the external air takes place in the exchanger. In this operating mode there is a double airflow, internal and external, but the two streams are not mixed, so no humidification or dehumidification of the room air is required.

• Free cooling mode: If the external temperature corresponds to the set-point temperature, the unit changes to Direct Free Cooling. The hot air in the room is pushed outside without any thermal exchange.

• Mixed mode: If the external temperature is lower than the cold-aisle set-point temperature, cooling is achieved by a combination of direct and indirect free cooling. Depending on the requirements, part of the airflow is directed through a by-pass while the rest goes through the exchanger. As the external temperature changes, the proportion of airflow through the by-pass and the exchanger varies in accordance with the control algorithm.

• Low external temperature: The system works in indirect free cooling mode so as to prevent the external air from excessively dehumidifying the room. Depending on the external temperature and the cold-aisle set-point temperature, the control system varies the external airflow.
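The mode selection described above can be sketched as follows; the temperature thresholds are illustrative assumptions and do not represent the unit's actual control algorithm.

```python
# Simplified mode selection for the four operating modes described above.
# All thresholds are illustrative assumptions.

def dcc_plus_ifc_mode(external_c: float, cold_aisle_setpoint_c: float) -> str:
    if external_c > cold_aisle_setpoint_c:
        # Hot outside: adiabatic evaporative cooling on the external stream,
        # heat rejected through the polyethylene exchanger only.
        return "high external temperature: indirect + adiabatic cooling"
    if abs(external_c - cold_aisle_setpoint_c) <= 1.0:
        return "free cooling mode: direct free cooling, room air exhausted"
    if external_c > cold_aisle_setpoint_c - 10.0:
        return "mixed mode: airflow split between by-pass and exchanger"
    return "low external temperature: indirect free cooling, reduced external airflow"

for t in (35, 24, 18, 2):
    print(f"{t:>2} degC outside -> {dcc_plus_ifc_mode(t, cold_aisle_setpoint_c=24.0)}")
```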

Competitive advantages of the DCC+ IFC AHU Series:
• Minimal energy consumption with minimal pressure drops.
• Low external airflow rate, resulting in lower energy consumption.
• Minimal space required.
• Cutting-edge EC (electronically commutated) fans.
• Lower OPEX: pPUE of 1.03.
• Exchanger coil as a free cooling backup, with a compressor already incorporated.
• Double adiabatic system.
• "Free Cooling" can be used for temperatures up to 40ºC.

Range: Cooling power from 11.1 kW to 226 kW



Photos/renderings of the units. Operating-mode diagrams: high external temperature, free cooling mode, mixed mode, low external temperature. Diagram of the unit components.


5.1.3 Room cooling via Air Handling Units (AHU) - Direct Free Cooling

The IT room shall be provided with a separation between the hot aisle and the cold aisle so as to maximise energy efficiency. The air will be propelled through the false floor or directly into the cold aisle, and the return air, collected through a plenum in the top part of the room or another set-up, will be pushed directly outside. The range of Direct Free Cooling units is called Adcoolair DFC.

5.1.3.1 Adcoolair DFC Air Handling Unit Series

The design of the Adcoolair DFC Series, made up of modules in a standard chassis, allows the unit to be perfectly adapted to the needs of each Data Centre, in accordance with the geographical location and the design requirements. The levels of energy efficiency, the extremely low operating costs, the ease of maintenance and a minimum investment cost per kW of cooling allow the Data Centre to run extremely economically.

External air enters from the top left part, passes through a filter and is propelled by high-efficiency EC fans. It goes through a support cooling coil fed with chilled water or Direct Expansion (for when the external temperature does not permit free cooling) and a reheating stage for when the air has to be cooled in the coil for dehumidification. Finally, the air, now at the required temperature and humidity conditions, is propelled into the room. The return air from the hot aisle in the Data Centre is channelled through the left side of the machine. If the external temperature is within the designated operational range for the Data Centre, this air is pushed directly outside. If the external temperature is excessively hot, or the relative humidity exceeds the established limits, the system recirculates the air in the Data Centre and cools it by means of the support coil.

Competitive advantages of the Adcoolair DFC AHU Series:
• Cutting-edge EC (electronically commutated) fans.
• Easy maintenance with access on each side of the unit.
• Design by modules and simple configuration.
• Panels with thermal insulation and treated with zinc-aluminium.
• Possibility of a humidification module.
• Recovery coil for increased efficiency (ERE).
• Exchanger coil as a free cooling backup with Direct Expansion or Water.
• Exchanger coil ready to work with incoming water at 14ºC and 19ºC at return. Maximum efficiency.
• Connection to BMS with easily accessible protocol and signal mapping.

Optional accessories:
• Support cooling coil for hot weather (DX or CW).
• Integrated DX cooling module (only certain models).
• Resistive steam lance humidifier.
• Reheating stage.
• Double power feed (enables UPTIME TIER-3 compliance).
• Uninterruptible Power Supply (UPS) for control unit.

The mode of operation distinguishes three stages, which apply to both temperature and humidity:

• External temperature > 27ºC or relative humidity > 60%: It is not possible to make use of free cooling and the air must be cooled with the support coil (chilled water chillers or direct expansion condensers). In this case, no air is taken from outside (or it is kept to a minimum if the fresh-air renewal for the IT room is done through the unit itself) and the air is recirculated within the Data Centre: from the hot aisle return, through the cooling coil, and back into the cold aisle.

• External temperature < 18ºC: External air is mixed with part of the internal air returning from the hot aisle and is propelled into the cold aisle after passing through a valve system, in the proportion determined by the cooling unit's control system and the rotation speed of the fans.

• 18ºC < External Temperature