Cold Logik and RDHx Solution for Data Center Energy Optimization

2018 ◽  
Vol 7 (3.4) ◽  
pp. 113
Author(s):  
T Suresh ◽  
Dr A. Murugan

In data centers of all types, maintaining the right temperature at minimal cost and energy is a key objective, as energy saving is crucial in an increasingly data-driven industry and a global focus for all industries. In information technology, more than 60% of energy is consumed by data centers, which must remain up and running at all times. According to an Avocent data center study, more than 54% of data centers worldwide are being redesigned to improve efficiency and reduce operational cost and energy consumption. A major challenge for data center managers and operators is how to maintain server temperatures with less power and energy. As data center power densities approach 5 kilowatts (kW) per cabinet, organizations are looking for ways to manage the heat with newer technologies. Power usage per square foot can be reduced by incorporating liquid-cooling devices instead of increasing airflow volume, which is especially important in a data center with a typical under-floor cooling system. This research paper applies Rear-Door Heat eXchanger (RDHx) and Cold Logik solutions to reduce energy consumption. It reports the results of implementing the Cold Logik RDHx solution in a data center and shows how it saves energy and power. The data center optimized space, cooling, power and operational cost by implementing RDHx technology, enabling more servers to be added without increasing floor space while reducing cooling and power costs, and relieving the data center of the heat dissipated by the servers.
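
The core idea, neutralizing server exhaust heat at the rack door so that the room-level air system carries only a fraction of the load, can be sketched with a simple heat balance. The figures below (rack count, per-cabinet load, RDHx effectiveness) are hypothetical assumptions for illustration, not measurements from the paper.

```python
# Back-of-the-envelope sketch (not from the paper): estimate how much heat a
# rear-door heat exchanger (RDHx) removes at the rack and how much load is left
# for the room-level under-floor air system. All figures are hypothetical.

RACK_LOAD_KW = 5.0          # per-cabinet IT load, as in the ~5 kW figure cited above
RDHX_EFFECTIVENESS = 0.9    # assumed fraction of rack heat absorbed at the door
NUM_RACKS = 40              # assumed room size

heat_removed_at_door = RACK_LOAD_KW * RDHX_EFFECTIVENESS * NUM_RACKS
residual_room_load = RACK_LOAD_KW * (1 - RDHX_EFFECTIVENESS) * NUM_RACKS

print(f"Heat neutralized at the rack doors: {heat_removed_at_door:.0f} kW")
print(f"Residual load on room air cooling:  {residual_room_load:.0f} kW")
```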

2021 ◽  
Vol 11 (11) ◽  
pp. 4719
Author(s):  
Romulos da S. Machado ◽  
Fabiano dos S. Pires ◽  
Giovanni R. Caldeira ◽  
Felipe T. Giuntini ◽  
Flávia de S. Santos ◽  
...  

Data centers are widely recognized for demanding many energy resources. The greater the computational demand, the greater the use of resources operating together; consequently, the greater the heat, the greater the need for cooling power, and the greater the energy consumption. In this context, this article reports an industrial experience of achieving energy efficiency in a data center through a new layout proposal, reuse of previously existing resources, and air conditioning. The primary measures adopted were cold-aisle containment, an increase in the raised floor's height, and better direction of the cold airflow toward the server air intakes. The three legacy refrigeration machines from the old data center were reused, and no new ones were purchased. In addition to the 346 existing devices, 80 new pieces of equipment (servers and network assets) were added as load to be cooled. Even with this increase in equipment, the changes improved energy efficiency compared with the old data center, reducing temperature by approximately 41% and, consequently, saving energy.


2020 ◽  
Vol 16 (6) ◽  
pp. 155014772093577
Author(s):  
Zan Yao ◽  
Ying Wang ◽  
Xuesong Qiu

With the rapid development of data centers in smart cities, how to reduce energy consumption while improving economic benefits and network performance is becoming an important research subject. In particular, data center networks do not always run at full load, which leads to significant energy consumption. In this article, we focus on the energy-efficient routing problem in software-defined network (SDN)-based data center networks. For the in-band control mode of software-defined data centers, we formulate a dual optimization objective of energy saving and load balancing among controllers. To cope with the large solution space, we design a deep Q-network (DQN)-based energy-efficient routing algorithm to find energy-efficient data paths for traffic flows and control paths for switches. Simulation results reveal that the DQN-based algorithm trains on only part of the state space yet achieves a good energy-saving effect and load balancing in the control plane. Compared with an optimization solver and the CERA heuristic algorithm, the energy-saving effect of the DQN-based algorithm is almost the same as the heuristic's, but its computation time is greatly reduced, especially in scenarios with large numbers of flows, and it is more flexible for formulating and solving the multi-objective optimization problem.
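
For readers unfamiliar with deep Q-networks, the sketch below shows the general shape of such an agent: a small network maps a network state to Q-values over candidate actions, actions are picked epsilon-greedily, and training minimizes the temporal-difference error over a replay buffer. This is not the authors' algorithm; the state (a link-utilization vector), the action (choice among a few candidate paths), and the reward (some weighted combination of energy saving and controller load balance) are simplified stand-ins, and details such as a target network are omitted.

```python
# Minimal DQN skeleton for path selection (illustrative stand-in, not the paper's method).
import random
from collections import deque

import torch
import torch.nn as nn

STATE_DIM = 16      # e.g., utilization of 16 links (assumption)
NUM_ACTIONS = 4     # e.g., 4 candidate paths per flow (assumption)

class QNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, NUM_ACTIONS),
        )

    def forward(self, x):
        return self.net(x)

q_net = QNet()
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)
replay = deque(maxlen=10_000)   # filled with (state, action, reward, next_state) tuples
gamma, epsilon = 0.95, 0.1

def select_path(state):
    """Epsilon-greedy choice among candidate paths for one flow."""
    if random.random() < epsilon:
        return random.randrange(NUM_ACTIONS)
    with torch.no_grad():
        return int(q_net(torch.tensor(state).float()).argmax())

def train_step(batch_size=32):
    """One gradient step on the temporal-difference error over a replay sample."""
    if len(replay) < batch_size:
        return
    batch = random.sample(replay, batch_size)
    s, a, r, s2 = map(torch.tensor, zip(*batch))
    q = q_net(s.float()).gather(1, a.long().unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        target = r.float() + gamma * q_net(s2.float()).max(1).values
    loss = nn.functional.mse_loss(q, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```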


Author(s):  
Tahir Cader ◽  
Ratnesh Sharma ◽  
Cullen Bash ◽  
Les Fox ◽  
Vaibhav Bhatia ◽  
...  

The 2007 US EPA report to Congress (US EPA, 2007) on the state of energy consumption in data centers brought to light the true energy inefficiencies built into today's data centers. Marquez et al. (2008) conducted an initial analysis of the productivity of a Pacific Northwest National Lab computer using The Green Grid's Data Center Energy Productivity metric (The Green Grid, 2008). Their study highlights how the Top500 ranking of computers disguises the serious energy inefficiency of today's High Performance Computing data centers. In the rapidly expanding Cloud Computing space, the race will be won by the providers that deliver the lowest cost of computing, a cost heavily influenced by the operational costs incurred by data centers. As a means to address the urgent need to lower the cost of computing, solution providers have been focusing intensely on real-time monitoring, visualization, and control/management of data centers. The monitoring aspect involves the widespread use of networks of sensors that monitor key data center environmental variables such as temperature, relative humidity, air flow rate, pressure, and energy consumption. Such data are then used to visualize and analyze data center problem areas (e.g., hotspots), followed by control/management actions designed to alleviate those problem areas. The authors have been researching the operational benefits of a network of sensors tied into a software package that uses the data to visualize, analyze, and control/manage the data center cooling system and IT equipment for maximum operational efficiency. The research is being conducted in a corporate production data center that is networked into the authors' company's global network of data centers. Results will be presented that highlight the operational benefits realizable through real-time monitoring and visualization.
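
The monitor-analyze-act loop described above can be illustrated with a minimal sketch: collect rack-inlet temperatures from the sensor network, flag hotspots against an allowable threshold, and hand the flagged locations to a control/management step. The sensor names, readings, and the 27 °C threshold below are hypothetical placeholders, not data from the authors' production data center.

```python
# Hotspot detection over a set of (assumed) rack-inlet temperature readings.
ALLOWED_INLET_C = 27.0  # assumed allowable rack-inlet temperature

def find_hotspots(readings: dict[str, float]) -> list[tuple[str, float]]:
    """Return (location, temperature) pairs that exceed the inlet threshold, hottest first."""
    return sorted(
        ((loc, t) for loc, t in readings.items() if t > ALLOWED_INLET_C),
        key=lambda pair: pair[1],
        reverse=True,
    )

if __name__ == "__main__":
    sample = {"rack-A01-inlet": 24.1, "rack-A07-inlet": 29.3, "rack-B03-inlet": 27.8}
    for location, temp in find_hotspots(sample):
        # A control/management action would then target the affected zone,
        # e.g. by adjusting the CRAC serving that aisle.
        print(f"hotspot: {location} at {temp:.1f} C")
```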


Author(s):  
Kanahavalli Mardamutu ◽  
Vasaki Ponnusamy ◽  
Noor Zaman

The green energy paradigm has been gaining popularity in computing systems from the software, hardware, infrastructure and application perspectives. Within that concept, data center greening is of utmost importance, since data centers are among the most energy-consuming elements. Data centers are seen as the technology era's dark, energy-swallowing secret. Reducing energy consumption at data centers can tremendously reduce their carbon footprint. Not addressing the issue immediately will lead to significant energy usage by data centers and will hinder their growth. The call for sustainable, energy-efficient data centers leads to venturing into data center green computing. The green computing concept can be achieved through several methods adopted by researchers, including renewable energy, virtualization through cloud computing, proper cooling systems, identifying suitable locations to harvest energy whilst reducing the need for air-conditioning, and employing suitable networking and information technology infrastructure. This paper focuses on several approaches used by researchers to reduce energy consumption at data centers while deploying an efficient database management system. This paper differs from others in the literature by offering suitable solutions based on a hybrid model for green computing in data centers.


2016 ◽  
Vol 24 (04) ◽  
pp. 1630008 ◽  
Author(s):  
Kofi Owura Amoabeng ◽  
Jong Min Choi

Due to the advancement of the telecommunication and information technology (IT) industry, internet data centers (IDCs) have become widespread in the public and private sectors. As such, energy demand in these centers has also become increasingly prominent. Several energy management technologies have been studied to determine the options available to minimize the energy required to operate a data center and to reduce greenhouse gas emissions. The cooling system is required to remove the high heat dissipated by the IT electronic components, especially the servers, in order to ensure safe and reliable working conditions. However, it accounts for more than one-third of the total energy consumption in the data center. In this study, the energy efficiency technologies usually applied to cooling systems in data centers were reviewed. The aim is to identify strategies that will reduce the energy consumption of the cooling system, since the cooling demand in data centers is year-round. Prior to that, the performance metric most commonly used in analyzing data center efficiency was discussed. The conventional cooling system technologies utilized in data centers were also presented. Lastly, innovative cooling technologies for future solutions in data centers were discussed.
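
The most widely used metric of this kind is power usage effectiveness (PUE): total facility energy divided by IT equipment energy. The sketch below uses hypothetical figures chosen so that the cooling share matches the "more than one-third" figure cited above.

```python
# PUE with illustrative (hypothetical) annual energy figures.
def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    return total_facility_kwh / it_equipment_kwh

it_kwh = 1_000.0               # IT equipment energy (assumed)
cooling_kwh = 600.0            # cooling share (assumed)
other_overhead_kwh = 100.0     # power distribution, lighting, etc. (assumed)

total = it_kwh + cooling_kwh + other_overhead_kwh
print(f"PUE = {pue(total, it_kwh):.2f}")
# -> PUE = 1.70; cooling alone is 600/1700, roughly 35% of the total,
#    consistent with "more than one-third" of consumption going to cooling.
```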


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Jayati Athavale ◽  
Minami Yoda ◽  
Yogendra Joshi

Purpose – This study aims to present the development of a genetic algorithm (GA)-based framework for minimizing data center cooling energy consumption by optimizing the cooling set-points while ensuring that thermal management criteria are satisfied.

Design/methodology/approach – Three key components of the developed framework are an artificial neural network-based model for rapid temperature prediction (Athavale et al., 2018a, 2019), a thermodynamic model for cooling energy estimation and a GA-based optimization process. The static optimization framework informs the IT load distribution and cooling set-points in the data center room to simultaneously minimize cooling power consumption while maximizing IT load. The dynamic framework aims to minimize cooling power consumption in the data center during operation by determining the most energy-efficient set-points for the cooling infrastructure while preventing temperature overshoots.

Findings – Results from the static optimization framework indicate that among the three levels (room, rack and row) of IT load distribution granularity, rack-level distribution consumes the least cooling power. A 7.5 h test case implementing dynamic optimization demonstrated a reduction in cooling energy consumption between 21% and 50%, depending on the current operation of the data center.

Research limitations/implications – The temperature prediction model, being data-driven, is specific to the lab configuration considered in this study and cannot be directly applied to other scenarios. However, the overall framework can be generalized.

Practical implications – The developed framework can be implemented in data centers to optimize the operation of the cooling infrastructure and reduce energy consumption.

Originality/value – This paper presents a holistic framework for improving the energy efficiency of data centers, which is of critical value given the high (and increasing) energy consumption of these facilities.
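
The optimization layer of such a framework can be sketched as a plain genetic algorithm over CRAC supply set-points, with the thermal constraint enforced as a penalty. The paper couples the GA to an ANN temperature model and a thermodynamic power model; below, a crude stand-in replaces both (worst inlet temperature rises with the supply set-point, cooling power falls), and every numeric constant is a hypothetical assumption.

```python
# Toy GA over cooling set-points (stand-in models, not the paper's ANN/thermodynamic models).
import random

SUPPLY_MIN, SUPPLY_MAX = 15.0, 27.0   # allowed CRAC supply set-points (C), assumed
INLET_LIMIT = 27.0                    # thermal-management constraint (C), assumed
N_CRACS, POP, GENS = 4, 30, 60

def inlet_temp(setpoints):            # stand-in for the temperature prediction model
    return max(setpoints) + 3.0       # assumed 3 C rise from supply to worst rack inlet

def cooling_power(setpoints):         # stand-in for the cooling-power model
    return sum(100.0 - 3.0 * s for s in setpoints)   # lower set-point -> more power

def fitness(setpoints):
    if inlet_temp(setpoints) > INLET_LIMIT:           # penalize constraint violations
        return float("inf")
    return cooling_power(setpoints)                   # minimize cooling power

def mutate(setpoints):
    return [min(SUPPLY_MAX, max(SUPPLY_MIN, s + random.gauss(0, 0.5))) for s in setpoints]

population = [[random.uniform(SUPPLY_MIN, SUPPLY_MAX) for _ in range(N_CRACS)]
              for _ in range(POP)]
for _ in range(GENS):
    population.sort(key=fitness)
    parents = population[: POP // 2]                  # truncation selection
    children = [mutate(random.choice(parents)) for _ in range(POP - len(parents))]
    population = parents + children

best = min(population, key=fitness)
print("best set-points (C):", [round(s, 1) for s in best], "power:", round(fitness(best), 1))
```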


Author(s):  
Srinivas Yarlanki ◽  
Rajarshi Das ◽  
Hendrik Hamann ◽  
Vanessa Lopez ◽  
Andrew Stepanchuk

Energy consumption has become a critical issue for data centers, triggered by the rise in energy costs, volatility in the supply and demand of energy, and the widespread proliferation of power-hungry information technology (IT) equipment. Since nearly half the energy consumed in a data center (DC) goes towards cooling, much of the effort in minimizing energy consumption in DCs has focused on improving the efficiency of cooling strategies by optimally provisioning the cooling power to match the heat dissipation in the entire DC. However, at a more granular level within the DC, the large range of heat densities of today's IT equipment makes the task of provisioning cooling power at the level of individual computer room air conditioning (CRAC) units much more challenging. In this work, we employ utility functions to present a principled and flexible method for determining the optimal settings of CRACs for joint management of power and temperature objectives at a more granular level within a DC. Such provisioning of cooling power to match the heat generated at a local level requires knowledge of thermal zones, the regions of DC space cooled by specific CRAC units. We show how thermal zones can be constructed for arbitrary settings of CRACs using potential flow theory. As a case study, we apply our methodology in a 10,000 sq. ft commercial DC using actual measured conditions and evaluate the usefulness of the method by quantifying possible energy savings in this DC.
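
The utility-function idea can be illustrated with a small search: score each candidate CRAC configuration with a joint utility that rewards low cooling power and penalizes thermal-zone temperatures above a target, then pick the highest-scoring configuration. The zone model, the fan-law power model and all weights below are hypothetical stand-ins, not the paper's measured facility.

```python
# Joint power/temperature utility over candidate CRAC blower settings (illustrative only).
from itertools import product

TARGET_C, ALPHA, BETA = 25.0, 1.0, 20.0   # assumed target and power/temperature weights

def zone_temp(blower_pct: float) -> float:
    """Stand-in thermal-zone model: more airflow -> cooler zone."""
    return 32.0 - 0.1 * blower_pct

def crac_power(blower_pct: float) -> float:
    """Stand-in fan-law power model (power ~ speed cubed), in kW."""
    return 10.0 * (blower_pct / 100.0) ** 3

def utility(blower_pcts) -> float:
    power_term = -ALPHA * sum(crac_power(b) for b in blower_pcts)
    temp_term = -BETA * sum(max(0.0, zone_temp(b) - TARGET_C) for b in blower_pcts)
    return power_term + temp_term

candidates = product([60, 70, 80, 90, 100], repeat=2)   # two CRACs, discrete blower speeds
best = max(candidates, key=utility)
print("best blower settings (%):", best, "utility:", round(utility(best), 2))
```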


2013 ◽  
Vol 411-414 ◽  
pp. 634-637
Author(s):  
Pei Pei Jiang ◽  
Cun Qian Yu ◽  
Yu Huai Peng

In recent years, with the rapid expansion of network scale and types of applications, cloud computing and virtualization technology have been widely used in data centers, providing fast, flexible and convenient services. However, energy consumption has also increased dramatically and has become a widespread concern around the world. In this paper, we study energy saving in optical data center networks. First, we summarize the traditional methods of energy saving and show that the predominant energy-consuming resources are the servers installed in the data centers. Then we present server virtualization technologies based on Virtual Machines (VMs), which have been widely used to reduce the energy consumption of servers. Results show that server consolidation based on VM migration can efficiently reduce the overall energy consumption compared with traditional energy-saving approaches by reducing the energy consumption of the entire network infrastructure in data center networks. For future work, we will study server consolidation based on VM migration in real environments and address QoS requirements and access latency.
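
The consolidation step that VM-migration-based energy saving relies on is essentially a packing problem: place VMs on as few hosts as possible so idle hosts can be powered down. The sketch below uses the generic first-fit-decreasing heuristic for illustration; it is not the specific scheme surveyed in the paper, and the capacities and demands are hypothetical.

```python
# First-fit-decreasing VM consolidation by CPU demand (illustrative heuristic).
def consolidate(vm_demands, host_capacity):
    """Return a list of hosts, each a list of the VM demands placed on it."""
    hosts = []
    for demand in sorted(vm_demands, reverse=True):   # place the largest VMs first
        for host in hosts:
            if sum(host) + demand <= host_capacity:   # first host that still fits
                host.append(demand)
                break
        else:
            hosts.append([demand])                    # otherwise open a new host
    return hosts

vms = [0.5, 0.2, 0.7, 0.1, 0.4, 0.3, 0.6]             # CPU demand as fraction of one host
placement = consolidate(vms, host_capacity=1.0)
print(f"{len(placement)} hosts active for {len(vms)} VMs:", placement)
```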


2017 ◽  
pp. 386-401
Author(s):  
Kanahavalli Mardamutu ◽  
Vasaki Ponnusamy ◽  
Noor Zaman

The green energy paradigm has been gaining popularity in computing systems from the software, hardware, infrastructure and application perspectives. Within that concept, data center greening is of utmost importance, since data centers are among the most energy-consuming elements. Data centers are seen as the technology era's dark, energy-swallowing secret. Reducing energy consumption at data centers can tremendously reduce their carbon footprint. Not addressing the issue immediately will lead to significant energy usage by data centers and will hinder their growth. The call for sustainable, energy-efficient data centers leads to venturing into data center green computing. The green computing concept can be achieved through several methods adopted by researchers, including renewable energy, virtualization through cloud computing, proper cooling systems, identifying suitable locations to harvest energy whilst reducing the need for air-conditioning, and employing suitable networking and information technology infrastructure. This paper focuses on several approaches used by researchers to reduce energy consumption at data centers while deploying an efficient database management system. This paper differs from others in the literature by offering suitable solutions based on a hybrid model for green computing in data centers.


Author(s):  
Rongliang Zhou ◽  
Zhikui Wang ◽  
Cullen E. Bash ◽  
Tahir Cader ◽  
Alan McReynolds

Due to the tremendous cooling costs, data center cooling efficiency improvement has been actively pursued for years. In addition to cooling efficiency, the reliability of the cooling system is also essential for guaranteed uptime. In traditional data center cooling system design with N+1 or higher redundancy, all the computer room air conditioning (CRAC) units are either constantly online or cycled according to a predefined schedule. Both cooling system configurations, however, have their respective drawbacks. Data centers are usually over-provisioned when all CRAC units are online all the time, and hence the cooling efficiency is low. On the other hand, although cooling efficiency can be improved by cycling CRAC units and turning off the backups, it is difficult to schedule the cycling such that sufficient cooling provisioning is guaranteed and gross over-provisioning is avoided. In this paper, we aim to maintain the data center cooling redundancy while achieving high cooling efficiency. Using model-based thermal zone mapping, we first partition data centers to achieve the desired level of cooling influence redundancy. We then design a distributed controller for each of the CRAC units to regulate the thermal status within its zone of influence. The distributed controllers coordinate with each other to achieve the desired data center thermal status using the least cooling power. When CRAC units or their associated controllers fail, racks in the affected thermal zones are still within the control “radius” of other decentralized cooling controllers through predefined thermal zone overlap, and hence their thermal status is properly managed by the active CRAC units and controllers. Using this failure-resistant data center cooling control approach, both cooling efficiency and robustness are achieved simultaneously. A higher flexibility in cooling system maintenance is also expected, since the distributed control system can automatically adapt to new cooling facility configurations introduced by maintenance.
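
The per-unit control idea can be sketched minimally: each CRAC regulates the hottest rack-inlet temperature inside its (overlapping) thermal zone by adjusting its supply-air set-point, so if one unit fails, its racks remain inside a neighbor's zone. A plain proportional controller stands in for the paper's coordinated distributed controllers; the gains, limits, and zone membership below are hypothetical.

```python
# Per-CRAC proportional set-point update over its thermal zone (illustrative stand-in).
TARGET_INLET_C = 25.0
SUPPLY_MIN_C, SUPPLY_MAX_C = 14.0, 24.0
KP = 0.8   # assumed proportional gain

def crac_setpoint(current_setpoint: float, zone_inlet_temps: list[float]) -> float:
    """Return the next supply-air set-point for one CRAC's thermal zone."""
    if not zone_inlet_temps:                          # zone empty (e.g., unit offline)
        return SUPPLY_MAX_C
    error = max(zone_inlet_temps) - TARGET_INLET_C
    new_setpoint = current_setpoint - KP * error      # hotter zone -> colder supply air
    return min(SUPPLY_MAX_C, max(SUPPLY_MIN_C, new_setpoint))

# Two overlapping zones: rack R3 is covered by both CRACs, so if CRAC-2 fails,
# CRAC-1 still reacts to R3's inlet temperature.
zone_1 = {"R1": 24.0, "R2": 25.5, "R3": 26.2}
zone_2 = {"R3": 26.2, "R4": 23.8}
print("CRAC-1 next set-point (C):", crac_setpoint(20.0, list(zone_1.values())))
print("CRAC-2 next set-point (C):", crac_setpoint(20.0, list(zone_2.values())))
```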

