Key Technologies to Implement High Thermal Ambient Data Center

Author(s):  
Shu Zhang ◽  
Yu Han ◽  
Nishi Ahuja ◽  
Xiaohong Liu ◽  
Huahua Ren ◽  
...  

In recent years, the internet services industry has been developing rapidly. Accordingly, the demands for compute and storage capacity continue to increase, and internet data centers are consuming more power than ever before to provide this capacity. Based on the Frost & Sullivan market survey, data centers across the globe now consume around 100 GWh of energy, and this consumption is expected to increase 30% by 2016. As development expands, IDC (Internet Data Center) owners realize that small improvements in efficiency, from architecture design to daily operations, will yield large cost reductions over time. Cooling energy is a significant part of the daily operational expense of an IDC. One trend in this industry is to raise the operational temperature of an IDC, which means running IT equipment in an HTA (higher ambient temperature) environment. This may also involve cooling improvements such as water-side or air-side economizers, which can be used in place of traditional closed-loop CRAC (computer room air conditioner) systems. But raising the ambient inlet air temperature cannot be done by itself without looking at more effective ways of managing cooling control and considering thermal safety. An important trend seen in industry today is customized design of IT (information technology) equipment and IDC infrastructure by the cloud service provider. This trend brings an opportunity to consider IT and IDC together when designing an IDC, from the early design phase to the daily operation phase, when facing the challenge of improving efficiency. It also provides a chance to extract more benefit from higher operational temperatures. The advantages and key components of a customized rack server design include reduced power consumption, more thermal margin with less fan power, and accurate thermal monitoring. Accordingly, the IDC infrastructure can be redesigned to support high-temperature operation. Raising the supply air temperature always means less thermal headroom for IT equipment: IDC operators have less response time when large power variations or IDC failures occur. This paper introduces a new solution called ODC (on-demand cooling) with PTAS (Power Thermal Aware Solution) technology to deal with these challenges. The ODC solution uses the real-time thermal data of the IT equipment as the key input to the cooling controls, rather than readings from traditional ceiling-installed sensors. It improves cooling control accuracy, decreases response time, and reduces temperature variation. By establishing smart thermal operation with characteristics like direct feedback, accurate control, and quick response, HTA can be achieved safely and with confidence. The results of real demo testing show that, with real-time thermal information, temperature oscillation and response time can be reduced effectively.
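
To make the control idea concrete, the following is a minimal Python sketch of an ODC-style loop, not the paper's implementation: the hottest real-time server inlet temperature (simulated here; in PTAS it would come from platform telemetry) drives a proportional adjustment of the supply air setpoint, so the cooling tracks the IT equipment directly instead of a ceiling sensor. The target, gain, and interface functions are illustrative assumptions.

```python
# A minimal sketch of on-demand cooling driven by server inlet temperatures.
# read_inlet_temps() and set_supply_setpoint() are hypothetical stand-ins for
# the platform telemetry (e.g., PTAS) and CRAC/CRAH actuator interfaces.
import random
import time

TARGET_INLET_C = 35.0  # assumed HTA target at the server inlets
GAIN = 0.5             # proportional gain: setpoint change per deg C of error

def read_inlet_temps():
    """Hypothetical stand-in: poll real-time inlet temperatures per server."""
    return [random.uniform(30.0, 37.0) for _ in range(40)]

def set_supply_setpoint(setpoint_c):
    """Hypothetical stand-in: command the cooling unit's supply setpoint."""
    print(f"supply air setpoint -> {setpoint_c:.2f} C")

def odc_step(setpoint_c):
    hottest = max(read_inlet_temps())  # worst inlet governs thermal safety
    error = TARGET_INLET_C - hottest   # >0: spare headroom, <0: too hot
    setpoint_c += GAIN * error         # raise supply temp only when safe
    set_supply_setpoint(setpoint_c)
    return setpoint_c

if __name__ == "__main__":
    setpoint = 25.0
    for _ in range(5):       # a few control iterations
        setpoint = odc_step(setpoint)
        time.sleep(1)        # a short period shortens response to power swings
```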

2021 ◽  
pp. 85-91
Author(s):  
Shally Vats ◽  
Sanjay Kumar Sharma ◽  
Sunil Kumar

The proliferation of a large number of cloud users has steered an exponential increase in the number and size of data centers. These data centers are energy hungry and burden cloud service providers with large electricity bills. There is environmental concern too, due to the large carbon footprint. A lot of work has been done on reducing the energy requirement of data centers through optimal use of CPUs. Virtualization has been used as the core technology for optimal use of computing resources via VM migration. However, networking devices also contribute significantly to the energy dissipation. We have proposed a two-level energy optimization method for the data center that reduces energy consumption while maintaining the SLA. VM migration is performed for optimal use of the physical machines as well as the switches that connect the physical machines in the data center. Results of experiments conducted in CloudSim on PlanetLab data confirm the superiority of the proposed method over existing methods that use only single-level optimization.
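
As a rough illustration of the two-level idea (not the authors' exact method), the sketch below consolidates VM loads onto hosts with first-fit decreasing and then counts the access switches that must stay powered, assuming each switch serves a fixed block of hosts; the capacities and topology are invented for the example.

```python
# Sketch: consolidating VMs frees hosts, and emptying a whole block of hosts
# lets its access switch sleep as well (the second optimization level).
HOST_CAPACITY = 100    # assumed normalized CPU capacity per host
HOSTS_PER_SWITCH = 4   # assumed hosts attached to one access switch

def place_vms(vm_loads):
    """First-fit decreasing: returns hosts as lists of VM loads."""
    hosts = []
    for load in sorted(vm_loads, reverse=True):
        for host in hosts:
            if sum(host) + load <= HOST_CAPACITY:
                host.append(load)
                break
        else:
            hosts.append([load])
    return hosts

def active_switches(hosts):
    """Switch blocks that still contain at least one non-empty host."""
    return {i // HOSTS_PER_SWITCH for i, h in enumerate(hosts) if h}

hosts = place_vms([55, 40, 35, 30, 25, 20, 15, 10])
print(len(hosts), "hosts active;", len(active_switches(hosts)), "switch(es) active")
```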


2018 ◽  
Vol 8 (4) ◽  
pp. 118-133 ◽  
Author(s):  
Fahim Youssef ◽  
Ben Lahmar El Habib ◽  
Rahhali Hamza ◽  
Labriji El Houssine ◽  
Eddaoui Ahmed ◽  
...  

Cloud users access services on a “pay as you go” basis. The daily increase in cloud users may decrease the performance, availability, and profitability of the hardware and software resources used in the cloud service. These challenges have been addressed by several load balancing algorithms operating between the virtual machines of the data centers. To arrive at a new load balancing improvement, this article's discussion is divided into two research axes. The first is the pre-classification of tasks according to whether their characteristics are fulfilled or not (the notion of levels); this technique relies on modeling the classification of tasks in ascending order using techniques that calculate the worst-case execution time (WCET). In the second, the authors choose data centers distributed among quasi-similar virtual machines, and the relationship between the virtual machines, based on the pre-scheduling levels, is modeled within the data center in terms of standard mathematical functions that control this relationship. The key point of the improvement is considering the current load of a data center's virtual machines and pre-estimating the execution time of a task before any allocation. This contribution allows cloud service providers to improve performance and availability and to maximize the use of the virtual machine workload in their data centers.
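
A minimal sketch of the level-based allocation described above, under stated assumptions: tasks are classified in ascending order of a precomputed WCET estimate, and each task is placed only after pre-estimating its finish time on every VM given the VM's current load. The data and tie-breaking are illustrative, not the authors' exact model.

```python
# Sketch: WCET-ordered task classification with pre-estimation before allocation.
def allocate(tasks_wcet, vm_loads):
    """tasks_wcet: WCET estimate per task; vm_loads: queued time per VM.
    Returns (task_index, vm_index) assignments."""
    plan = []
    order = sorted(range(len(tasks_wcet)), key=lambda t: tasks_wcet[t])  # levels
    for t in order:
        # Pre-estimate: finishing time on each VM if the task were placed there.
        vm = min(range(len(vm_loads)), key=lambda v: vm_loads[v] + tasks_wcet[t])
        vm_loads[vm] += tasks_wcet[t]
        plan.append((t, vm))
    return plan

print(allocate([8.0, 2.0, 5.0, 1.0], [3.0, 0.0]))
```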


2013 ◽  
Vol 135 (3) ◽  
Author(s):  
Dustin W. Demetriou ◽  
H. Ezzat Khalifa

This paper expands on the work presented by Demetriou and Khalifa (2013, “Thermally Aware, Energy-Based Load Placement in Open-Aisle, Air-Cooled Data Centers,” ASME J. Electron. Packag., 135(3), p. 030906), which investigated practical IT load placement options in open-aisle, air-cooled data centers. That study found that a robust approach was to use real-time temperature measurements at the inlet of the racks and to remove IT load from the servers with the warmest inlet temperatures. By considering the holistic optimization of the data center load placement strategy together with the cooling infrastructure, over a range of data center IT utilization levels, this study investigated: the effect of ambient temperature on data center operation; the consolidation of servers by completely shutting them off; a complementary strategy to those of Demetriou and Khalifa (2013) that increases the IT load beginning with the servers that have the coldest inlet temperature; and finally the development of load placement rules via either static (i.e., during data center benchmarking) or dynamic (using real-time data from the current thermal environment) allocation. In all of these case studies, by using a holistic optimization of the data center and the associated cooling infrastructure, a key finding has been that significant savings in the cooling infrastructure's power consumption are obtained by reducing the CRAH's airflow rate. In many cases, these savings can be larger than those from providing higher-temperature chilled water from the refrigeration units. Therefore, the path to realizing the industry's goal of higher IT equipment inlet temperatures to improve energy efficiency should be through both a reduction in airflow rate and an increase in supply air temperature, not necessarily through higher CRAH supply air temperatures alone.
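
A small sketch of one of the inlet-temperature-driven placement rules, shedding IT load from the warmest inlets first; server names, temperatures, and the unit load step are illustrative, and the complementary "add to the coldest inlet" rule is the same loop with the sort direction reversed.

```python
# Sketch: remove IT load from the servers with the warmest inlet temperatures.
def shed_load(servers, units):
    """servers: {name: (inlet_temp_C, load_units)}; removes `units` of load,
    warmest inlet first, and returns the updated mapping."""
    for name in sorted(servers, key=lambda n: servers[n][0], reverse=True):
        temp, load = servers[name]
        take = min(load, units)
        servers[name] = (temp, load - take)
        units -= take
        if units == 0:
            break
    return servers

racks = {"r1s1": (31.0, 4), "r1s2": (27.5, 4), "r2s1": (24.0, 4)}
print(shed_load(racks, 5))  # drains r1s1 first, then part of r1s2
```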


Load balancing algorithms and service broker policies play a crucial role in determining the performance of cloud systems. User response time and data center request servicing time are largely affected by them. Several load balancing algorithms and service broker policies exist in the literature to perform data center allocation and virtual machine allocation for a given set of user requests. In this paper, we investigate the performance of the equally spread current execution (ESCE) load balancing algorithm with the closest data center (CDC) service broker policy in a cloud environment consisting of homogeneous and heterogeneous device characteristics in the data centers and heterogeneous communication bandwidth between the different regions where the cloud data centers are deployed. We performed simulations using CloudAnalyst, an open-source tool, with different settings of device characteristics and bandwidth. The user response time and data center request servicing time are found to be considerably lower in the heterogeneous environment.
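
For readers unfamiliar with ESCE, the following sketch shows its core bookkeeping as commonly described for CloudAnalyst-style simulators: an allocation-count table from which each new request goes to the VM with the fewest active allocations. The class and names are illustrative, not CloudAnalyst's API.

```python
# Sketch: ESCE spreads requests so every VM carries a nearly equal number
# of active allocations at any moment.
class ESCEBalancer:
    def __init__(self, vm_ids):
        self.active = {vm: 0 for vm in vm_ids}  # current allocations per VM

    def allocate(self):
        vm = min(self.active, key=self.active.get)  # least-loaded VM
        self.active[vm] += 1
        return vm

    def release(self, vm):
        self.active[vm] -= 1  # called when a request completes

lb = ESCEBalancer(["vm0", "vm1", "vm2"])
print([lb.allocate() for _ in range(5)])  # spreads across vm0..vm2
```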


2019 ◽  
Vol 16 (4) ◽  
pp. 627-637
Author(s):  
Sanaz Hosseinzadeh Sabeti ◽  
Maryam Mollabgher

Goal: Load balancing policies map workloads onto virtual machines and seek to achieve their goals by creating an almost equal level of workload on every virtual machine. In this research, a hybrid load balancing algorithm is proposed with the aim of reducing response time and processing time. Design/Methodology/Approach: The proposed algorithm performs load balancing using a table that holds the status indicators of the virtual machines and the task list allocated to each virtual machine. Response time and processing time in the data center are evaluated for four algorithms: ESCE, Throttled, Round Robin, and the proposed algorithm. Results: The overall response time and data processing time of the proposed algorithm's data center are shorter than those of the other algorithms. The overall response time results show that the proposed algorithm's response time is 12.28% shorter than that of the Round Robin algorithm, 9.1% shorter than that of the Throttled algorithm, and 4.86% shorter than that of the ESCE algorithm. Limitations of the investigation: Due to time and technical limitations, load balancing with additional goals, such as lowering costs and increasing productivity, has not been pursued. Practical implications: Implementing the hybrid load balancing policy can improve response time and processing time. Load balancing causes the traffic load between virtual machines to be properly distributed and prevents bottlenecks, which is effective in increasing customer responsiveness. Finally, improving response time increases the satisfaction of cloud users and the productivity of computing resources. Originality/Value: This research can be effective in optimizing existing algorithms and takes a step toward further research in this area.
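
A hedged sketch of the table-driven allocation described above, not the authors' exact algorithm: each VM row carries a status indicator and its allocated task list; an AVAILABLE VM with the shortest task list is preferred, falling back to the globally least-loaded VM. The busy threshold and names are assumptions.

```python
# Sketch: a status table combining an availability check (Throttled-like)
# with an equal-spread tie-break (ESCE-like).
AVAILABLE, BUSY = "available", "busy"

class HybridBalancer:
    def __init__(self, vm_ids, busy_threshold=3):
        self.table = {vm: {"status": AVAILABLE, "tasks": []} for vm in vm_ids}
        self.busy_threshold = busy_threshold

    def allocate(self, task):
        free = [vm for vm, row in self.table.items() if row["status"] == AVAILABLE]
        pool = free or list(self.table)  # fall back to least-loaded overall
        vm = min(pool, key=lambda v: len(self.table[v]["tasks"]))
        row = self.table[vm]
        row["tasks"].append(task)
        if len(row["tasks"]) >= self.busy_threshold:
            row["status"] = BUSY
        return vm

lb = HybridBalancer(["vm0", "vm1"])
print([lb.allocate(f"t{i}") for i in range(5)])
```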


Author(s):  
Michael K. Patterson ◽  
Michael Meakins ◽  
Dennis Nasont ◽  
Prasad Pusuluri ◽  
William Tschudi ◽  
...  

Increasing energy-efficient performance built into today's servers has created significant opportunities for expanded Information and Communications Technology (ICT) capabilities. Unfortunately, the power densities of these systems now challenge data center cooling systems and have outpaced the ability of many data centers to support them. One of the persistent problems yet to be overcome in the data center space has been the separate worlds of ICT and facilities design and operations. This paper covers the implementation of a demonstration project in which the integration of these two management systems is used to gain significant energy savings while improving the operations staff's visibility into the full data center, both ICT and facilities. The majority of servers expose a host of platform information to the ICT management network. This demonstration project takes the front-panel temperature sensor data from the servers and provides it to the facilities management system to control the cooling system in the data center. The majority of data centers still use the cooling system's return air temperature as the primary control variable to adjust supply air temperature, significantly limiting energy efficiency. Current best practice uses a cold aisle temperature sensor to drive the cooling system, but even then the sensor is only a proxy for what really matters: the inlet temperature to the servers. The paper presents a novel control scheme in which the control of the cooling system is split into two control loops to maximize efficiency. The first control loop governs the cooling fluid and is driven by the temperature of the physically lowest server to ensure the correct supply air temperature. The second control loop governs the airflow in the cooling system: a variable speed drive is controlled by the differential temperature between the lowest server and the server at the top of the rack. Controlling to this differential temperature minimizes the amount of air moved (and the energy to do so) while ensuring no recirculation from the hot aisle. Controlling both of these facilities parameters with the servers' own data allows optimization of the energy used in the cooling system. Challenges with the integration of ICT management data into the facilities control system are discussed. This is expected to be the most fruitful area for improving data center efficiency over the next several years.
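
The split control scheme can be summarized in a few lines. The sketch below is illustrative, with assumed setpoints and proportional gains: loop 1 trims the supply air temperature from the bottom server's inlet reading, and loop 2 trims the fan variable speed drive from the bottom-to-top inlet differential.

```python
# Sketch of the two-loop scheme driven by server front-panel sensors.
SUPPLY_TARGET_C = 25.0  # assumed target at the bottom (coolest) server inlet
DELTA_TARGET_C = 2.0    # assumed allowable bottom-to-top inlet rise
KP_SUPPLY, KP_FAN = 0.4, 5.0  # illustrative proportional gains

def control_step(supply_c, fan_pct, bottom_inlet_c, top_inlet_c):
    # Loop 1: cooling fluid -- hold the bottom server at its inlet target.
    supply_c += KP_SUPPLY * (SUPPLY_TARGET_C - bottom_inlet_c)
    # Loop 2: airflow -- raise the VFD speed only when the top of the rack
    # runs hot relative to the bottom (a recirculation symptom).
    fan_pct += KP_FAN * ((top_inlet_c - bottom_inlet_c) - DELTA_TARGET_C)
    fan_pct = max(20.0, min(100.0, fan_pct))  # keep the VFD within safe bounds
    return supply_c, fan_pct

print(control_step(supply_c=25.0, fan_pct=50.0,
                   bottom_inlet_c=25.5, top_inlet_c=29.0))
```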


Energies ◽  
2020 ◽  
Vol 13 (12) ◽  
pp. 3164
Author(s):  
Rasool Bukhsh ◽  
Muhammad Umar Javed ◽  
Aisha Fatima ◽  
Nadeem Javaid ◽  
Muhammad Shafiq ◽  
...  

The computing devices in cloud and fog data centers remain in a continuous running cycle to provide services. The prolonged execution of a large number of computing devices consumes a significant amount of power and emits an equivalent amount of heat into the environment. Device performance degrades in a hot environment, so high-powered cooling systems are installed to cool the data centers. Accordingly, data centers demand a large electricity supply for both computing devices and cooling systems. Moreover, in the Smart Grid (SG), managing energy consumption to reduce electricity cost for consumers and to minimize reliance on fossil-fuel-based power supply (the utility) is an interesting domain for researchers. SG applications are time-sensitive. In this paper, a fog-based model is proposed for a community to ensure real-time energy management service provision. Three scenarios are implemented to analyze cost-efficient energy management for power users. In the first scenario, the community's and the fog's power demand is fulfilled from the utility. In the second scenario, the community's Renewable Energy Resources (RES) based Microgrid (MG) is integrated with the utility to meet the demand. In the third scenario, the demand is fulfilled by integrating the fog's MG, the community's MG, and the utility. In all scenarios, the energy demand of the fog is evaluated with the proposed mechanism: the amount of energy required to run the computing devices for a given number of requests, plus the amount of power required to cool the devices down, is calculated to find the energy demand of the fog's data center. The simulations of the case studies show that the energy cost of meeting the demand of the community and the fog's data center in the third scenario is 15.09% and 1.2% more efficient compared to the first and second scenarios, respectively. This paper also proposes an energy contract that ensures the participation of all power-generating stakeholders; the results demonstrate the cost efficiency of the proposed contract compared to the third scenario. The integration of RES reduces the energy cost and the emission of CO2. The simulations for energy management and the plotting of results are performed in MATLAB. The simulations for the fog's resource management and the measurement of processing and response time are performed in CloudAnalyst.
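
As a worked illustration of the fog demand calculation (with invented numbers, not the paper's parameters), the sketch below sums the compute energy for a request count and the cooling energy needed to remove the matching heat, with cooling efficiency expressed as an assumed coefficient of performance.

```python
# Sketch: fog data-center energy demand = compute energy + cooling energy.
ENERGY_PER_REQUEST_KWH = 0.002  # assumed energy to serve one request
COOLING_COP = 3.0               # assumed cooling coefficient of performance

def fog_energy_demand_kwh(requests):
    compute = requests * ENERGY_PER_REQUEST_KWH
    cooling = compute / COOLING_COP  # heat out roughly equals energy in
    return compute + cooling

for n in (10_000, 50_000):
    print(f"{n} requests -> {fog_energy_demand_kwh(n):.1f} kWh")
```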


2018 ◽  
Vol 7 (2.7) ◽  
pp. 1
Author(s):  
Gatla Vinay ◽  
T Pavan Kumar

Penetration testing is a specialized security auditing methodology in which a tester simulates an attack on a secured system. The main theme of this paper is how to collect, in real time, the massive volume of log files generated across virtual data centers, files that also carry hidden information of great organizational value. Such testing usually spans all aspects of log management across the many servers of a virtual data center. Virtualization limits costs by reducing the need for physical hardware systems, though it requires high-end hardware for processing. In a real-time scenario, we typically encounter multiple logs from vCenter, ESXi, and the VMs, which makes manual analysis cumbersome and time-consuming. Automatically configuring secure IDs in a centralized log management server instead yields powerful, full insight. Accurate search algorithms; field searching over title, author, and content; sorting of fields; multiple-index search with merged results; simultaneous file updates; joint result grouping; and automatic configuration of plugins for search-engine file formats were all effective measures in an investigation. Finally, a flexible network security monitor with traffic investigation, offense detection, log recording, and distributed inquiry capabilities can export data to a variety of visualization dashboards, exactly what is needed for log investigations across virtual data centers in real time.
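
A minimal sketch of the centralized collection-and-search step, under stated assumptions: logs gathered from vCenter, ESXi, and VM hosts into one directory tree are given a simple inverted index so that field and content searches replace manual review. The directory layout and tokenization are illustrative.

```python
# Sketch: index every token in a tree of collected log files so that a
# search returns each (file, line) location instead of requiring manual scans.
import os
from collections import defaultdict

def build_index(log_root):
    """Map each token to the set of (file, line_no) places where it occurs."""
    index = defaultdict(set)
    for dirpath, _, files in os.walk(log_root):
        for name in files:
            path = os.path.join(dirpath, name)
            with open(path, errors="ignore") as fh:
                for line_no, line in enumerate(fh, 1):
                    for token in line.lower().split():
                        index[token].add((path, line_no))
    return index

# idx = build_index("/var/log/vdc")       # assumed aggregation directory
# print(sorted(idx.get("error", set())))  # every location mentioning "error"
```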


Author(s):  
Tahir Cader ◽  
Ratnesh Sharma ◽  
Cullen Bash ◽  
Les Fox ◽  
Vaibhav Bhatia ◽  
...  

The 2007 US EPA report to Congress (US EPA, 2007) on the state of energy consumption in data centers brought to light the true energy inefficiencies built into today's data centers. Marquez et al. (2008) conducted an initial analysis of the productivity of a Pacific Northwest National Lab computer using The Green Grid's Data Center Energy Productivity metric (The Green Grid, 2008). Their study highlights how the Top500 ranking of computers disguises the serious energy inefficiency of today's High Performance Computing data centers. In the rapidly expanding cloud computing space, the race will be won by the providers that deliver the lowest cost of computing, a cost heavily influenced by the operational costs incurred by data centers. As a means to address the urgent need to lower the cost of computing, solution providers have been intensely focused on real-time monitoring, visualization, and control/management of data centers. The monitoring aspect involves the widespread use of networks of sensors that monitor key data center environmental variables such as temperature, relative humidity, air flow rate, pressure, and energy consumption. These data are then used to visualize and analyze data center problem areas (e.g., hotspots), followed by control/management actions designed to alleviate them. The authors have been researching the operational benefits of a network of sensors tied into a software package that uses the data to visualize, analyze, and control/manage the data center cooling system and IT equipment for maximum operational efficiency. The research is being conducted in a corporate production data center that is networked into the authors' company's global network of data centers. Results are presented that highlight the operational benefits realizable through real-time monitoring and visualization.
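
As a small illustration of the monitoring-to-analysis step (not the authors' software), the sketch below flags hotspot sensors whose reading exceeds the room mean by an assumed margin; the sensor names and margin are invented.

```python
# Sketch: turn a snapshot of sensor-network readings into a hotspot list.
from statistics import mean

def find_hotspots(readings_c, margin_c=3.0):
    room_mean = mean(readings_c.values())
    return {s: t for s, t in readings_c.items() if t - room_mean > margin_c}

sensors = {"aisleA-top": 29.5, "aisleA-mid": 24.0,
           "aisleB-top": 23.5, "aisleB-mid": 22.0}
print(find_hotspots(sensors))  # {'aisleA-top': 29.5}
```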


2019 ◽  
Vol 15 (1) ◽  
pp. 84-100 ◽  
Author(s):  
N. Thilagavathi ◽  
D. Divya Dharani ◽  
R. Sasilekha ◽  
Vasundhara Suruliandi ◽  
V. Rhymend Uthariaraj

Cloud computing has seen tremendous growth in recent years. As a result, there has been a great increase in the number of data centers all over the world. These data centers consume a lot of energy, resulting in high operating costs. An imbalance in load distribution among the servers in a data center results in increased energy consumption. Server consolidation can be handled by migrating all virtual machines off underutilized servers. Migration causes performance degradation of the job, depending on the migration time and the number of migrations. Considering these aspects, the proposed clustering agent-based model improves energy saving by efficient allocation of the VMs to the hosting servers, which reduces the response time for initial allocation. A Middle VM Migration (MVM) strategy for server consolidation minimizes the number of VM migrations. Further, extra resource requirements are randomized to cater to real-time scenarios that need more resources than initially requested. Simulation results show that the proposed approach reduces the number of migrations and the response time for user requests and improves energy saving in the cloud environment.
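
A sketch of the consolidation trigger described above, with an assumed utilization threshold and without the MVM selection logic (which the abstract does not detail): every VM on an underutilized host is migrated to hosts with spare capacity so the emptied host can be powered down.

```python
# Sketch: drain underutilized hosts by migrating their VMs elsewhere.
UNDERUTILIZED = 0.3  # assumed utilization threshold for consolidation
HOST_CAPACITY = 1.0  # normalized host capacity

def consolidate(hosts):
    """hosts: {host: [vm_utilizations]}; returns migrations as (vm, src, dst)."""
    moves = []
    for src, vms in list(hosts.items()):
        if vms and sum(vms) < UNDERUTILIZED:
            for vm in list(vms):
                for dst, dvms in hosts.items():
                    if dst != src and sum(dvms) + vm <= HOST_CAPACITY:
                        dvms.append(vm)
                        vms.remove(vm)
                        moves.append((vm, src, dst))
                        break
    return moves

pool = {"h0": [0.1, 0.15], "h1": [0.6], "h2": [0.5]}
print(consolidate(pool), pool)  # h0 drains; it can now be powered down
```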

