A Statistical Approach to Real-Time Updating and Automatic Scheduling of Physical Models

Author(s):  
Huijing Jiang ◽  
Xinwei Deng ◽  
Vanessa Lopez ◽  
Hendrik Hamann

Energy consumption of data centers has increased dramatically due to the massive computing demands arising from every sector of the economy. Data center energy management has therefore become essential for operating data centers within environmental standards while keeping energy costs low. To advance the understanding of thermal management in data centers, relevant environmental information such as temperature, humidity, and air quality is gathered through a network of real-time sensors or simulated via sophisticated physical models (e.g., computational fluid dynamics models). However, sensor readings of environmental parameters are collected only at sparse locations and thus cannot provide a detailed map of the temperature distribution for the entire data center. While the physical models yield high-resolution temperature maps, their computational complexity often makes it infeasible to run them in real time, which is ideally required for optimal data center operation and management. In this work, we propose a novel statistical modeling approach to updating physical model outputs in real time and automatically scheduling the re-computation of physical model outputs. The proposed method dynamically corrects the discrepancy between a steady-state output of the physical model and real-time thermal sensor data. We show that the proposed method can provide valuable information for data center energy management, such as real-time high-resolution thermal maps. Moreover, it can efficiently detect systematic changes in a data center's thermal environment and automatically schedule physical models to be re-executed whenever significant changes are detected.
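The abstract does not give the authors' statistical model, but the shape of the idea — correct the steady-state model output with smoothed sensor residuals, and trigger a model re-run when the residuals show a systematic shift — can be sketched minimally. This is an illustration, not the paper's method; the exponential smoothing, the one-sided CUSUM, and all thresholds and temperatures are assumptions.

```python
# Minimal sketch (not the authors' method): dynamically correct a
# steady-state model output with a smoothed sensor residual, and flag a
# model re-run when a one-sided CUSUM on the residuals detects a
# systematic warming shift. All numbers are illustrative.

def corrected_reading(model_temp, sensor_temp, prev_bias, alpha=0.3):
    """Exponentially smoothed bias between sensor and model."""
    bias = (1 - alpha) * prev_bias + alpha * (sensor_temp - model_temp)
    return model_temp + bias, bias

def cusum_update(stat, residual, drift=0.5):
    """One-sided CUSUM accumulating residuals above the drift allowance."""
    return max(0.0, stat + residual - drift)

model_temp = 24.0            # steady-state CFD prediction at one sensor
bias, stat, rerun = 0.0, 0.0, False
for sensor_temp in [24.1, 24.3, 26.5, 26.8, 27.0]:   # sensor stream
    est, bias = corrected_reading(model_temp, sensor_temp, bias)
    stat = cusum_update(stat, sensor_temp - model_temp)
    if stat > 4.0:           # illustrative detection threshold
        rerun = True         # schedule the physical model to re-execute
```

With this stream, the corrected estimate tracks the sensors immediately, while the CUSUM statistic only crosses the threshold once the shift is persistent, which is the behavior an automatic re-computation scheduler needs.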

2013 ◽  
Vol 135 (3) ◽  
Author(s):  
Dustin W. Demetriou ◽  
H. Ezzat Khalifa

This paper expands on the work of Demetriou and Khalifa (Demetriou and Khalifa, 2013, “Thermally Aware, Energy-Based Load Placement in Open-Aisle, Air-Cooled Data Centers,” ASME J. Electron. Packag., 135(3), p. 030906), which investigated practical IT load placement options in open-aisle, air-cooled data centers. That study found a robust approach to be the use of real-time temperature measurements at the rack inlets to remove IT load from the servers with the warmest inlet temperatures. Considering the holistic optimization of the data center load placement strategy together with the cooling infrastructure, over a range of data center IT utilization levels, the present study investigated: the effect of ambient temperatures on data center operation; the consolidation of servers by completely shutting them off; a strategy complementary to that of Demetriou and Khalifa (2013), in which IT load is increased beginning with the servers that have the coldest inlet temperatures; and the development of load placement rules via either static (i.e., during data center benchmarking) or dynamic (using real-time data from the current thermal environment) allocation. Across all of these case studies, a key finding of the holistic optimization of the data center and its cooling infrastructure has been that a significant reduction in the cooling infrastructure's power consumption is obtained by reducing the CRAH airflow rate. In many cases, these savings can exceed those from providing higher-temperature chilled water from the refrigeration units.
Therefore, the path to realizing the industry's goal of higher IT equipment inlet temperatures for improved energy efficiency should be through both a reduction in airflow rate and an increase in supply air temperature, not necessarily through higher CRAH supply air temperatures alone.
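The two placement rules discussed above — shed load from the warmest-inlet servers, place new load on the coldest-inlet servers first — are simple greedy procedures and can be sketched directly. The rack data, capacities, and temperatures below are invented for illustration; the paper's actual optimization is holistic and more involved.

```python
# Hedged sketch of the inlet-temperature-based placement rules described
# in the text: remove IT load warmest-inlet-first, add IT load
# coldest-inlet-first. Rack values are illustrative, not from the paper.

def shed_load(servers, amount):
    """Remove `amount` kW of IT load, warmest inlet first."""
    for s in sorted(servers, key=lambda s: s["inlet_C"], reverse=True):
        take = min(s["load_kW"], amount)
        s["load_kW"] -= take
        amount -= take
        if amount <= 0:
            break
    return servers

def place_load(servers, amount):
    """Add `amount` kW of IT load, coldest inlet first."""
    for s in sorted(servers, key=lambda s: s["inlet_C"]):
        give = min(s["cap_kW"] - s["load_kW"], amount)
        s["load_kW"] += give
        amount -= give
        if amount <= 0:
            break
    return servers

racks = [
    {"name": "A", "inlet_C": 27.0, "load_kW": 8.0, "cap_kW": 10.0},
    {"name": "B", "inlet_C": 22.0, "load_kW": 5.0, "cap_kW": 10.0},
    {"name": "C", "inlet_C": 19.0, "load_kW": 3.0, "cap_kW": 10.0},
]
shed_load(racks, 4.0)    # comes off rack A, the warmest inlet
place_load(racks, 2.0)   # goes to rack C, the coldest inlet
```

The greedy rule needs only rack-inlet sensor readings, which is why the source paper favors it as a practical, real-time placement strategy.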


Energies ◽  
2020 ◽  
Vol 13 (12) ◽  
pp. 3164
Author(s):  
Rasool Bukhsh ◽  
Muhammad Umar Javed ◽  
Aisha Fatima ◽  
Nadeem Javaid ◽  
Muhammad Shafiq ◽  
...  

The computing devices in cloud and fog data centers remain in a continuous running cycle to provide services. The prolonged execution of a large number of computing devices consumes a significant amount of power and emits an equivalent amount of heat into the environment, and device performance degrades in a heated environment. High-powered cooling systems are therefore installed to cool the data centers. Accordingly, data centers demand substantial electricity for both computing devices and cooling systems. Moreover, in the Smart Grid (SG), managing energy consumption to reduce consumers' electricity cost and to minimize reliance on fossil-fuel-based power supply (the utility) is an active research domain, and SG applications are time-sensitive. In this paper, a fog-based model is proposed for a community to ensure real-time energy management service provision. Three scenarios are implemented to analyze cost-efficient energy management for power users. In the first scenario, the community's and the fog's power demand is fulfilled from the utility. In the second scenario, the community's Renewable Energy Resources (RES) based Microgrid (MG) is integrated with the utility to meet the demand. In the third scenario, the demand is fulfilled by integrating the fog's MG, the community's MG, and the utility. In these scenarios, the energy demand of the fog is evaluated with the proposed mechanism: the energy required to run the computing devices for a given number of requests, and the power required to cool those devices down, are calculated to obtain the energy demand of the fog's data center. The simulated case studies show that the energy cost of meeting the demand of the community and the fog's data center in the third scenario is 15.09% and 1.2% more efficient than in the first and second scenarios, respectively. This paper also proposes an energy contract that ensures the participation of all power-generating stakeholders; the results advocate the cost efficiency of the proposed contract compared with the third scenario.
The integration of RES reduces both the energy cost and CO2 emissions. The simulations for energy management and the plotting of results are performed in Matlab, while the simulations for the fog's resource management and the measurement of processing and response times are performed in CloudAnalyst.
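The fog's energy demand described above — computing energy for a given number of requests plus the energy to remove the resulting heat — can be illustrated with a toy calculation. The per-request energy figure and the cooling overhead factor below are assumptions for the sketch, not values from the paper.

```python
# Illustrative calculation in the spirit of the paper's fog energy
# demand: IT energy for the request load, plus cooling energy expressed
# as an overhead fraction of the IT energy. Both parameters are assumed.

def fog_energy_kwh(requests, kwh_per_request=0.002, cooling_overhead=0.4):
    it_energy = requests * kwh_per_request          # computing devices
    cooling_energy = it_energy * cooling_overhead   # heat removal
    return it_energy + cooling_energy

demand = fog_energy_kwh(10_000)   # demand for 10k requests this interval
```

A demand figure computed this way per interval is what would then be matched against the utility, the community MG, or the fog MG in the paper's three scenarios.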


IEEE Access ◽  
2016 ◽  
Vol 4 ◽  
pp. 941-950 ◽  
Author(s):  
Liang Yu ◽  
Tao Jiang ◽  
Yulong Zou

2012 ◽  
Vol 134 (4) ◽  
Author(s):  
Emad Samadiani ◽  
Yogendra Joshi ◽  
Hendrik Hamann ◽  
Madhusudan K. Iyengar ◽  
Steven Kamalsy ◽  
...  

In this paper, an effective and computationally efficient proper orthogonal decomposition (POD) based reduced order modeling approach is presented, which utilizes selected sets of observed thermal sensor data inside the data center to help predict the data center temperature field as a function of the air flow rates of computer room air conditioning (CRAC) units. The approach is demonstrated through application to an operational data center of 102.2 m2 (1100 square feet) with a hot- and cold-aisle arrangement of racks cooled by one CRAC unit. While the thermal data throughout the facility can be collected in about 30 min using a 3D temperature mapping tool, the POD method is able to generate the temperature field throughout the data center in less than 2 s on a high-end desktop personal computer (PC). Comparing the obtained POD temperature fields with the experimentally measured data for two different CRAC flow rates shows that the method can predict the temperature field with an average error of 0.68 °C, or 3.2%. The maximum local error is around 8 °C, but the points where the local error exceeds 1 °C make up only ∼6% of the total domain points.
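The speed of the POD evaluation comes from its structure: once the modes are computed offline, the temperature field at a new CRAC flow rate is just the mean field plus a few modes weighted by interpolated coefficients. The toy below shows only that online evaluation step, with one mode, three spatial points, and invented numbers; the paper's model is far larger and its coefficient computation is more sophisticated than linear interpolation.

```python
# Toy sketch of evaluating a POD reduced-order model (mode computation
# is assumed done offline): field = mean + sum_i b_i(flow) * mode_i,
# with coefficients linearly interpolated between two observed flow
# rates. All values are invented for illustration.

mean_field = [24.0, 25.0, 26.0]        # mean temperature at 3 points
modes = [[1.0, 0.5, -0.5]]             # one POD mode over the 3 points
coeffs = {4.0: [2.0], 6.0: [0.5]}      # mode coefficients per flow (m^3/s)

def pod_field(flow, f0=4.0, f1=6.0):
    w = (flow - f0) / (f1 - f0)        # linear interpolation weight
    b = [(1 - w) * c0 + w * c1 for c0, c1 in zip(coeffs[f0], coeffs[f1])]
    return [m + sum(bi * phi[i] for bi, phi in zip(b, modes))
            for i, m in enumerate(mean_field)]

field = pod_field(5.0)                 # field at an unobserved flow rate
```

Because the online step is a handful of weighted sums rather than a CFD solve, it is plausible that a full-facility field evaluates in seconds on a desktop, as the paper reports.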



2018 ◽  
Vol 7 (2.7) ◽  
pp. 1
Author(s):  
Gatla Vinay ◽  
T Pavan Kumar

Penetration testing is a specialized security auditing methodology in which a tester simulates an attack on a secured system. The theme of this paper is how to collect the massive volume of log files generated in real time across virtual data centers; these logs also hold hidden information of considerable organizational value. Such testing typically spans all aspects of log management across the many servers of a virtual data center. Virtualization limits costs by reducing the need for physical hardware systems, although it requires high-end hardware for processing. In practice, one encounters multiple logs across vCenter, ESXi hosts, and VMs, which makes manual analysis difficult and time-consuming. Instead, automatically configuring a centralized log management server yields powerful, full insight. Effective investigative measures include accurate search algorithms; field searching over title, author, and content; sorting over fields; multiple-index search with merged results; simultaneous index updates as files change; and automatically configured plug-ins for the search engine's supported file formats. Finally, a flexible network security monitor providing traffic investigation, offense detection, log recording, and distributed inquiry, with full programmability, can export data to a variety of visualization dashboards, which is exactly what is needed for log investigations across virtual data centers in real time.
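The core of the centralized log-management idea above is merging records from several virtual-infrastructure sources into one searchable index and running fielded queries over it. A minimal sketch follows; the record fields, source names, and log messages are invented for illustration and stand in for a real indexing engine.

```python
# Minimal sketch of centralized, fielded log search across virtual
# data center sources (vCenter, ESXi, VMs). Records are illustrative;
# a real deployment would use a proper indexer rather than linear scan.
import re

logs = [
    {"source": "vcenter", "level": "WARN",  "msg": "login failure for root"},
    {"source": "esxi",    "level": "INFO",  "msg": "vm powered on"},
    {"source": "vm-17",   "level": "ERROR", "msg": "repeated login failure"},
]

def search(records, field=None, value=None, pattern=None):
    """Filter by an exact field match and/or a regex over the message."""
    hits = records
    if field is not None:
        hits = [r for r in hits if r.get(field) == value]
    if pattern is not None:
        rx = re.compile(pattern)
        hits = [r for r in hits if rx.search(r["msg"])]
    return hits

suspicious = search(logs, pattern=r"login failure")  # across all sources
```

Even this toy shows the payoff the paper argues for: a single query surfaces related events from different hosts that manual per-source analysis would have to correlate by hand.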


2018 ◽  
Vol 9 (4) ◽  
pp. 3748-3762 ◽  
Author(s):  
Liang Yu ◽  
Tao Jiang ◽  
Yulong Zou

Author(s):  
M. Tradat ◽  
S. Khalili ◽  
B. Sammakia ◽  
M. Ibrahim ◽  
Th. Peddle ◽  
...  

The operation of today's data centers increasingly relies on environmental data collection and analysis to run the cooling infrastructure as efficiently as possible and to maintain the reliability of IT equipment. This in turn emphasizes the importance of the quality of the collected data and their relevance to the overall operation of the data center. This study presents an experimentally based analysis and comparison of two approaches to environmental data collection: one using a discrete sensor network, and another using data available from the installed IT equipment through its Intelligent Platform Management Interface (IPMI). The comparison considers the quality and relevance of the collected data and investigates their effect on key performance and operational metrics. The results show large variation in the server inlet temperatures reported by the IPMI interface. The discrete sensor measurements, by contrast, were much more reliable, with minimal variation of server inlet temperatures inside the cold aisle. These results highlight the potential difficulty of using IPMI inlet temperature data to evaluate the thermal environment inside a contained cold aisle. The study also examines how common industry methods for cooling efficiency management and control can be affected by the data collection approach. The results show that using preheated IPMI inlet temperature data can lead to unnecessarily low cooling set points, which in turn reduces the potential cooling energy savings. In one case, using discrete sensor data for control provided 20% more energy savings than using IPMI inlet temperature data.
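The control effect described above can be made concrete with a small sketch: if the supply setpoint is raised by the headroom between the hottest reported inlet and the allowable limit, one preheated IPMI reading drags the setpoint down, while uniform discrete cold-aisle readings allow a warmer (cheaper) supply. The temperatures, limit, and setpoint rule below are illustrative assumptions, not the study's control scheme.

```python
# Simple sketch of max-inlet-driven setpoint control: the setpoint is
# raised by the thermal headroom against an allowable inlet limit.
# All temperatures and the control rule itself are illustrative.

def supply_setpoint(inlet_temps, limit_C=27.0, current_setpoint=18.0):
    headroom = limit_C - max(inlet_temps)   # hottest inlet governs
    return current_setpoint + headroom

ipmi_inlets = [24.0, 29.0, 25.0]     # preheated, high-variation readings
discrete_inlets = [22.5, 23.0, 22.8] # uniform cold-aisle measurements

sp_ipmi = supply_setpoint(ipmi_inlets)         # dragged down by outlier
sp_discrete = supply_setpoint(discrete_inlets) # permits a warmer supply
```

The gap between the two setpoints is exactly the lost opportunity for cooling energy savings that the study attributes to preheated IPMI data.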


Author(s):  
Shu Zhang ◽  
Yu Han ◽  
Nishi Ahuja ◽  
Xiaohong Liu ◽  
Huahua Ren ◽  
...  

In recent years, the internet services industry has been developing rapidly. Accordingly, demands for compute and storage capacity continue to increase, and internet data centers are consuming more power than ever before to provide this capacity. Based on a Frost & Sullivan market survey, data centers across the globe now consume around 100 GWh, and this consumption is expected to increase 30% by 2016. As development expands, IDC (Internet Data Center) owners realize that small improvements in efficiency, from architectural design to daily operations, will yield large cost reductions over time. Cooling energy is a significant part of the daily operational expense of an IDC. One trend in this industry is to raise the operational temperature of the IDC, which means running IT equipment in an HTA (Higher Ambient Temperature) environment. This may also involve cooling improvements such as water-side or air-side economizers used in place of traditional closed-loop CRAC (computer room air conditioner) systems. But raising the ambient inlet air temperature cannot be done by itself without considering more effective ways of managing cooling control and ensuring thermal safety. An important trend seen in industry today is customized design of IT (Information Technology) equipment and IDC infrastructure by the cloud service provider. This trend brings an opportunity to consider the IT equipment and the IDC together when designing an IDC, from the early design phase through the daily operation phase, and it provides a chance to extract more benefit from higher operational temperatures. The advantages and key components of a customized rack server design include reduced power consumption, more thermal margin with less fan power, and accurate thermal monitoring. Accordingly, the IDC infrastructure can be re-designed specifically for high-temperature operation.
Raising the supply air temperature always means less thermal headroom for the IT equipment, so IDC operators have less response time when large power variations or IDC failures occur. This paper introduces a new solution called ODC (on-demand cooling) with PTAS (Power Thermal Aware Solution) technology to address these challenges. The ODC solution uses the real-time thermal data of the IT equipment, rather than traditional ceiling-installed sensors, as the key input to the cooling controls. It improves cooling control accuracy, decreases response time, and reduces temperature variation. By establishing smart thermal operation with direct feedback, accurate control, and quick response, HTA can be achieved safely and with confidence. The results of real demo testing show that, with real-time thermal information, temperature oscillation and response time can be reduced effectively.
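The feedback idea behind on-demand cooling — drive the cooling output from the servers' own real-time thermal margin instead of ceiling sensors — can be sketched as a simple proportional control loop. The gain, target margin, and telemetry values are illustrative assumptions; PTAS itself is not described at this level of detail in the abstract.

```python
# Hedged sketch of on-demand cooling as proportional feedback on the
# worst server thermal margin: less headroom -> more cooling output.
# Gains, margins, and the starting cooling level are illustrative.

def odc_step(cooling_kw, server_margins_C, kp=0.8, target_margin_C=5.0):
    """One control step driven by the server with least thermal headroom."""
    worst = min(server_margins_C)
    error = target_margin_C - worst
    return max(0.0, cooling_kw + kp * error)

cooling = 40.0
for margins in ([6.0, 7.5, 8.0], [3.0, 6.0, 7.0]):  # telemetry samples
    cooling = odc_step(cooling, margins)
```

Because the loop reacts to equipment telemetry directly, a sudden drop in one server's margin raises cooling within a single control step, which is the faster response the paper claims over ceiling-sensor control.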


Author(s):  
Milan Malić ◽  
Dalibor Dobrilović ◽  
Dušan Malić ◽  
Željko Stojanov

In the past decade, there has been a significant trend of implementing IoT technologies and standards across different industries. This trend brings companies cost reductions, among other benefits; one of the main benefits is real-time, uniform data collection. The data are transferred from the sensor nodes to a centralized application using diverse communication protocols. So far, current application development approaches have not proved efficient enough in scenarios where a significant amount of data must be stored and analyzed. The focus of this paper is the development of a software architecture suitable for Internet of Things (IoT) systems in which larger amounts of data can be processed in real time. The software architecture, based on microservices, is developed to support a sensor network monitoring a small data center. Besides the system and its architecture, this paper presents a method for analyzing system performance in a real-time environment. The proposed lightweight microservice architecture is developed with .NET Core and RabbitMQ, using the MongoDB and SQLite database systems for storing the data collected by IoT devices. The paper also presents the system evaluation and research results under different stress scenarios. Because of the system's complexity, only the most significant segments of the architecture are presented. The proposed solution shows that a lightweight architecture based on microservices can handle larger amounts of sensor data when MongoDB is used; on the other hand, the use of SQLite is not recommended due to its lower performance in the test results.
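The pipeline shape the paper describes — sensor producers publishing readings through a message broker to a storage consumer — can be sketched in a self-contained way. Here `queue.Queue` and `sqlite3` stand in for RabbitMQ and MongoDB purely so the sketch runs on its own; the real system is built from .NET Core services, and all names below are invented.

```python
# Stdlib-only sketch of a sensor -> broker -> storage microservice
# pipeline. queue.Queue stands in for RabbitMQ and sqlite3 for the
# database; service names and readings are illustrative.
import json
import queue
import sqlite3

bus = queue.Queue()                      # stand-in for the message broker
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE readings (sensor TEXT, value REAL)")

def sensor_service(sensor, value):
    """Producer microservice: publish a JSON reading to the bus."""
    bus.put(json.dumps({"sensor": sensor, "value": value}))

def storage_service():
    """Consumer microservice: drain the bus into the database."""
    while not bus.empty():
        r = json.loads(bus.get())
        db.execute("INSERT INTO readings VALUES (?, ?)",
                   (r["sensor"], r["value"]))

sensor_service("temp-rack-1", 24.6)
sensor_service("humidity-1", 41.0)
storage_service()
count = db.execute("SELECT COUNT(*) FROM readings").fetchone()[0]
```

Decoupling producers from storage through the broker is what lets the paper's architecture absorb bursts of sensor data: producers never block on the database, and the consumer drains the queue at its own pace.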

