Artificial intelligence-based network traffic analysis and automatic optimization technology

2021 ◽  
Vol 19 (2) ◽  
pp. 1775-1785
Author(s):  
Jiyuan Ren ◽  
Yunhou Zhang ◽  
Zhe Wang ◽  
Yang Song

<abstract> <p>Network operation and maintenance (O &amp; M) activities in data centers focus mainly on checking the operating states of devices: O &amp; M engineers determine how services are running and the bearing capacity of a data center from device states. However, this method cannot reflect the real transmission status of business data, so engineers cannot comprehensively perceive the overall running conditions of the business. In this paper, ERSPAN (Encapsulated Remote Switch Port Analyzer) technology is applied to deliver flow-matching rules along the forwarding path of TCP packets and mirror those packets into a network O &amp; M AI collector. The collector performs in-depth analysis of the TCP packets, gathers traffic statistics, reconstructs the forwarding path, computes delays, and identifies applications. This enables O &amp; M engineers to comprehensively perceive the service bearing status of a data center and to form a tightly coupled correlation model between networks and services through end-to-end visualized modeling, providing comprehensive technical support for data center optimization and early warning of network risks.</p> </abstract>

2013 ◽  
Vol 694-697 ◽  
pp. 2308-2312
Author(s):  
Wei Du

In this paper, starting from the current security issues of the Ethernet data center, the author puts forward solutions for secure data centers. The solutions are built on a security infrastructure, framed around border protection, with in-depth detection at the core. It is essential that the concept of security permeate the entire data center network design, deployment, and operation and maintenance.


Author(s):  
Chris Muller ◽  
Chuck Arent ◽  
Henry Yu

Lead-free manufacturing regulations, reduction in circuit board feature sizes, and the miniaturization of components to improve hardware performance have combined to make data center IT equipment more prone to attack by corrosive contaminants. Manufacturers are under pressure to control contamination in the data center environment, and maintaining acceptable limits is now critical to the continued reliable operation of datacom and IT equipment. This paper will discuss ongoing reliability issues with electronic equipment in data centers and will present updates on ongoing contamination concerns, standards activities, and case studies from several different locations illustrating the successful application of contamination assessment, control, and monitoring programs to eliminate electronic equipment failures.


2017 ◽  
Vol 19 (1) ◽  
pp. 4-10 ◽  
Author(s):  
Maria Anna Jankowska ◽  
Piotr Jankowski

The article presents the Idaho Geospatial Data Center (IGDC), a digital library of public-domain geographic data for the state of Idaho. The design and implementation of IGDC are introduced as part of the larger context of a geolibrary model. The article presents the methodology and tools used to build IGDC, with a focus on a geolibrary map browser. The use of IGDC is evaluated from the perspective of access and demand for geographic data. Finally, the article offers recommendations for the future development of geospatial data centers.


Author(s):  
Tianyi Gao ◽  
James Geer ◽  
Bahgat G. Sammakia ◽  
Russell Tipton ◽  
Mark Seymour

Cooling power constitutes a large portion of the total electrical power consumption in data centers: approximately 25%–40% of the electricity used within a production data center is consumed by the cooling system. Improving cooling energy efficiency has therefore attracted a great deal of research attention, and many strategies have been proposed for cutting data center energy costs. One effective strategy for increasing cooling efficiency is dynamic thermal management; another is placing cooling devices (heat exchangers) closer to the heat source, which is the basic design principle of many hybrid and liquid cooling systems for data centers. Dynamic thermal management of data centers is a major challenge because data centers operate under complex dynamic conditions, even during normal operation. In addition, hybrid cooling systems introduce additional localized cooling devices, such as in-row cooling units and overhead coolers, which significantly increase the complexity of dynamic thermal management. It is therefore of paramount importance to characterize the dynamic response of data centers to variations in the different cooling units, such as changes in cooling air flow rate. In this study, a detailed computational analysis of an in-row-cooler-based hybrid cooled data center is conducted using a commercially available computational fluid dynamics (CFD) code. A representative CFD model is developed for a raised-floor data center with a cold aisle/hot aisle arrangement. The hybrid cooling system combines perimeter CRAH units with localized in-row cooling (IRC) units. The CRAH unit supplies centralized cooling air to the underfloor plenum, and the cooling air enters the cold aisle through perforated tiles. The in-row cooling unit is located on the raised floor between the server racks; it supplies cooling air directly to the cold aisle and draws hot air from the back of the racks (hot aisle). Two different cooling air sources therefore supply the cold aisle, but by different delivery paths. Several modeling cases are designed to study the transient effects of variations in the flow rates of the two cooling air sources, and combined scenarios of server power and cooling air flow variation are also modeled. The detailed impact of each modeling case on rack inlet air temperature and cold aisle air flow distribution is studied. The results presented in this work provide an understanding of the effects of air flow variations on the thermal performance of data centers, and the corresponding analysis is used to improve the running efficiency of raised-floor hybrid data centers that use CRAH and IRC units.


Author(s):  
Amip J. Shah ◽  
Van P. Carey ◽  
Cullen E. Bash ◽  
Chandrakant D. Patel

Data centers today contain more computing and networking equipment than ever before. As a result, a higher amount of cooling is required to maintain facilities within operable temperature ranges. Increasing amounts of resources are spent to achieve thermal control, and tremendous potential benefit lies in the optimization of the cooling process. This paper describes a study performed on data center thermal management systems using the thermodynamic concept of exergy. Specifically, an exergy analysis has been performed on sample data centers in an attempt to identify local and overall inefficiencies within thermal management systems. The development of a model using finite volume analysis has been described, and potential applications to real-world systems have been illustrated. Preliminary results suggest that such an exergy-based analysis can be a useful tool in the design and enhancement of thermal management systems.
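The exergy idea above can be illustrated with a minimal sketch. A classic data center inefficiency is the mixing of hot exhaust air with cold supply air (recirculation); the exergy destroyed by such mixing follows from the Gouy–Stodola theorem. This is an illustrative textbook calculation, not the paper's finite-volume model; it assumes ideal-gas air at constant pressure and constant specific heat.

```python
import math

def mixing_exergy_destruction(m1, T1, m2, T2, T0=298.15, cp=1005.0):
    # Exergy destroyed (W) when two air streams mix adiabatically.
    # m1, m2: mass flow rates (kg/s); T1, T2: stream temperatures (K);
    # T0: dead-state (ambient) temperature; cp: specific heat (J/kg.K).
    # Mixed-stream temperature from an energy balance (constant cp):
    Tm = (m1 * T1 + m2 * T2) / (m1 + m2)
    # Entropy generation rate of the mixing process (W/K):
    s_gen = cp * (m1 * math.log(Tm / T1) + m2 * math.log(Tm / T2))
    # Gouy-Stodola: destroyed exergy = T0 * S_gen.
    return T0 * s_gen
```

Mixing 1 kg/s of 310 K exhaust with 1 kg/s of 290 K supply air destroys a few hundred watts of exergy; identical-temperature streams destroy none, which is the kind of local inefficiency an exergy audit localizes.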


Author(s):  
Kamran Nazir ◽  
Naveed Durrani ◽  
Imran Akhtar ◽  
M. Saif Ullah Khalid

Due to the high energy demands of data centers and the energy crisis throughout the world, efficient heat transfer in a data center is an active research area. To date, the major emphasis has been on studying air flow rates and temperature profiles for different rack configurations and tile layouts. In the current work, we consider different hot aisle (HA) and cold aisle (CA) configurations to study the heat transfer phenomenon inside a data center. In raised-floor data centers where rows of racks are parallel to each other, a conventional cooling system has equal numbers of hot and cold aisles when the number of rows is odd. For an even number of rows, whatever configuration of hot/cold aisles is adopted, the number of cold aisles is either one greater or one less than the number of hot aisles; i.e., two cases are possible, case A: n(CA) = n(HA) + 1 and case B: n(CA) = n(HA) − 1, where n(CA) and n(HA) denote the number of cold and hot aisles, respectively. We perform numerical simulations for data centers with two (case 1) and four (case 2) rows of racks. The assumption of constant pressure below the plenum reduces the problem domain to the above-plenum region only. To see which configuration provides higher heat transfer across the servers, we measure heat transfer on the basis of temperature differences across racks and, to validate it, we compute mass flow rates at the rack outlets. On the basis of the results obtained, we conclude that for data centers with an even number of rows of racks, using more cold aisles than hot aisles provides higher heat transfer across the servers. These results provide guidance on the design and layout of a data center.
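The aisle-counting argument above can be sketched in a few lines. With parallel rows of racks and alternating hot/cold aisles there are rows + 1 aisles in total (a hypothetical helper; it assumes both outer aisles are counted, which matches the case A/B split described in the abstract):

```python
def aisle_counts(rows, start_with_cold=True):
    # In a conventional raised-floor layout, aisles alternate hot/cold
    # along the row direction, giving rows + 1 aisles in total.
    total = rows + 1
    # If the layout starts (and, for even totals, ends) with a cold
    # aisle, the cold aisles take the ceiling of the split.
    cold = (total + 1) // 2 if start_with_cold else total // 2
    return cold, total - cold  # (n_cold, n_hot)
```

For an odd number of rows the split is equal, e.g. `aisle_counts(3)` gives (2, 2); for four rows the choice of starting aisle yields case A, (3, 2), or case B, (2, 3), exactly the two configurations the paper compares.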


2021 ◽  
pp. 85-91
Author(s):  
Shally Vats ◽  
Sanjay Kumar Sharma ◽  
Sunil Kumar

The proliferation of cloud users has steered an exponential increase in the number and size of data centers. These data centers are energy hungry and burden cloud service providers with electricity bills; there is environmental concern too, due to their large carbon footprint. Much work has been done on reducing the energy requirements of data centers through optimal use of CPUs, with virtualization as the core technology for optimal use of computing resources via VM migration. However, networking devices also contribute significantly to the energy dissipation. We propose a two-level energy optimization method that reduces the data center's energy consumption while maintaining the SLA. VM migration is performed for optimal use of both the physical machines and the switches that connect the physical machines in the data center. Results of experiments conducted in CloudSim on PlanetLab data confirm the superiority of the proposed method over existing methods that use only single-level optimization.
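The consolidation step that such VM-migration schemes rely on can be sketched with a generic first-fit-decreasing packing heuristic. This is not the paper's two-level algorithm; it only illustrates the idea that packing VM loads onto fewer physical machines lets idle hosts (and the switches serving only them) be powered down:

```python
def consolidate(vm_loads, host_capacity):
    # First-fit-decreasing bin packing: place each VM load (e.g. CPU
    # utilization fraction) on the first active host with room for it,
    # opening a new host only when none fits.
    hosts = []  # remaining capacity of each active host
    for load in sorted(vm_loads, reverse=True):
        for i, free in enumerate(hosts):
            if load <= free:
                hosts[i] -= load
                break
        else:
            hosts.append(host_capacity - load)
    # Number of hosts that must stay powered on.
    return len(hosts)
```

For example, loads of 0.5, 0.4, 0.3, and 0.2 on unit-capacity hosts pack onto two machines instead of four; a second, network-level pass would then apply the same idea to the switch topology.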


2019 ◽  
Vol 1 (1) ◽  
pp. 13-20
Author(s):  
Ferhat Yuna

In today's world, information applications have become an indispensable part of life as information technologies have developed, leading to enormous rates of data production and usage and, as a result, to an increased need for data centers. Although Turkey has advantages that could let it play a leading role in the data center field in its region, it also has some disadvantages, reflected in indices such as a natural disaster index, climate index, energy index, accessibility index, and a human capital and quality of life (HCLQ) index. In this context, these indices are treated as criteria for the data center location selection problem. In this study, criterion weights were determined by the fuzzy DEMATEL (Decision Making Trial and Evaluation Laboratory) method, and the alternatives (81 provinces) were ranked using the EDAS (Evaluation based on Distance from Average Solution) method. According to the results, Istanbul is the best alternative for data center location selection.


2020 ◽  
Vol 11 (3) ◽  
pp. 228-241
Author(s):  
Almir Pereira Guimarães ◽  
Alan Pereira da Silva
Data centers are constantly growing, driven by demand for new technologies such as cloud computing and e-commerce, which has forced these systems to be available 24 hours a day, 7 days a week, on pain of heavy losses. This growth has, in turn, led to higher energy consumption and consequently greater heat dissipation by data center components, which can raise the operating temperature of these systems. In this work, we focus on the dependability aspects of a data center, particularly the variation in the availability of the communication infrastructure caused by temperature changes governed by the cooling infrastructure. For this infrastructure, architectures using different redundancy mechanisms were proposed in order to analyze how the temperature varies under each architecture. The dependability models were built using Reliability Block Diagrams and stochastic Petri nets. In addition, a study was carried out applying these models to different scenarios. Several analyses were performed, such as verifying the availability of the communication infrastructure as a function of temperature considering variation in the failure times of its components, the variation of the communication infrastructure temperature as a function of the number of active cooling infrastructure components, and the failure probabilities of the cooling infrastructure components under the different redundancy mechanisms adopted.


2016 ◽  
Vol 57 ◽  
pp. 421-464 ◽  
Author(s):  
Arnaud Malapert ◽  
Jean-Charles Régin ◽  
Mohamed Rezgui

We introduce an Embarrassingly Parallel Search (EPS) method for solving constraint problems in parallel, and we show that this method matches or even outperforms state-of-the-art algorithms on a number of problems using various computing infrastructures. EPS is a simple method in which a master decomposes the problem into many disjoint subproblems which are then solved independently by workers. Our approach has three advantages: it is an efficient method; it involves almost no communication or synchronization between workers; and its implementation is easy because the master and the workers rely on an underlying constraint solver without requiring any modification of it. This paper describes the method and its application to various constraint problems (satisfaction, enumeration, optimization). We show that our method can be adapted to different underlying solvers (Gecode, Choco2, OR-tools) on different computing infrastructures (multi-core, data centers, cloud computing). The experiments cover unsatisfiable, enumeration, and optimization problems, but not first-solution search, because its variability makes the results hard to analyze; the same variability can be observed for optimization problems, though to a lesser extent because the optimality proof is required. EPS offers good average performance and matches or outperforms other available parallel implementations of Gecode as well as some solver portfolios. Moreover, we perform an in-depth analysis of the various factors that make this approach efficient, as well as the anomalies that can occur. Last, we show that the decomposition is a key component for efficiency and load balancing.
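The master/worker structure of EPS can be sketched on a toy constraint problem. This is not the authors' implementation (which delegates to solvers such as Gecode or Choco2); it is a minimal illustration of the key idea: static decomposition into disjoint subproblems, solved independently with no inter-worker communication:

```python
from concurrent.futures import ThreadPoolExecutor

def solve_subproblem(x):
    # Worker: solve one disjoint subproblem -- here, count solutions of
    # the toy constraint x + y == 10 with y in 0..9, for the fixed x
    # assigned by the master. Workers never talk to each other.
    return sum(1 for y in range(10) if x + y == 10)

def eps_solve(n_subproblems=10):
    # Master: statically decompose the search space into many disjoint
    # subproblems (one per candidate value of x), farm them out, and
    # aggregate the results -- the only synchronization point.
    with ThreadPoolExecutor(max_workers=4) as pool:
        counts = pool.map(solve_subproblem, range(n_subproblems))
    return sum(counts)
```

In a real EPS deployment the master generates far more subproblems than workers (the paper's decomposition analysis) so that slow subproblems are balanced by fast ones, which is why decomposition quality drives load balancing.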

