CLASSIFICATION OF DATA CENTERS

Telecom IT ◽  
2019 ◽  
Vol 7 (1) ◽  
pp. 1-9
Author(s):  
A. Borodko

The data center (DC) is the most advanced way of organizing computing resources when services must be provided to a wide range of users. Research subject. The article discusses the classification of data centers, their main functions, composition, the purposes for which they are created, and the factors affecting them. Methodology and core results. The article provides a classification and structural analysis of the methods and technologies for building information storage and processing systems. Using a systematic approach, the factors affecting data centers are analysed. Practical relevance. The proposed classification can be used in the systematic introduction of Internet of Things devices into the data center, the implementation of software-defined data centers, and the development of methods for assessing the effectiveness of data center operation.
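
A minimal sketch of how such a classification could be captured programmatically. The classification axes, attribute names and thresholds below are illustrative assumptions, not the taxonomy defined in the article.

```python
from dataclasses import dataclass
from enum import Enum

class Ownership(Enum):
    CORPORATE = "corporate"      # serves a single organization
    COLOCATION = "colocation"    # leases space and power to tenants
    CLOUD = "cloud"              # provides virtualized services to the public

class Tier(Enum):                # availability tiers in the spirit of ANSI/TIA-942
    I = 1
    II = 2
    III = 3
    IV = 4

@dataclass
class DataCenter:
    name: str
    ownership: Ownership
    tier: Tier
    it_load_kw: float            # installed IT load
    software_defined: bool = False

    def classify(self) -> str:
        # The 10 MW hyperscale threshold is an assumption for the sketch.
        scale = "hyperscale" if self.it_load_kw >= 10_000 else "enterprise"
        return f"{scale} / {self.ownership.value} / Tier {self.tier.value}"

print(DataCenter("DC-1", Ownership.CLOUD, Tier.III, 25_000).classify())
```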

2019 ◽  
pp. 689-693
Author(s):  
Veselka Stoyanova

The Internet of Things (IoT) will connect not only computers and mobile devices, but will also interconnect smart buildings, homes, and cities, as well as electrical grids, gas and water networks, automobiles, airplanes, etc. IoT will lead to the development of a wide range of advanced information services that need to be processed in real time and require data centers with large storage and computing power. In this paper, we present an IoT security framework for smart infrastructures such as Smart Homes (SH) and Smart Buildings (SB). We also present a general threat model that can be used to develop a security protection methodology for IoT services against cyber-attacks, both known and unknown.
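
A minimal sketch of how a threat model of this kind might be represented so that protections can be mapped to attack surfaces. The asset, threat and mitigation names are hypothetical placeholders, not entries from the paper's framework.

```python
from dataclasses import dataclass, field

@dataclass
class Threat:
    name: str
    attack_surface: str                      # device, network, or cloud back end
    impact: str                              # confidentiality / integrity / availability
    mitigations: list = field(default_factory=list)

# Hypothetical entries for a smart-home deployment.
threat_model = [
    Threat("Firmware tampering", "device", "integrity",
           ["signed firmware", "secure boot"]),
    Threat("Traffic eavesdropping", "network", "confidentiality",
           ["TLS/DTLS", "network segmentation"]),
    Threat("Unknown (zero-day) exploit", "any", "availability",
           ["anomaly detection", "least-privilege segmentation"]),
]

# A protection methodology can then be checked for coverage per threat.
for t in threat_model:
    status = "covered" if t.mitigations else "GAP"
    print(f"{t.name:28s} [{t.attack_surface:7s}] -> {status}")
```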


2020 ◽  
Vol 24 (6) ◽  
pp. 400-413
Author(s):  
Kaleem Ullah ◽  
Christopher Raitviir ◽  
Irene Lill ◽  
Emlyn Witt

BIM adoption is a complex process, and relatively little information exists on the BIM adoption processes of public authorities. This research aims to address this gap by examining how a contemporary public authority is approaching BIM adoption for its building permitting process. Firstly, a systematic literature review was carried out to understand extant descriptions of BIM adoption processes and the factors affecting adoption success. This resulted in the derivation of a generic BIM adoption process and the classification of factors that affect BIM adoption with reference to the Technology-Organization-Environment (TOE) framework. The BIM adoption process and the factors affecting its implementation in a contemporary public authority were then analysed in terms of the generic adoption process and factor classification derived from the literature. The findings reveal the planning strategies and execution steps for BIM adoption and the factors affecting them. This study provides a systematic approach to investigating BIM adoption in a public authority. It contributes to the understanding of BIM adoption processes and the factors affecting them and is anticipated to be useful for AEC/FM professionals in understanding and facilitating successful BIM adoption.


2020 ◽  
pp. 184-199
Author(s):  
A.F. Bulat ◽  
M.V. Savytskyi ◽  
T.V. Bunko ◽  
A.S. Belikov ◽  
...  

Today, assimilating the necessary knowledge and organizing the storage, processing and use of information are vital tasks for the development of contemporary society. For this purpose, information data centers are being created to carry out these tasks at all levels, from national to individual, and the location, capacity and processing speed of data centers are changing accordingly. Any data center is a costly undertaking: building one "from scratch" requires significant material investment and human resources. Research is therefore aimed at reusing the sites and buildings of obsolete enterprises and organizations whose resources are exhausted but which, owing to an acceptable degree of wear and tear, can be converted into innovative enterprises. In Ukraine, a number of coal mines are planned for closure, which will vacate a significant number of industrial sites suitable for renovation and reuse. Organizing a data center on the existing facilities of a liquidated coal mine is entirely feasible and advisable; this article provides information on how. There are many examples (for instance, Europe's largest data center, the Lefdal Mine Datacenter in Norway) of data centers created on the sites of liquidated industrial enterprises which, after appropriate adaptation and modification, meet all the requirements of the international standard ANSI/TIA/EIA-942. Similar projects exist in Ukraine (for example, the United DC data center) and can be effectively introduced into the infrastructure of any industrial enterprise slated for liquidation. The authors show that modern coal mines (in particular, their surface technological complexes) are well suited for transformation into data centers thanks to their existing engineering and transport infrastructure, favorable location and inherent protection. The authors also describe the structure of the data center, an audit of the surface complex to identify "bottlenecks" (non-compliance with the requirements of the ANSI/TIA/EIA-942 standard), the prerequisites for creating a data center on its basis, and the principles for calculating the degree of depreciation of the buildings and areas required for the data center layout.
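
The abstract mentions principles for calculating the degree of depreciation of the existing buildings. A minimal sketch of one common approach (straight-line physical wear as a ratio of service life) is shown below; it is a generic textbook formula used as a placeholder, not the method used by the authors, and the ages, service life and threshold are assumed values.

```python
def physical_depreciation(age_years: float, normative_life_years: float) -> float:
    """Straight-line degree of physical wear in percent, capped at 100 %."""
    return min(age_years / normative_life_years, 1.0) * 100.0

# Example: a surface building of a mine commissioned 35 years ago,
# with an assumed normative service life of 80 years.
wear = physical_depreciation(age_years=35, normative_life_years=80)
acceptable_for_renovation = wear <= 60.0   # threshold assumed for illustration only
print(f"wear = {wear:.1f} %, renovation candidate: {acceptable_for_renovation}")
```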


Author(s):  
Kailash C. Karki ◽  
Suhas V. Patankar ◽  
Amir Radmehr

In raised-floor data centers, the airflow rates through the perforated tiles must meet the cooling requirements of the computer servers placed next to the tiles. The data centers house a wide range of equipment, and the heat load pattern on the floor can be quite arbitrary and changes as the data center evolves. To achieve optimum utilization of the floor space and the flexibility for rearrangement and retrofitting, the designers and managers of data centers must be able to modify the airflow rates through the perforated tiles. The airflow rates through the perforated tiles are governed primarily by the pressure distribution under the raised floor. Thus, the key to modifying the flow rates is to influence the flow field in the plenum. This paper discusses a number of techniques that can be used for controlling airflow distribution. These techniques involve changing the plenum height and open area of perforated tiles, and installing thin (solid and perforated) partitions in the plenum. A number of case studies, using a mathematical model, are presented to demonstrate the effectiveness of these techniques.
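
The dependence of tile airflow on plenum pressure can be illustrated with the usual orifice-type loss model, in which the pressure drop scales with the square of the velocity through the tile's open area. The loss coefficient, tile size and pressure value below are assumptions for the sketch, not data from the paper.

```python
import math

RHO_AIR = 1.16  # kg/m^3, approximate air density at data-center conditions

def tile_airflow_m3s(dp_pa: float, open_fraction: float,
                     tile_area_m2: float = 0.36, loss_coeff: float = 1.0) -> float:
    """Volumetric flow through a perforated tile for a given plenum pressure.

    Uses the orifice-type model dp = K * (rho/2) * v^2 applied to the tile's
    open area.  The loss coefficient and 600 mm x 600 mm tile are illustrative.
    """
    velocity = math.sqrt(2.0 * dp_pa / (RHO_AIR * loss_coeff))
    return open_fraction * tile_area_m2 * velocity

# At a fixed plenum pressure, doubling the tile open area roughly doubles the
# flow, which is why open area is one of the control knobs discussed above.
for frac in (0.25, 0.50):
    q = tile_airflow_m3s(dp_pa=12.0, open_fraction=frac)
    print(f"open area {frac:.0%}: {q * 3600:.0f} m^3/h")
```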


2021 ◽  
Vol 23 (06) ◽  
pp. 767-774
Author(s):  
Niveditha V. K. ◽  
Kiran V. ◽  
Avinash Pathak ◽  
...  

The fast pace of evolution of technologies such as the Internet of Things (IoT) and cloud computing, together with the world's move towards digitalization, has created a greater need for data centers than ever before. Data centers support a wide range of internet services, including web hosting, e-commerce, and social networking. In recent years, huge data centers have been owned and run by tech giants such as Google, Facebook, and Microsoft; these firms are known as hyper-scalers. Hyper-scalers are poised to fundamentally alter how the internet stores data through the variety of services they supply across all technological domains. The tool for automatic software upgrade described here aims to provide a seamless upgrade for the devices in data centers, primarily the huge data centers owned by hyper-scalers. This paper focuses on the technologies used in developing the tool for automatic software upgrade, an overview of how the tool is developed, and its features. Deploying this tool in data centers helps them deliver more efficient services.
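
A minimal sketch of the kind of rolling-upgrade loop such a tool might automate. The device names, target version and stubbed operations are hypothetical; a real tool would talk to the vendors' management APIs rather than these placeholder functions.

```python
DEVICES = ["leaf-sw-01", "leaf-sw-02", "spine-sw-01", "spine-sw-02"]
TARGET = "9.3.7"
_versions = {d: "9.3.5" for d in DEVICES}            # simulated current state

def drain(dev):   print(f"draining {dev}")           # move traffic off the device
def restore(dev): print(f"restoring {dev}")          # return the device to service
def install(dev, version):
    print(f"installing {version} on {dev}")
    _versions[dev] = version
def health_check(dev) -> bool:
    return _versions[dev] == TARGET                  # stub: healthy once upgraded

def rolling_upgrade(devices, target):
    """Upgrade one device at a time so capacity is never fully withdrawn."""
    for dev in devices:
        if _versions[dev] == target:
            continue                                 # already up to date, skip
        drain(dev)
        install(dev, target)
        if not health_check(dev):
            raise RuntimeError(f"{dev} failed post-upgrade checks; stopping roll-out")
        restore(dev)

rolling_upgrade(DEVICES, TARGET)
```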


Author(s):  
Adil Markhaba ◽  
Islam Zhemeney ◽  
Aman K. Rakhmetullin ◽  
Kalamkas B. Bolatova ◽  
...  

The relevance of this topic lies in the study of medieval Kazakh history. After independence, the processes of reviving national identity and restoring the original spiritual and moral values and mentality that were sharply suppressed under the Soviet totalitarian system became widespread. In this context, the widely discussed national-historical composition of the population, knowledge of ethnic roots, and the restoration of the traditions and customs that served as a connecting link are of particular importance, as are the specificity and originality of the approach. At present, the problem of objectively reading, interpreting, and popularising ancient and medieval Kazakh history and culture is acute. Rejecting one-sided interpretations of historical events, established clichés require impartial, academic analysis based on evidence drawn from a wide range of sources. The purpose of this study is to identify the problems of the history of Kazakhstan in the 13th-14th centuries, the general laws of world historical development, the features of the historical process, and folk traditions using a scientific and systematic approach. Based on the systematisation and classification of data from geographical and Arab historical records of the 13th-14th centuries, written monuments are analysed, their interdependence is established, and the completeness and reliability of the data in the narrative works are assessed within an integral system. Through scientific expeditions and research trips to Mongolia, China, and Germany, Kazakh orientalists analysed and performed the first systematic processing of archival materials and historical evidence of the early history of resettlement, drawing on ancient Turkic manuscripts and ancient Indian and Chinese sources to form a picture of proto- and ancient history. For example, the features of stone figures give an idea of the military hierarchy, military operations, the settlement of ethnic groups (ethnogeography), the worldview of the Turks, and more.


Author(s):  
Anand Mohan

The Internet of Things (IoT) is a system of interrelated computing devices, mechanical and digital machines, objects, animals or people that are provided with unique identifiers and the ability to transfer data over a network without requiring human-to-human or human-to-computer interaction. It is an ambiguous term, but it is fast becoming a tangible technology that can be applied in data centers to collect information on just about anything that IT wants to control. IoT has evolved from the convergence of wireless technologies, micro-electromechanical systems (MEMS), microservices and the internet. This convergence has helped tear down the silo walls between operational technology (OT) and information technology (IT), allowing unstructured machine-generated data to be analyzed for insights that drive improvements. The IoT is essentially a system of machines or objects outfitted with data-collecting technologies so that those objects can communicate with one another. The machine-to-machine (M2M) data that is generated has a wide range of uses, but it is commonly used to determine the health and status of things, whether inanimate or living.


Telecom IT ◽  
2020 ◽  
Vol 8 (2) ◽  
pp. 67-76 ◽  
Author(s):  
E. Sapunova ◽  
S. Leontiev ◽  
A. Vybornova

This article is devoted to the types and methods of haptic communications. Haptic and tactile codecs were the research subject. As a research method, the authors analysed current research results in the area, researched and developed a parametric tactile codec, and performed statistical analysis of the obtained tactile traffic. Core result. The authors provide a classification of haptic interactions and tactile codec approaches. Another result of the work is a simple parametric tactile codec. The authors also found that such a codec creates a fairly intensive flow of packets of moderate size (around 500 bytes). The practical relevance of the result lies in the creation of a tactile codec that may be used for simple Tactile Internet applications. In addition, the obtained information about tactile traffic characteristics may be used to update forecasts of global telecom traffic growth.
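
A minimal sketch of how a parametric tactile codec could pack force samples into packets of roughly 500 bytes. The deadband-style filtering, sample layout and parameter values are assumptions for illustration; the paper's codec may work differently.

```python
import struct

PACKET_PAYLOAD_BYTES = 500        # moderate packet size reported in the article
SAMPLE_FMT = "<Ifff"              # timestamp (ms) + 3-axis force, 16 bytes/sample
SAMPLE_SIZE = struct.calcsize(SAMPLE_FMT)
DEADBAND = 0.05                   # relative change below which a sample is dropped
                                  # (assumed parameter, not from the paper)

def encode(samples):
    """Deadband-filter the tactile stream and pack it into ~500-byte payloads."""
    packets, buf, last = [], b"", None
    for ts_ms, fx, fy, fz in samples:
        if last is not None and all(
                abs(a - b) <= DEADBAND * (abs(b) + 1e-9)
                for a, b in zip((fx, fy, fz), last)):
            continue                              # perceptually insignificant change
        last = (fx, fy, fz)
        buf += struct.pack(SAMPLE_FMT, ts_ms, fx, fy, fz)
        if len(buf) + SAMPLE_SIZE > PACKET_PAYLOAD_BYTES:
            packets.append(buf)                   # payload is full, emit a packet
            buf = b""
    if buf:
        packets.append(buf)
    return packets

# A 1 kHz force stream over one second yields a handful of ~500-byte packets.
stream = [(t, 1.0 + 0.2 * (t % 10) / 10, 0.0, 0.5) for t in range(1000)]
print([len(p) for p in encode(stream)])
```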


Telecom IT ◽  
2020 ◽  
Vol 8 (2) ◽  
pp. 20-31
Author(s):  
S. Vladimirov ◽  
D. Berestovoy

Research subject. The article presents a protocol for identifying IoT devices developed by the authors and the results of its testing. Method. Simulation was performed to determine the probabilistic characteristics of 8-bit error-correcting codes, and the principles of their encoding and decoding are considered. Core results. The features of the developed identification protocol when transmitting packets over the transport protocols TCP and UDP are determined. Practical relevance. The developed protocol is proposed for identifying Internet of Things network devices in local and global communication networks.
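
A minimal sketch of one possible 8-bit error-correcting code, the extended (8,4) Hamming code, showing the encode / correct / decode cycle for a short device identifier. The article does not specify which 8-bit codes were simulated, so this choice is an assumption made only for illustration.

```python
import random

def encode4(nibble: int) -> int:
    """Encode 4 data bits into an 8-bit extended Hamming codeword."""
    d = [(nibble >> i) & 1 for i in range(4)]            # d0..d3
    p1 = d[0] ^ d[1] ^ d[3]
    p2 = d[0] ^ d[2] ^ d[3]
    p3 = d[1] ^ d[2] ^ d[3]
    bits = [p1, p2, d[0], p3, d[1], d[2], d[3]]          # codeword positions 1..7
    bits.append(sum(bits) % 2)                           # overall parity, position 8
    word = 0
    for i, b in enumerate(bits):
        word |= b << i
    return word

def decode8(word: int) -> int:
    """Correct a single-bit error (if any) and return the 4 data bits."""
    bits = [(word >> i) & 1 for i in range(8)]           # positions 1..8 -> idx 0..7
    s1 = bits[0] ^ bits[2] ^ bits[4] ^ bits[6]
    s2 = bits[1] ^ bits[2] ^ bits[5] ^ bits[6]
    s3 = bits[3] ^ bits[4] ^ bits[5] ^ bits[6]
    syndrome = s1 | (s2 << 1) | (s3 << 2)                # 1..7 = erroneous position
    if syndrome:
        bits[syndrome - 1] ^= 1                          # flip the flagged bit
    return bits[2] | (bits[4] << 1) | (bits[5] << 2) | (bits[6] << 3)

# A single-bit channel error is corrected, so the identifier survives intact.
device_id = 0b1011
noisy = encode4(device_id) ^ (1 << random.randrange(8))  # flip one random bit
assert decode8(noisy) == device_id
```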


2021 ◽  
Vol 1 ◽  
pp. 12
Author(s):  
Thomas Batz ◽  
Reinhard Herzog ◽  
Jon Summers ◽  
Kym Watson

The Internet of Things (IoT) domain has been one of the fastest growing areas in the computer industry for the last few years. Consequently, IoT applications are becoming the dominant workload for many data centers. This has implications for the designers of data centers, as they need to meet their customers' requirements. Since it is not easy to use real applications for the design and test of data center setups, a tool is required that emulates real applications yet is easy to configure, scale and deploy in a data center. This paper introduces a simple but generic way to model the workload of typical IoT applications, in order to have a realistic and reproducible way to emulate IT loads for data centers. IoT application designers are in the process of harmonizing their approaches on how architectures should look, which building blocks are needed, and how they should interwork. While the IoT subdomains differ in their details, the architectural blueprints are becoming more and more aligned. These blueprints are called reference architectures and incorporate similar patterns for the underlying application primitives. This paper introduces an approach to decompose IoT applications into such application primitives and to use them to emulate the workload that would be created by the modeled application. The paper concludes with an example application of the IoT workload emulation in the BodenTypeDC experiment, where new cooling approaches for data centers were tested under realistic workload conditions.
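
A minimal sketch of how an IoT application might be decomposed into application primitives and replayed as load. The primitive kinds, the example application and the parameter values are assumptions; the paper's reference-architecture primitives may be named and parameterized differently.

```python
import random
import time
from dataclasses import dataclass

@dataclass
class Primitive:
    kind: str           # e.g. "ingest", "transform", "store", "notify" (assumed kinds)
    rate_hz: float      # how often the primitive fires
    cpu_ms: float       # busy-work per invocation, emulating compute load
    payload_bytes: int  # data volume produced per invocation

# Hypothetical decomposition of a smart-metering application.
SMART_METERING_APP = [
    Primitive("ingest",    rate_hz=200.0, cpu_ms=0.2, payload_bytes=128),
    Primitive("transform", rate_hz=50.0,  cpu_ms=2.0, payload_bytes=512),
    Primitive("store",     rate_hz=50.0,  cpu_ms=0.5, payload_bytes=512),
    Primitive("notify",    rate_hz=1.0,   cpu_ms=0.1, payload_bytes=64),
]

def emulate(primitives, duration_s=1.0):
    """Replay the modeled application as CPU and data load for duration_s seconds."""
    produced = 0
    deadline = time.time() + duration_s
    while time.time() < deadline:
        for p in primitives:
            if random.random() < p.rate_hz * 0.001:       # fire with per-millisecond probability
                t_end = time.time() + p.cpu_ms / 1000.0
                while time.time() < t_end:                # spin to emulate CPU work
                    pass
                produced += p.payload_bytes
        time.sleep(0.001)                                 # 1 ms scheduler tick
    return produced

print(f"emulated {emulate(SMART_METERING_APP)} bytes of application data")
```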

