Applying Multidimensional Geometry to Basic Data Centre Designs

Author(s):  
Pedro Juan Roig ◽  
Salvador Alcaraz ◽  
Katja Gilly ◽  
Carlos Juiz

Fog computing deployments are growing by the day due to their latency and bandwidth advantages over cloud implementations. Furthermore, the number of required hosts is usually far smaller, and so is the number of switches needed to interconnect them. In this paper, an approach based on multidimensional geometry is proposed for building basic switching architectures for Data Centres. The most common convex regular N-polytopes are first introduced, with N increased incrementally towards a generic high dimension, and the resulting shapes are then associated with their corresponding switching topologies: the N-simplex is related to a full mesh pattern, the N-orthoplex to a quasi full mesh structure, and the N-hypercube to a certain type of partial mesh layout. In each of these three contexts, a model is built in which the switches are first identified; next, their downlink ports leading to the end hosts are exposed, along with the host identifiers, as well as their uplink ports leading to neighbouring switches; and finally, a pseudocode algorithm is designed showing how a packet arriving at any given port of a switch is forwarded through the proper outgoing port towards the destination host, using the appropriate arithmetic expressions in each particular case. These algorithmic models therefore describe how the corresponding switches may handle user data traffic within a Data Centre, guiding it towards its destination.
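
As an illustration of the hypercube case, the sketch below shows one common way such a forwarding rule can be expressed: switches are labelled with N-bit binary identifiers, and a packet is forwarded along the dimension of the lowest-order bit in which the current switch label differs from the destination label. This is a minimal, hypothetical sketch of bitwise hypercube routing, not the authors' exact algorithm; the labelling scheme and the assumption that port k leads to the neighbour differing in bit k are illustrative.

```python
def next_dimension(current: int, destination: int):
    """Return the dimension (bit index) along which to forward a packet
    in an N-hypercube, or None if the packet has arrived.

    Assumption: labels are N-bit integers, and outgoing port k of a switch
    leads to the neighbour whose label differs only in bit k.
    """
    diff = current ^ destination          # bits that still need correcting
    if diff == 0:
        return None                       # already at the destination switch
    # correct the lowest-order differing bit first
    return (diff & -diff).bit_length() - 1


# Example: in a 4-hypercube, routing from switch 0b0101 to switch 0b0011
hops = []
node, dest = 0b0101, 0b0011
while (dim := next_dimension(node, dest)) is not None:
    hops.append(dim)
    node ^= 1 << dim                      # move to the neighbour across that dimension
print(hops)  # [1, 2]: flip bit 1, then bit 2
```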

Author(s):  
M. Sudhakara ◽  
K. Dinesh Kumar ◽  
Ravi Kumar Poluru ◽  
R Lokesh Kumar ◽  
S Bharath Bhushan

Cloud computing is an emerging field. Despite its three key features and its natural advantages, it has faced a few difficulties in recent years. The gap between the cloud and the end devices must be reduced for latency-sensitive applications (e.g., disaster management). Fog computing is currently an advanced mechanism for reducing latency and congestion in IoT networks. It emphasizes processing data as close as possible to the edge of the network, instead of sending and receiving it from the data centre, by using a large number of fog nodes. The virtualization of these fog nodes (i.e., the nodes are invisible to the users) at numerous locations across the data centres has made fog computing more popular. End users need to purchase computing resources from the cloud providers to process their excess workload. Since computing resources are heterogeneous, constrained, and dynamic in nature, allocating these resources to users is an open research issue that must be addressed as a first priority.


ITNOW ◽  
2021 ◽  
Vol 63 (4) ◽  
pp. 18-20
Author(s):  
John Booth

John Booth MBCS, Data Centre Energy Efficiency and Sustainability Consultant at Carbon3IT, explores the detrimental trajectory of data centre energy use against a backdrop of COP26, climate change and proposed EU directives.


10.29007/h27x ◽  
2019 ◽  
Author(s):  
Mohammed Alasmar ◽  
George Parisis

In this paper we present our work towards an evaluation platform for data centre transport protocols. We developed a simulation model for NDP, a modern data centre transport protocol, together with a FatTree network topology and per-packet ECMP load balancing. We also developed a data centre environment that can be used to evaluate and compare data transport protocols, such as NDP and TCP. We describe how we integrated our model with the INET Framework and present example simulations to showcase the workings of the developed framework. For that, we ran a comprehensive set of experiments and studied different components and parameters of the developed models.
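
To illustrate the load-balancing component mentioned above, the sketch below shows how per-packet ECMP can be approximated: each outgoing packet is assigned independently to one of the equal-cost uplinks, rather than pinning a whole flow to one path as flow-level ECMP does. This is a generic, hypothetical illustration in Python, not the INET/OMNeT++ model described in the paper; the uplink names and packet fields are assumptions.

```python
import itertools
import random
from dataclasses import dataclass


@dataclass
class Packet:
    src: str
    dst: str
    seq: int


def make_per_packet_ecmp(uplinks, spray="round_robin"):
    """Return a function mapping each packet to one of the equal-cost uplinks.

    'round_robin' sprays packets cyclically across uplinks; 'random' picks an
    uplink uniformly at random. Flow-level ECMP, by contrast, would hash
    (src, dst) so that all packets of a flow share one path.
    """
    counter = itertools.count()
    if spray == "round_robin":
        return lambda pkt: uplinks[next(counter) % len(uplinks)]
    return lambda pkt: random.choice(uplinks)


# Example: four equal-cost uplinks of an edge switch in a FatTree pod
choose = make_per_packet_ecmp(["agg0", "agg1", "agg2", "agg3"])
for i in range(6):
    pkt = Packet(src="h1", dst="h9", seq=i)
    print(pkt.seq, "->", choose(pkt))   # packets of one flow spread over all uplinks
```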


Author(s):  
Shesagiri Taminana ◽  
Lalitha Bhaskari ◽  
Arwa Mashat ◽  
Dragan Pamučar ◽  
...  

With the present-day increasing demand for higher performance, application developers have started considering cloud computing and cloud-based data centres as one of the prime options for hosting applications. A number of parallel research outcomes have shown that, for a data centre to be secure, its infrastructure must go through an auditing process. During the auditing process, auditors can access the VMs, applications and data deployed on the virtual machines. The downside is that the data in the VMs can be highly sensitive, and during audits it is highly complex to grant permissions for individual requests, which can increase the total time taken to complete the tasks. Hence, selective and adaptive auditing is a need of current research. However, existing outcomes are criticised for high time complexity and low accuracy. Thus, this work proposes a predictive method that analyses the characteristics of the VM applications and the characteristics of the auditors, and grants access to the virtual machine by building a predictive regression model. The proposed algorithm demonstrates 50% less time complexity than other parallel research, making the cloud-based application development industry a safer and faster place.
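
As a rough illustration of how such a predictive model for access decisions could be structured, the sketch below trains a logistic regression classifier on hypothetical VM-application and auditor features and uses its prediction to grant or deny an audit request. The feature names, training data and decision threshold are all assumptions made for the example; the paper's actual model and feature set are not reproduced here.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical features per audit request:
# [data_sensitivity, vm_criticality, auditor_trust_score, scope_breadth], each in 0..1
X_train = np.array([
    [0.9, 0.8, 0.4, 0.9],   # sensitive VM, broad request, mid-trust auditor
    [0.2, 0.3, 0.9, 0.2],   # low-sensitivity VM, narrow request, trusted auditor
    [0.7, 0.6, 0.8, 0.3],
    [0.8, 0.9, 0.3, 0.8],
    [0.1, 0.2, 0.7, 0.4],
    [0.5, 0.4, 0.9, 0.5],
])
y_train = np.array([0, 1, 1, 0, 1, 1])   # 1 = access granted in past audits, 0 = denied

model = LogisticRegression().fit(X_train, y_train)

def grant_access(features, threshold=0.6):
    """Grant access when the predicted probability of a safe audit exceeds the threshold."""
    prob = model.predict_proba(np.array([features]))[0, 1]
    return prob >= threshold

# Example: a trusted auditor requesting narrow access to a moderately sensitive VM
print(grant_access([0.6, 0.5, 0.85, 0.3]))
```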


Author(s):  
Sasikala Chinthakunta ◽  
Shoba Bindu Chigarapalle ◽  
Sudheer Kumar E.

Typically, the analysis of industrial big data is done in the cloud. When IIoT relies on the cloud, the data from billions of internet-connected devices is voluminous and has to be processed within cloud DCs. Most IoT infrastructures, such as smart driving and car parking systems, smart vehicular traffic management systems, and smart grids, demand low-latency, real-time services from the service providers. Since the cloud confines data storage, processing, and computation to DCs, the huge data traffic generated by IoT devices is likely to experience network bottlenecks, high service latency, and poor quality of service (QoS). Hence, the placement of an intermediary node that can perform tasks efficiently and effectively is an unavoidable requirement of IIoT. Fog can be such an intermediary node because of its ability and location to perform tasks at the premises of an industry in a timely manner. This chapter discusses the challenges, need, and framework of fog computing, along with security issues and solutions of fog computing for IIoT.


2017 ◽  
Vol 14 (4) ◽  
pp. 1-32 ◽  
Author(s):  
Shashank Gupta ◽  
B. B. Gupta

This article introduces a distributed intelligence network of Fog computing nodes and Cloud data centres that protects smart devices against XSS vulnerabilities in Online Social Networks (OSN). The cloud data centres compute the features of the JavaScript code, inject them in the form of comments, and save them in the script nodes of the Document Object Model (DOM) tree. The network of Fog devices re-executes the feature computation and comment injection process on the HTTP response message and compares the resulting comments with those calculated in the cloud data centres. Any divergence observed raises an alarm signalling the injection of XSS worms at the fog nodes located at the edge of the network. The mitigation of such worms is done by executing nested context-sensitive sanitization on the malicious JavaScript variables embedded in them. The prototype of the authors' work was developed in a Java development framework and installed on the virtual machines of Cloud data centres (typically located at the core of the network) and on the Fog devices (positioned at the edge of the network). Vulnerable OSN-based web applications were utilized for evaluating the XSS worm detection capability of the authors' framework, and the evaluation results revealed that their work detects the injection of XSS worms with a high precision rate and low rates of false positives and false negatives.
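
The core detection idea, comparing features computed in the cloud against features recomputed at the fog layer, can be sketched as follows. The prototype described above was built in Java; the snippet below is only a simplified Python illustration, and the specific features (script length, count of eval-like calls, event handlers) and the comment format are assumptions rather than the authors' actual feature set.

```python
import hashlib
import re

def js_features(script):
    """Compute a few illustrative features of an inline JavaScript snippet."""
    return {
        "length": len(script),
        "eval_like_calls": len(re.findall(r"\b(eval|setTimeout|setInterval|Function)\s*\(", script)),
        "event_handlers": len(re.findall(r"\bon\w+\s*=", script)),
        "external_refs": len(re.findall(r"(src|href)\s*=", script)),
    }

def feature_comment(script):
    """Cloud side: serialize the features as a comment to be injected into the script node."""
    digest = hashlib.sha256(repr(sorted(js_features(script).items())).encode()).hexdigest()
    return f"/* features:{digest} */"

def fog_check(script, injected_comment):
    """Fog side: recompute the features and compare with the cloud-injected comment.

    A mismatch suggests the script was altered in transit, e.g. by an injected XSS worm.
    """
    return feature_comment(script) == injected_comment

# Example: the cloud annotates a benign script, a worm later appends a payload
benign = "document.getElementById('x').innerText = name;"
comment = feature_comment(benign)
tampered = benign + "eval(atob(payload));"
print(fog_check(benign, comment))    # True: features match
print(fog_check(tampered, comment))  # False: divergence raises an XSS alarm
```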


Author(s):  
Babak Fakhim ◽  
Srinarayana Nagarathinam ◽  
Simon Wong ◽  
Masud Behnia ◽  
Steve Armfield

The aggregation of small networking hardware has led to ever increasing power density in data centres. The energy consumption of IT systems continues to rise substantially owing to the demands of electronic information and storage requirements. Energy consumption of data centres can be severely and unnecessarily high due to inadequate localised cooling and densely packed rack layouts. As heat dissipation in data centres rises by orders of magnitude, inefficiencies such as air recirculation, which causes hot spots and leads to flow short-circuiting, have a significant impact on the thermal manageability and energy efficiency of the cooling infrastructure. The thermal management of high-powered electronic components is therefore a significant challenge for the cooling of data centres. In this project, an operational data centre has been studied. Field measurements of temperature have been performed, and a numerical analysis of the flow and temperature fields has been conducted in order to evaluate the thermal behaviour of the data centre. A number of undesirable hot spots have been identified. To rectify the problem, a few practical design solutions to improve the cooling effectiveness have been proposed and examined to ensure a reduced air-conditioning power requirement. A better understanding of the cooling issues and the respective proposed solutions can lead to an improved design for future data centres.


Sensors ◽  
2019 ◽  
Vol 19 (16) ◽  
pp. 3612 ◽  
Author(s):  
Algimantas Venčkauskas ◽  
Nerijus Morkevicius ◽  
Vaidas Jukavičius ◽  
Robertas Damaševičius ◽  
Jevgenijus Toldinas ◽  
...  

Development of the Internet of Things (IoT) opens many new challenges. As IoT devices are getting smaller and smaller, the problems of so-called "constrained devices" arise. The traditional Internet protocols are not very well suited for constrained devices comprising localized network nodes with tens of devices primarily communicating with each other (e.g., various sensors in a Body Area Network). These devices have very limited memory, processing, and power resources, so traditional security protocols and architectures also do not fit well. To address these challenges, the Fog computing paradigm is used, in which all constrained devices, or Edge nodes, primarily communicate only with a less-constrained Fog node device, which collects all data, processes it and communicates with the outside world. We present a new lightweight secure self-authenticable transfer protocol (SSATP) for communications between Edge nodes and Fog nodes. The primary target of the proposed protocol is to serve as a secure transport for CoAP (Constrained Application Protocol) in place of UDP (User Datagram Protocol) and DTLS (Datagram Transport Layer Security), which are the traditional choices in this scenario. SSATP uses modified header fields of standard UDP packets to transfer additional protocol handling and data flow management information as well as user data authentication information. Optional redundant data may be used to provide increased resistance to data losses when the protocol is used in unreliable networks. The results of the experiments presented in this paper show that SSATP is a better choice than UDP with DTLS in cases where the CoAP block transfer mode is used and/or in lossy networks.
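
To make the idea of carrying authentication information alongside user data over plain UDP more concrete, the sketch below packs a small custom header (sequence number, flags) and an HMAC tag in front of the payload of an ordinary UDP datagram. The field layout, key handling and tag length here are purely illustrative assumptions and do not reproduce the actual SSATP header format.

```python
import hashlib
import hmac
import socket
import struct

SHARED_KEY = b"pre-shared-key-between-edge-and-fog"   # assumption: pre-provisioned key
HEADER_FMT = "!HB"        # illustrative layout: 16-bit sequence number, 8-bit flags
TAG_LEN = 16              # truncated HMAC-SHA256 tag; length chosen only for compactness

def build_datagram(seq, flags, payload):
    """Prepend the illustrative header and an authentication tag to the payload."""
    header = struct.pack(HEADER_FMT, seq, flags)
    tag = hmac.new(SHARED_KEY, header + payload, hashlib.sha256).digest()[:TAG_LEN]
    return header + tag + payload

def parse_datagram(data):
    """Verify the tag and return (seq, flags, payload), or None if authentication fails."""
    hdr_len = struct.calcsize(HEADER_FMT)
    header, tag, payload = data[:hdr_len], data[hdr_len:hdr_len + TAG_LEN], data[hdr_len + TAG_LEN:]
    expected = hmac.new(SHARED_KEY, header + payload, hashlib.sha256).digest()[:TAG_LEN]
    if not hmac.compare_digest(tag, expected):
        return None
    seq, flags = struct.unpack(HEADER_FMT, header)
    return seq, flags, payload

# Example: an Edge node sends an authenticated CoAP-style message to a Fog node over UDP
datagram = build_datagram(seq=1, flags=0x01, payload=b"coap-message-bytes")
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(datagram, ("127.0.0.1", 5683))
print(parse_datagram(datagram))  # (1, 1, b'coap-message-bytes') after successful verification
```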


2019 ◽  
Vol 111 ◽  
pp. 01043
Author(s):  
Jinkyun Cho ◽  
Beungyong Park ◽  
Yongdae Jeong ◽  
Sangmoon Lee

In this study, an actual 20 MW data centre project was analysed to evaluate the thermal performance of an IT server room during a cooling system outage under six fault conditions. In addition, a method of organizing and systematically managing operational stability and energy efficiency verification was identified for data centre construction, in accordance with the commissioning process. It is essential to understand the operational characteristics of data centres and to design optimal cooling systems in order to ensure the reliability of high-density data centres. In particular, it is necessary to consider these physical results and to perform an integrated review of the time required for emergency cooling equipment to start operating, as well as the availability time of the back-up system.


Author(s):  
Julia Velkova ◽  
Patrick Brodie

The past decade has seen the accelerated growth and expansion of large-scale data centre operations across the world to support emerging consumer and business data and computation needs. These buildings, as infrastructures responsive to changing global economic and technological terrain, are increasingly modular, and must be built out rapidly. However, these conditions also mean that their paths to obsolescence are shortened, their lifespans dependent on shifting corporate strategies and advances in consumer technology. This paper theorises and empirically explores the material, infrastructural abandonment that emerges in this process of data centre construction across different geographical contexts. To do so, we analyse the socio-material construction of an international network of large-scale data centres by global telecom giant Ericsson, and the abrupt abandonment and suspension of one of its nodes in Vaudreuil, Québec in 2017 after only nine months of operation. Employing autoethnography, site visits, and qualitative interviews with data centre architects and staff in Sweden and Canada, we argue that the ruins of abandoned 'cloud' infrastructure represent the disjunction between the 'promise' of digital infrastructure for local communities and the market interests of digital companies. The paper thus takes ruination and discard as perspectives through which to understand the complexity of emergent datafied futures and the socio-technical reshaping of internet infrastructures.

