A Distributed Optimization Method for the Geographically Distributed Data Centres Problem

Author(s):  
Mohamed Wahbi ◽  
Diarmuid Grimes ◽  
Deepak Mehta ◽  
Kenneth N. Brown ◽  
Barry O’Sullivan
2019 ◽  
Vol 214 ◽  
pp. 07007
Author(s):  
Petr Fedchenkov ◽  
Andrey Shevel ◽  
Sergey Khoruzhnikov ◽  
Oleg Sadov ◽  
Oleg Lazo ◽  
...  

ITMO University (ifmo.ru) is developing a cloud of geographically distributed data centres (DCs), i.e. data centres located in different places separated by hundreds or thousands of kilometres. Geographically distributed data centres promise a number of advantages for end users, such as the ability to add additional DCs and improved service availability through redundancy and geographical distribution. Services such as data transfer, computing, and data storage are provided to users in the form of virtual objects, including virtual machines, virtual storage, and virtual data transfer links.


2018 ◽  
Vol 7 (3.34) ◽  
pp. 141
Author(s):  
D Ramya ◽  
J Deepa ◽  
P N.Karthikayan

A geographically distributed data centre ensures globalization of data as well as security for the organizations that use it; the principles of disaster recovery are also taken into consideration. These aspects create business opportunities for companies that operate many sites and cloud infrastructures with multiple owners. The data centres store very critical and confidential documents that multiple organizations share in the cloud infrastructure. Previously, separate servers with different operating systems and software applications were used; because these were difficult to maintain, servers were consolidated, which allows resources to be shared at low maintenance cost [7]. The availability of documents should be increased and downtime should be reduced, so workload management becomes a challenging problem among geographically distributed data centres. In this paper we focus on the different approaches used for workload management in geo-distributed data centres, and discuss the algorithms used and the challenges involved in each approach.
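To make the workload-management problem concrete, the sketch below shows one common family of heuristics the abstract alludes to: greedily assigning each job to the least-loaded data centre that still meets the job's latency bound. This is a generic illustration, not an algorithm from the surveyed paper; all names and parameters (job latency bounds, per-centre latency and capacity) are hypothetical.

```python
# Generic greedy placement heuristic for geo-distributed data centres.
# A job is assigned to the least-loaded centre whose network latency
# satisfies the job's latency bound and which still has spare capacity.
# (Illustrative only; not the method of any paper listed above.)

def place_jobs(jobs, centres):
    """jobs: list of (job_id, latency_bound_ms);
    centres: dict name -> (latency_ms, capacity)."""
    load = {name: 0 for name in centres}
    placement = {}
    for job_id, bound in jobs:
        # Feasible centres: meet the latency bound and have spare capacity.
        candidates = [
            name for name, (latency, capacity) in centres.items()
            if latency <= bound and load[name] < capacity
        ]
        if not candidates:
            placement[job_id] = None  # no feasible centre; job is deferred
            continue
        # Simple load balancing: pick the least-loaded feasible centre.
        best = min(candidates, key=lambda name: load[name])
        load[best] += 1
        placement[job_id] = best
    return placement

centres = {"dublin": (20, 2), "frankfurt": (35, 2), "virginia": (90, 4)}
jobs = [("j1", 50), ("j2", 50), ("j3", 50), ("j4", 100)]
print(place_jobs(jobs, centres))
```

Real systems refine this with dynamic load, electricity prices, or data locality, but the feasibility-then-balance structure is representative of the approaches such surveys compare.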


Author(s):  
Yunhong Gu ◽  
Robert L. Grossman

Cloud computing has demonstrated that processing very large datasets over commodity clusters can be done simply, given the right programming model and infrastructure. In this paper, we describe the design and implementation of the Sector storage cloud and the Sphere compute cloud. In contrast to existing storage and compute clouds, Sector can manage data not only within a data centre but also across geographically distributed data centres. Similarly, the Sphere compute cloud supports user-defined functions (UDFs) over data both within and across data centres. As a special case, MapReduce-style programming can be implemented in Sphere by using a Map UDF followed by a Reduce UDF. We describe experimental studies comparing Sector/Sphere and Hadoop using the Terasort benchmark; in these studies, Sector is approximately twice as fast as Hadoop. Sector/Sphere is open source.
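The abstract's point that MapReduce is a special case of the UDF model can be sketched in plain Python: a Map UDF emits key-value pairs, a shuffle step groups them by key, and a Reduce UDF aggregates each group. This is an illustration of the pattern only, not Sector/Sphere's actual UDF API.

```python
# MapReduce expressed as two user-defined functions: a Map UDF followed
# by a Reduce UDF, with a shuffle (group-by-key) step in between.
# Word counting is used as the canonical example.
from collections import defaultdict

def map_udf(record):
    # Map UDF: emit (word, 1) for every word in a text record.
    return [(word, 1) for word in record.split()]

def reduce_udf(key, values):
    # Reduce UDF: aggregate all counts emitted for one key.
    return key, sum(values)

def run_mapreduce(records):
    # Shuffle: group intermediate pairs by key, then apply the Reduce UDF.
    groups = defaultdict(list)
    for record in records:
        for key, value in map_udf(record):
            groups[key].append(value)
    return dict(reduce_udf(k, vs) for k, vs in groups.items())

print(run_mapreduce(["a b a", "b c"]))  # word counts per key
```

In a system like Sphere the same two UDFs would run in parallel on data segments across data centres; the single-process loop here stands in for that distributed execution.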


2018 ◽  
Vol 125 ◽  
pp. 48-67
Author(s):  
Fabrice Guillemin ◽  
Guilherme Thompson

2019 ◽  
Vol 220 ◽  
pp. 01006
Author(s):  
I.Z. Latypov ◽  
D.O. Akat’ev ◽  
V.V. Chistyakov ◽  
M.A. Fadeev ◽  
A.K. Khalturinsky ◽  
...  

This work is devoted to the creation of a telescopic transceiver system that establishes an atmospheric point-to-point communication channel, and to its use for quantum communication at sideband frequencies as the "last mile" for data protection in a geographically distributed data centre.

