Type-aware task placement in geo-distributed data centers with low OPEX using data center resizing

Author(s):  
Lin Gu ◽  
Deze Zeng ◽  
Song Guo ◽  
Shui Yu
IEEE Access ◽  
2018 ◽  
Vol 6 ◽  
pp. 61948-61958 ◽  
Author(s):  
Ran Wang ◽  
Yiwen Lu ◽  
Kun Zhu ◽  
Jie Hao ◽  
Ping Wang ◽  
...  

2021 ◽  
Author(s):  
Philipp Kaestli ◽  
Daniel Armbruster ◽  
The EIDA Technical Committee

With the setup of EIDA (the European Integrated Data Archive, https://www.orfeus-eu.org/data/eida/) in the framework of ORFEUS, and the implementation of FDSN-standardized web services, seismic waveform data and instrumentation metadata of most seismic networks and data centers in Europe became accessible in a homogeneous way. EIDA has augmented this with the WFcatalog service for waveform quality metadata, and a routing service to find out which data center offers data of which network, region, and type. However, while a distributed data archive has clear advantages for maintenance and quality control of the holdings, it complicates the life of researchers who wish to collect data archived across different data centers. To tackle this, EIDA has implemented the "federator" as a one-stop transparent gateway service to access the entire data holdings of EIDA.

To its users the federator acts just like a standard FDSN dataselect, station, or EIDA WFcatalog service, except that it can (thanks to a fully qualified internal routing cache) directly answer data requests on virtual networks.

Technically, the federator fulfills a user request by decomposing it into single stream epoch requests targeted at individual data centers, collecting the partial responses, and re-assembling them into a single result.

This implementation has several technical advantages:

- It avoids the response size limitations of EIDA member services, leaving only those imposed by the assembly cache space of the federator instance itself.
- It allows easy merging of partial responses by request sorting and concatenation, reducing the need to interpret them. This lowers the computational load of the federator and allows high throughput of parallel user requests.
- It reduces the variability of requests to end member services. Thus, the federator can implement a reverse loopback cache, protecting end node services from delivering redundant information and reducing their load.
- As partial results arrive quickly and in small subunits, they can be streamed to the user more or less continuously, avoiding both service timeouts and throughput bottlenecks.

The advantage of one-stop data access to the entire EIDA holdings still comes with some limitations and shortcomings. Requests that ultimately map to a single data center can be slower when performed through the federator than when sent to that data center directly. FDSN-defined standard error codes sent by end member services have limited utility, as each refers to only a part of the request. Finally, the federator currently does not provide access to restricted data.

Nevertheless, we believe that one-stop data access compensates for these shortcomings in many use cases.

Further documentation of the service is available from ORFEUS at http://www.orfeus-eu.org/data/eida/nodes/FEDERATOR/
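For illustration, the decomposition-and-concatenation strategy can be sketched in a few lines of Python. The routing table and data center URLs below are hypothetical placeholders (the real federator resolves routes via its internal routing cache), and only standard FDSN dataselect query parameters are assumed:

```python
# Minimal sketch of federator-style request decomposition (illustrative only).
# The routing table and data center URLs are hypothetical placeholders.
import requests

# Hypothetical routing cache: stream -> hosting data center.
ROUTES = {
    ("CH", "DAVOX"): "https://datacenter-a.example.org",
    ("GR", "FUR"):   "https://datacenter-b.example.org",
}

def federate_dataselect(streams, start, end):
    """Decompose a multi-stream request into per-stream requests against
    single data centers, then concatenate the miniSEED partial results."""
    chunks = []
    for net, sta in streams:
        base = ROUTES[(net, sta)]  # lookup in the internal routing cache
        resp = requests.get(
            f"{base}/fdsnws/dataselect/1/query",
            params={"net": net, "sta": sta, "start": start, "end": end},
            timeout=60,
        )
        if resp.status_code == 200:  # 204 would mean no data for this epoch
            chunks.append(resp.content)
    # miniSEED records are self-contained, so partial responses can be
    # merged by simple concatenation, without interpreting their content.
    return b"".join(chunks)

payload = federate_dataselect([("CH", "DAVOX"), ("GR", "FUR")],
                              "2020-01-01T00:00:00", "2020-01-01T01:00:00")
```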


2020 ◽  
Author(s):  
Peter Baumann

Datacubes form an accepted cornerstone for analysis- (and visualization-) ready spatio-temporal data offerings. Beyond the multi-dimensional data structure, the paradigm also suggests rich services, abstracting away from the intractable zillions of files and products: actionable datacubes, as established by Array Databases, enable users to ask "any query, any time" without programming. The principle of location-transparent federations establishes a single, coherent information space.

The EarthServer federation is a large, growing data center network offering Petabytes of a critical variety, such as radar and optical satellite data, atmospheric data, elevation data, and thematic cubes like global sea ice. Around CODE-DE and the DIASs, an ecosystem of data has been established that is available to users as a single pool, in particular for efficient distributed data fusion irrespective of data location.

In our talk we present the technology, services, and governance of this unique intercontinental line-up of data centers. A live demo will show distributed datacube fusion.
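As a rough illustration of such "any query, any time" access, the sketch below submits an OGC WCPS query over HTTP, the query interface offered by Array Database engines such as rasdaman that power EarthServer. The endpoint URL and coverage name are assumptions for illustration, not actual federation resources:

```python
# Sketch of a datacube request using OGC WCPS, issued without any programming
# against the cube's internal file layout. Endpoint and coverage are assumed.
import requests

ENDPOINT = "https://datacube.example.org/rasdaman/ows"  # hypothetical

# Extract a one-year time series at a single point from a (hypothetical)
# temperature datacube, encoded as CSV:
wcps = """
for $c in (AvgLandTemp)
return encode($c[Lat(53.08), Long(8.80), ansi("2014-01":"2014-12")], "csv")
"""

resp = requests.get(ENDPOINT, params={
    "service": "WCS", "version": "2.0.1",
    "request": "ProcessCoverages", "query": wcps,
})
print(resp.text)  # twelve monthly values, regardless of where the cube lives
```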


2019 ◽  
Vol 8 (4) ◽  
pp. 6594-6597

This work presents a multi-objective approach for planning energy utilization in data centers, considering both conventional and renewable energy sources. Cloud computing is a developing technology that offers services such as IaaS, SaaS, and PaaS, providing computing resources through virtualization over a data network. Data centers consume huge amounts of electrical energy and consequently release large amounts of carbon dioxide. The foremost critical challenge in cloud computing is therefore to realize green cloud computing by optimizing energy utilization, lowering the carbon footprint while minimizing the operating cost. Renewable energy produced on-site is highly variable and unpredictable, yet relying on huge amounts of single-sourced brown energy is undesirable; we therefore propose a genetic algorithm that evolves practical schedules for using renewable energy.
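The abstract does not give the evolutionary algorithm itself; the toy sketch below shows, under stated assumptions (made-up hourly supply figures, a simple blend-and-mutate scheme), how a genetic algorithm can shape a deferrable-workload schedule to track on-site renewable supply and minimize brown-energy draw:

```python
# Toy genetic algorithm in the spirit of the approach above: schedule a fixed
# amount of deferrable workload across hourly slots so that consumption tracks
# the variable on-site renewable supply. All figures are illustrative.
import random

GREEN_KW = [10, 30, 60, 80, 50, 20]   # hypothetical hourly renewable supply
TOTAL_LOAD = 180                      # kWh of workload to place
SLOTS = len(GREEN_KW)

def fitness(plan):
    # Brown energy is drawn whenever a slot's load exceeds its green supply.
    brown = sum(max(load - g, 0) for load, g in zip(plan, GREEN_KW))
    return -brown                     # less brown energy = fitter plan

def random_plan():
    weights = [random.random() for _ in range(SLOTS)]
    total = sum(weights)
    return [TOTAL_LOAD * w / total for w in weights]

def crossover(a, b):
    child = [(x + y) / 2 for x, y in zip(a, b)]  # blend, then renormalize
    scale = TOTAL_LOAD / sum(child)
    return [x * scale for x in child]

def mutate(plan, rate=0.1):
    plan = [max(0.0, x + random.uniform(-5, 5)) if random.random() < rate else x
            for x in plan]
    scale = TOTAL_LOAD / sum(plan)    # keep total workload constant
    return [x * scale for x in plan]

population = [random_plan() for _ in range(50)]
for _ in range(200):                  # evolve for a fixed number of generations
    population.sort(key=fitness, reverse=True)
    elite = population[:10]
    population = elite + [mutate(crossover(random.choice(elite),
                                           random.choice(elite)))
                          for _ in range(40)]

best = max(population, key=fitness)
print([round(x, 1) for x in best], "brown kWh:", -fitness(best))
```

As expected, the evolved plan concentrates load in the high-supply midday slots.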


2015 ◽  
Vol 64 (7) ◽  
pp. 2049-2059 ◽  
Author(s):  
Lin Gu ◽  
Deze Zeng ◽  
Ahmed Barnawi ◽  
Song Guo ◽  
Ivan Stojmenovic

2018 ◽  
Vol 7 (3.34) ◽  
pp. 141
Author(s):  
D Ramya ◽  
J Deepa ◽  
P N.Karthikayan

A geographically distributed data center ensures both globalization of data and security for organizations, and the principles of disaster recovery are also taken into consideration. These aspects create business opportunities for companies that own many sites and for cloud infrastructures with multiple owners. The data centers store very critical and confidential documents that multiple organizations share in the cloud infrastructure. Previously, separate servers with different operating systems and software applications were used; as this was difficult to maintain, servers are now consolidated, which allows sharing of resources at low maintenance cost [7]. The availability of documents should be increased and downtime reduced, so workload management becomes challenging among geographically distributed data centers. In this paper we focus on the different approaches used for workload management in geo-distributed data centers; the algorithms used and the challenges involved in each approach are discussed.


Sensors ◽  
2021 ◽  
Vol 21 (8) ◽  
pp. 2879
Author(s):  
Marcel Antal ◽  
Andrei-Alexandru Cristea ◽  
Victor-Alexandru Pădurean ◽  
Tudor Cioara ◽  
Ionut Anghel ◽  
...  

Data centers consume large amounts of energy to execute their computational workload and generate heat that is mostly wasted. In this paper, we address this problem by considering heat reuse in the case of a distributed data center that features IT equipment (i.e., servers) installed in residential homes to be used as a primary source of heat. We propose a workload scheduling solution for distributed data centers based on a constraint satisfaction model to optimally allocate workload on servers to reach and maintain the desired home temperature setpoint by reusing residual heat. We have defined two models to correlate the heat demand with the amount of workload to be executed by the servers: a mathematical model derived from thermodynamic laws and calibrated with monitored data, and a machine learning model able to predict the amount of workload a server must execute to reach a desired ambient temperature setpoint. The proposed solution was validated using monitored data from an operational distributed data center. The mathematical model of server heat and power demand achieves a correlation accuracy of 11.98%, while among the machine learning models the best correlation accuracy of 4.74% is obtained by a Gradient Boosting Regressor. Our solution also manages to distribute the workload so that the temperature setpoint is met in a reasonable time, while the server power demand accurately follows the heat demand.
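As a rough illustration of the machine-learning correlation model, the sketch below trains scikit-learn's GradientBoostingRegressor to predict the workload needed to reach a temperature setpoint. The features and the synthetic ground-truth relation are assumptions for illustration, not the paper's monitored dataset:

```python
# Sketch of a workload-from-setpoint regression model using gradient boosting.
# Data is synthetic: more workload is needed when the gap to the setpoint is
# large and the home loses more heat to a cold outside.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
setpoint = rng.uniform(18, 24, n)   # desired room temperature (deg C)
outdoor  = rng.uniform(-5, 15, n)   # outdoor temperature (deg C)
current  = rng.uniform(15, 23, n)   # current room temperature (deg C)
workload = (12 * (setpoint - current) + 0.8 * (setpoint - outdoor)
            + rng.normal(0, 1.5, n)).clip(min=0)   # assumed relation + noise

X = np.column_stack([setpoint, outdoor, current])
X_train, X_test, y_train, y_test = train_test_split(X, workload, random_state=0)

model = GradientBoostingRegressor(n_estimators=300, max_depth=3,
                                  learning_rate=0.05, random_state=0)
model.fit(X_train, y_train)
print("R^2 on held-out data:", model.score(X_test, y_test))
```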


2020 ◽  
Author(s):  
Rodrigo A. C. Da Silva ◽  
Nelson L. S. Da Fonseca

This paper summarizes the dissertation "Energy-aware load balancing in distributed data centers", which proposed two new algorithms for minimizing energy consumption in cloud data centers. Both algorithms consider hierarchical data center network topologies and requests for the allocation of groups of virtual machines (VMs). The Topology-aware Virtual Machine Placement (TAVMP) algorithm deals with the placement of virtual machines in a single data center. It reduces the blocking of requests while maintaining acceptable levels of energy consumption. The Topology-aware Virtual Machine Selection (TAVMS) algorithm chooses sets of VM groups for migration between different data centers. Its use yields significant overall energy savings.
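The dissertation's algorithms are only summarized here; the following sketch shows a simplified greedy placement in the same topology-aware spirit (prefer a single host, then a single rack, so fewer network elements and spare hosts stay powered on). It is an illustrative stand-in, not the TAVMP algorithm itself:

```python
# Simplified topology-aware placement of a VM group: keep the group inside
# the smallest subtree (host, then rack) that can hold it. Illustrative only.
from typing import Dict

def place_group(vm_slots_needed: int,
                racks: Dict[str, Dict[str, int]]) -> Dict[str, int]:
    """racks maps rack -> {host: free VM slots}. Returns host -> slots used,
    or an empty dict if the request is blocked."""
    # Prefer a single host (no extra network elements involved).
    for hosts in racks.values():
        for host, free in hosts.items():
            if free >= vm_slots_needed:
                return {host: vm_slots_needed}
    # Otherwise prefer a single rack (only one top-of-rack switch involved).
    for hosts in racks.values():
        if sum(hosts.values()) >= vm_slots_needed:
            placement, remaining = {}, vm_slots_needed
            # Fill fullest hosts first so spare hosts can stay powered down.
            for host, free in sorted(hosts.items(), key=lambda h: h[1]):
                take = min(free, remaining)
                if take:
                    placement[host] = take
                    remaining -= take
                if remaining == 0:
                    return placement
    return {}  # blocked: no single host or rack can accommodate the group

racks = {"rack1": {"h1": 2, "h2": 3}, "rack2": {"h3": 6}}
print(place_group(4, racks))  # fits on a single host in rack2: {'h3': 4}
```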


Author(s):  
Angelina Rybakova

A data center is a specialized building or room where a company, organization, or government agency places information and network equipment and connects it to the network. Data centers solve the owner's strategic information and communication tasks. The constant increase in the operating costs of data processing centers is driving innovation to improve their efficiency. Today, one of the newest ways to improve the design and functionality of data centers is to integrate building information modeling. Comparing simulation results and calculated values with the existing performance indicators of a data center helps to quickly identify points of failure or disruption, and to define a procedure for intervening in operations in order to improve efficiency and ensure smooth running. Integrating information modeling technologies into the design process increases the efficiency of both the overall design of the facility and the special design indicators of data processing centers. This article describes the basics of integrating building information modeling with data processing centers, the advantages of information modeling approaches, the role of information modeling technologies in developing and improving methods for designing data processing centers, and ways to implement an information model of a data center. The author substantiates the necessity of developing an information model of the data center and suggests possible implementation tools. Beyond the advantages of the model at the design stage, the author also highlights the possibilities of using the data at the subsequent installation and operation stages.

