Cloud Computing, Data Sources and Data Centers

Author(s):  
Diego Galar Pascual ◽  
Pasquale Daponte ◽  
Uday Kumar
2021 ◽  
pp. 209660832110173
Author(s):  
Fang Chen ◽  
Zeeshan Shirazi ◽  
Lei Wang

As climate warming intensifies, the frequency and intensity of disasters are also increasing, posing challenges to global sustainable development. The concept of disaster risk reduction (DRR) provides strong impetus for reducing disaster risk and vulnerabilities by employing the scientific and technological developments of recent decades. However, there is a need to enhance the capacities of different communities to use emerging digital infrastructure, not only in promoting DRR but also in ensuring sustainable future development. Limited access to and availability of data are restricting comprehensive understanding of these challenges. In many countries, the key areas for capacity development include collecting information from alternative and emerging data sources and meaningfully integrating it with data from traditional sources. Software and data analysis are becoming widely accessible due to open-source initiatives, while cloud computing technologies and programmes such as CASEarth provide valuable resources for multisource data integration, contributing to information-driven policy and decision-support systems for DRR.


2014 ◽  
Vol 1008-1009 ◽  
pp. 1513-1516
Author(s):  
Hai Na Song ◽  
Xiao Qing Zhang ◽  
Zhong Tang He

The cloud computing environment is regarded as a kind of multi-tenant computing mode. With virtualization as a supporting technology, cloud computing consolidates multiple workloads onto one server through the packaging and separation of virtual machines. Addressing the contradiction between heterogeneous applications and a uniform shared resource pool, this paper analyzes the multidimensional resource scheduling problem using the idea of bin packing. We carry out example analyses of one-dimensional, two-dimensional, and three-dimensional resource scheduling. The results show that the resource utilization of cloud data centers improves greatly when resource scheduling is conducted after rationally reorganizing the heterogeneous demands.
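
To make the idea concrete, below is a minimal sketch of vector bin packing applied to VM-to-server scheduling, assuming each demand and each server capacity is a (CPU, memory, bandwidth) vector; the first-fit-decreasing heuristic and the unit capacities are illustrative choices, not the paper's exact algorithm.

```python
# A minimal sketch of multidimensional (vector) bin packing for
# VM-to-server scheduling. Demands and capacities are illustrative
# (cpu, memory, bandwidth) vectors, not values from the paper.

from typing import List, Tuple

Vector = Tuple[float, float, float]  # (cpu, memory, bandwidth)

def fits(used: Vector, demand: Vector, capacity: Vector) -> bool:
    """A demand fits if no dimension would exceed the server capacity."""
    return all(u + d <= c for u, d, c in zip(used, demand, capacity))

def first_fit_decreasing(demands: List[Vector], capacity: Vector) -> List[List[Vector]]:
    """Pack demands onto as few identical servers as possible.

    Sorting by the largest dimension first mimics the 'rational
    reorganization' of heterogeneous demands described in the abstract.
    """
    servers: List[List[Vector]] = []   # demands placed on each server
    usage: List[Vector] = []           # per-server resource usage
    for demand in sorted(demands, key=max, reverse=True):
        for i, used in enumerate(usage):
            if fits(used, demand, capacity):
                servers[i].append(demand)
                usage[i] = tuple(u + d for u, d in zip(used, demand))
                break
        else:  # no existing server can host it: open a new one
            servers.append([demand])
            usage.append(demand)
    return servers

if __name__ == "__main__":
    # Heterogeneous demands packed onto unit-capacity servers.
    demands = [(0.5, 0.1, 0.2), (0.2, 0.6, 0.1), (0.4, 0.3, 0.7), (0.1, 0.2, 0.1)]
    placement = first_fit_decreasing(demands, capacity=(1.0, 1.0, 1.0))
    print(f"{len(placement)} servers used")
```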


2016 ◽  
Vol 57 ◽  
pp. 421-464 ◽  
Author(s):  
Arnaud Malapert ◽  
Jean-Charles Régin ◽  
Mohamed Rezgui

We introduce an Embarrassingly Parallel Search (EPS) method for solving constraint problems in parallel, and we show that this method matches or even outperforms state-of-the-art algorithms on a number of problems using various computing infrastructures. EPS is a simple method in which a master decomposes the problem into many disjoint subproblems which are then solved independently by workers. Our approach has three advantages: it is an efficient method; it involves almost no communication or synchronization between workers; and it is easy to implement because the master and the workers rely on an underlying constraint solver without requiring any modification to it. This paper describes the method and its application to various constraint problems (satisfaction, enumeration, optimization). We show that our method can be adapted to different underlying solvers (Gecode, Choco2, OR-tools) on different computing infrastructures (multi-core, data centers, cloud computing). The experiments cover unsatisfiable, enumeration, and optimization problems; they do not cover first-solution search, whose high runtime variability makes the results hard to analyze. The same variability can be observed for optimization problems, but to a lesser extent because the optimality proof is required. EPS offers good average performance, and it matches or outperforms other available parallel implementations of Gecode as well as some solver portfolios. Moreover, we perform an in-depth analysis of the various factors that make this approach efficient, as well as the anomalies that can occur. Last, we show that the decomposition is a key component for efficiency and load balancing.
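
As an illustration of the EPS idea, here is a minimal sketch in which a master fixes the first few variables of a toy n-queens model to generate disjoint subproblems, and workers enumerate each subproblem independently with no inter-worker communication; the toy model stands in for the underlying solvers (Gecode, Choco2, OR-tools) used in the paper, and the decomposition depth is an illustrative choice.

```python
# A minimal sketch of Embarrassingly Parallel Search (EPS): a master
# splits a constraint problem into disjoint subproblems by fixing the
# first few variables, and workers solve each subproblem independently.

from itertools import product
from multiprocessing import Pool

N = 10          # board size for the toy n-queens model
DEPTH = 2       # variables fixed by the master; yields up to N**DEPTH subproblems

def consistent(queens):
    """No two queens share a column or a diagonal (rows are implicit)."""
    for i in range(len(queens)):
        for j in range(i + 1, len(queens)):
            if queens[i] == queens[j] or abs(queens[i] - queens[j]) == j - i:
                return False
    return True

def count_solutions(prefix):
    """Worker: exhaustively extend one subproblem; no communication needed."""
    if not consistent(prefix):
        return 0
    if len(prefix) == N:
        return 1
    return sum(count_solutions(prefix + (col,)) for col in range(N))

if __name__ == "__main__":
    # Master: enumerate disjoint subproblems, then farm them out to workers.
    subproblems = [p for p in product(range(N), repeat=DEPTH) if consistent(p)]
    with Pool() as pool:
        total = sum(pool.map(count_solutions, subproblems))
    print(f"{total} solutions to {N}-queens")  # 724 for N = 10
```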


2021 ◽  
Vol 12 (1) ◽  
pp. 74-83
Author(s):  
Manjunatha S. ◽  
Suresh L.

A data center is a cost-effective infrastructure for storing large volumes of data and hosting large-scale service applications. Cloud computing service providers are rapidly deploying data centers across the world, each with a huge number of servers and switches. These data centers consume significant amounts of energy, contributing to high operational costs; optimizing the energy consumption of servers and networks in data centers can therefore reduce those costs. In a data center, power consumption is mainly due to servers, networking devices, and cooling systems. An effective energy-saving strategy is to consolidate computation and communication onto a smaller number of servers and network devices and then power off as many of the unneeded ones as possible.
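
A minimal sketch of this consolidation strategy, assuming a simple per-server utilization model: load is migrated off lightly utilized servers onto busier ones with spare capacity, and any server left empty is powered off. The threshold and capacity values are illustrative, not from the chapter.

```python
# Greedy server consolidation sketch: empty the least-loaded servers
# by moving their load to busier servers with headroom, then power
# off whatever ends up idle. Thresholds are illustrative assumptions.

def consolidate(loads, capacity=1.0, idle_threshold=0.3):
    """loads: utilization per server; returns (new_loads, powered_off)."""
    loads = list(loads)
    powered_off = []
    # Try to empty the least-loaded servers first.
    for i in sorted(range(len(loads)), key=lambda k: loads[k]):
        if not 0 < loads[i] < idle_threshold:
            continue
        # Find the busiest server with room for this server's entire load.
        for j in sorted(range(len(loads)), key=lambda k: loads[k], reverse=True):
            if j != i and loads[j] + loads[i] <= capacity:
                loads[j] += loads[i]
                loads[i] = 0.0
                powered_off.append(i)
                break
    return loads, powered_off

if __name__ == "__main__":
    new_loads, off = consolidate([0.1, 0.7, 0.2, 0.5])
    print(new_loads, "servers powered off:", off)  # [0.0, 1.0, 0.0, 0.5], [0, 2]
```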


2021 ◽  
Vol 34 (1) ◽  
pp. 66-85
Author(s):  
Yiannis Verginadis ◽  
Dimitris Apostolou ◽  
Salman Taherizadeh ◽  
Ioannis Ledakis ◽  
Gregoris Mentzas ◽  
...  

Fog computing extends multi-cloud computing by enabling services or application functions to be hosted close to their data sources. To take advantage of the capabilities of fog computing, serverless and the function-as-a-service (FaaS) software engineering paradigms allow for the flexible deployment of applications on multi-cloud, fog, and edge resources. This article reviews prominent fog computing frameworks and discusses some of the challenges and requirements of FaaS-enabled applications. Moreover, it proposes a novel framework able to dynamically manage multi-cloud, fog, and edge resources and to deploy data-intensive applications developed using the FaaS paradigm. The proposed framework leverages the FaaS paradigm in a way that improves the average service response time of data-intensive applications by a factor of three regardless of the underlying multi-cloud, fog, and edge resource infrastructure.
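
A minimal sketch of the kind of placement decision such a framework must make, assuming a FaaS invocation can be routed to an edge, fog, or multi-cloud tier based on observed latency and load; the tiers, thresholds, and selection rule are illustrative assumptions, not the framework's actual policy.

```python
# Hypothetical tier-selection rule for a FaaS invocation: prefer the
# lowest-latency tier (closest to the data source) that still has
# capacity headroom. All values are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Tier:
    name: str
    rtt_ms: float        # measured round-trip time to the data source
    utilization: float   # current load, 0.0 .. 1.0

def choose_tier(tiers, max_utilization=0.8):
    """Pick the lowest-latency tier with headroom; fall back to any tier."""
    candidates = [t for t in tiers if t.utilization < max_utilization]
    return min(candidates or tiers, key=lambda t: t.rtt_ms)

if __name__ == "__main__":
    tiers = [Tier("edge", 5, 0.9), Tier("fog", 20, 0.4), Tier("cloud", 80, 0.2)]
    print(choose_tier(tiers).name)  # "fog": the edge tier is overloaded
```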


Author(s):  
Burak Kantarci ◽  
Hussein T. Mouftah

Cloud computing combines the advantages of several computing paradigms and introduces ubiquity in the provisioning of services such as software, platform, and infrastructure. Data centers, as the main hosts of cloud computing services, accommodate thousands of high-performance servers and high-capacity storage units. Offloading workloads from local resources increases the energy consumption of the transport network and the data centers, although it is advantageous in terms of the energy consumption of the end hosts. This chapter presents a detailed survey of the existing mechanisms that aim to design the Internet backbone with data centers for the energy-efficient delivery of cloud services. The survey is followed by a case study in which Mixed Integer Linear Programming (MILP)-based provisioning models and heuristics are used to guarantee either minimum-delay or maximum-power-saving cloud services, where high-performance data centers are assumed to be located at the core nodes of an IP-over-WDM network. The chapter concludes by summarizing the surveyed schemes in a taxonomy, including their pros and cons, followed by a discussion of the research challenges and opportunities.
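
To give a flavor of such provisioning models, below is a minimal MILP sketch using the open-source PuLP library: service demands are assigned to candidate data centers so that combined server and transport power is minimized, subject to capacity constraints. All node names, power figures, and capacities are illustrative assumptions rather than the chapter's model.

```python
# Minimal power-aware provisioning MILP sketch (illustrative data).
# pip install pulp

import pulp

demands = {"d1": 40, "d2": 25}                  # service demand sizes
centers = {"dcA": 60, "dcB": 60}                # data center capacities
server_pw = {"dcA": 1.0, "dcB": 1.2}            # server power per demand unit
transport_pw = {("d1", "dcA"): 0.2, ("d1", "dcB"): 0.5,
                ("d2", "dcA"): 0.4, ("d2", "dcB"): 0.1}

prob = pulp.LpProblem("power_aware_provisioning", pulp.LpMinimize)
x = pulp.LpVariable.dicts("assign", transport_pw.keys(), cat="Binary")

# Objective: server power plus transport power over the backbone.
prob += pulp.lpSum(x[d, c] * demands[d] * (server_pw[c] + transport_pw[d, c])
                   for d, c in transport_pw)

for d in demands:   # each demand is served by exactly one data center
    prob += pulp.lpSum(x[d, c] for c in centers) == 1
for c in centers:   # data center capacity constraint
    prob += pulp.lpSum(x[d, c] * demands[d] for d in demands) <= centers[c]

prob.solve(pulp.PULP_CBC_CMD(msg=False))
for (d, c), var in x.items():
    if var.value() == 1:
        print(f"{d} -> {c}")
```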


Author(s):  
Burak Kantarci ◽  
Hussein T. Mouftah

Cloud computing aims to migrate IT services to distant data centers in order to reduce the dependency of the services on limited local resources. It provides access to distant computing resources via Web services while the end user remains unaware of how the IT infrastructure is managed. Besides the novelties and advantages of cloud computing, the deployment of a large number of servers and data centers introduces the challenge of high energy consumption. Additionally, transporting IT services over the Internet backbone compounds the energy consumption problem of the backbone infrastructure. In this chapter, the authors cover energy-efficient cloud computing studies in the data center involving various aspects, such as the reduction of processing, storage, and data center network-related power consumption. They first provide a brief overview of the existing approaches to cool data centers, which can be grouped mainly into studies on virtualization techniques, energy-efficient data center network design schemes, and studies that monitor data center thermal activity with Wireless Sensor Networks (WSNs). The authors also present solutions that aim to reduce energy consumption in data centers by considering the communications aspects over the backbone of large-scale cloud systems.
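
One reason virtualization and consolidation save energy is that servers draw a substantial idle floor; a widely used linear power model (an assumption here, not a formula from the chapter) makes this explicit:

```python
# Linear server power model, commonly used in the energy-efficiency
# literature: power grows linearly with CPU utilization between an
# idle floor and the peak draw. The wattages are illustrative.

def server_power(utilization, p_idle=120.0, p_peak=250.0):
    """Watts drawn at a given CPU utilization in [0, 1]."""
    return p_idle + (p_peak - p_idle) * utilization

# Two half-loaded servers draw more than one fully loaded server plus
# one powered off, because each running server pays the idle floor:
print(2 * server_power(0.5))   # 370.0 W
print(server_power(1.0))       # 250.0 W
```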


Author(s):  
Federico Larumbe ◽  
Brunilde Sansò

This chapter addresses a set of optimization problems that arise in cloud computing regarding the location and resource allocation of the cloud computing entities: the data centers, servers, software components, and virtual machines. The first problem is the location of new data centers and the selection of current ones since those decisions have a major impact on the network efficiency, energy consumption, Capital Expenditures (CAPEX), Operational Expenditures (OPEX), and pollution. The chapter also addresses the Virtual Machine Placement Problem: which server should host which virtual machine. The number of servers used, the cost, and energy consumption depend strongly on those decisions. Network traffic between VMs and users, and between VMs themselves, is also an important factor in the Virtual Machine Placement Problem. The third problem presented in this chapter is the dynamic provisioning of VMs to clusters, or auto scaling, to minimize the cost and energy consumption while satisfying the Service Level Agreements (SLAs). This important feature of cloud computing requires predictive models that precisely anticipate workload dimensions. For each problem, the authors describe and analyze models that have been proposed in the literature and in the industry, explain advantages and disadvantages, and present challenging future research directions.
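
As a sketch of the third problem, here is a minimal auto-scaler that forecasts the next interval's workload with a moving average and provisions just enough VMs to meet it with an SLA headroom; the forecast model, per-VM capacity, and headroom factor are illustrative assumptions, not a model from the chapter.

```python
# Minimal predictive auto-scaling sketch: moving-average workload
# forecast, then provision ceil(forecast * headroom / vm_capacity) VMs.
# All parameter values are illustrative assumptions.

from collections import deque
from math import ceil

class AutoScaler:
    def __init__(self, vm_capacity=100.0, headroom=1.2, window=5):
        self.vm_capacity = vm_capacity   # requests/s one VM can serve
        self.headroom = headroom         # SLA safety margin
        self.history = deque(maxlen=window)

    def plan(self, observed_load):
        """Return the number of VMs to provision for the next interval."""
        self.history.append(observed_load)
        forecast = sum(self.history) / len(self.history)
        return max(1, ceil(forecast * self.headroom / self.vm_capacity))

if __name__ == "__main__":
    scaler = AutoScaler()
    for load in [180, 240, 310, 420, 390]:
        print(load, "->", scaler.plan(load), "VMs")
```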

