Server Power Modeling for Run-time Energy Optimization of Cloud Computing Facilities

2014 ◽  
Vol 62 ◽  
pp. 401-410 ◽  
Author(s):  
Patricia Arroba ◽  
José L. Risco-Martín ◽  
Marina Zapater ◽  
José M. Moya ◽  
José L. Ayala ◽  
...  
2016 ◽  
Vol 2017 (1) ◽  
Author(s):  
Ismat Chaib Draa ◽  
Smail Niar ◽  
Jamel Tayeb ◽  
Emmanuelle Grislin ◽  
Mikael Desertot

2015 ◽  
Vol 733 ◽  
pp. 779-783 ◽  
Author(s):  
Lu Dai ◽  
Jian Hua Li

Resource allocation is a key technology of cloud computing. At present, most studies on resource allocation focus on improving overall performance by balancing the load of the data center. This paper designs an experimental platform for resource allocation algorithms, energy optimization, and performance analysis, with the aim of producing original research results and providing innovative ideas and a scientific basis for a resource allocation method based on an immune algorithm and energy optimization in cloud computing. This research is of significance for further study of resource allocation and energy optimization in cloud computing environments.
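The abstract describes the immune-algorithm-based allocator only at a high level. As a hedged illustration of what such an allocator could look like, the sketch below uses clonal selection to assign tasks to hosts while minimizing a simple linear power model; the task sizes, host capacities, power figures, and all function names are assumptions for illustration, not the experimental platform described in the paper.

```python
# Minimal sketch of an immune-inspired (clonal selection) resource allocator.
# Task sizes, host capacities, and the linear power model are illustrative
# assumptions, not the paper's experimental platform.
import random

TASKS = [2.0, 1.5, 3.0, 0.5, 2.5]          # CPU demand per task (arbitrary units)
HOSTS = [4.0, 4.0, 8.0]                    # CPU capacity per host
P_IDLE, P_PEAK = 100.0, 250.0              # watts per active host (assumed model)

def energy(assign):
    """Total power: idle cost for every active host plus load-proportional cost."""
    load = [0.0] * len(HOSTS)
    for task, host in zip(TASKS, assign):
        load[host] += task
    if any(l > c for l, c in zip(load, HOSTS)):
        return float("inf")                 # infeasible: a host is overloaded
    return sum(P_IDLE + (P_PEAK - P_IDLE) * l / c
               for l, c in zip(load, HOSTS) if l > 0)

def mutate(assign, rate):
    return [random.randrange(len(HOSTS)) if random.random() < rate else h
            for h in assign]

def clonal_selection(pop_size=20, clones=5, generations=200):
    pop = [[random.randrange(len(HOSTS)) for _ in TASKS] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=energy)                       # higher affinity = lower energy
        elite = pop[:pop_size // 2]
        offspring = [mutate(a, rate=0.1 * (i + 1)) # worse antibodies mutate more
                     for i, a in enumerate(elite) for _ in range(clones)]
        pop = sorted(elite + offspring, key=energy)[:pop_size]
    return pop[0], energy(pop[0])

best, watts = clonal_selection()
print("assignment:", best, "estimated power:", watts)
```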


Author(s):  
Dazhong Wu ◽  
Xi Liu ◽  
Steve Hebert ◽  
Wolfgang Gentzsch ◽  
Janis Terpenny

Cloud computing is an innovative computing paradigm that can potentially bridge the gap between increasing computing demands in computer aided engineering (CAE) applications and limited scalability, flexibility, and agility in traditional computing paradigms. In light of the benefits of cloud computing, high performance computing (HPC) in the cloud has the potential to enable users to not only accelerate computationally expensive CAE simulations (e.g., finite element analysis), but also to reduce costs by utilizing on-demand and scalable cloud computing resources. The objective of this research is to evaluate the performance of running a large finite element simulation in a public cloud. Specifically, an experiment is performed to identify individual and interactive effects of several factors (e.g., CPU core count, memory size, solver computational rate, and input/output rate) on run time using statistical methods. Our experimental results have shown that the performance of HPC in the cloud is sufficient for the application of a large finite element analysis, and that run time can be optimized by properly selecting a configuration of CPU, memory, and interconnect.
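The paper's factor analysis is summarized only in outline above. As a hedged sketch of how individual and two-way interaction effects on run time might be estimated, the snippet below fits an ordinary least-squares model over a coded 2^4 factorial design; the factor levels and run times are made-up placeholders, not the paper's measurements.

```python
# Sketch of estimating main and two-way interaction effects of HPC-in-the-cloud
# factors on run time via ordinary least squares. Coded factor levels and the
# run-time response are illustrative placeholders, not the paper's data.
import itertools
import numpy as np

factors = ["cores", "memory", "solver_rate", "io_rate"]
# 2^4 full-factorial design in coded units (-1 = low level, +1 = high level).
design = np.array(list(itertools.product([-1, 1], repeat=4)), dtype=float)
run_time = np.random.default_rng(0).normal(100, 5, len(design))  # placeholder response

# Build the model matrix: intercept, main effects, and all two-way interactions.
columns, names = [np.ones(len(design))], ["intercept"]
for i, f in enumerate(factors):
    columns.append(design[:, i]); names.append(f)
for i, j in itertools.combinations(range(4), 2):
    columns.append(design[:, i] * design[:, j])
    names.append(f"{factors[i]}:{factors[j]}")
X = np.column_stack(columns)

coef, *_ = np.linalg.lstsq(X, run_time, rcond=None)
for name, c in sorted(zip(names, coef), key=lambda t: -abs(t[1])):
    print(f"{name:>22s}  effect estimate: {c:+.2f}")
```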


Author(s):  
Matthew J. Walker ◽  
Stephan Diestelhorst ◽  
Andreas Hansson ◽  
Anup K. Das ◽  
Sheng Yang ◽  
...  
Keyword(s):  

2021 ◽  
Vol 19 (4) ◽  
Author(s):  
Amjad Ullah ◽  
Huseyin Dagdeviren ◽  
Resmi C. Ariyattu ◽  
James DesLauriers ◽  
Tamas Kiss ◽  
...  

Automated deployment and run-time management of microservices-based applications in cloud computing environments is relatively well studied, with several mature solutions. However, managing such applications and tasks in the cloud-to-edge continuum is far from trivial, with no robust, production-level solutions currently available. This paper presents our first attempt to extend an application-level cloud orchestration framework called MiCADO to utilise edge and fog worker nodes. The paper illustrates how MiCADO-Edge can automatically deploy complex sets of interconnected microservices in such multi-layered cloud-to-edge environments. Additionally, it shows how monitoring information can be collected from such services and how complex, user-defined run-time management policies can be enforced on application components running at any layer of the architecture. The implemented solution is demonstrated and evaluated using two realistic case studies from the areas of video processing and secure healthcare data analysis.
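MiCADO's actual policy language is not reproduced in the abstract. The following is a hedged sketch of the general idea of a user-defined run-time management policy driven by monitoring data; the policy structure, metric names, and thresholds are invented for illustration and do not reflect MiCADO-Edge's real syntax or API.

```python
# Hedged sketch of a user-defined run-time management policy of the kind the
# abstract describes: monitoring samples drive scaling decisions for services
# on cloud, fog, or edge worker nodes. Policy format, metric names, and
# thresholds are illustrative assumptions, not MiCADO's policy syntax.
from dataclasses import dataclass

@dataclass
class ScalingPolicy:
    metric: str            # monitored metric, e.g. "cpu_load" or "frame_latency_ms"
    scale_up_above: float
    scale_down_below: float
    min_replicas: int = 1
    max_replicas: int = 10

    def decide(self, current_value: float, replicas: int) -> int:
        """Return the desired replica count given the latest monitoring sample."""
        if current_value > self.scale_up_above and replicas < self.max_replicas:
            return replicas + 1
        if current_value < self.scale_down_below and replicas > self.min_replicas:
            return replicas - 1
        return replicas

# Example: a video-processing microservice on edge workers (values are made up).
policy = ScalingPolicy(metric="frame_latency_ms", scale_up_above=200, scale_down_below=50)
replicas = 2
for sample in [120, 250, 260, 40, 30]:          # simulated monitoring stream
    replicas = policy.decide(sample, replicas)
    print(f"{policy.metric}={sample:>3} -> replicas={replicas}")
```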


Algorithms ◽  
2019 ◽  
Vol 12 (2) ◽  
pp. 48 ◽  
Author(s):  
Ming Zhao ◽  
Ke Zhou

Mobile Edge Computing (MEC) is an innovative technique that provides cloud-computing capabilities near mobile devices at the edge of the network. Based on the MEC architecture, this paper proposes an ARIMA-BP-based Selective Offloading (ABSO) strategy, which minimizes the energy consumption of mobile devices while meeting delay requirements. In ABSO, we exploit an ARIMA-BP model to estimate the computation capacity of the edge cloud, and then design a Selective Offloading Algorithm to obtain the offloading strategy. Simulation results reveal that ABSO markedly decreases the energy consumption of mobile devices in comparison with other offloading methods.
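To make the selective-offloading idea concrete, the sketch below forecasts the edge cloud's available capacity and then offloads a task only if doing so saves device energy while still meeting the task's delay requirement. A simple moving-average forecaster stands in for the paper's ARIMA-BP model, and all energy and delay parameters are assumed values, not the paper's simulation settings.

```python
# Hedged sketch of selective offloading: forecast edge capacity, then offload a
# task only if it saves device energy and still meets the deadline. The moving
# average is a stand-in for ARIMA-BP; all parameters are illustrative.

def forecast_capacity(history, window=3):
    """Placeholder forecaster (moving average) standing in for ARIMA-BP."""
    recent = history[-window:]
    return sum(recent) / len(recent)

def should_offload(cycles, data_bits, deadline_s,
                   f_local=1e9, p_local=0.9,        # device CPU: Hz, watts
                   uplink_bps=5e6, p_tx=0.3,        # radio: bit/s, watts
                   edge_capacity_hz=4e9):
    """Return True if offloading saves energy and still meets the deadline."""
    t_local = cycles / f_local
    e_local = p_local * t_local
    t_offload = data_bits / uplink_bps + cycles / edge_capacity_hz
    e_offload = p_tx * (data_bits / uplink_bps)     # device only pays for transmission
    if t_offload > deadline_s and t_local > deadline_s:
        return t_offload < t_local                  # neither meets it: pick the faster
    if t_offload > deadline_s:
        return False
    if t_local > deadline_s:
        return True
    return e_offload < e_local

capacity_history = [3.5e9, 4.2e9, 3.9e9, 4.1e9]     # observed edge capacity (Hz)
cap = forecast_capacity(capacity_history)
print("offload?", should_offload(cycles=2e9, data_bits=4e6, deadline_s=1.5,
                                 edge_capacity_hz=cap))
```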


Author(s):  
Mikyung Kang ◽  
Dong-In Kang ◽  
Mira Yun ◽  
Gyung-Leen Park ◽  
Junghoon Lee
Keyword(s):  

Author(s):  
Yu Sun ◽  
Jules White ◽  
Jeff Gray ◽  
Aniruddha Gokhale

Cloud computing provides a platform that enables users to utilize computation, storage, and other computing resources on-demand. As the number of running nodes in the cloud increases, the potential points of failure and the complexity of recovering from error states grow correspondingly. Using the traditional cloud administrative interface to manually detect and recover from errors is tedious, time-consuming, and error-prone. This chapter presents an innovative approach to automate cloud error detection and recovery based on a run-time model that monitors and manages the running nodes in a cloud. When administrators identify and correct errors in the model, an inference engine is used to identify the specific state pattern in the model to which they were reacting, and to record their recovery actions. An error detection and recovery pattern can be generated from the inference and applied automatically whenever the same error occurs again.
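The chapter's inference engine is only described in outline here. As a hedged sketch of the recorded-recovery idea, the snippet below stores the observed error-state pattern together with the administrator's corrective actions and replays them when the same pattern recurs; the node-state encoding, pattern matching, and action names are assumptions for illustration, not the chapter's implementation.

```python
# Hedged sketch: when an administrator fixes an error in the run-time model,
# record the observed state pattern and the corrective actions, then replay
# those actions automatically on recurrence. The state encoding and actions
# below are illustrative assumptions, not the chapter's inference engine.

recovery_patterns = {}   # frozenset of (node, state) pairs -> list of actions

def snapshot(model):
    """Reduce the run-time model to a hashable pattern of abnormal node states."""
    return frozenset((node, state) for node, state in model.items() if state != "ok")

def record_recovery(model, actions):
    """Called after an administrator manually corrects an error."""
    recovery_patterns[snapshot(model)] = list(actions)

def auto_recover(model):
    """Replay recorded actions if the current error pattern has been seen before."""
    actions = recovery_patterns.get(snapshot(model))
    if actions is None:
        return None                      # unknown pattern: escalate to the admin
    for action in actions:
        print("applying:", action)       # in practice: call the cloud admin API
    return actions

# First occurrence: the admin fixes it by hand and the fix is recorded.
record_recovery({"node-1": "ok", "node-2": "disk_full"},
                ["purge_logs node-2", "restart_service node-2"])
# A later recurrence of the same pattern is handled automatically.
auto_recover({"node-1": "ok", "node-2": "disk_full"})
```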



