High Performance Energy-Aware Cloud Computing: A Scope of Future Computing

This publication discusses state-of-the-art strategies in high-performance energy-aware cloud (HPEAC) computing, in particular the identification and classification of systems and devices, optimization methodologies, and energy/power control techniques. System types include single machines, clusters, networks, and clouds, while device types include CPUs, GPUs, multiprocessors, and hybrid systems. Optimization objectives incorporate various combinations of metrics such as execution time, energy consumption, and temperature, with consideration of limits on power/energy consumption. Control measures usually involve scheduling policies, frequency-based policies (DVFS, DFS, DCT), programmatic APIs for limiting power consumption (such as Intel RAPL and NVIDIA NVML), application standardization, and hybrid techniques. We address energy/power management software and APIs, as well as methods and environments for forecasting and/or simulating power/energy consumption in modern HPEAC systems. Finally, programming examples are discussed, i.e., the programs and tests used in specific works. Based on our study, we point out several areas and their significant issues related to tools and technologies important for handling energy-aware computations in HPEAC computing environments.
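As a concrete illustration of the power/energy measurement APIs named above, the following is a minimal sketch of reading the Intel RAPL package-energy counter through the Linux powercap sysfs interface around a code region; the sysfs path (intel-rapl:0), the single-wraparound handling, and the measure_energy helper are illustrative assumptions rather than part of the surveyed works.

```python
# Minimal sketch: measure CPU package energy for a code region via the Linux
# powercap/RAPL sysfs interface. Assumes a Linux host with the intel_rapl
# driver loaded and read access to /sys/class/powercap/intel-rapl:0
# (package domain 0); paths and permissions vary between systems.
import time

RAPL_DIR = "/sys/class/powercap/intel-rapl:0"

def _read_uj(name):
    with open(f"{RAPL_DIR}/{name}") as f:
        return int(f.read().strip())

def measure_energy(fn, *args, **kwargs):
    """Return (result, joules, seconds) for one call of fn."""
    max_range = _read_uj("max_energy_range_uj")   # counter wraps at this value
    e0, t0 = _read_uj("energy_uj"), time.time()
    result = fn(*args, **kwargs)
    e1, t1 = _read_uj("energy_uj"), time.time()
    delta_uj = (e1 - e0) % max_range              # tolerate a single wraparound
    return result, delta_uj / 1e6, t1 - t0

if __name__ == "__main__":
    _, joules, secs = measure_energy(sum, range(10_000_000))
    print(f"energy: {joules:.3f} J, time: {secs:.3f} s, avg power: {joules/secs:.1f} W")
```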

2018
Vol 28 (02)
pp. 1950029
Author(s):
Tiantian Li
Tianyu Zhang
Ge Yu
Yichuan Zhang
Jie Song

Fluid scheduling allows tasks to be allocated fractional processing capacity, which significantly improves schedulability. For dual-criticality systems (DCS), dual-rate fluid-based scheduling has been widely studied, e.g., the state-of-the-art approaches mixed-criticality fluid scheduling (MCF) and MC-Sort. However, most existing works on DCS either focus only on schedulability analysis or minimize energy consumption while treating leakage power as a constant. To address this, this paper considers the effect of temperature on leakage power and proposes a thermal- and power-aware fluid scheduling strategy, referred to as thermal and energy aware (TA)-MCF, which minimizes both energy consumption and temperature while ensuring a schedulability ratio comparable to MCF and MC-Sort. Extensive experiments validate the efficiency of TA-MCF.
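The abstract does not give TA-MCF's power model, but the point about temperature-dependent leakage can be illustrated with a common simplification: dynamic power driven by the fluid execution rate plus a leakage term that grows linearly with temperature. All function names and coefficients below (c_eff, alpha, the 45 C reference) are hypothetical.

```python
# Illustrative power model: dynamic power depends on the fluid execution
# rate, while leakage power grows with temperature instead of being constant.
# The linear leakage approximation and all coefficients are assumptions for
# illustration, not the exact model used by TA-MCF.

def dynamic_power(rate, c_eff=2.0):
    # P_dyn ~ c_eff * f^3 under the usual V ~ f assumption; 'rate' is the
    # normalized fluid execution rate in [0, 1].
    return c_eff * rate ** 3

def leakage_power(temp_c, p_leak_ref=0.5, ref_temp_c=45.0, alpha=0.02):
    # Linear approximation: leakage rises by ~alpha per degree above the
    # reference temperature (a common simplification of the exponential law).
    return p_leak_ref * (1.0 + alpha * (temp_c - ref_temp_c))

def energy(rate, temp_c, interval_s):
    """Energy spent over a scheduling interval at a given rate and temperature."""
    return (dynamic_power(rate) + leakage_power(temp_c)) * interval_s

if __name__ == "__main__":
    # Treating leakage as constant (45 C) vs. temperature-aware (75 C):
    print(energy(0.6, 45.0, 10.0), energy(0.6, 75.0, 10.0))
```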


Author(s):  
Yangping Yu
Yulei Xie
Ling Ji
Jinbo Zhang
Yanpeng Cai
...  

In this study, a new concept concerning the comprehensive characteristics of water resources utilization is proposed as a risk index within a water allocation management model, to explore the tolerance of unbalanced allocation under the water-energy nexus. The model is integrated with interval two-stage stochastic programming to reflect system uncertainties associated with industrial production features and the decision-making process. With respect to the water-energy nexus, the energy considered is mainly the consumption intensity of water purification and transportation from different water sources. The developed model is applied to industrial water resources allocation management in Henan province, China. Multiple scenarios with different energy consumption controls and comprehensive risk levels are simulated to obtain a reasonable trade-off among system profit, comprehensive risk, and energy consumption. The results indicate that strict comprehensive risk management or energy consumption control measures could damage system benefit by decreasing the flexibility of industrial water resources distribution, whereas preliminary energy consumption or comprehensive risk control would help moderate the conflict between industrial sectors and water resources and accelerate industrial structure transformation in the future.
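For readers unfamiliar with the method, a generic interval two-stage stochastic program of the kind the model builds on can be written as follows; the symbols are illustrative and do not reproduce the paper's exact formulation.

```latex
% Generic interval two-stage stochastic program (ITSP); illustrative notation.
% x_j^\pm: first-stage allocation targets, y_{jh}^\pm: second-stage shortages
% under scenario h with probability p_h; c, d, a, b are interval coefficients
% (benefit, penalty, technology, resource availability).
\begin{align*}
\max\; f^{\pm} &= \sum_{j=1}^{n} c_j^{\pm} x_j^{\pm}
                 - \sum_{j=1}^{n}\sum_{h=1}^{H} p_h\, d_j^{\pm} y_{jh}^{\pm} \\
\text{s.t.}\quad
 &\sum_{j=1}^{n} a_{ij}^{\pm}\,(x_j^{\pm} - y_{jh}^{\pm}) \le b_{ih}^{\pm},
   && i = 1,\dots,m,\; h = 1,\dots,H, \\
 &0 \le y_{jh}^{\pm} \le x_j^{\pm} \le x_{j,\max},
   && j = 1,\dots,n,\; h = 1,\dots,H.
\end{align*}
```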


2019
Vol 2019
pp. 1-19
Author(s):
Pawel Czarnul
Jerzy Proficz
Adam Krzywaniak

The paper presents the state of the art of energy-aware high-performance computing (HPC), in particular identification and classification of approaches by system and device types, optimization metrics, and energy/power control methods. System types include single devices, clusters, grids, and clouds, while considered device types include CPUs, GPUs, multiprocessors, and hybrid systems. Optimization goals include various combinations of metrics such as execution time, energy consumption, and temperature with consideration of imposed power limits. Control methods include scheduling, DVFS/DFS/DCT, power capping with programmatic APIs such as Intel RAPL and NVIDIA NVML, as well as application optimizations and hybrid methods. We discuss tools and APIs for energy/power management as well as tools and environments for prediction and/or simulation of energy/power consumption in modern HPC systems. Finally, programming examples, i.e., applications and benchmarks used in particular works, are discussed. Based on our review, we identify a set of open areas and important up-to-date problems concerning methods and tools for modern HPC systems allowing energy-aware processing.
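As an example of the programmatic power-capping APIs covered by the survey, the sketch below reads GPU power draw and applies a power limit through NVIDIA NVML using the pynvml bindings; the 80% cap is an arbitrary illustrative choice, and setting limits typically requires administrative privileges.

```python
# Minimal sketch of GPU power monitoring and power capping with NVIDIA NVML
# via the pynvml bindings. The 80% target is illustrative only.
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)

# Current draw and the allowed capping range, all reported in milliwatts.
power_mw = pynvml.nvmlDeviceGetPowerUsage(handle)
min_mw, max_mw = pynvml.nvmlDeviceGetPowerManagementLimitConstraints(handle)
print(f"current draw: {power_mw / 1000:.1f} W, cap range: "
      f"{min_mw / 1000:.0f}-{max_mw / 1000:.0f} W")

# Apply a cap at 80% of the maximum supported limit (clamped to the minimum);
# this call usually needs root privileges.
target_mw = max(min_mw, int(0.8 * max_mw))
pynvml.nvmlDeviceSetPowerManagementLimit(handle, target_mw)

pynvml.nvmlShutdown()
```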


2017
Vol 13 (8)
pp. 155014771772671
Author(s):  
Xu Liu
Zhongbao Zhang
Junning Li
Sen Su

Virtual network embedding has received a lot of attention from researchers. In this problem, a sequence of virtual networks must be mapped onto a physical network, where the virtual networks typically have topology, node, and link constraints. Prior studies mainly focus on designing solutions that maximize revenue by accepting more virtual networks while ignoring the energy cost of the physical network. In this article, to bridge this gap, we design a heuristic energy-aware virtual network embedding algorithm, called EA-VNE-C, which coordinates dynamic electricity prices and energy consumption to further optimize the energy cost. Extensive simulations demonstrate that this algorithm reduces the energy cost by up to 14% over the state-of-the-art algorithm while maintaining similar revenue.
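A minimal sketch of the underlying idea, coordinating a dynamic electricity price with an energy model during node mapping, is given below; the linear server power model, the field names, and the greedy rule are illustrative assumptions and not the EA-VNE-C algorithm itself.

```python
# Simplified sketch of price- and energy-aware VNE node mapping: place each
# virtual node on the substrate node that adds the least monetary energy cost
# under a linear server power model and the current electricity price.
# Illustrative only, not the EA-VNE-C algorithm.

def added_power(node, cpu_demand):
    # The idle cost is paid only when this placement switches the node on.
    idle = node["p_idle"] if node["used_cpu"] == 0 else 0.0
    return idle + node["p_per_cpu"] * cpu_demand

def map_virtual_nodes(vnodes, substrate, price_per_kwh, duration_h):
    mapping = {}
    for vn in vnodes:                        # vn = {"id": ..., "cpu": ...}
        candidates = [n for n in substrate
                      if n["cpu_cap"] - n["used_cpu"] >= vn["cpu"]]
        if not candidates:
            return None                      # embedding request rejected
        best = min(candidates,
                   key=lambda n: price_per_kwh
                                 * added_power(n, vn["cpu"]) / 1000.0
                                 * duration_h)
        best["used_cpu"] += vn["cpu"]
        mapping[vn["id"]] = best["id"]
    return mapping
```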


Author(s):  
Juan P. Silva
Ernesto Dufrechou
Pablo Ezzatti
Enrique S. Quintana-Ortí
Alfredo Remón
...  

The high-performance computing community has traditionally focused solely on the reduction of execution time, though in recent years the optimization of energy consumption has become a main issue. A reduction of energy usage without a degradation of performance requires the adoption of energy-efficient hardware platforms accompanied by the development of energy-aware algorithms and computational kernels. The solution of linear systems is a key operation for many scientific and engineering problems. Its relevance has motivated an important amount of work, and consequently, it is possible to find high-performance solvers for a wide variety of hardware platforms. In this work, we aim to develop a high-performance and energy-efficient linear system solver. In particular, we develop two solvers for a low-power CPU-GPU platform, the NVIDIA Jetson TK1. These solvers implement the Gauss-Huard algorithm, yielding an efficient usage of the target hardware as well as efficient memory access. The experimental evaluation shows that the novel proposal reports important savings in both time and energy consumption when compared with the state-of-the-art solvers for the platform.
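For reference, the Gauss-Huard elimination that both solvers implement can be sketched as the following sequential NumPy version; it omits the column pivoting and the CPU-GPU blocking that a robust, high-performance implementation such as the ones described here would add.

```python
import numpy as np

def gauss_huard_solve(A, b):
    """Solve A x = b with Gauss-Huard column-sweep elimination (no pivoting)."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float).reshape(-1, 1)
    n = A.shape[0]
    M = np.hstack([A, b])                      # augmented system [A | b]
    for k in range(n):
        if k > 0:
            # Zero columns 0..k-1 of row k using the already processed rows.
            M[k, k:] -= M[k, :k] @ M[:k, k:]
            M[k, :k] = 0.0
        # Normalize row k so the diagonal entry becomes 1.
        M[k, k:] /= M[k, k]
        if k > 0:
            # Annihilate column k above the diagonal using row k.
            M[:k, k + 1:] -= np.outer(M[:k, k], M[k, k + 1:])
            M[:k, k] = 0.0
    return M[:, n]                             # A has been reduced to I

if __name__ == "__main__":
    print(gauss_huard_solve([[2.0, 1.0], [1.0, 3.0]], [3.0, 4.0]))  # -> [1. 1.]
```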


Author(s):  
Mahendra Kumar Gourisaria
S. S. Patra
P. M. Khilar

Cloud computing is an emerging field of computation. As data centers consume large amounts of power, system overheads increase and carbon dioxide emissions rise drastically. The main aim is to maximize resource utilization while minimizing power consumption. However, maximal usage of resources does not necessarily mean energy is used well: idle resources also consume a significant amount of energy, so the number of idle resources must be kept to a minimum. Current studies have shown that the power consumption due to unused computing resources is nearly 1 to 20%. Therefore, unused resources are assigned tasks to utilize the otherwise idle periods. In the present paper, an energy-saving task consolidation approach is suggested that saves energy by minimizing the number of idle resources in a cloud computing environment. Far-reaching experiments have been carried out to quantify the performance of the proposed algorithm, and it has also been compared with the FCFSMaxUtil and Energy aware Task Consolidation (ETC) algorithms. The outcomes show that the suggested algorithm surpasses FCFSMaxUtil and ETC in terms of CPU utilization and energy consumption.
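To make the consolidation idea concrete, the sketch below packs tasks onto already-active VMs and wakes an idle VM only when nothing fits, keeping idle resources to a minimum; it is a simplified heuristic in the spirit of ETC, not the exact algorithm proposed in the paper.

```python
# Simplified energy-aware task consolidation: prefer already-active VMs and
# the tightest fit, waking an idle VM only when no active VM has capacity.
# Illustrative heuristic, not the paper's algorithm.

def consolidate(tasks, vms):
    """tasks: list of CPU demands in [0, 1]; vms: list of dicts with 'util'."""
    placement = []
    for demand in sorted(tasks, reverse=True):        # largest tasks first
        active = [v for v in vms if v["util"] > 0 and v["util"] + demand <= 1.0]
        if active:
            # Prefer the active VM that ends up most utilized (tightest fit).
            target = max(active, key=lambda v: v["util"] + demand)
        else:
            idle = [v for v in vms if v["util"] == 0]
            if not idle:
                raise RuntimeError("no capacity left for task")
            target = idle[0]                          # power on one idle VM
        target["util"] += demand
        placement.append((demand, id(target)))
    return placement
```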


2021
pp. 1-22
Author(s):  
Golnaz Berenjian
Homayun Motameni
Mehdi Golsorkhtabaramiri
Ali Ebrahimnejad

Regarding the ever-increasing development of data and computational centers driven by high-performance computing systems, energy consumption has always been of great importance because of CO2 emissions that can have adverse effects on the environment. In recent years, notions such as “energy” and “Green Computing” have played crucial roles in scheduling parallel tasks in datacenters. Duplication and clustering strategies, as well as Dynamic Voltage and Frequency Scaling (DVFS) techniques, have focused on reducing energy consumption and optimizing performance parameters. Concerning the scheduling of a Directed Acyclic Graph (DAG) on datacenter processors equipped with DVFS, this paper proposes an energy- and time-aware algorithm based on dual-phase scheduling, called EATSDCDD, which combines duplication and clustering strategies with the distribution of slack time among the tasks of a cluster. DVFS and control procedures in the proposed green system are mapped into Petri net-based models, which contribute to designing a multiple decision process. In the first phase, we use an intelligent combined approach of duplication and clustering strategies to run the immediate tasks of the DAG, while monitoring throughput and concentrating on reducing makespan and the energy consumed in the processors. The main idea of the proposed algorithm is to achieve a maximum reduction in energy consumption in the second phase; to this end, the slack time is distributed among non-critical dependent tasks. Additionally, we cover the issues of negotiation between consumers and service providers at the rate of μ based on a Green Service Level Agreement (GSLA) to achieve higher energy savings. Eventually, a set of data established for conducting the examinations and different parameters of the constructed random DAGs are assessed to examine the efficiency of our proposed algorithm. The obtained results confirm that our algorithm outperforms the other algorithms considered in this study.
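The second-phase idea, stretching non-critical tasks into their slack by lowering the DVFS frequency, can be sketched as follows; the cubic power model, the discrete frequency levels, and the function names are illustrative assumptions rather than the EATSDCDD formulation.

```python
# Illustrative slack-time reclamation with DVFS: pick the lowest frequency
# that still finishes a non-critical task within its slack, which reduces
# dynamic energy. Cubic power model and frequency levels are assumptions.

def scaled_frequency(wcet_s, slack_s, f_max_ghz, levels):
    """Lowest available frequency that still finishes within wcet + slack."""
    f_needed = f_max_ghz * wcet_s / (wcet_s + slack_s)
    feasible = [f for f in levels if f >= f_needed]
    return min(feasible) if feasible else f_max_ghz

def dynamic_energy(wcet_s, f_ghz, f_max_ghz, c_eff=3.0):
    # E = P * t with P ~ c_eff * f^3 and t = wcet * f_max / f, so E ~ f^2.
    runtime = wcet_s * f_max_ghz / f_ghz
    return c_eff * f_ghz ** 3 * runtime

if __name__ == "__main__":
    levels = [0.6, 0.9, 1.2, 1.5, 1.8]            # GHz steps of the processor
    f = scaled_frequency(wcet_s=2.0, slack_s=1.0, f_max_ghz=1.8, levels=levels)
    print(f, dynamic_energy(2.0, f, 1.8), dynamic_energy(2.0, 1.8, 1.8))
```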


2015
Vol 25 (03)
pp. 1541005
Author(s):  
Alexandra Vintila Filip
Ana-Maria Oprescu
Stefania Costache
Thilo Kielmann

High-Performance Computing (HPC) systems consume large amounts of energy. As energy consumption predictions for HPC show increasing numbers, it is important to make users aware of the energy spent on the execution of their applications. Drawing from our experience with exposing cost and performance in public clouds, in this paper we present a generic mechanism to compute fast and accurate estimates of the trade-offs between performance (expressed as makespan) and the energy consumption of applications running on HPC clusters. We implement our approach in a prototype, called E-BaTS, and validate it with a wide variety of HPC bags-of-tasks. Our experiments show that E-BaTS produces conservative estimates with errors below 5%, while requiring at most 12% of the energy and time of an exhaustive search to provide configurations close to the optimal ones in terms of trade-offs between energy consumption and makespan.
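The flavor of such estimates can be sketched as below: for each candidate (node count, frequency) configuration, predict the makespan and energy of a bag of independent tasks from the mean task runtime; the power model and the configuration grid are illustrative assumptions, not the E-BaTS estimator.

```python
# Illustrative makespan/energy trade-off sweep for a bag of independent tasks.
# Cubic node power model and configuration grid are assumptions only.
import math

def estimate(n_tasks, mean_task_s, nodes, f, f_max,
             p_idle_w=40.0, p_dyn_max_w=80.0):
    task_s = mean_task_s * f_max / f                 # tasks slow down at low f
    makespan = math.ceil(n_tasks / nodes) * task_s
    node_power = p_idle_w + p_dyn_max_w * (f / f_max) ** 3
    return makespan, nodes * node_power * makespan   # (seconds, joules)

def sweep(n_tasks=1000, mean_task_s=30.0):
    configs = []
    for nodes in (8, 16, 32):
        for f in (1.2, 1.8, 2.4):                    # GHz levels, f_max = 2.4
            m, e = estimate(n_tasks, mean_task_s, nodes, f, 2.4)
            configs.append((nodes, f, round(m), round(e)))
    return configs

if __name__ == "__main__":
    for row in sweep():
        print(row)
```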


2016
Vol 2016
pp. 1-10
Author(s):
Weiwei Lin
Wentai Wu
James Z. Wang

Cloud computing provides on-demand computing and storage services with high performance and high scalability. However, the rising energy consumption of cloud data centers has become a prominent problem. In this paper, we first introduce an energy-aware framework for task scheduling in virtual clusters. The framework consists of a task resource-requirements prediction module, an energy estimation module, and a scheduler with a task buffer. Secondly, based on this framework, we propose a virtual machine power efficiency-aware greedy scheduling algorithm (VPEGS). As a heuristic algorithm, VPEGS estimates task energy by considering factors including task resource demands, VM power efficiency, and server workload before scheduling tasks in a greedy manner. We simulated a heterogeneous VM cluster and conducted experiments to evaluate the effectiveness of VPEGS. Simulation results show that VPEGS effectively reduced total energy consumption by more than 20% without producing large scheduling overheads. With a similar heuristic ideology, it outperformed Min-Min and RASA in terms of energy saving by about 29% and 28%, respectively.
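A simplified sketch of this greedy, power-efficiency-aware selection is given below; the energy-estimate formula, field names, and load-update rule are illustrative assumptions rather than the exact VPEGS model.

```python
# Simplified power-efficiency-aware greedy scheduling: estimate the energy of
# a task on each candidate VM from its demand, the VM's power efficiency, and
# the host's load, then pick the cheapest feasible VM. Illustrative only.

def estimated_energy(task_load, vm):
    # Less efficient VMs and more heavily loaded hosts make a task costlier.
    contention = 1.0 + vm["host_load"]          # host_load in [0, 1]
    return task_load / vm["power_efficiency"] * contention

def vpegs_like_schedule(tasks, vms):
    plan = []
    for task in tasks:                           # task = {"id": ..., "load": ...}
        feasible = [v for v in vms if v["free_capacity"] >= task["load"]]
        if not feasible:
            plan.append((task["id"], None))      # buffer the task for later
            continue
        best = min(feasible, key=lambda v: estimated_energy(task["load"], v))
        best["free_capacity"] -= task["load"]
        best["host_load"] = min(1.0, best["host_load"] + 0.1 * task["load"])
        plan.append((task["id"], best["id"]))
    return plan
```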

