CPU Energy Meter: A Tool for Energy-Aware Algorithms Engineering

Author(s):  
Dirk Beyer ◽  
Philipp Wendler

Abstract: Verification algorithms are among the most resource-intensive computation tasks. Saving energy is important both for the environment and for reducing costs in data centers. Yet researchers still compare the efficiency of algorithms in terms of consumed CPU time (or even wall time). One reason may be that measuring the energy consumption of computational processes is not as convenient as measuring the consumed time, and adequate tool support has been lacking. To close this gap, we contribute CPU Energy Meter, a small tool that reads the energy values that Intel CPUs track inside the chip. To make energy measurements as easy as possible, we integrated CPU Energy Meter into BenchExec, a benchmarking tool that is already used by many researchers and competitions in the domain of formal methods. As evidence of usefulness, we explored the energy consumption of some state-of-the-art verifiers and report several interesting insights, for example, that energy consumption is not necessarily correlated with CPU time.
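
The energy values mentioned here are the counters that Intel CPUs expose through the RAPL (Running Average Power Limit) interface. On Linux, similar counters are also visible through the powercap sysfs tree; the sketch below is a minimal, hedged illustration of reading cumulative package energy that way, not the implementation of CPU Energy Meter itself (the sysfs path, its availability, and the required permissions vary by system).

    # Minimal sketch (not CPU Energy Meter itself): read cumulative package energy
    # from the Linux powercap interface for Intel RAPL. Availability depends on the
    # kernel and CPU; reading the counter may require elevated privileges.
    import time

    RAPL_ENERGY_FILE = "/sys/class/powercap/intel-rapl:0/energy_uj"  # package 0, microjoules

    def read_energy_uj(path=RAPL_ENERGY_FILE):
        with open(path) as f:
            return int(f.read().strip())

    def measure_energy(workload):
        """Return energy in joules consumed while running `workload` (a callable)."""
        before = read_energy_uj()
        workload()
        after = read_energy_uj()
        # The counter is cumulative and wraps around; a real tool handles overflow.
        return (after - before) / 1e6

    if __name__ == "__main__":
        joules = measure_energy(lambda: sum(i * i for i in range(10**7)))
        print(f"approx. package energy: {joules:.3f} J")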

2018 ◽  
Vol 7 (2.8) ◽  
pp. 550 ◽  
Author(s):  
G Anusha ◽  
P Supraja

Cloud computing is a growing technology that provides various resources to perform complex tasks. These tasks are executed in data centers, which serve incoming tasks by providing resources such as CPU, storage, network bandwidth, and memory; this demand has led to an increase in the total number of data centers in the world. Data centers consume large volumes of energy to perform their operations, which leads to high operating costs. Computing resources are the key cause of power consumption in data centers, along with the air-handling and cooling systems, and energy consumption is proportional to resource usage. Excessive energy consumption by data centers results in large power bills, so there is a need to increase the energy efficiency of such data centers. We propose an Energy-Aware Dynamic Virtual Machine Consolidation (EADVMC) model that focuses on the PM-selection, VM-selection, and VM-placement phases, reducing energy consumption while maintaining Quality of Service (QoS) at a considerable level.
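
The EADVMC model is described here only at the phase level; a minimal, hypothetical sketch of a dynamic consolidation loop of this kind (PM selection, VM selection, VM placement) might look as follows. The thresholds, field names, and placement heuristic are assumptions for illustration, not the authors' algorithm.

    # Hypothetical sketch of a dynamic VM-consolidation loop (not the EADVMC algorithm):
    # 1) select overloaded/underloaded physical machines (PMs),
    # 2) select VMs to migrate away from them,
    # 3) place the selected VMs on suitable remaining PMs.

    OVER, UNDER = 0.85, 0.30  # assumed CPU-utilization thresholds

    def utilization(pm):
        return sum(vm["cpu"] for vm in pm["vms"]) / pm["capacity"]

    def select_pms(pms):
        over = [p for p in pms if utilization(p) > OVER]
        under = [p for p in pms if utilization(p) < UNDER]
        return over, under

    def select_vms(pm):
        # Migrate the smallest VMs first (an illustrative choice only).
        return sorted(pm["vms"], key=lambda vm: vm["cpu"])

    def place(vm, pms):
        # Best fit by resulting utilization, skipping PMs that would become overloaded.
        candidates = [p for p in pms
                      if utilization(p) + vm["cpu"] / p["capacity"] <= OVER]
        if candidates:
            target = max(candidates, key=utilization)
            target["vms"].append(vm)
            return target
        return None  # no feasible host; keep the VM where it is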


Author(s):  
Bhupesh Kumar Dewangan ◽  
Amit Agarwal ◽  
Venkatadri M. ◽  
Ashutosh Pasricha

Cloud computing is a platform where services are provided over the internet, either free of charge or on a rental basis. Many cloud service providers (CSPs) offer cloud services for rent. Due to the increasing demand for cloud services, the existing infrastructure needs to be scaled. However, scaling comes at the cost of heavy energy consumption due to the inclusion of additional data centers and servers. The extraneous power consumption raises operating costs, which in turn affects users, and the associated CO2 emissions affect the environment as well. Moreover, inadequate allocation of resources such as servers, data centers, and virtual machines increases operational costs, which may ultimately drive customers away from the cloud service. In all, an optimal usage of the resources is required. This paper proposes evaluating different multi-objective functions to find an optimal solution for resource utilization and allocation through an improved Antlion optimization (ALO) algorithm. The proposed method is simulated in the CloudSim environment, energy consumption is computed for different workload quantities, and the performance of the multi-objective functions is improved to maximize resource utilization. Compared with existing frameworks, the experimental results show that the proposed framework performs best.
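
The abstract does not give the multi-objective formulation itself. A common way to combine such objectives is a weighted fitness function over energy and utilization, which a metaheuristic such as ALO can then minimize; the sketch below is an assumed, simplified scalarization for illustration, not the paper's actual objective or power model.

    # Illustrative multi-objective fitness for a VM-to-host assignment
    # (assumed weighted-sum scalarization; weights and power model are not from the paper).

    P_IDLE, P_MAX = 100.0, 250.0  # assumed per-host power in watts

    def host_power(util):
        # Common linear power model: idle power plus a utilization-proportional part.
        return P_IDLE + (P_MAX - P_IDLE) * util if util > 0 else 0.0

    def fitness(assignment, hosts, w_energy=0.6, w_util=0.4):
        """assignment: host id -> list of CPU demands (fractions of host capacity)."""
        utils = [min(sum(assignment.get(h, [])), 1.0) for h in hosts]
        energy = sum(host_power(u) for u in utils)              # to be minimized
        active = [u for u in utils if u > 0]
        avg_util = sum(active) / len(active) if active else 0.0  # to be maximized
        return w_energy * energy - w_util * avg_util * 100.0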


Author(s):  
Mahendra Kumar Gourisaria ◽  
S. S. Patra ◽  
P. M. Khilar

Cloud computing is an emerging field of computation. As data centers consume large amounts of power, system overheads increase and carbon dioxide emissions rise drastically. The main aim is to maximize resource utilization while minimizing power consumption. However, the greatest usage of resources does not imply the right use of energy: idle resources also consume a significant amount of energy, so the number of idle resources must be kept to a minimum. Current studies have shown that unused computing resources account for roughly 1 to 20% of power consumption, so unused resources should be assigned tasks to make use of their idle periods. This paper suggests an energy-saving task-consolidation approach that saves energy by minimizing the number of idle resources in a cloud computing environment. Far-reaching experiments have been carried out to quantify the performance of the proposed algorithm, which has also been compared with the FCFSMaxUtil and Energy-aware Task Consolidation (ETC) algorithms. The outcomes show that the suggested algorithm surpasses FCFSMaxUtil and ETC in terms of CPU utilization and energy consumption.
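
Task consolidation of this kind typically packs incoming tasks onto already-active resources before waking idle ones. The following is a hypothetical greedy sketch of that idea; the names and the utilization cap are assumptions, not the paper's algorithm.

    # Hypothetical greedy task consolidation: prefer already-active machines so that
    # idle machines can stay (or be put) in a low-power state.

    UTIL_CAP = 0.9  # assumed per-machine utilization cap

    def consolidate(tasks, machines):
        """tasks: list of CPU demands in [0, 1]; machines: dicts with 'load' in [0, 1]."""
        for demand in sorted(tasks, reverse=True):  # largest tasks first
            active = [m for m in machines
                      if m["load"] > 0 and m["load"] + demand <= UTIL_CAP]
            if active:
                target = max(active, key=lambda m: m["load"])  # most-loaded feasible machine
            else:
                idle = [m for m in machines if m["load"] == 0]
                if not idle:
                    raise RuntimeError("no capacity left for task")
                target = idle[0]                               # wake one idle machine
            target["load"] += demand
        return machines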


2017 ◽  
Vol 2017 ◽  
pp. 1-11 ◽  
Author(s):  
Rahul Yadav ◽  
Weizhe Zhang

Mobile cloud computing (MCC) provides various cloud computing services to mobile users. The rapid growth of MCC users requires large-scale MCC data centers to provide them with data processing and storage services. The growth of these data centers directly impacts electrical energy consumption, which affects businesses as well as the environment through carbon dioxide (CO2) emissions. Moreover, a large amount of energy is wasted keeping servers running during periods of low workload. To reduce the energy consumption of mobile cloud data centers, an energy-aware host-overload detection algorithm and virtual machine (VM) selection algorithms for VM consolidation are required when host underload or overload is detected. After resources are allocated to all VMs, underloaded hosts should be switched to an energy-saving mode in order to minimize power consumption. To address this issue, we propose an adaptive heuristic energy-aware algorithm, which derives an upper CPU-utilization threshold from recent CPU-utilization history to detect overloaded hosts, together with dynamic VM selection algorithms to consolidate the VMs from overloaded or underloaded hosts. The goal is to minimize total energy consumption and maximize Quality of Service, including the reduction of service-level agreement (SLA) violations. The CloudSim simulator is used to validate the algorithm, and simulations are conducted on real workload traces from 10 different days, as provided by PlanetLab.
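
The adaptive upper threshold is described only as being derived from recent CPU-utilization history. One common way to do this in adaptive-threshold consolidation work is to lower the threshold when recent utilization is highly variable, for example using the median absolute deviation; the sketch below follows that idea and is an assumption, not necessarily the authors' exact formula.

    # Assumed adaptive overload threshold based on recent utilization history:
    # the more variable the recent utilization, the lower (more conservative) the threshold.
    import statistics

    def adaptive_upper_threshold(history, safety=2.5):
        """history: recent CPU-utilization samples in [0, 1]."""
        if len(history) < 2:
            return 0.9  # assumed static fallback
        med = statistics.median(history)
        mad = statistics.median(abs(x - med) for x in history)
        return max(0.0, 1.0 - safety * mad)

    def is_overloaded(host_history, current_util):
        return current_util > adaptive_upper_threshold(host_history)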


Energies ◽  
2021 ◽  
Vol 14 (9) ◽  
pp. 2382
Author(s):  
Kaixuan Ji ◽  
Ce Chi ◽  
Fa Zhang ◽  
Antonio Fernández Anta ◽  
Penglei Song ◽  
...  

The energy consumption problem has become a bottleneck hindering the further development of data centers. However, the heterogeneity of servers, hybrid cooling modes, and the extra energy caused by system state transitions increase the complexity of the energy-optimization problem. To deal with such challenges, this paper proposes an Energy-Aware Task Scheduling strategy (EATS) utilizing marginal cost and a task-classification method, which cooperatively improves the energy efficiency of servers and cooling systems. An energy consumption model for servers, cooling systems, and state transitions is developed, and the energy-optimization problem in data centers is formulated. The concept of marginal cost is introduced to guide the task-scheduling process, and the task-classification method is incorporated with the idea of marginal cost to further improve resource utilization and reduce the total energy consumption of data centers. Experiments are conducted using real-world traces, and the energy-reduction results are compared. The results show that EATS achieves greater energy savings for servers, cooling systems, and state transitions than the other two techniques across varying numbers of servers and cooling modules and varying task-arrival intensities. It is validated that EATS is effective at reducing total energy consumption and improving the resource utilization of data centers.
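
Marginal cost here guides where a task goes: place it on the server (and cooling zone) where the increase in total energy is smallest. The sketch below illustrates that decision rule with an assumed linear server power model and a cooling overhead factor; the models and names are illustrative, not the EATS models themselves.

    # Illustrative marginal-cost placement: choose the server whose total
    # (server + cooling) power increases least when the task is added.
    # Power model and cooling factor are assumptions, not the EATS models.

    def server_power(util, p_idle=100.0, p_max=250.0):
        return 0.0 if util == 0 else p_idle + (p_max - p_idle) * util

    def marginal_cost(server, task_util, cooling_factor=1.3):
        before = server_power(server["util"])
        after = server_power(min(server["util"] + task_util, 1.0))
        # Cooling energy assumed proportional to IT power in the server's cooling zone.
        return (after - before) * cooling_factor

    def schedule(task_util, servers):
        feasible = [s for s in servers if s["util"] + task_util <= 1.0]
        best = min(feasible, key=lambda s: marginal_cost(s, task_util))
        best["util"] += task_util
        return best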


2020 ◽  
Vol 2020 ◽  
pp. 1-14
Author(s):  
Zhihao Peng ◽  
Behnam Barzegar ◽  
Maryam Yarahmadi ◽  
Homayun Motameni ◽  
Poria Pirouzmand

Energy consumption has been one of the main concerns in supporting the rapid growth of cloud data centers, as it not only increases the cost of electricity for service providers but also plays an important role in increasing greenhouse gas emissions and thus environmental pollution, and it has a negative impact on system reliability and availability. As a result, energy consumption and efficiency metrics have become a vital issue for scheduling parallel, task-based applications in cloud data centers. In this paper, we present a time- and energy-aware two-phase scheduling algorithm called best heuristic scheduling (BHS) for directed acyclic graph (DAG) scheduling on cloud data center processors. In the first phase, the algorithm allocates resources to tasks by sorting, based on four heuristic methods and a grasshopper algorithm; it then selects the most appropriate method to perform each task, based on an importance factor determined by the end user or service provider, to achieve a solution at the right time. In the second phase, BHS minimizes the makespan and energy consumption according to the importance factor determined by the end user or service provider, taking into account the start time, setup time, end time, and energy profile of the virtual machines. Finally, a test dataset is developed to evaluate the proposed BHS algorithm against the multiheuristic resource allocation algorithm (MHRA). The results show that the proposed algorithm achieves 19.71% more energy savings than the MHRA algorithm; furthermore, the makespan is reduced by 56.12% in heterogeneous environments.
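
The second phase trades off makespan against energy according to an importance factor. A common way to express such a trade-off is a weighted objective over the two normalized quantities; the following is an assumed illustration of that idea, not the BHS formulation itself.

    # Assumed weighted makespan/energy objective controlled by an importance factor
    # alpha in [0, 1] (alpha = 1 favors time, alpha = 0 favors energy). Not the BHS formula.

    def schedule_cost(makespan, energy, makespan_ref, energy_ref, alpha=0.5):
        """Normalize both terms by reference values so the weights are comparable."""
        return alpha * (makespan / makespan_ref) + (1.0 - alpha) * (energy / energy_ref)

    def pick_best(candidates, makespan_ref, energy_ref, alpha=0.5):
        """candidates: list of (schedule, makespan, energy) tuples."""
        return min(candidates,
                   key=lambda c: schedule_cost(c[1], c[2], makespan_ref, energy_ref, alpha))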


Author(s):  
Bernardi Pranggono ◽  
Huaglory Tianfield

The presence of computing in our society is enormous, and the trend continues. While organizations around the world rely ever more heavily on their data centers to protect their fast-growing data and information, the energy consumption of data centers is becoming a key environmental, social, and political issue. Therefore, it is very important to minimize the energy used by data centers. Green data centers refer to an energy-aware data-center design rationale with a minimized carbon footprint. Virtualization has become a revolutionizing technology for the systematic design and deployment of green data centers. This chapter presents a comprehensive study of green data centers. First, the concept and basic system configuration of energy-efficient data centers are introduced. Then, an elaborate collection of greenness metrics is discussed for profiling the energy issues of green data centers. Next, state-of-the-art energy-aware techniques are presented for the design and deployment of green data centers. Finally, the challenges and future directions are pointed out.
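
As a representative example of the kind of greenness metric such profiling relies on (not necessarily the chapter's full metric set), Power Usage Effectiveness (PUE) and its reciprocal, Data Center infrastructure Efficiency (DCiE), relate total facility power to IT equipment power:

    # Two standard greenness metrics for data-center profiling: PUE and DCiE.

    def pue(total_facility_power_kw, it_equipment_power_kw):
        """Power Usage Effectiveness: total facility power / IT equipment power (>= 1.0)."""
        return total_facility_power_kw / it_equipment_power_kw

    def dcie(total_facility_power_kw, it_equipment_power_kw):
        """Data Center infrastructure Efficiency: the reciprocal of PUE, as a percentage."""
        return 100.0 / pue(total_facility_power_kw, it_equipment_power_kw)

    # Example: a facility drawing 1500 kW in total with 1000 kW of IT load has
    # PUE = 1.5 and DCiE of approximately 66.7%.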


Energies ◽  
2018 ◽  
Vol 11 (8) ◽  
pp. 2053 ◽  
Author(s):  
Damián Fernández-Cerero ◽  
Alejandro Fernández-Montes ◽  
Francisco Velasco

Information technologies must be made aware of the sustainability dimension of cost reduction. Data centers may reach energy-consumption levels comparable to those of many industrial facilities and small towns. Therefore, innovative and transparent energy policies should be applied to improve energy consumption while delivering the best performance. This paper compares, analyzes, and evaluates various energy-efficiency policies, which shut down underutilized machines, on an extensive set of data-center environments. Data envelopment analysis (DEA) is then conducted to detect the best energy-efficiency policy and data-center characterization for each case. This analysis evaluates energy-consumption and performance indicators for natural DEA and constant returns to scale (CRS). We identify the best energy policies and scheduling strategies for high and low data-center demands and for medium-sized and large data centers; moreover, this work enables data-center managers to detect inefficiencies and to implement further corrective actions.
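
The policies evaluated shut down underutilized machines; a minimal illustration of such a policy is a timeout-based rule that powers a machine off after it has been idle for a configurable period. This is a generic sketch under assumed parameters, not one of the specific policies evaluated in the paper.

    # Generic sketch of a shutdown policy for underutilized machines: power a machine
    # off once it has been idle longer than a timeout. The timeout value is an assumption.

    IDLE_TIMEOUT_S = 300  # assumed: shut down after 5 minutes of idleness

    def apply_shutdown_policy(machines, now):
        """machines: dicts with 'state' ('on'/'off'), 'busy' (bool), 'idle_since' (timestamp)."""
        for m in machines:
            if m["busy"]:
                m["idle_since"] = now            # reset the idle timer while in use
            elif m["state"] == "on" and now - m["idle_since"] >= IDLE_TIMEOUT_S:
                m["state"] = "off"               # candidate for power-off
        return machines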


Computers ◽  
2019 ◽  
Vol 8 (2) ◽  
pp. 46 ◽  
Author(s):  
Said BEN ALLA ◽  
Hicham BEN ALLA ◽  
Abdellah TOUHAFI ◽  
Abdellah EZZATI

Nowadays, Cloud Computing (CC) has emerged as a new paradigm for hosting and delivering services over the Internet. However, the wider deployment of the Cloud and the rapid increase in the capacity, as well as the size, of data centers induce a tremendous rise in electricity consumption, escalating data-center ownership costs and increasing carbon footprints. This expanding scale of data centers has made energy consumption an imperative issue. Besides, users' requirements regarding execution time, deadlines, and QoS have become more sophisticated and demanding. These requirements often conflict with the objectives of cloud providers, especially in a high-stress environment in which tasks have very critical deadlines. To address these issues, this paper proposes an efficient Energy-Aware Task Scheduling with Deadline constraints in Cloud Computing (EATSD) approach. The main goal of the proposed solution is to reduce the energy consumption of cloud resources, consider different users' priorities, and optimize the makespan under deadline constraints. Further, the proposed algorithm has been simulated using the CloudSim simulator. The experimental results validate that the proposed approach can effectively achieve good performance by minimizing the makespan, reducing energy consumption, and improving resource utilization while meeting deadline constraints.
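
A simple way to combine deadline awareness with energy awareness is to order tasks by deadline and give each task the most energy-efficient VM that can still finish it on time. The sketch below illustrates that idea under an assumed energy model and field names; it is not the EATSD algorithm.

    # Illustrative deadline-aware, energy-aware task assignment (not the EATSD algorithm):
    # earliest-deadline-first ordering, then the cheapest feasible VM in energy terms.

    def assign(tasks, vms, now=0.0):
        """tasks: dicts with 'length' (MI) and 'deadline'; vms: dicts with 'mips',
        'power_w', and 'free_at' (the time the VM becomes available)."""
        schedule = []
        for task in sorted(tasks, key=lambda t: t["deadline"]):   # earliest deadline first
            feasible = []
            for vm in vms:
                start = max(now, vm["free_at"])
                finish = start + task["length"] / vm["mips"]
                if finish <= task["deadline"]:
                    energy = vm["power_w"] * (finish - start)      # assumed energy model
                    feasible.append((energy, finish, vm))
            if not feasible:
                schedule.append((task, None))                      # deadline cannot be met
                continue
            energy, finish, vm = min(feasible, key=lambda x: x[0])
            vm["free_at"] = finish
            schedule.append((task, vm))
        return schedule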


2020 ◽  
Author(s):  
Rodrigo A. C. Da Silva ◽  
Nelson L. S. Da Fonseca

This paper summarizes the dissertation "Energy-aware load balancing in distributed data centers", which proposed two new algorithms for minimizing energy consumption in cloud data centers. Both algorithms consider hierarchical data-center network topologies and requests for the allocation of groups of virtual machines (VMs). The Topology-aware Virtual Machine Placement (TAVMP) algorithm deals with the placement of virtual machines in a single data center; it reduces the blocking of requests while maintaining acceptable levels of energy consumption. The Topology-aware Virtual Machine Selection (TAVMS) algorithm chooses sets of VM groups for migration between different data centers; its employment leads to significant overall energy savings.
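
In a hierarchical topology, placing a VM group under as small a subtree as possible (for example, within one rack) keeps traffic local and lets unused subtrees stay powered down. The sketch below is a hypothetical illustration of that placement idea, not the TAVMP algorithm.

    # Hypothetical topology-aware placement for a group of VMs in a tree-like data-center
    # network: prefer fitting the whole group into a single rack; otherwise spread over
    # as few racks as possible, or block the request if capacity is insufficient.

    def place_group(group_size, racks):
        """racks: dicts with 'free_slots' (number of VM slots still available)."""
        # 1) Try a single rack with enough free capacity, preferring the tightest fit.
        candidates = [r for r in racks if r["free_slots"] >= group_size]
        if candidates:
            rack = min(candidates, key=lambda r: r["free_slots"])
            rack["free_slots"] -= group_size
            return [rack]
        # 2) Otherwise spread over the fewest racks possible.
        chosen, remaining = [], group_size
        for rack in sorted(racks, key=lambda r: r["free_slots"], reverse=True):
            if remaining <= 0:
                break
            take = min(rack["free_slots"], remaining)
            if take > 0:
                rack["free_slots"] -= take
                remaining -= take
                chosen.append(rack)
        return chosen if remaining <= 0 else None                  # None = blocked request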

