An Efficient Energy-Aware Tasks Scheduling with Deadline-Constrained in Cloud Computing

Computers ◽  
2019 ◽  
Vol 8 (2) ◽  
pp. 46 ◽  
Author(s):  
Said BEN ALLA ◽  
Hicham BEN ALLA ◽  
Abdellah TOUHAFI ◽  
Abdellah EZZATI

Nowadays, Cloud Computing (CC) has emerged as a new paradigm for hosting and delivering services over the Internet. However, the wider deployment of the Cloud and the rapid increase in the capacity and size of data centers induce a tremendous rise in electricity consumption, escalating data center ownership costs and increasing carbon footprints. This expanding scale of data centers has made energy consumption an imperative issue. Besides, users’ requirements regarding execution time, deadlines, and QoS have become more sophisticated and demanding. These requirements often conflict with the objectives of cloud providers, especially in a high-stress environment in which tasks have very critical deadlines. To address these issues, this paper proposes an efficient Energy-Aware Tasks Scheduling with Deadline-constrained in Cloud Computing (EATSD). The main goal of the proposed solution is to reduce the energy consumption of the cloud resources, consider different users’ priorities and optimize the makespan under deadline constraints. Further, the proposed algorithm has been simulated using the CloudSim simulator. The experimental results validate that the proposed approach can effectively achieve good performance by minimizing the makespan, reducing energy consumption and improving resource utilization while meeting deadline constraints.
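
To make the scheduling idea concrete, the following is a minimal, hypothetical sketch of a deadline-aware, energy-conscious assignment rule in Python. It is not the authors' EATSD algorithm; the VM/task fields and the simple active-power model are assumptions for illustration only.

```python
# Hypothetical sketch (not the authors' EATSD): serve urgent, early-deadline
# tasks on the lowest-energy VM that can still meet the deadline.
from dataclasses import dataclass

@dataclass
class VM:
    mips: float              # processing capacity (million instructions per second)
    power_watts: float       # assumed average active power draw
    ready_time: float = 0.0  # time at which the VM becomes free

@dataclass
class Task:
    length_mi: float         # task length in million instructions
    deadline: float          # absolute deadline (seconds)
    priority: int = 0        # higher value = more urgent user class

def schedule_deadline_energy(tasks, vms):
    """Assign each task to the most energy-frugal VM that still meets its deadline."""
    plan = []
    # Assumed ordering rule: urgent user classes first, then earlier deadlines.
    for task in sorted(tasks, key=lambda t: (-t.priority, t.deadline)):
        feasible = []
        for vm in vms:
            runtime = task.length_mi / vm.mips
            finish = vm.ready_time + runtime
            energy = runtime * vm.power_watts   # active energy only (assumption)
            if finish <= task.deadline:
                feasible.append((energy, finish, vm))
        if feasible:
            _, finish, chosen = min(feasible, key=lambda c: (c[0], c[1]))
        else:
            # No VM meets the deadline: fall back to the earliest finish time.
            finish, chosen = min(((v.ready_time + task.length_mi / v.mips, v) for v in vms),
                                 key=lambda c: c[0])
        chosen.ready_time = finish
        plan.append((task, chosen, finish))
    return plan

# Example: three tasks, two heterogeneous VMs
vms = [VM(mips=1000, power_watts=250), VM(mips=500, power_watts=120)]
tasks = [Task(4000, deadline=10), Task(1000, deadline=4, priority=1), Task(2500, deadline=12)]
for t, v, fin in schedule_deadline_energy(tasks, vms):
    print(t.length_mi, "MI ->", v.mips, "MIPS VM, finishes at", round(fin, 2), "s")
```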

Author(s):  
Mahendra Kumar Gourisaria ◽  
S. S. Patra ◽  
P. M. Khilar

Cloud computing is an emerging field of computation. As data centers consume large amounts of power, system overheads increase and carbon dioxide emissions rise drastically. The main aim is to maximize resource utilization while minimizing power consumption. However, the greatest use of resources does not mean that energy is being used well: idle resources also consume a significant amount of energy, so the number of idle resources must be kept to a minimum. Current studies have shown that the power consumption due to unused computing resources is nearly 1 to 20%. Therefore, unused resources should be assigned tasks so that their idle periods are utilized. The present paper suggests an energy-saving task consolidation approach that saves energy by minimizing the number of idle resources in a cloud computing environment. Far-reaching experiments have been carried out to quantify the performance of the proposed algorithm, which has also been compared with the FCFSMaxUtil and Energy-aware Task Consolidation (ETC) algorithms. The outcomes show that the suggested algorithm surpasses FCFSMaxUtil and ETC in terms of CPU utilization and energy consumption.
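
As an illustration of the consolidation idea (not the authors' algorithm), the sketch below greedily packs tasks onto the busiest host that still has room, so that fewer hosts remain idle. The linear power model and all names are assumptions.

```python
# Minimal sketch of task consolidation: pack tasks onto the most-utilized host
# that can still fit them, leaving idle hosts free to be switched off.
IDLE_POWER = 70.0   # watts drawn by an idle host (assumed)
PEAK_POWER = 250.0  # watts at 100% utilization (assumed)

def host_power(utilization: float) -> float:
    """Assumed linear power model between idle and peak."""
    return IDLE_POWER + (PEAK_POWER - IDLE_POWER) * utilization

def consolidate(task_loads, n_hosts, capacity=1.0):
    """Greedy consolidation: place each task on the busiest host with enough room."""
    hosts = [0.0] * n_hosts                          # current utilization per host
    for load in sorted(task_loads, reverse=True):    # biggest tasks first
        candidates = [i for i, u in enumerate(hosts) if u + load <= capacity]
        if not candidates:
            raise ValueError("task does not fit on any host")
        target = max(candidates, key=lambda i: hosts[i])  # busiest feasible host
        hosts[target] += load
    active = [u for u in hosts if u > 0]
    energy_proxy = sum(host_power(u) for u in active)     # idle hosts assumed off
    return hosts, energy_proxy

# Example: six fractional CPU loads packed onto four hosts
print(consolidate([0.5, 0.3, 0.6, 0.2, 0.4, 0.1], n_hosts=4))
```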


2017 ◽  
Vol 2017 ◽  
pp. 1-11 ◽  
Author(s):  
Rahul Yadav ◽  
Weizhe Zhang

Mobile cloud computing (MCC) provides various cloud computing services to mobile users. The rapid growth of MCC users requires large-scale MCC data centers to provide them with data processing and storage services. The growth of these data centers directly impacts electrical energy consumption, which affects businesses as well as the environment through carbon dioxide (CO2) emissions. Moreover, a large amount of energy is wasted keeping servers running during periods of low workload. To reduce the energy consumption of mobile cloud data centers, an energy-aware host overload detection algorithm and virtual machine (VM) selection algorithms are required for VM consolidation when host underload or overload is detected. After allocating resources to all VMs, underloaded hosts should be switched to an energy-saving mode in order to minimize power consumption. To address this issue, we propose an adaptive heuristics energy-aware algorithm, which creates an upper CPU utilization threshold from recent CPU utilization history to detect overloaded hosts and uses dynamic VM selection algorithms to consolidate VMs from overloaded or underloaded hosts. The goal is to minimize total energy consumption and maximize Quality of Service, including the reduction of service level agreement (SLA) violations. The CloudSim simulator is used to validate the algorithm, and simulations are conducted on real workload traces from 10 different days, as provided by PlanetLab.
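
A hedged sketch of the adaptive-threshold idea follows: it derives an upper CPU utilization threshold from a host's recent utilization history using the median absolute deviation, then selects VMs to migrate until the host falls below the threshold. This mirrors the general approach described above rather than the authors' exact algorithm; the safety parameter and the smallest-VM-first selection rule are assumptions.

```python
# Sketch: adaptive overload threshold from recent utilization history (MAD-based),
# plus a simple VM selection rule for consolidation.
import statistics

def upper_threshold(history, safety=2.5):
    """Adaptive threshold: 1 - safety * MAD of the recent utilization samples."""
    med = statistics.median(history)
    mad = statistics.median(abs(u - med) for u in history)
    return max(0.0, 1.0 - safety * mad)

def is_overloaded(history, current_utilization):
    return current_utilization > upper_threshold(history)

def select_vms_to_migrate(vm_loads, current_utilization, history):
    """Migrate the smallest VMs first until the host drops below the threshold."""
    threshold = upper_threshold(history)
    selected, util = [], current_utilization
    for vm, load in sorted(vm_loads.items(), key=lambda kv: kv[1]):
        if util <= threshold:
            break
        selected.append(vm)
        util -= load
    return selected

history = [0.71, 0.78, 0.93, 0.85, 0.88, 0.90, 0.95, 0.92]
print(is_overloaded(history, 0.96))                                          # True
print(select_vms_to_migrate({"vm1": 0.25, "vm2": 0.10, "vm3": 0.40}, 0.96, history))
```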


2021 ◽  
Vol 11 (13) ◽  
pp. 5849
Author(s):  
Nimra Malik ◽  
Muhammad Sardaraz ◽  
Muhammad Tahir ◽  
Babar Shah ◽  
Gohar Ali ◽  
...  

Cloud computing is a rapidly growing technology that has been implemented in various fields in recent years, such as business, research, industry, and computing. Cloud computing provides different services over the internet, thus eliminating the need for personalized hardware and other resources. Cloud computing environments face challenges in terms of resource utilization, energy efficiency, heterogeneous resources, etc. Task scheduling and virtual machine (VM) consolidation techniques are used to tackle these issues. Task scheduling has been extensively studied in the literature, with different parameters and objectives. In this article, we address the problem of energy consumption and efficient resource utilization in virtualized cloud data centers. The proposed algorithm is based on task classification and thresholds for efficient scheduling and better resource utilization. In the first phase, workflow tasks are pre-processed to avoid bottlenecks by placing tasks with more dependencies and long execution times in separate queues. In the next step, tasks are classified based on the intensities of the required resources. Finally, Particle Swarm Optimization (PSO) is used to select the best schedules. Experiments were performed to validate the proposed technique. Comparative results obtained on benchmark datasets are presented. The results show the effectiveness of the proposed algorithm over that of the other algorithms to which it was compared in terms of energy consumption, makespan, and load balancing.
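
For readers unfamiliar with the final phase, the compact sketch below shows how PSO can search for a task-to-VM schedule: each particle holds one continuous value per task that is decoded to a VM index, and fitness is the makespan. The encoding and parameters are illustrative assumptions, not the paper's exact formulation, and only makespan is minimized here for brevity, whereas the paper also weighs energy and load balancing.

```python
# Sketch: Particle Swarm Optimization over task-to-VM assignments, makespan fitness.
import random

def makespan(assignment, task_lengths, vm_speeds):
    finish = [0.0] * len(vm_speeds)
    for task, vm in enumerate(assignment):
        finish[vm] += task_lengths[task] / vm_speeds[vm]
    return max(finish)

def decode(position, n_vms):
    return [int(x) % n_vms for x in position]

def pso_schedule(task_lengths, vm_speeds, n_particles=30, iters=200,
                 w=0.7, c1=1.5, c2=1.5):
    n_tasks, n_vms = len(task_lengths), len(vm_speeds)
    pos = [[random.uniform(0, n_vms) for _ in range(n_tasks)] for _ in range(n_particles)]
    vel = [[0.0] * n_tasks for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_fit = [makespan(decode(p, n_vms), task_lengths, vm_speeds) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_fit[i])
    gbest, gbest_fit = pbest[g][:], pbest_fit[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(n_tasks):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d], 0.0), n_vms - 1e-9)
            fit = makespan(decode(pos[i], n_vms), task_lengths, vm_speeds)
            if fit < pbest_fit[i]:
                pbest[i], pbest_fit[i] = pos[i][:], fit
                if fit < gbest_fit:
                    gbest, gbest_fit = pos[i][:], fit
    return decode(gbest, n_vms), gbest_fit

# Example: eight tasks on three heterogeneous VMs
print(pso_schedule([400, 300, 250, 500, 120, 600, 350, 200], [1000, 750, 500]))
```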


Author(s):  
Neenu Juneja ◽  
Chamkaur Singh ◽  
Krishan Tuli

Cloud Computing has the capacity to transform a large part of information technology into services in which computing resources are virtualized and made available as a utility. Hence the importance of scheduling virtual resources to get the maximum utilization out of physical resources. Servers' power consumption has grown continuously, and many researchers have noted that if this pattern continues, the power consumption cost of a server over its lifespan will exceed its hardware price. The power consumption problem is even more troublesome for clusters, grids, and clouds, which encompass many thousands of heterogeneous servers. Continuous efforts have been made to reduce the electricity consumption of these massive-scale infrastructures. To identify the challenges and required future enhancements in the field of energy-efficient Cloud Computing, it is necessary to synthesize and categorize the research and development done so far. In this paper, the authors prepare a taxonomy of energy consumption problems and their related solutions. The authors cover all aspects of energy consumption by Cloud data centers and analyze many research papers to find better solutions for efficient energy consumption.
Keywords: Cloud computing, Collocated virtual machines, Live migration, Load balancing, Resource scheduling


2018 ◽  
Vol 7 (2.8) ◽  
pp. 550 ◽  
Author(s):  
G Anusha ◽  
P Supraja

Cloud computing is a growing technology nowadays that provides various resources to perform complex tasks. These complex tasks are carried out with the help of data centers, which serve incoming tasks with resources such as CPU, storage, network, bandwidth and memory; this demand has resulted in an increase in the total number of data centers in the world. These data centers consume a large volume of energy to perform their operations, which leads to high operating costs. Resources are the key cause of power consumption in data centers, along with the air-handling and cooling systems. Energy consumption in data centers is proportional to resource usage, and excessive energy consumption by data centers results in large power bills. There is thus a need to increase the energy efficiency of such data centers. We propose an Energy-Aware Dynamic Virtual Machine Consolidation (EADVMC) model that focuses on the PM selection, VM selection and VM placement phases, which reduces energy consumption while maintaining Quality of Service (QoS) at a considerable level.
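
The placement phase can be illustrated with the generic power-aware best-fit idea sketched below, which places each VM on the host whose estimated power draw increases the least. This is a common heuristic, not necessarily the paper's EADVMC rules, and the power model and host specifications are assumptions.

```python
# Sketch: power-aware VM placement. Each VM goes to the host whose power
# draw rises the least when the VM is added.
def power(util, idle, peak):
    """Assumed linear host power model."""
    return idle + (peak - idle) * util

def place_vms(vm_demands, hosts):
    """hosts: list of dicts with 'cap', 'idle', 'peak' keys (assumed format)."""
    loads = [0.0] * len(hosts)
    placement = []
    for demand in sorted(vm_demands, reverse=True):   # largest VMs first
        best, best_delta = None, float("inf")
        for h, spec in enumerate(hosts):
            if loads[h] + demand > spec["cap"]:
                continue
            before = power(loads[h] / spec["cap"], spec["idle"], spec["peak"])
            after = power((loads[h] + demand) / spec["cap"], spec["idle"], spec["peak"])
            if after - before < best_delta:
                best, best_delta = h, after - before
        if best is None:
            raise ValueError("no host can accommodate the VM")
        loads[best] += demand
        placement.append((demand, best))
    return placement, loads

hosts = [{"cap": 1.0, "idle": 70.0, "peak": 250.0},
         {"cap": 1.0, "idle": 60.0, "peak": 200.0}]
print(place_vms([0.4, 0.2, 0.3, 0.1], hosts))
```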


Author(s):  
Burak Kantarci ◽  
Hussein T. Mouftah

Cloud computing combines the advantages of several computing paradigms and introduces ubiquity in the provisioning of services such as software, platform, and infrastructure. Data centers, as the main hosts of cloud computing services, accommodate thousands of high-performance servers and high-capacity storage units. Offloading the local resources increases the energy consumption of the transport network and the data centers, although it is advantageous in terms of the energy consumption of the end hosts. This chapter presents a detailed survey of the existing mechanisms that aim to design the Internet backbone with data centers for energy-efficient delivery of cloud services. The survey is followed by a case study in which Mixed Integer Linear Programming (MILP)-based provisioning models and heuristics are used to guarantee either minimum-delay or maximum power-saving cloud services, where high-performance data centers are assumed to be located at the core nodes of an IP-over-WDM network. The chapter concludes by summarizing the surveyed schemes in a taxonomy, including their pros and cons. The summary is followed by a discussion focusing on the research challenges and opportunities.
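
To give a flavor of the MILP-based provisioning models surveyed in the case study, the toy model below (written with the PuLP library) assigns service demands to data centers so as to minimize a simple power objective. The topology, power figures, and capacities are invented for illustration and do not come from the chapter.

```python
# Toy MILP sketch: assign demands to data centers minimizing transport plus idle power.
from pulp import LpProblem, LpMinimize, LpVariable, LpBinary, lpSum

demands = {"d1": 3, "d2": 2, "d3": 4}            # service demands (capacity units, assumed)
dcs = {"dc_A": 6, "dc_B": 5}                     # data center capacities (assumed)
transport_power = {("d1", "dc_A"): 10, ("d1", "dc_B"): 14,
                   ("d2", "dc_A"): 12, ("d2", "dc_B"): 9,
                   ("d3", "dc_A"): 11, ("d3", "dc_B"): 13}  # watts per assignment (assumed)
dc_idle_power = {"dc_A": 50, "dc_B": 40}         # watts if a data center is used at all

prob = LpProblem("min_power_provisioning", LpMinimize)
x = {(d, c): LpVariable(f"x_{d}_{c}", cat=LpBinary) for d in demands for c in dcs}
y = {c: LpVariable(f"y_{c}", cat=LpBinary) for c in dcs}  # data center switched on?

# Objective: assignment power plus idle power of the data centers that are opened.
prob += (lpSum(transport_power[d, c] * x[d, c] for d in demands for c in dcs)
         + lpSum(dc_idle_power[c] * y[c] for c in dcs))

for d in demands:                                 # every demand served exactly once
    prob += lpSum(x[d, c] for c in dcs) == 1
for c in dcs:                                     # capacity and activation coupling
    prob += lpSum(demands[d] * x[d, c] for d in demands) <= dcs[c] * y[c]

prob.solve()
print({(d, c): x[d, c].value() for d in demands for c in dcs if x[d, c].value() == 1})
```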


Author(s):  
Burak Kantarci ◽  
Hussein T. Mouftah

Cloud computing aims to migrate IT services to distant data centers in order to reduce the dependency of the services on limited local resources. Cloud computing provides access to distant computing resources via Web services, while the end user is not aware of how the IT infrastructure is managed. Besides the novelties and advantages of cloud computing, the deployment of a large number of servers and data centers introduces the challenge of high energy consumption. Additionally, the transport of IT services over the Internet backbone compounds the energy consumption problem of the backbone infrastructure. In this chapter, the authors cover energy-efficient cloud computing studies in the data center, involving aspects such as the reduction of processing, storage, and data center network-related power consumption. They first provide a brief overview of existing approaches to cool data centers, which can be grouped mainly into studies on virtualization techniques, energy-efficient data center network design schemes, and studies that monitor data center thermal activity with Wireless Sensor Networks (WSNs). The authors also present solutions that aim to reduce energy consumption in data centers by considering the communications aspects over the backbone of large-scale cloud systems.

