THE ADVANTAGE OF GENETIC ALGORITHM IN ENERGY-EFFICIENT SCHEDULING FOR HETEROGENEOUS CLOUD COMPUTING

Author(s):  
Hang Zhou ◽  
Samina Kausar ◽  
Ningning Dong

Energy consumption has become a heavy burden on enterprise cloud computing infrastructure. This paper focuses on the hardware factors in energy consumption. Inspired by DVFS (dynamic voltage and frequency scaling), it proposes a new energy-efficient (EE) model. The paper formulates the scheduling problem and applies a genetic algorithm to obtain a higher efficiency value. Simulations are implemented to verify the advantage of the genetic algorithm. In addition, the robustness of our strategy is validated by varying the relevant parameters of the experiments.
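As an illustration of the approach described above, the following is a minimal sketch of how a genetic algorithm can search for an energy-efficient task-to-host assignment on heterogeneous hosts. The host speeds, power figures, task loads, and GA parameters are illustrative assumptions, not values from the paper.

```python
import random

TASK_LOAD = [4, 7, 2, 9, 5, 3, 8, 6]   # hypothetical task workloads (10^9 instructions)
HOST_SPEED = [2.0, 3.0, 1.5]           # hypothetical host speeds (GIPS)
HOST_POWER = [95.0, 160.0, 70.0]       # hypothetical active power draw (W)

def energy(assignment):
    """Total energy: each task runs for load/speed seconds at its host's power."""
    return sum(TASK_LOAD[t] / HOST_SPEED[h] * HOST_POWER[h]
               for t, h in enumerate(assignment))

def fitness(assignment):
    return 1.0 / energy(assignment)    # higher fitness = less energy

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def mutate(assignment, rate=0.1):
    return [random.randrange(len(HOST_SPEED)) if random.random() < rate else h
            for h in assignment]

def genetic_schedule(pop_size=40, generations=200):
    pop = [[random.randrange(len(HOST_SPEED)) for _ in TASK_LOAD]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[:pop_size // 2]    # truncation selection keeps the best half
        children = [mutate(crossover(random.choice(elite), random.choice(elite)))
                    for _ in range(pop_size - len(elite))]
        pop = elite + children
    return max(pop, key=fitness)

best = genetic_schedule()
print("best assignment:", best, " energy (J):", round(energy(best), 1))
```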

Mathematics ◽  
2018 ◽  
Vol 6 (11) ◽  
pp. 220 ◽  
Author(s):  
Tianhua Jiang ◽  
Chao Zhang ◽  
Huiqi Zhu ◽  
Jiuchun Gu ◽  
Guanlong Deng

Under the current environmental pressure, many manufacturing enterprises are urged or forced to adopt effective energy-saving measures. However, environmental metrics such as energy consumption and CO2 emissions are seldom considered in traditional production scheduling problems. Recently, energy-related scheduling problems have received increasing attention from researchers. In this paper, an energy-efficient job shop scheduling problem (EJSP) is investigated with the objective of minimizing the sum of the energy consumption cost and the completion-time cost. As the classical JSP is a well-known non-deterministic polynomial-time hard (NP-hard) problem, an improved whale optimization algorithm (IWOA) is presented to solve the energy-efficient scheduling problem. The improvement is achieved through dispatching rules (DR), a nonlinear convergence factor (NCF), and a mutation operation (MO). The DR is used to enhance the initial solution quality and overcome the drawbacks of a random population. The NCF is adopted to balance the algorithm's exploration and exploitation abilities. The MO is employed to reduce the possibility of falling into a local optimum and to avoid premature convergence. To validate the effectiveness of the proposed algorithm, extensive simulations were performed. The computational results demonstrate the promising advantages of the proposed IWOA for the energy-efficient job shop scheduling problem.
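Of the three improvements, the nonlinear convergence factor is the easiest to illustrate. The sketch below contrasts the classical linear decay of the WOA convergence factor a with a hypothetical cosine-shaped nonlinear schedule; the paper's exact NCF formula is not reproduced here, and the function names and parameters are assumptions.

```python
import math, random

def linear_a(t, t_max):
    """Classical WOA: the convergence factor a decays linearly from 2 to 0."""
    return 2.0 * (1.0 - t / t_max)

def nonlinear_a(t, t_max):
    """Hypothetical NCF: slower decay early (exploration), faster late (exploitation)."""
    return 2.0 * math.cos(math.pi * t / (2.0 * t_max))

def coefficient_A(a):
    """WOA encircling coefficient A = 2*a*r - a, with r uniform in [0, 1)."""
    return 2.0 * a * random.random() - a

t_max = 100
for t in (0, 25, 50, 75, 100):
    print(f"t={t:3d}  linear a={linear_a(t, t_max):.3f}  nonlinear a={nonlinear_a(t, t_max):.3f}")
print("sample coefficient A at t=50:", round(coefficient_A(nonlinear_a(50, t_max)), 3))
```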


2015 ◽  
Vol 2015 ◽  
pp. 1-10
Author(s):  
Xing Liu ◽  
Chaowei Yuan ◽  
Zhen Yang ◽  
Enda Peng

Mobile cloud computing (MCC) combines cloud computing and the mobile Internet to improve the computational capabilities of resource-constrained mobile devices (MDs). In MCC, mobile users can not only improve the computational capability of their MDs but also save energy by offloading mobile applications to the cloud. However, MCC faces an energy-efficiency problem because of time-varying channels during offloading. In this paper, we address energy-efficient scheduling for the wireless uplink in MCC. By introducing Lyapunov optimization, we first propose a scheduling algorithm that dynamically chooses the channel for data transmission based on the queue backlog and channel statistics. We then show that the proposed scheduling algorithm achieves a tradeoff between queue backlog and energy consumption in a channel-aware MCC system. Simulation results show that the proposed algorithm reduces the time-average energy consumption of offloading compared to an existing algorithm.
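The following is a minimal sketch of a Lyapunov drift-plus-penalty channel selection rule of the kind described above: in each slot, the channel minimizing V*power - Q(t)*served_bits is chosen, trading queue backlog against energy. The channel rates, transmit powers, arrival process, and tradeoff weight V are illustrative assumptions rather than the paper's model.

```python
import random

V = 5.0                               # tradeoff weight: larger V favours energy saving
CHANNEL_POWER = [0.8, 1.2, 2.0]       # hypothetical transmit powers (W)
CHANNEL_BASE_RATE = [0.4, 0.7, 1.2]   # hypothetical mean rates (Mbit per 1 s slot)

def current_rates():
    """Time-varying per-slot rates, e.g. due to fading."""
    return [random.uniform(0.5, 1.5) * r for r in CHANNEL_BASE_RATE]

def choose_channel(queue_mbit, rates):
    """Drift-plus-penalty rule: minimise V*energy - Q*served over the channels."""
    def cost(c):
        served = min(queue_mbit, rates[c])
        return V * CHANNEL_POWER[c] - queue_mbit * served
    return min(range(len(rates)), key=cost)

queue, energy = 0.0, 0.0
for _ in range(10_000):
    queue += random.expovariate(1 / 0.5)      # new offloading data (Mbit), mean 0.5
    rates = current_rates()
    c = choose_channel(queue, rates)
    queue -= min(queue, rates[c])
    energy += CHANNEL_POWER[c]                # 1 s slots, so W * s = J
print(f"final backlog ~ {queue:.2f} Mbit, average power ~ {energy / 10_000:.2f} W")
```

Raising V makes the rule more willing to let the backlog grow in exchange for picking cheaper channels, which is the backlog/energy tradeoff the abstract refers to.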


2016 ◽  
Vol 33 (6) ◽  
pp. 1753-1766 ◽  
Author(s):  
Chin-Fu Kuo ◽  
Yung-Feng Lu ◽  
Bao-Rong Chang

Purpose – The purpose of this paper is to investigate the scheduling problem of real-time jobs executing on a DVS (dynamic voltage scaling) processor. The jobs must complete their executions by their deadlines, and the energy consumption must also be minimized.
Design/methodology/approach – A two-phase energy-efficient scheduling algorithm is proposed to solve the scheduling problem for real-time jobs. In the off-line phase, the maximum instantaneous total density and the instantaneous total density (ITD) are used to derive the processor speed for each time instant. The derived speeds are saved for run time. In the on-line phase, the authors set the processor speed according to the derived speeds and set a timer to expire at the end time instant of the speed in use.
Findings – When the DVS processor executes a job at a proper speed, the energy consumption of the system can be minimized.
Research limitations/implications – This paper does not consider jobs with precedence constraints; this can be explored in future work.
Practical implications – Experimental results of the proposed schemes are presented to show their effectiveness.
Originality/value – The experimental results show that the proposed scheduling algorithm, ITD, can achieve energy savings and keep the processor fully utilized.
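As a rough illustration of density-based off-line speed assignment, the sketch below sets the processor speed on each interval to the instantaneous total density of the active jobs (each job's density being its execution requirement divided by its window length), capped at the maximum speed. The job set and all details are assumptions; this is not the paper's exact ITD algorithm.

```python
S_MAX = 1.0  # normalised maximum processor speed

# (release time, deadline, execution cycles at full speed) -- hypothetical job set
JOBS = [(0.0, 10.0, 3.0), (2.0, 6.0, 2.0), (4.0, 12.0, 4.0)]

def speed_schedule(jobs):
    """Piecewise-constant speeds: on each interval between release/deadline events,
    set the speed to the sum of densities of the jobs active on that interval."""
    points = sorted({t for r, d, _ in jobs for t in (r, d)})
    schedule = []
    for start, end in zip(points, points[1:]):
        density = sum(c / (d - r) for r, d, c in jobs if r <= start and d >= end)
        schedule.append((start, end, min(density, S_MAX)))
    return schedule

for start, end, speed in speed_schedule(JOBS):
    print(f"[{start:4.1f}, {end:4.1f})  speed = {speed:.2f}")
```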


2014 ◽  
Vol 986-987 ◽  
pp. 1383-1386
Author(s):  
Zhen Xing Yang ◽  
He Guo ◽  
Yu Long Yu ◽  
Yu Xin Wang

Cloud computing is an emerging paradigm that delivers infrastructure, platforms, and software as services in a pay-as-you-go model. However, with the development of cloud computing, large-scale data centers consume huge amounts of electrical energy, resulting in high operational costs and environmental problems. Existing energy-saving algorithms based on live migration do not consider migration energy consumption, and most of them are designed for homogeneous cloud environments. In this paper, we take the first step toward modeling energy consumption in a heterogeneous cloud environment that includes migration energy consumption. Based on this energy model, we design an energy-saving best-fit-decreasing (ESBFD) algorithm and an energy-saving first-fit-decreasing (ESFFD) algorithm. We further report results of several experiments in CloudSim using traces from PlanetLab. The experiments show that the proposed algorithms can effectively reduce the energy consumption of data centers in a heterogeneous cloud environment compared to existing algorithms such as NEA, DVFS, ST (Single Threshold), and DT (Double Threshold).
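The following is a simplified sketch in the spirit of ESBFD: virtual machines are sorted by demand in decreasing order, and each is placed on the feasible host with the smallest estimated power increase plus a migration energy penalty if the VM must move from its current host. The linear host power model, VM demands, and migration penalty are illustrative assumptions, not the paper's measured parameters.

```python
from dataclasses import dataclass

MIGRATION_ENERGY = 25.0  # hypothetical fixed energy cost (J) of migrating one VM

@dataclass
class Host:
    name: str
    capacity: float      # total CPU capacity (normalised)
    p_idle: float        # idle power (W)
    p_max: float         # power at full utilisation (W)
    used: float = 0.0

    def power(self, extra=0.0):
        """Linear power model: P = P_idle + (P_max - P_idle) * utilisation."""
        util = (self.used + extra) / self.capacity
        return self.p_idle + (self.p_max - self.p_idle) * util

def es_bfd(vms, hosts):
    """vms: list of (vm_id, cpu_demand, current_host_name or None).
    Assumes at least one host can accommodate each VM."""
    placement = {}
    for vm_id, demand, current in sorted(vms, key=lambda v: -v[1]):
        def cost(h):
            delta_power = h.power(demand) - h.power()
            migration = MIGRATION_ENERGY if current and current != h.name else 0.0
            return delta_power + migration
        candidates = [h for h in hosts if h.used + demand <= h.capacity]
        best = min(candidates, key=cost)
        best.used += demand
        placement[vm_id] = best.name
    return placement

hosts = [Host("h1", 1.0, 90, 170), Host("h2", 1.0, 60, 120)]
vms = [("vm1", 0.5, "h1"), ("vm2", 0.3, None), ("vm3", 0.2, "h2")]
print(es_bfd(vms, hosts))
```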


Energies ◽  
2021 ◽  
Vol 14 (21) ◽  
pp. 7446
Author(s):  
Adrian Kampa ◽  
Iwona Paprocka

The aim of this paper is to present a model of energy-efficient scheduling for series production systems during operation, including setup and shutdown activities. A flow shop system together with setup times, shutdown times, and energy consumption is considered. Production tasks enter the system with exponentially distributed interarrival times and are carried out according to predefined processing times. Tasks arriving from one waiting queue are handled in the order set by the Multi Objective Immune Algorithm. Tasks are stored in a finite-capacity buffer if machines are busy or setup activities are being performed. Whenever the production system is idle, machines are stopped according to their shutdown times in order to save energy. A machine requires setup time before executing the first batch of jobs after an idle period. Researchers agree that turning off an idle machine is a common measure appropriate for all types of workshops, but it usually requires additional steps such as setup and shutdown. A literature analysis shows a research gap regarding multi-objective algorithms: minimizing energy consumption is not the only factor affecting the total manufacturing cost; other factors, such as late-delivery cost or early-delivery cost with additional storage cost, make optimizing the total cost of the production process more complicated. Another goal is to extend previous scheduling algorithms and the research framework for energy-efficient scheduling. The impact of the input data on production system performance and energy consumption is investigated for serial, parallel, and serial-parallel flows. Parallel flow of incoming tasks achieves the minimum makespan. Serial and serial-parallel flows of arriving tasks ensure the minimum energy consumption cost. Parallel flow of arriving tasks ensures the minimum costs of tardiness or premature execution. Parallel or serial-parallel flow of incoming tasks allows schedules in which no tasks are delayed.
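To make the multi-criteria objective concrete, the sketch below evaluates the total cost of one task order on a single machine that shuts down when idle and needs a setup before the next batch, combining energy cost, late-delivery cost, and early-delivery storage cost. All powers, times, and cost rates are illustrative assumptions; the paper's Multi Objective Immune Algorithm would search over such orderings.

```python
PROC_POWER, SETUP_POWER = 5.0, 2.0    # kW while processing / during setup
SETUP_TIME = 0.5                      # hours of setup after an idle (shut-down) period
ENERGY_RATE, TARDY_RATE, STORE_RATE = 0.2, 10.0, 1.0  # cost per kWh / late hour / early hour

# task -> (arrival, processing time, due date) in hours -- hypothetical task set
TASKS = {"A": (0.0, 2.0, 4.0), "B": (1.0, 1.5, 3.0), "C": (6.0, 1.0, 9.0)}

def total_cost(order):
    """Simulate one ordering on a single machine that shuts down while idle."""
    t, energy, penalty, running = 0.0, 0.0, 0.0, False
    for name in order:
        arrival, proc, due = TASKS[name]
        if arrival > t or not running:          # machine was off: wait, then set up
            t = max(t, arrival)
            energy += SETUP_POWER * SETUP_TIME
            t += SETUP_TIME
            running = True
        t += proc
        energy += PROC_POWER * proc
        penalty += TARDY_RATE * max(0.0, t - due)   # late delivery
        penalty += STORE_RATE * max(0.0, due - t)   # early delivery -> storage
    return ENERGY_RATE * energy + penalty

print({o: round(total_cost(o), 2) for o in (("A", "B", "C"), ("B", "A", "C"))})
```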


Author(s):  
Burak Kantarci ◽  
Hussein T. Mouftah

Cloud computing aims to migrate IT services to distant data centers in order to reduce the services' dependency on limited local resources. It provides access to distant computing resources via Web services, while the end user is not aware of how the IT infrastructure is managed. Besides the novelties and advantages of cloud computing, the deployment of a large number of servers and data centers introduces the challenge of high energy consumption. Additionally, transporting IT services over the Internet backbone adds to the energy consumption of the backbone infrastructure. In this chapter, the authors cover energy-efficient cloud computing studies in the data center involving various aspects, such as reduction of processing, storage, and data center network-related power consumption. They first provide a brief overview of existing approaches to cool data centers, which can be grouped mainly into studies on virtualization techniques, energy-efficient data center network design schemes, and studies that monitor data center thermal activity with Wireless Sensor Networks (WSNs). The authors also present solutions that aim to reduce energy consumption in data centers by considering the communications aspects over the backbone of large-scale cloud systems.

