Resource Scheduling and Load Balancing Fusion Algorithm with Deep Learning Based on Cloud Computing

2020 ◽  
pp. 1042-1057
Author(s):  
Xiaojing Hou ◽  
Guozeng Zhao

With the wide application of cloud computing, the contradiction between high energy cost and low efficiency has become increasingly prominent. In this article, to address the problem of energy consumption, a resource scheduling and load balancing fusion algorithm with a deep learning strategy is presented. Compared with the corresponding evolutionary algorithms, the proposed algorithm enhances the diversity of the population, avoids premature convergence to some extent, and converges faster. The experimental results show that the proposed algorithm is the most effective at reducing the energy consumption of data centers.
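The fusion of evolutionary search with a learning-guided strategy is only described at a high level above; the sketch below is purely illustrative and shows an energy-aware evolutionary scheduler with a diversity-preserving mutation step. The power model, population sizes, and mutation rate are assumptions, and the deep learning component of the paper is not reproduced here.

```python
# Minimal sketch (not the authors' implementation): an evolutionary scheduler that
# assigns tasks to hosts with an energy-oriented fitness and a diversity-preserving
# mutation step. All names and parameters here are illustrative assumptions.
import random

NUM_TASKS, NUM_HOSTS = 20, 5
TASK_LOAD = [random.uniform(0.05, 0.25) for _ in range(NUM_TASKS)]  # CPU demand per task
IDLE_POWER, PEAK_POWER = 100.0, 250.0  # watts, assumed linear power model

def energy(assignment):
    """Estimate total power: idle cost per active host plus load-proportional cost."""
    load = [0.0] * NUM_HOSTS
    for task, host in enumerate(assignment):
        load[host] += TASK_LOAD[task]
    return sum(IDLE_POWER + (PEAK_POWER - IDLE_POWER) * min(l, 1.0)
               for l in load if l > 0)

def mutate(assignment, rate=0.1):
    """Reassign a few tasks at random to keep the population diverse."""
    return [random.randrange(NUM_HOSTS) if random.random() < rate else h
            for h in assignment]

def crossover(a, b):
    cut = random.randrange(1, NUM_TASKS)
    return a[:cut] + b[cut:]

population = [[random.randrange(NUM_HOSTS) for _ in range(NUM_TASKS)]
              for _ in range(30)]
for _ in range(100):
    population.sort(key=energy)
    elite = population[:10]
    offspring = [mutate(crossover(random.choice(elite), random.choice(elite)))
                 for _ in range(20)]
    population = elite + offspring

print("best estimated power (W):", round(energy(population[0]), 1))
```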


Author(s):  
Burak Kantarci ◽  
Hussein T. Mouftah

Cloud computing aims to migrate IT services to distant data centers in order to reduce the dependency of those services on limited local resources. It provides access to remote computing resources via Web services, while the end user is not aware of how the IT infrastructure is managed. Despite the novelties and advantages of cloud computing, the deployment of large numbers of servers and data centers introduces the challenge of high energy consumption. Additionally, transporting IT services over the Internet backbone adds to the energy consumption of the backbone infrastructure. In this chapter, the authors cover energy-efficient cloud computing studies in the data center, involving aspects such as the reduction of processing, storage, and data center network-related power consumption. They first provide a brief overview of existing approaches to keeping data centers cool and energy efficient, which can mainly be grouped into studies on virtualization techniques, energy-efficient data center network design schemes, and studies that monitor data center thermal activity with Wireless Sensor Networks (WSNs). The authors also present solutions that aim to reduce energy consumption in data centers by considering the communications aspects over the backbone of large-scale cloud systems.


Cloud computing has become one of the most advanced and promising technologies of the information technology era. It has also helped small and medium enterprises reduce costs by relying on cloud provider services. Resource scheduling with load balancing is one of the primary and most important goals of the cloud computing scheduling process. Resource scheduling in the cloud is a non-deterministic problem; it is responsible for assigning tasks to virtual machines (VMs) by servers or service providers in a way that increases resource utilization and performance, reduces response time, and keeps the whole system balanced. In this paper, we present a deep-learning-based resource scheduling and load balancing model for the cloud environment that uses a multidimensional queuing load optimization (MQLO) algorithm. A Multidimensional Resource Scheduling and Queuing Network (MRSQN) is used to detect overloaded servers and migrate their load to other VMs. An artificial neural network (ANN) serves as the deep learning classifier that identifies overloaded or underloaded servers or VMs and balances them based on parameters such as CPU, memory, and bandwidth. In particular, the proposed ANN-based MQLO algorithm improves the response time as well as the success rate. The simulation results show that the proposed ANN-based MQLO algorithm outperforms existing algorithms in terms of average success rate, resource scheduling efficiency, energy consumption, and response time.
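As a rough illustration of the classification step described above (not the authors' MQLO/MRSQN implementation), the sketch below trains a small neural network on synthetic CPU/memory/bandwidth utilisation triples and uses it to flag overloaded servers; the 0.8 threshold and all data are assumed.

```python
# Minimal sketch (an assumption, not the paper's MQLO/MRSQN code): a small neural
# network classifier that labels a VM/server as overloaded (1) or not (0) from its
# CPU, memory, and bandwidth utilisation, then suggests migration candidates.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
# Synthetic training data: utilisation triples in [0, 1]; label "overloaded" when
# the mean utilisation exceeds an assumed 0.8 threshold.
X = rng.uniform(0.0, 1.0, size=(500, 3))          # columns: cpu, memory, bandwidth
y = (X.mean(axis=1) > 0.8).astype(int)

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0)
clf.fit(X, y)

# Classify the current fleet and pick migration sources/targets.
servers = rng.uniform(0.0, 1.0, size=(6, 3))
labels = clf.predict(servers)
overloaded = [i for i, l in enumerate(labels) if l == 1]
underloaded = [i for i, l in enumerate(labels) if l == 0]
print("migrate load from", overloaded, "towards", underloaded)
```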


2021 ◽  
Vol 11 (13) ◽  
pp. 5849
Author(s):  
Nimra Malik ◽  
Muhammad Sardaraz ◽  
Muhammad Tahir ◽  
Babar Shah ◽  
Gohar Ali ◽  
...  

Cloud computing is a rapidly growing technology that has been implemented in various fields in recent years, such as business, research, industry, and computing. Cloud computing provides different services over the internet, thus eliminating the need for personalized hardware and other resources. Cloud computing environments face some challenges in terms of resource utilization, energy efficiency, heterogeneous resources, etc. Task scheduling and virtual machine (VM) consolidation techniques are used to tackle these issues. Task scheduling has been extensively studied in the literature, with different parameters and objectives. In this article, we address the problem of energy consumption and efficient resource utilization in virtualized cloud data centers. The proposed algorithm is based on task classification and thresholds for efficient scheduling and better resource utilization. In the first phase, workflow tasks are pre-processed to avoid bottlenecks by placing tasks with more dependencies and long execution times in separate queues. In the next step, tasks are classified based on the intensities of the required resources. Finally, Particle Swarm Optimization (PSO) is used to select the best schedules. Experiments were performed to validate the proposed technique, and comparative results obtained on benchmark datasets are presented. The results show the effectiveness of the proposed algorithm over the compared algorithms in terms of energy consumption, makespan, and load balancing.
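The abstract names PSO as the schedule-selection step; the self-contained sketch below shows, in hedged form, how PSO can search task-to-VM assignments against a makespan objective. The task lengths, VM speeds, and swarm coefficients are invented for illustration, and the task-classification and queuing phases of the proposed algorithm are not modelled.

```python
# Minimal sketch (illustrative, not the paper's exact scheduler): Particle Swarm
# Optimization searching for a task-to-VM assignment that minimises makespan.
# Task lengths, VM speeds, and PSO coefficients are assumed values.
import random

TASKS = [random.uniform(5, 50) for _ in range(15)]   # task lengths (MI, assumed)
VM_SPEED = [10, 20, 30]                              # VM capacities (MIPS, assumed)
N_VMS, N_PARTICLES, ITERS = len(VM_SPEED), 20, 100
W, C1, C2 = 0.7, 1.5, 1.5                            # inertia and acceleration terms

def decode(position):
    """Map each continuous coordinate to a VM index."""
    return [int(abs(x)) % N_VMS for x in position]

def makespan(position):
    finish = [0.0] * N_VMS
    for task, vm in zip(TASKS, decode(position)):
        finish[vm] += task / VM_SPEED[vm]
    return max(finish)

positions = [[random.uniform(0, N_VMS) for _ in TASKS] for _ in range(N_PARTICLES)]
velocities = [[0.0] * len(TASKS) for _ in range(N_PARTICLES)]
pbest = [p[:] for p in positions]
gbest = min(positions, key=makespan)[:]

for _ in range(ITERS):
    for i, pos in enumerate(positions):
        for d in range(len(TASKS)):
            r1, r2 = random.random(), random.random()
            velocities[i][d] = (W * velocities[i][d]
                                + C1 * r1 * (pbest[i][d] - pos[d])
                                + C2 * r2 * (gbest[d] - pos[d]))
            pos[d] += velocities[i][d]
        if makespan(pos) < makespan(pbest[i]):
            pbest[i] = pos[:]
            if makespan(pos) < makespan(gbest):
                gbest = pos[:]

print("best makespan:", round(makespan(gbest), 2), "assignment:", decode(gbest))
```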


2014 ◽  
Vol 1008-1009 ◽  
pp. 1513-1516
Author(s):  
Hai Na Song ◽  
Xiao Qing Zhang ◽  
Zhong Tang He

The cloud computing environment is regarded as a kind of multi-tenant computing mode. With virtualization as a supporting technology, cloud computing realizes the integration of multiple workloads on one server through the packaging and separation of virtual machines. Aiming at the contradiction between heterogeneous applications and a uniform shared resource pool, and using the idea of bin packing, the multidimensional resource scheduling problem is analyzed in this paper. We carry out example analyses of one-dimensional, two-dimensional, and three-dimensional resource scheduling. The results show that the resource utilization of cloud data centers can be improved greatly when resource scheduling is conducted after rationally reorganizing the heterogeneous demands.
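To make the bin-packing view concrete, here is a minimal first-fit-decreasing sketch for two-dimensional (CPU, memory) demands; the capacities and demand vectors are assumed, and this illustrates the general idea rather than the authors' scheduling method.

```python
# Minimal sketch (an illustration of the bin-packing view, not the paper's code):
# first-fit-decreasing placement of VM demands onto servers, where each demand and
# each server capacity is a (cpu, memory) vector. Capacities are assumed values.
def first_fit_decreasing(demands, capacity):
    """Place multidimensional demands into as few servers as possible."""
    servers = []  # each entry is the remaining capacity of one open server
    # Sort by the largest dimension first, the usual FFD heuristic.
    for demand in sorted(demands, key=max, reverse=True):
        for remaining in servers:
            if all(d <= r for d, r in zip(demand, remaining)):
                for k, d in enumerate(demand):
                    remaining[k] -= d
                break
        else:
            servers.append([c - d for c, d in zip(capacity, demand)])
    return len(servers)

# Heterogeneous demands, normalised to a unit-capacity server in both dimensions.
vms = [(0.6, 0.2), (0.2, 0.7), (0.5, 0.5), (0.3, 0.1), (0.4, 0.6), (0.1, 0.3)]
print("servers needed:", first_fit_decreasing(vms, (1.0, 1.0)))
```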


2008 ◽  
Vol 58 ◽  
pp. 83-89
Author(s):  
Ning Chang Liu ◽  
Zhao Feng Li

In the cement industry, many grinding systems are currently in operation. The traditional tube mill grinding process consumes a large amount of energy and is therefore inefficient, especially in the final cement grinding stage. The value and advantages of slag are increasingly recognized, but slag is difficult to grind. Furthermore, the disadvantages and shortcomings of grinding clinker compounded with slag to produce cement are obvious, even though this approach is widely adopted. A better process is to grind slag and clinker separately and then blend the two kinds of powder in a mixer. This paper introduces a design of a process to grind clinker and slag with one roller mill.


Author(s):  
Burak Kantarci ◽  
Hussein T. Mouftah

Cloud computing combines the advantages of several computing paradigms and introduces ubiquity in the provisioning of services such as software, platform, and infrastructure. Data centers, as the main hosts of cloud computing services, accommodate thousands of high-performance servers and high-capacity storage units. Offloading the local resources increases the energy consumption of the transport network and the data centers, although it is advantageous in terms of the energy consumption of the end hosts. This chapter presents a detailed survey of the existing mechanisms that aim at designing the Internet backbone with data centers with the objective of energy-efficient delivery of cloud services. The survey is followed by a case study where Mixed Integer Linear Programming (MILP)-based provisioning models and heuristics are used to guarantee either minimum-delay or maximum-power-saving cloud services, where high-performance data centers are assumed to be located at the core nodes of an IP-over-WDM network. The chapter concludes by summarizing the surveyed schemes in a taxonomy that includes their pros and cons, followed by a discussion of research challenges and opportunities.
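As a hedged illustration of the kind of MILP-based provisioning the case study relies on (the notation below is assumed and is not taken from the chapter), a power-minimising anycast model can be sketched as choosing active data centers, assigning each service demand to one of them, and routing the resulting flows over the backbone within link capacities:

```latex
% Illustrative power-minimising anycast provisioning model (assumed notation):
% y_d = 1 if data center d is active, x_{sd} = 1 if demand s is served by d,
% f^s_{ij} = flow of demand s on backbone link (i,j), lambda_s = demand volume.
\begin{align}
\min \quad & \sum_{d \in D} P^{\mathrm{dc}}_{d}\, y_d
          + \sum_{(i,j) \in E} P^{\mathrm{tr}} \sum_{s \in S} f^{s}_{ij} \\
\text{s.t.} \quad
& \sum_{d \in D} x_{sd} = 1 && \forall s \in S \\
& x_{sd} \le y_d && \forall s \in S,\; d \in D \\
& \sum_{j:(i,j)\in E} f^{s}_{ij} - \sum_{j:(j,i)\in E} f^{s}_{ji}
  = \lambda_s \Big( [\,i = \mathrm{src}(s)\,] - \sum_{d:\, \mathrm{loc}(d) = i} x_{sd} \Big)
  && \forall s \in S,\; i \in V \\
& \sum_{s \in S} f^{s}_{ij} \le C_{ij} && \forall (i,j) \in E
\end{align}
```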


Author(s):  
Federico Larumbe ◽  
Brunilde Sansò

This chapter addresses a set of optimization problems that arise in cloud computing regarding the location and resource allocation of the cloud computing entities: data centers, servers, software components, and virtual machines. The first problem is the location of new data centers and the selection of current ones, since those decisions have a major impact on network efficiency, energy consumption, Capital Expenditures (CAPEX), Operational Expenditures (OPEX), and pollution. The chapter also addresses the Virtual Machine Placement Problem: which server should host which virtual machine. The number of servers used, the cost, and the energy consumption depend strongly on those decisions. Network traffic between VMs and users, and between VMs themselves, is also an important factor in the Virtual Machine Placement Problem. The third problem presented in this chapter is the dynamic provisioning of VMs to clusters, or auto scaling, to minimize cost and energy consumption while satisfying the Service Level Agreements (SLAs). This important feature of cloud computing requires predictive models that precisely anticipate workload dimensions. For each problem, the authors describe and analyze models that have been proposed in the literature and in industry, explain their advantages and disadvantages, and present challenging future research directions.
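Of the three problems above, auto scaling lends itself to a short sketch; the toy predictive scaler below uses a moving-average forecast together with an assumed per-VM capacity and SLA headroom, and is only an illustration of the idea, not a model from the chapter.

```python
# Minimal sketch (an assumption, not a method from the chapter): an auto-scaler that
# forecasts the next interval's request rate with a moving average and provisions
# enough VMs to keep per-VM utilisation under an SLA headroom target.
import math
from collections import deque

VM_CAPACITY_RPS = 100      # requests/s one VM can serve within the SLA (assumed)
TARGET_UTILISATION = 0.7   # keep VMs below 70% of their capacity (assumed headroom)
WINDOW = 5                 # moving-average window, in monitoring intervals

def required_vms(predicted_rps):
    """Smallest VM count that keeps utilisation under the target."""
    return max(1, math.ceil(predicted_rps / (VM_CAPACITY_RPS * TARGET_UTILISATION)))

history = deque(maxlen=WINDOW)
workload_trace = [120, 180, 260, 400, 520, 610, 480, 300]  # observed requests/s

for observed in workload_trace:
    history.append(observed)
    predicted = sum(history) / len(history)   # naive moving-average forecast
    print(f"observed={observed:4d} rps  predicted={predicted:6.1f} rps  "
          f"scale to {required_vms(predicted)} VMs")
```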

