Efficient Workflow Scheduling Algorithm for Cloud Computing System: A Dynamic Priority-Based Approach

2018 ◽  
Vol 43 (12) ◽  
pp. 7945-7960 ◽  
Author(s):  
Indrajeet Gupta ◽  
Madhu Sudan Kumar ◽  
Prasanta K. Jana
Author(s):  
S. Rekha ◽  
C. Kalaiselvi

This paper studies the delay-optimal virtual machine (VM) scheduling problem in cloud computing systems that have a fixed amount of infrastructure resources, such as CPU, memory and storage, in the resource pool. The cloud computing system provides VMs as services to users. Cloud users request various types of VMs randomly over time, and the requested VM-hosting durations vary widely. A multi-level queue scheduling algorithm partitions the ready queue into several separate queues. Processes are permanently assigned to one queue, generally based on some property of the process, such as memory size, process priority or process type, and each queue has its own scheduling algorithm. In addition, there must be scheduling among the queues, commonly implemented as fixed-priority preemptive scheduling; a process that waits too long in a lower-priority queue may be moved to a higher-priority queue. Multi-level queue scheduling is performed using the Particle Swarm Optimization algorithm (MQPSO). A scheme that combines Shortest-Job-First (SJF) buffering with the Min-Min Best Fit (MMBF) scheduling algorithm, i.e., SJF-MMBF, is proposed to determine the solutions. Another scheme that combines SJF buffering with an Extreme Learning Machine (ELM)-based scheduling algorithm, i.e., SJF-ELM, is further proposed to avoid the potential job starvation in SJF-MMBF. The simulation results also illustrate that SJF-ELM is optimal in a heavily loaded and highly dynamic environment and is efficient in provisioning the average job-hosting rate.
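The SJF-buffering-plus-best-fit idea described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the job tuples, the single-capacity machine model, and the deferral of unplaceable jobs are all simplifying assumptions.

```python
def sjf_mmbf_schedule(jobs, machines):
    """Buffer jobs shortest-first (SJF), then place each job on the
    machine whose remaining capacity fits it most tightly (best fit).
    jobs: list of (job_id, duration, demand); machines: {name: free_capacity}."""
    buffer = sorted(jobs, key=lambda j: j[1])  # SJF: shortest duration first
    placement = {}
    for job_id, duration, demand in buffer:
        # Leftover capacity on each machine that could host the job
        fits = {m: cap - demand for m, cap in machines.items() if cap >= demand}
        if not fits:
            placement[job_id] = None  # deferred: this is where starvation can arise
            continue
        best = min(fits, key=fits.get)  # tightest fit wins
        machines[best] -= demand
        placement[job_id] = best
    return placement
```

Note that jobs which never fit are deferred indefinitely here, which is exactly the starvation risk the abstract says SJF-ELM is designed to avoid.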


Author(s):  
Poria Pirozmand ◽  
Ali Asghar Rahmani Hosseinabadi ◽  
Maedeh Farrokhzad ◽  
Mehdi Sadeghilalimi ◽  
Seyedsaeid Mirkamali ◽  
...  

Abstract: Cloud computing systems are a form of shared infrastructure that has been in demand since its inception. In these systems, clients can access the available services according to their needs, without knowing where a service is located or how it is delivered, and pay only for the service used. Like other systems, cloud computing faces challenges. Because of the wide array of clients and the variety of services available, scheduling, and with it energy consumption, is an essential challenge of such systems. Services should therefore be provided to users in a way that minimizes both provider and consumer cost as well as energy consumption, and this requires an optimal scheduling algorithm. In this paper, we present a two-step hybrid method for energy- and time-aware task scheduling that combines a Genetic Algorithm with an Energy-Conscious Scheduling Heuristic. The first step prioritizes tasks, and the second step assigns tasks to processors. We prioritized tasks to generate the initial chromosomes, and used the Energy-Conscious Scheduling Heuristic, an energy-aware model, to assign tasks to processors. The simulation results demonstrate that the proposed algorithm outperforms other methods.
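A genetic algorithm for task-to-processor assignment of the kind outlined above can be sketched briefly. This is a toy illustration under stated assumptions, not the authors' method: the fitness function, the 0.5 energy weight, truncation selection, and one-point crossover are all hypothetical choices.

```python
import random

def ga_energy_schedule(durations, energy_rates, n_procs,
                       pop_size=20, generations=50, mut_rate=0.1):
    """Toy GA: a chromosome maps each task index to a processor; fitness
    penalizes both makespan and total energy (weights are assumptions)."""
    n_tasks = len(durations)

    def fitness(chrom):
        loads = [0.0] * n_procs
        energy = 0.0
        for task, proc in enumerate(chrom):
            loads[proc] += durations[task]
            energy += durations[task] * energy_rates[proc]
        return max(loads) + 0.5 * energy  # lower is better

    pop = [[random.randrange(n_procs) for _ in range(n_tasks)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        survivors = pop[: pop_size // 2]           # truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, n_tasks)     # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < mut_rate:         # mutation: reassign one task
                child[random.randrange(n_tasks)] = random.randrange(n_procs)
            children.append(child)
        pop = survivors + children
    return min(pop, key=fitness)
```

In the paper's two-step scheme, the initial chromosomes would come from the task-prioritization step rather than being random as they are here.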


2014 ◽  
Vol 915-916 ◽  
pp. 1382-1385 ◽  
Author(s):  
Bai Lin Pan ◽  
Yan Ping Wang ◽  
Han Xi Li ◽  
Jie Qian

With the expanding scope of cloud computing applications, the number and types of users increase accordingly, and the demands placed on cloud computing resources grow as well. Task scheduling and resource allocation are key technologies in cloud computing, mainly responsible for assigning user jobs to the appropriate resources for execution. However, existing scheduling algorithms do not fully consider that users' resource demands differ, and do not adequately meet the resource requirements of different users. Based on the quality-of-service demands of cloud computing and its original scheduling algorithms, a computing-power scheduling algorithm based on QoS constraints is proposed to address the task scheduling and resource allocation problems, improving the overall efficiency of the cloud computing system.
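QoS-constrained scheduling of the kind gestured at above can be sketched as a filter-then-select loop. This is a hypothetical sketch, not the paper's algorithm: the deadline and bandwidth constraints, the earliest-deadline ordering, and the field names are all assumptions for illustration.

```python
def qos_schedule(jobs, resources):
    """Assign each job to the resource that finishes it earliest among
    those satisfying its QoS constraints (deadline and minimum bandwidth).
    jobs: dicts with id, size, deadline, min_bw;
    resources: dicts with id, speed, bandwidth, busy_until."""
    schedule = {}
    for job in sorted(jobs, key=lambda j: j["deadline"]):  # earliest deadline first
        feasible = []
        for r in resources:
            finish = r["busy_until"] + job["size"] / r["speed"]
            if finish <= job["deadline"] and r["bandwidth"] >= job["min_bw"]:
                feasible.append((finish, r["id"], r))
        if not feasible:
            schedule[job["id"]] = None  # QoS constraints cannot be met
            continue
        finish, _, best = min(feasible)
        best["busy_until"] = finish     # resource is occupied until then
        schedule[job["id"]] = best["id"]
    return schedule
```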


2018 ◽  
Vol 7 (2) ◽  
pp. 26 ◽  
Author(s):  
Hirofumi Miyajima ◽  
Norio Shiratori ◽  
Hiromi Miyajima

The use of cloud computing systems, a basic technology supporting ICT, is expanding. However, as the number of terminals connected to the cloud increases, the limits of its capacity are becoming apparent, leading to significant processing delays. The edge computing system has been proposed as an architecture to improve this, and is known as a new paradigm complementing the conventional cloud system. In the conventional cloud system, a terminal sends all data to the cloud, and the cloud returns the result to the terminal or to a thing directly connected to it. In the edge system, by contrast, multiple servers called edges are placed between the cloud and the terminal (or thing), connected directly or at close distance. Consider, then, the case of machine learning, which requires big data. The purpose of learning is to discover the relationships (information) hidden in the collected data. To realize this, a system with several parameters is assumed, and the parameters are estimated by repeatedly updating them with learning data. There is also the problem of security for the learning data: users of cloud computing cannot escape concerns about the risk of information leakage. How can we build a cloud computing system that avoids such risks? Secure multiparty computation (SMC) is known as one method of realizing safe computation, and many learning methods that take SMC into account have been proposed. What kind of learning method, then, is suitable for edge computing with SMC? In this paper, a learning method suitable for edge computing with SMC is proposed. It is demonstrated using an edge system composed of a client and m servers. The learning data are partitioned into m subsets, one per server; learning is performed simultaneously on each server, and the system parameters are updated at the client using the servers' results.
The idea of the learning method is illustrated using the back-propagation (BP) algorithm for neural networks, and its effectiveness is shown by numerical simulations.
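The client-and-m-servers update scheme described above can be sketched with a deliberately tiny model. This is a minimal illustration, not the paper's method: it uses a one-parameter linear model with squared error instead of a neural network, omits the SMC layer entirely, and the shard-splitting and learning-rate choices are assumptions. The structural point it shows is that each server sees only its own data shard and sends back a gradient, which the client aggregates.

```python
def local_gradient(shard, w):
    """Run on one edge server: BP gradient of the squared error on its
    own data shard. Only this gradient, never the raw data, leaves the server."""
    g = 0.0
    for x, y in shard:
        g += 2 * (w * x - y) * x  # d/dw of (w*x - y)^2
    return g / len(shard)

def edge_train(data, m=3, lr=0.05, epochs=200):
    """Client-side sketch: split the data into m shards (one per server),
    average the servers' gradients, and update the shared parameter w."""
    shards = [data[i::m] for i in range(m)]  # round-robin partition across m servers
    w = 0.0
    for _ in range(epochs):
        grads = [local_gradient(s, w) for s in shards]  # parallel in a real edge system
        w -= lr * sum(grads) / m                        # client aggregates and updates
    return w
```

On data generated by y = 2x, the aggregated updates recover w close to 2, even though no single server ever holds the full dataset.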

