Resource Scheduling for Tasks of a Workflow in Cloud Environment

Author(s):  
Kamalesh Karmakar ◽  
Rajib K Das ◽  
Sunirmal Khatua

2015 ◽
Vol 2015 ◽  
pp. 1-10 ◽  
Author(s):  
Zhijia Chen ◽  
Yuanchang Zhu ◽  
Yanqiang Di ◽  
Shaochong Feng

The resources in a cloud environment are characterized by large scale, diversity, and heterogeneity. Moreover, user requirements for cloud computing resources are commonly characterized by uncertainty and imprecision. Therefore, to improve the quality of cloud computing services, not only should traditional criteria such as cost and bandwidth be satisfied, but particular emphasis should also be placed on extended criteria such as system friendliness. This paper proposes a dynamic resource scheduling method based on fuzzy control theory. First, a resource requirements prediction model is established. Then the relationships between resource availability and resource requirements are derived. Afterwards, fuzzy control theory is adopted to realize a friendly match between user needs and resource availability. Results show that this approach improves resource scheduling efficiency and the quality of service (QoS) of cloud computing.
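As an illustration only (the paper does not publish its membership functions or rule base), the following Python sketch shows one way a fuzzy match between a predicted resource demand and node availability could be scored; the triangular membership functions, rule weights, and node data are all assumptions.

# Hypothetical sketch of fuzzy matching between predicted demand and node
# availability; membership functions and rule weights are illustrative only.

def triangular(x, a, b, c):
    """Triangular membership function on [a, c] with peak at b."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

def fuzzy_load(utilization):
    """Fuzzify a utilization ratio (0..1) into low/medium/high degrees."""
    return {
        "low": triangular(utilization, -0.01, 0.0, 0.5),
        "medium": triangular(utilization, 0.2, 0.5, 0.8),
        "high": triangular(utilization, 0.5, 1.0, 1.01),
    }

def match_score(predicted_demand, available_capacity):
    """Score how 'friendly' a node is for a predicted demand (higher is better)."""
    if available_capacity <= 0:
        return 0.0
    load_after = min(predicted_demand / available_capacity, 1.0)
    degrees = fuzzy_load(load_after)
    # Simple assumed rule base: low residual load is good, high residual load is bad.
    return 1.0 * degrees["low"] + 0.5 * degrees["medium"] + 0.0 * degrees["high"]

def schedule(predicted_demand, nodes):
    """Pick the node whose fuzzy match score for this demand is highest."""
    return max(nodes, key=lambda n: match_score(predicted_demand, n["free_cpu"]))

if __name__ == "__main__":
    nodes = [
        {"name": "node-a", "free_cpu": 2.0},
        {"name": "node-b", "free_cpu": 8.0},
    ]
    print(schedule(1.5, nodes)["name"])  # expected: node-b (lower residual load)

A real implementation would fuzzify several dimensions (CPU, memory, bandwidth) and feed the membership degrees through the paper's rule base; the single-dimension score above only shows the mechanism.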


2014 ◽  
Vol 16 (2) ◽  
pp. 95-101
Author(s):  
A. Aalan Babu ◽  
S. Roselin Mary

Kybernetes ◽  
2016 ◽  
Vol 45 (10) ◽  
pp. 1524-1541 ◽  
Author(s):  
Junfei Chu ◽  
Jie Wu ◽  
Qingyuan Zhu ◽  
Jiasen Sun

Purpose: Resource scheduling is the study of how to effectively measure, evaluate, analyze, and dispatch resources in order to meet the demands of corresponding tasks. Aiming at the problem of resource scheduling in the private cloud environment, the purpose of this paper is to propose a resource scheduling approach from an efficiency-priority point of view.
Design/methodology/approach: To measure the computational efficiencies of the resource nodes in a private cloud environment, the data envelopment analysis (DEA) approach is incorporated and a suitable DEA model is proposed. Then, based on the efficiency scores calculated by the proposed DEA model for the resource nodes, the 0-1 programming technique is introduced to build a simple resource scheduling model.
Findings: The proposed DEA model not only ranks all the decision-making units into different positions but also handles non-discretionary inputs and undesirable outputs when evaluating the resource nodes. Furthermore, the resource scheduling model can generate, for the calculation tasks, an optimal resource scheduling scheme with the highest total computational efficiency.
Research limitations/implications: The proposed method may also be applied to resource scheduling in public and hybrid cloud environments.
Practical implications: The proposed approach achieves the goal of resource scheduling in private cloud computing platforms by attaining the highest total computational efficiency, which is significant in practice.
Originality/value: This paper takes an efficiency-priority point of view to solve the problem of resource scheduling in private cloud environments.
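For readers unfamiliar with DEA, the sketch below solves a standard input-oriented CCR multiplier model with scipy.optimize.linprog and then uses a greedy selection as a stand-in for the paper's 0-1 programming step. It is not the authors' exact model (which additionally handles non-discretionary inputs and undesirable outputs and ranks all units), and the node data and the choice of k are illustrative assumptions.

# Minimal DEA sketch (not the paper's exact model): CCR multiplier form per node,
# followed by a greedy stand-in for the 0-1 scheduling step. Data are made up.
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(inputs, outputs, o):
    """Efficiency of node o under the input-oriented CCR multiplier model.

    inputs:  (n, m) array, one row of m inputs per resource node
    outputs: (n, s) array, one row of s outputs per resource node
    """
    n, m = inputs.shape
    s = outputs.shape[1]
    # Decision variables: output weights u (length s), then input weights v (length m).
    c = np.concatenate([-outputs[o], np.zeros(m)])                   # maximize u.y_o
    A_ub = np.hstack([outputs, -inputs])                             # u.y_j - v.x_j <= 0
    b_ub = np.zeros(n)
    A_eq = np.concatenate([np.zeros(s), inputs[o]]).reshape(1, -1)   # v.x_o = 1
    b_eq = np.array([1.0])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * (s + m), method="highs")
    return -res.fun  # efficiency score in (0, 1]

if __name__ == "__main__":
    # Illustrative node data: inputs = (CPU cores, memory GB), output = throughput.
    X = np.array([[4.0, 8.0], [8.0, 16.0], [4.0, 16.0]])
    Y = np.array([[100.0], [150.0], [90.0]])
    scores = [ccr_efficiency(X, Y, o) for o in range(len(X))]
    # Greedy stand-in for the 0-1 scheduling model: pick the k most efficient nodes.
    k = 2
    chosen = np.argsort(scores)[::-1][:k]
    print("efficiencies:", np.round(scores, 3), "chosen nodes:", chosen.tolist())

The paper's actual scheduling step would instead maximize the total efficiency of the selected nodes subject to task and capacity constraints via 0-1 programming; the greedy pick above only conveys the "efficiency priority" idea.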


Cloud computing is becoming one of the most advanced and promising technologies of the information technology era. It has also helped small and medium enterprises reduce costs by relying on cloud provider services. Resource scheduling with load balancing is one of the primary and most important goals of the cloud scheduling process. Resource scheduling in the cloud is a non-deterministic problem: it is responsible for assigning tasks to virtual machines (VMs) on behalf of servers or service providers in a way that increases resource utilization and performance, reduces response time, and keeps the whole system balanced. In this paper, we present a deep-learning-based resource scheduling and load balancing model for the cloud environment using a multidimensional queuing load optimization (MQLO) algorithm. A Multidimensional Resource Scheduling and Queuing Network (MRSQN) is used to detect overloaded servers and migrate their load to other VMs. An artificial neural network (ANN) serves as the deep-learning classifier that identifies overloaded or underloaded servers or VMs and rebalances them based on basic parameters such as CPU, memory, and bandwidth. In particular, the proposed ANN-based MQLO algorithm improves the response time as well as the success rate. Simulation results show that the proposed ANN-based MQLO algorithm outperforms existing algorithms in terms of average success rate, resource scheduling efficiency, energy consumption, and response time.
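As a rough sketch of the classification step only (not the full MQLO/MRSQN pipeline), the following Python example trains a small neural network to label VMs as overloaded or underloaded from CPU, memory, and bandwidth utilization. The synthetic training data, the utilization ranges, and the network size are assumptions, not values from the paper.

# Hedged sketch: a small ANN classifier labeling VMs as overloaded (1) or
# underloaded (0) from (cpu, memory, bandwidth) utilization. Data are synthetic.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Assumed training data: overloaded VMs cluster in high utilization,
# underloaded VMs in low utilization; columns are (cpu, memory, bandwidth) in [0, 1].
X_over = rng.uniform(0.7, 1.0, size=(250, 3))
X_under = rng.uniform(0.0, 0.6, size=(250, 3))
X = np.vstack([X_over, X_under])
y = np.concatenate([np.ones(250, dtype=int), np.zeros(250, dtype=int)])

clf = MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=2000, random_state=0)
clf.fit(X, y)

# Classify the current VMs and pick migration sources/targets.
vms = np.array([
    [0.95, 0.90, 0.85],   # likely overloaded
    [0.20, 0.30, 0.10],   # likely underloaded
])
labels = clf.predict(vms)
sources = np.where(labels == 1)[0]   # candidates to migrate load away from
targets = np.where(labels == 0)[0]   # candidates to receive migrated load
print("overloaded:", sources.tolist(), "underloaded:", targets.tolist())

In the paper's scheme, the classifier's output would then drive the MQLO migration decisions; the threshold-style synthetic labels above only stand in for whatever monitoring data a real deployment would collect.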

