Multiconstraint task scheduling in multi-processor system by neural network

Author(s):  
Ruey-Maw Chen ◽  
Yueh-Min Huang
2020 ◽  
Vol 13 (3) ◽  
pp. 261-282

Author(s):  
Mohammad Khalid Pandit ◽  
Roohie Naaz Mir ◽  
Mohammad Ahsan Chishti

Purpose
The intelligence in the Internet of Things (IoT) can be embedded by analyzing the huge volumes of data it generates in an ultra-low-latency environment. The computational latency incurred by a cloud-only solution can be brought down significantly by the fog computing layer, which offers a computing infrastructure that minimizes latency in service delivery and execution. For this purpose, a task scheduling policy based on reinforcement learning (RL) is developed that achieves optimal resource utilization and minimum task execution time while significantly reducing communication costs during distributed execution.

Design/methodology/approach
To realize this, the authors propose a two-level neural network (NN)-based task scheduling system, where the first-level NN (a feed-forward neural network/convolutional neural network [FFNN/CNN]) determines whether a data stream can be analyzed (executed) in the resource-constrained environment (edge/fog) or should be forwarded directly to the cloud. The second-level NN (an RL module) schedules all tasks sent by the level-1 NN to the fog layer among the available fog devices. This real-time task assignment policy is used to minimize both the total computational latency (makespan) and the communication costs.

Findings
Experimental results indicated that the RL technique works better than the computationally infeasible greedy approach for task scheduling, and that combining RL with a task clustering algorithm reduces communication costs significantly.

Originality/value
The proposed algorithm fundamentally solves the problem of task scheduling in real-time fog-based IoT with best resource utilization, minimum makespan and minimum communication cost between tasks.
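The abstract does not specify the RL formulation used in the second-level module; as a minimal illustrative sketch, a tabular Q-learning agent could assign each arriving task to one of the fog devices, with a reward that penalizes growth of the overall makespan. The state encoding, device count and reward shape below are assumptions, not the paper's design:

```python
import random

class QTaskScheduler:
    """Minimal tabular Q-learning task-assignment sketch (illustrative only;
    the paper's exact state/action/reward design is not given in the abstract)."""

    def __init__(self, n_devices, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.n = n_devices
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon
        self.q = {}                    # (state, action) -> estimated value
        self.load = [0.0] * n_devices  # accumulated busy time per fog device

    def _state(self):
        # Coarse state: ranking of devices by current load (keeps the table small)
        return tuple(sorted(range(self.n), key=lambda d: self.load[d]))

    def _choose(self, state):
        if random.random() < self.epsilon:
            return random.randrange(self.n)  # explore
        return max(range(self.n), key=lambda a: self.q.get((state, a), 0.0))

    def assign(self, task_time):
        s = self._choose, self._state()
        s = self._state()
        a = self._choose(s)
        makespan_before = max(self.load)
        self.load[a] += task_time
        # Reward: penalize any increase in the overall makespan
        reward = -(max(self.load) - makespan_before)
        s2 = self._state()
        best_next = max(self.q.get((s2, a2), 0.0) for a2 in range(self.n))
        old = self.q.get((s, a), 0.0)
        self.q[(s, a)] = old + self.alpha * (reward + self.gamma * best_next - old)
        return a

random.seed(0)
sched = QTaskScheduler(n_devices=3)
tasks = [random.uniform(1, 5) for _ in range(200)]
for t in tasks:
    sched.assign(t)
print(round(max(sched.load), 2))  # resulting makespan across the 3 devices
```

The learned policy tends toward assigning tasks to lightly loaded devices, since that choice incurs the smallest makespan penalty.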


IJARCCE ◽  
2019 ◽  
Vol 8 (5) ◽  
pp. 124-131
Author(s):  
Suhani Kumari ◽  
Himanshu Yadav ◽  
Chetan Agrawal

2014 ◽  
Vol 12 (4) ◽  
pp. 327
Author(s):  
Anurag Agarwal ◽  
Selcuk Colak ◽  
Jason Deane ◽  
Terry Rakes

This paper addresses the task scheduling problem of minimizing the makespan when scheduling n tasks on m machines (resources), where the tasks follow a precedence relation and preemption is not allowed. The machines (resources) are all identical and a task needs only one machine for processing. Like most scheduling problems, this one is NP-hard in nature, making it difficult to find exact solutions for larger problems in reasonable computational time. Heuristic and metaheuristic approaches are therefore needed to solve this type of problem. This paper proposes a metaheuristic approach - called NeuroGenetic - which is a combination of an augmented neural network and a genetic algorithm. The augmented neural network approach is itself a hybrid of a heuristic approach and a neural network approach. The NeuroGenetic approach is tested against some popular test problems from the literature, and the results indicate that it performs significantly better than either the augmented neural network or the genetic algorithm alone.
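The problem described above (n precedence-constrained, non-preemptive tasks on m identical machines) is commonly attacked with list scheduling as a baseline; the sketch below implements such a greedy baseline, not the paper's NeuroGenetic method. The priority rule (earliest-ready-first) is an assumption for illustration:

```python
import heapq

def list_schedule(durations, preds, m):
    """Greedy list scheduling for n tasks with precedence constraints on
    m identical machines, no preemption. Baseline sketch only."""
    n = len(durations)
    indeg = [len(preds[t]) for t in range(n)]
    succs = [[] for _ in range(n)]
    for t in range(n):
        for p in preds[t]:
            succs[p].append(t)
    finish = [0.0] * n
    ready_at = [0.0] * n                      # earliest start allowed by predecessors
    machines = [(0.0, i) for i in range(m)]   # (free_time, machine_id) min-heap
    heapq.heapify(machines)
    ready = [t for t in range(n) if indeg[t] == 0]
    scheduled = 0
    while scheduled < n:
        # Simple priority rule: pick the ready task that can start earliest
        ready.sort(key=lambda t: ready_at[t])
        t = ready.pop(0)
        free, mid = heapq.heappop(machines)
        start = max(free, ready_at[t])
        finish[t] = start + durations[t]
        heapq.heappush(machines, (finish[t], mid))
        scheduled += 1
        for s in succs[t]:                    # release successors
            ready_at[s] = max(ready_at[s], finish[t])
            indeg[s] -= 1
            if indeg[s] == 0:
                ready.append(s)
    return max(finish)                        # makespan

# Diamond precedence graph 0 -> {1, 2} -> 3 on two machines
durations = [2.0, 3.0, 1.0, 2.0]
preds = [[], [0], [0], [1, 2]]
print(list_schedule(durations, preds, 2))     # → 7.0 (critical path 0→1→3)
```

Metaheuristics such as NeuroGenetic search over task priority orderings to beat fixed rules like this one.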


2020 ◽  
Vol 17 (9) ◽  
pp. 4213-4218
Author(s):  
H. S. Madhusudhan ◽  
T. Satish Kumar ◽  
G. Mahesh

Cloud computing provides on-demand services over the internet using a network of remote servers. A pivotal role in any cloud environment is task scheduling, and virtual machine scheduling plays a key role in maintaining Quality of Service (QoS) and Service Level Agreement (SLA) guarantees. Task scheduling is the process of assigning tasks (user requests) to particular resources, and it is an NP-complete problem. The primary objectives of scheduling algorithms are to minimize makespan and improve resource utilization. In this research work, an attempt is made to apply an Artificial Neural Network (ANN), a machine learning technique, to task scheduling. It is observed that a neural network trained with a genetic algorithm outperforms the default genetic algorithm by an average efficiency of 25.56%.
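The abstract does not detail the network topology or the GA encoding; one common way to combine the two, sketched below under those assumptions, is to let a genetic algorithm evolve the weights of a tiny network that scores each VM for an incoming task, with fitness equal to the resulting makespan (all hyperparameters here are hypothetical):

```python
import random

def nn_score(weights, features):
    """Tiny linear 'network' scoring one VM for one task; weights are evolved
    by the GA. Hypothetical sketch: the paper's topology is not specified."""
    return sum(w * f for w, f in zip(weights, features))

def makespan(weights, tasks, n_vms):
    load = [0.0] * n_vms
    for t in tasks:
        # Features per VM: negated current load, negated task length, bias
        vm = max(range(n_vms), key=lambda v: nn_score(weights, (-load[v], -t, 1.0)))
        load[vm] += t
    return max(load)

def evolve(tasks, n_vms, pop_size=20, gens=30, seed=1):
    rng = random.Random(seed)
    pop = [[rng.uniform(-1, 1) for _ in range(3)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda w: makespan(w, tasks, n_vms))  # fitness: low makespan
        survivors = pop[: pop_size // 2]
        children = []
        for _ in range(pop_size - len(survivors)):
            a, b = rng.sample(survivors, 2)
            # Averaging crossover plus Gaussian mutation
            children.append([(x + y) / 2 + rng.gauss(0, 0.1) for x, y in zip(a, b)])
        pop = survivors + children
    pop.sort(key=lambda w: makespan(w, tasks, n_vms))
    return pop[0]

rng = random.Random(7)
tasks = [rng.uniform(1, 10) for _ in range(50)]
best = evolve(tasks, n_vms=4)
print(round(makespan(best, tasks, 4), 2))  # makespan under the evolved policy
```

The GA typically discovers a positive weight on the negated-load feature, i.e. a learned preference for the least-loaded VM.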

