Learning Heterogeneous DAG Tasks Scheduling Policies With Efficient Neural Network Evolution

Author(s):  
Weiyu Tan ◽  
Tian Ling ◽  
Zhenyu Zeng ◽  
Ying Tao ◽  
Yuxia Cheng
2019 ◽  
Vol 11 (7) ◽  
pp. 1826 ◽  
Author(s):  
Yuxia Cheng ◽  
Zhiwei Wu ◽  
Kui Liu ◽  
Qing Wu ◽  
Yu Wang

Task scheduling is critical for improving system performance in distributed heterogeneous computing environments. The Directed Acyclic Graph (DAG) task scheduling problem is NP-complete, and finding an optimal schedule is hard. Due to its importance, the problem has been studied extensively in the literature. However, many traditional heuristic algorithms are based on greedy methods and do not consider scheduling tasks across trusted and untrusted entities, which makes the problem more complicated; a large optimization space therefore remains to be explored. In this paper, we propose a trust-aware adaptive DAG task scheduling algorithm that combines reinforcement learning with Monte Carlo Tree Search (MCTS). The scheduling problem is formulated as a reinforcement learning model, and an efficient state space, action space, and reward function are designed to train a policy-gradient-based REINFORCE agent. The MCTS method is used to determine the actual scheduling policy when DAG tasks execute simultaneously on trusted and untrusted entities. By exploiting the algorithm's ability to explore long-term reward, the proposed approach achieves good scheduling policies while guaranteeing that trusted tasks are scheduled within trusted entities. Experimental results show the effectiveness of the proposed algorithm compared with the classic HEFT and CPOP algorithms.
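The abstract does not include an implementation, but a minimal, purely illustrative sketch of the kind of formulation it describes may help: a REINFORCE agent that repeatedly assigns ready DAG tasks to processors, with a hard trust mask so trusted tasks only run on trusted processors, and the negative makespan as the terminal reward. The toy DAG, costs, trust labels, tabular (task, processor) policy, and hyperparameters are all assumptions for illustration, not the authors' design, and the MCTS component is omitted.

```python
# Illustrative sketch (not the authors' code): REINFORCE for trust-aware
# list scheduling of a small DAG onto trusted/untrusted processors.
# The DAG, costs, trust labels, and feature design are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

# Toy DAG: task -> list of predecessors; per-processor execution costs.
preds = {0: [], 1: [0], 2: [0], 3: [1, 2]}
cost = np.array([[4.0, 2.0], [3.0, 5.0], [2.0, 2.0], [6.0, 3.0]])  # [task, proc]
trusted_task = np.array([True, False, True, False])
trusted_proc = np.array([True, False])  # proc 0 trusted, proc 1 untrusted
n_tasks, n_procs = cost.shape

theta = np.zeros((n_tasks, n_procs))  # one logit per (task, processor) pair


def run_episode(theta, greedy=False):
    """Schedule the whole DAG once; return (makespan, log-policy gradients)."""
    finish = np.full(n_tasks, -1.0)   # finish time of each task (-1 = unscheduled)
    proc_free = np.zeros(n_procs)     # time at which each processor becomes free
    grads = np.zeros_like(theta)
    while (finish < 0).any():
        # Ready tasks: unscheduled tasks whose predecessors have all finished.
        ready = [t for t in range(n_tasks)
                 if finish[t] < 0 and all(finish[p] >= 0 for p in preds[t])]
        # Build action logits with a hard trust mask:
        # trusted tasks may only run on trusted processors.
        logits, actions = [], []
        for t in ready:
            for p in range(n_procs):
                if trusted_task[t] and not trusted_proc[p]:
                    continue
                logits.append(theta[t, p])
                actions.append((t, p))
        logits = np.array(logits)
        probs = np.exp(logits - logits.max())
        probs /= probs.sum()
        i = int(np.argmax(probs)) if greedy else int(rng.choice(len(actions), p=probs))
        t, p = actions[i]
        # Accumulate the gradient of log pi(a|s) for the chosen softmax action.
        for j, (tj, pj) in enumerate(actions):
            grads[tj, pj] -= probs[j]
        grads[t, p] += 1.0
        # Simulate execution: start after predecessors and the processor are free.
        ready_time = max([finish[q] for q in preds[t]], default=0.0)
        start = max(ready_time, proc_free[p])
        finish[t] = start + cost[t, p]
        proc_free[p] = finish[t]
    return finish.max(), grads


# REINFORCE: reward = -makespan, with a moving baseline to reduce variance.
lr, baseline = 0.05, None
for episode in range(500):
    makespan, grads = run_episode(theta)
    reward = -makespan
    baseline = reward if baseline is None else 0.9 * baseline + 0.1 * reward
    theta += lr * (reward - baseline) * grads

print("greedy makespan after training:", run_episode(theta, greedy=True)[0])
```

In the approach the abstract describes, MCTS would additionally be layered on top of the learned policy at scheduling time to look ahead over candidate assignments; this sketch stops at greedy use of the learned policy.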


2021 ◽  
Vol 2 (4) ◽  
pp. 1-8
Author(s):  
Martin Happ ◽  
Matthias Herlich ◽  
Christian Maier ◽  
Jia Lei Du ◽  
Peter Dorfinger

Modeling communication networks to predict performance metrics such as delay and jitter is important for evaluating and optimizing them. In recent years, neural networks have been used for this purpose and may offer advantages over existing models, for example those from queueing theory. One such network is RouteNet, which is based on graph neural networks but relies on simplifying assumptions. One key simplification is the restriction to a single scheduling policy, which describes how packets of different flows are prioritized for transmission. In this paper we propose a solution that supports multiple scheduling policies (Strict Priority, Deficit Round Robin, Weighted Fair Queueing) and can handle mixed scheduling policies within a single communication network. Our solution is based on the RouteNet architecture and was developed as part of the "Graph Neural Network Challenge". With our extended model we achieved a mean absolute percentage error under 1% on the evaluation data set from the challenge. This takes neural-network-based delay estimation one step closer to practical use.
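As a hedged illustration of the core idea (not the challenge submission itself), the untrained NumPy toy below sketches a RouteNet-style link/path message-passing step in which each link's scheduling policy (Strict Priority, DRR, WFQ) is one-hot encoded into its initial state, so that mixed policies in one network become part of the representation. The hidden size, topology, update rules, and readout are assumptions for illustration.

```python
# Illustrative sketch (not the challenge solution): a stripped-down
# RouteNet-style message-passing step where each link state is built
# from its capacity plus a one-hot encoding of its scheduling policy.
import numpy as np

rng = np.random.default_rng(1)
H = 8  # hidden size for link and path states (hypothetical)

POLICIES = {"SP": 0, "DRR": 1, "WFQ": 2}

# Toy scenario: 3 links, 2 paths (flows) given as sequences of link ids.
link_capacity = np.array([10.0, 5.0, 8.0])
link_policy = ["SP", "WFQ", "DRR"]          # mixed policies in one network
paths = [[0, 1], [1, 2]]
path_traffic = np.array([3.0, 4.0])

def one_hot(i, n):
    v = np.zeros(n)
    v[i] = 1.0
    return v

# Initial states: links encode capacity + policy, paths encode traffic.
link_state = np.stack([
    np.concatenate(([c], one_hot(POLICIES[p], 3), np.zeros(H - 4)))
    for c, p in zip(link_capacity, link_policy)
])
path_state = np.stack([
    np.concatenate(([t], np.zeros(H - 1))) for t in path_traffic
])

# Random (untrained) update weights; a real model would learn these.
W_path = rng.normal(scale=0.1, size=(2 * H, H))
W_link = rng.normal(scale=0.1, size=(2 * H, H))
W_out = rng.normal(scale=0.1, size=(H, 1))

for _ in range(4):  # T message-passing iterations
    # Path update: each path sequentially absorbs the states of its links.
    msgs_to_link = np.zeros_like(link_state)
    for i, route in enumerate(paths):
        h = path_state[i]
        for l in route:
            h = np.tanh(np.concatenate([h, link_state[l]]) @ W_path)
            msgs_to_link[l] += h      # message the path leaves on this link
        path_state[i] = h
    # Link update: each link aggregates messages from the paths crossing it.
    for l in range(len(link_capacity)):
        link_state[l] = np.tanh(
            np.concatenate([link_state[l], msgs_to_link[l]]) @ W_link)

# Readout: per-path delay estimate (meaningless here, weights are random).
delay_estimate = path_state @ W_out
print(delay_estimate.ravel())
```

A trained model would replace the random weight matrices with learned parameters and fit the readout to delays observed in simulation; the block only shows how scheduling-policy information can enter the message-passing state.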

