Solving flow-shop scheduling problem with a reinforcement learning algorithm that generalizes the value function with neural network

2021, Vol. 60 (3), pp. 2787-2800
Author(s): Jianfeng Ren, Chunming Ye, Feng Yang
Algorithms, 2019, Vol. 12 (11), p. 222
Author(s): Han, Guo, Su

The scheduling problems in mass production, manufacturing, assembly, synthesis, and transportation, as well as internet services, can partly be modeled as a hybrid flow-shop scheduling problem (HFSP). To solve this problem, a reinforcement learning (RL) method for the HFSP is studied for the first time in this paper. The HFSP is described and formulated as a Markov Decision Process (MDP), for which specialized states, actions, and a reward function are designed. On this basis, the MDP framework is established. A Boltzmann exploration policy is adopted to trade off exploration and exploitation when choosing actions in RL. Compared with the first-come-first-served strategy frequently adopted for encoding in most traditional intelligent algorithms, the rule in the RL method is first-come-first-choice, which is more conducive to reaching the global optimal solution. For validation, the RL method is used to schedule a metal-processing workshop of an automobile engine factory. The method is then applied to the sortie scheduling of carrier aircraft in continuous dispatch. The results demonstrate that the machining and support schedules obtained by this RL method are reasonable in terms of solution quality, real-time performance, and complexity, indicating that the RL method is practical for the HFSP.
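As a rough illustration of the Boltzmann exploration policy mentioned in the abstract, the following Python sketch samples a dispatching action with probability proportional to exp(Q/T), so higher-valued actions are favored while lower-valued ones are still explored. The function name, the temperature value, and the Q-value estimates are illustrative assumptions, not taken from the paper.

    import numpy as np

    def boltzmann_action(q_values, temperature=1.0, rng=None):
        # Sample an action index from a softmax (Boltzmann) distribution over Q-values.
        rng = rng or np.random.default_rng()
        q = np.asarray(q_values, dtype=float)
        # Subtract the max before exponentiating for numerical stability.
        prefs = np.exp((q - q.max()) / temperature)
        probs = prefs / prefs.sum()
        return rng.choice(len(q), p=probs)

    # Hypothetical example: three candidate jobs competing for a free machine,
    # each with an estimated Q-value for being dispatched next.
    q_estimates = [1.2, 0.4, 0.9]
    chosen_job = boltzmann_action(q_estimates, temperature=0.5)

Lowering the temperature makes the choice greedier (approaching pure exploitation), while raising it makes the choice more uniform (more exploration), which is the trade-off the policy is used for.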

