A real-time AGV-scheduling system that combines human decision-making with integer-programming algorithms

Author(s):  
J.E. Krebs ◽  
L.K. Platzman ◽  
C.M. Mitchell
1968 ◽  
Vol 19 (sup1) ◽  
pp. 91-106 ◽  
Author(s):  
L. Bainbridge ◽  
J. Beishon ◽  
J. H. Hemming ◽  
M. Splaine

2014 ◽  
Vol 23 (06) ◽  
pp. 1460023 ◽  
Author(s):  
J. Sukarno Mertoguno

Real-time autonomy is a key element of systems that close the loop between observation, interpretation, planning, and action, commonly found in UxVs, robotics, smart-vehicle technologies, automated industrial machinery, and autonomic computing. A real-time autonomic cyber system requires timely, accurate decision making and adaptive planning. An autonomic decision maker understands its own state and the perceived state of its environment; it can anticipate changes and future states and project the effects of its actions onto those future states. Understanding of the current state, together with knowledge/a model of the world, is needed to extrapolate actions and derive action plans. This position paper proposes a hybrid, statistical-formal approach toward achieving real-time autonomy.
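The closed loop the abstract describes can be sketched as a minimal skeleton; the function names (`sense`, `interpret`, `plan`, `act`) are illustrative placeholders, not from the paper:

```python
def autonomy_loop(sense, interpret, plan, act, steps=3):
    """Minimal closed-loop skeleton: observation -> interpretation -> planning -> action.

    `interpret` updates an internal model of the world from the latest
    observation; `plan` derives the next action from that model.
    """
    state = None
    actions = []
    for _ in range(steps):
        obs = sense()                   # observation
        state = interpret(obs, state)   # update internal world model
        action = plan(state)            # derive action from current model
        act(action)                     # act on the environment
        actions.append(action)
    return actions
```

In a real system each stage would run under a deadline, and `plan` is where the paper's hybrid statistical-formal reasoning would sit.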


1980 ◽  
Author(s):  
Krishna Pattipati ◽  
David Kleinman ◽  
Arye Ephrath

Sensors ◽  
2021 ◽  
Vol 21 (3) ◽  
pp. 1019
Author(s):  
Shengluo Yang ◽  
Zhigang Xu ◽  
Junyi Wang

Dynamic scheduling problems have been receiving increasing attention in recent years due to their practical implications. To realize real-time, intelligent decision-making in dynamic scheduling, we studied the dynamic permutation flowshop scheduling problem (PFSP) with new job arrivals using deep reinforcement learning (DRL). A system architecture for solving the dynamic PFSP using DRL is proposed, and a mathematical model to minimize total tardiness cost is established. Additionally, an intelligent scheduling system based on DRL is modeled, with state features, actions, and a reward function designed. Moreover, the advantage actor-critic (A2C) algorithm is adapted to train the scheduling agent. The learning curve indicates that the scheduling agent learned to generate better solutions efficiently during training. Extensive experiments compare the A2C-based scheduling agent with each single action, other DRL algorithms, and meta-heuristics. The results show that the A2C-based scheduling agent performs well in terms of solution quality, CPU time, and generalization. Notably, the trained agent generates a scheduling action in only 2.16 ms on average, which is almost instantaneous and can be used for real-time scheduling. Our work can help to build a self-learning, real-time-optimizing, intelligent decision-making scheduling system.
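The total-tardiness objective the abstract minimizes can be computed with the standard permutation-flowshop recursion, in which every job visits the machines in the same order and a job starts on a machine only when both that machine and the job's previous operation are free. A minimal sketch, assuming that standard model (names and data layout are illustrative, not from the paper):

```python
def total_tardiness(proc, due, order):
    """Total tardiness of a permutation flowshop schedule.

    proc[j][k]: processing time of job j on machine k
    due[j]:     due date of job j
    order:      job permutation (same sequence on every machine)
    """
    n_machines = len(proc[0])
    finish = [0.0] * n_machines  # completion time of the previous job on each machine
    tardiness = 0.0
    for j in order:
        t = 0.0  # completion time of job j on the previous machine
        for k in range(n_machines):
            # Job j starts on machine k when both the machine and job j are free.
            t = max(t, finish[k]) + proc[j][k]
            finish[k] = t
        tardiness += max(0.0, t - due[j])  # lateness below zero does not count
    return tardiness
```

In the DRL setting described above, a dispatching action chosen by the agent determines `order` incrementally, and the reward is derived from the resulting tardiness cost.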


2013 ◽  
Author(s):  
P. Krishna-Rao ◽  
Arye R. Ephrath ◽  
David L. Kleinman

OR ◽  
1968 ◽  
Vol 19 ◽  
pp. 91 ◽  
Author(s):  
L. Bainbridge ◽  
J. Beishon ◽  
J. H. Hemming ◽  
M. Splaine

2013 ◽  
Author(s):  
Scott D. Brown ◽  
Pete Cassey ◽  
Andrew Heathcote ◽  
Roger Ratcliff
