A distributed deep reinforcement learning–based integrated dynamic bus control system in a connected environment

Author(s):  
Haotian Shi ◽  
Qinghui Nie ◽  
Sicheng Fu ◽  
Xin Wang ◽  
Yang Zhou ◽  
...  

Author(s):  
Marco Boaretto ◽  
Gabriel Chaves Becchi ◽  
Luiza Scapinello Aquino ◽  
Aderson Cleber Pifer ◽  
Helon Vicente Hultmann Ayala ◽  
...  

Processes ◽  
2021 ◽  
Vol 9 (3) ◽  
pp. 487
Author(s):  
Fumitake Fujii ◽  
Akinori Kaneishi ◽  
Takafumi Nii ◽  
Ryu’ichiro Maenishi ◽  
Soma Tanaka

Proportional–integral–derivative (PID) control remains the primary choice for industrial process control problems. However, owing to the increased complexity and precision requirements of current industrial processes, a conventional PID controller may deliver unsatisfactory performance, or determining the PID gains may become quite difficult. To address these issues, studies have suggested combining reinforcement learning with PID control laws. The present study extends this idea to the control of a multiple-input multiple-output (MIMO) process that suffers from both physical coupling between inputs and a long input/output lag. We specifically target a thin-film production process as an example of such a MIMO process and propose a self-tuning two-degree-of-freedom PI controller for the film thickness control problem. The self-tuning functionality of the proposed control system is based on an actor-critic reinforcement learning algorithm. We also propose a method to compensate for the input coupling. Numerical simulations under several likely scenarios demonstrate the enhanced control performance relative to a conventional static-gain PI controller.
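
As a rough illustration of the self-tuning idea this abstract describes (not the authors' actual controller), the sketch below lets a linear TD(0) critic adapt PI gains online, treating the gains themselves as the actor's parameters. The feature choice, learning rates, reward, and toy first-order plant are all assumptions made for the example.

```python
import numpy as np

class SelfTuningPI:
    """PI controller whose gains are adapted online by a simple
    actor-critic learner (illustrative sketch, not the paper's method)."""

    def __init__(self, kp=1.0, ki=0.1, lr_actor=1e-4, lr_critic=1e-2, gamma=0.95):
        self.theta = np.array([kp, ki])  # actor parameters = PI gains
        self.w = np.zeros(2)             # linear critic weights
        self.lr_a, self.lr_c, self.gamma = lr_actor, lr_critic, gamma
        self.integral = 0.0

    def features(self, error):
        # state features: tracking error and the controller's integral term
        return np.array([error, self.integral])

    def control(self, error, dt):
        self.integral += error * dt
        kp, ki = self.theta
        return kp * error + ki * self.integral

    def update(self, error, next_error):
        reward = -error ** 2  # penalize squared tracking error
        phi, phi_next = self.features(error), self.features(next_error)
        td_error = reward + self.gamma * self.w @ phi_next - self.w @ phi
        self.w += self.lr_c * td_error * phi  # critic: TD(0) update
        # actor: nudge the gains along td_error times the sensitivity of
        # the control signal to each gain (error for Kp, integral for Ki)
        self.theta = np.clip(self.theta + self.lr_a * td_error * phi, 0.0, None)

# toy first-order plant dy/dt = -y + u, tracking a setpoint of 1.0
ctrl, y, dt = SelfTuningPI(), 0.0, 0.05
for _ in range(2000):
    e = 1.0 - y
    u = ctrl.control(e, dt)
    y += dt * (-y + u)
    ctrl.update(e, 1.0 - y)
```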


2021 ◽  
Vol 54 (3-4) ◽  
pp. 417-428
Author(s):  
Yanyan Dai ◽  
KiDong Lee ◽  
SukGyu Lee

In real applications, the rotary inverted pendulum is a well-known benchmark in nonlinear control systems. Controlling such a platform with classic control-engineering models is difficult without a deep understanding of control theory, as shown in Section 2.1. This paper therefore controls the platform by training and testing a reinforcement learning algorithm rather than relying on classic control theory. Despite many recent achievements in reinforcement learning (RL), little research has addressed quickly testing high-frequency RL algorithms in a real hardware environment. In this paper, we propose a real-time hardware-in-the-loop (HIL) control system to train and test a deep reinforcement learning algorithm from simulation through to real hardware implementation. The agent is implemented with the Double Deep Q-Network (DDQN) with prioritized experience replay, which requires no deep understanding of classical control engineering. For the real experiment, we define 21 discrete actions to swing up and balance the pendulum smoothly. Compared with the Deep Q-Network (DQN), DDQN with prioritized experience replay mitigates Q-value overestimation and shortens training time. Finally, the paper presents experimental results comparing classic control theory with different reinforcement learning algorithms.
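
The two ingredients named in this abstract are standard and easy to sketch. Below is a minimal, illustrative numpy version (not the authors' code) of the Double DQN target, where the online network selects the next action and the target network evaluates it to curb overestimation, together with a proportional prioritized replay buffer; all class names, shapes, and hyperparameters are assumptions.

```python
import numpy as np

def ddqn_targets(q_online_next, q_target_next, rewards, dones, gamma=0.99):
    """Double DQN: the online net picks argmax actions, the target net
    evaluates them, which reduces Q-value overestimation."""
    best_actions = np.argmax(q_online_next, axis=1)
    next_values = q_target_next[np.arange(len(rewards)), best_actions]
    return rewards + gamma * (1.0 - dones) * next_values

class PrioritizedReplay:
    """Proportional prioritized experience replay (minimal sketch)."""

    def __init__(self, capacity, alpha=0.6):
        self.capacity, self.alpha = capacity, alpha
        self.buffer, self.priorities = [], []

    def add(self, transition):
        # new transitions get the current maximum priority
        p = max(self.priorities, default=1.0)
        if len(self.buffer) >= self.capacity:
            self.buffer.pop(0)
            self.priorities.pop(0)
        self.buffer.append(transition)
        self.priorities.append(p)

    def sample(self, batch_size, beta=0.4):
        probs = np.array(self.priorities) ** self.alpha
        probs /= probs.sum()
        idx = np.random.choice(len(self.buffer), batch_size, p=probs)
        # importance-sampling weights correct the non-uniform sampling bias
        weights = (len(self.buffer) * probs[idx]) ** (-beta)
        weights /= weights.max()
        return idx, [self.buffer[i] for i in idx], weights

    def update_priorities(self, idx, td_errors, eps=1e-6):
        for i, e in zip(idx, td_errors):
            self.priorities[i] = abs(e) + eps
```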


2021 ◽  
Vol 2113 (1) ◽  
pp. 012030
Author(s):  
Jing Li ◽  
Yanyang Liu ◽  
Xianguo Qing ◽  
Kai Xiao ◽  
Ying Zhang ◽  
...  

Abstract The nuclear reactor control system plays a crucial role in the operation of nuclear power plants. The coordinated control of reactor power and steam generator level has become one of the most important control problems in these systems. In this paper, we formulate a mathematical model of the coordinated control system, recast it as a reinforcement learning problem, and develop a deep reinforcement learning controller based on the Deep Deterministic Policy Gradient (DDPG) algorithm. Simulation experiments show that the proposed algorithm achieves strong control performance.
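
DDPG is a standard actor-critic method for continuous control; for context, the sketch below shows one generic DDPG update step in PyTorch. The state/action dimensions, network sizes, and learning rates are illustrative assumptions, not values taken from the paper.

```python
import copy
import torch
import torch.nn as nn

state_dim, action_dim = 4, 2  # illustrative plant states and control actions

actor = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(),
                      nn.Linear(64, action_dim), nn.Tanh())
critic = nn.Sequential(nn.Linear(state_dim + action_dim, 64), nn.ReLU(),
                       nn.Linear(64, 1))
actor_tgt, critic_tgt = copy.deepcopy(actor), copy.deepcopy(critic)
opt_a = torch.optim.Adam(actor.parameters(), lr=1e-4)
opt_c = torch.optim.Adam(critic.parameters(), lr=1e-3)

def ddpg_update(s, a, r, s2, done, gamma=0.99, tau=0.005):
    """One DDPG step on a replay batch (tensors of shape [batch, dim])."""
    # critic: regress Q(s, a) toward the bootstrapped target
    with torch.no_grad():
        q2 = critic_tgt(torch.cat([s2, actor_tgt(s2)], dim=1))
        target = r + gamma * (1.0 - done) * q2
    q = critic(torch.cat([s, a], dim=1))
    loss_c = nn.functional.mse_loss(q, target)
    opt_c.zero_grad(); loss_c.backward(); opt_c.step()
    # actor: ascend the critic's value of the actor's own action
    loss_a = -critic(torch.cat([s, actor(s)], dim=1)).mean()
    opt_a.zero_grad(); loss_a.backward(); opt_a.step()
    # Polyak-average the target networks toward the online networks
    with torch.no_grad():
        for net, tgt in ((actor, actor_tgt), (critic, critic_tgt)):
            for p, pt in zip(net.parameters(), tgt.parameters()):
                pt.mul_(1.0 - tau).add_(tau * p)
```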


Author(s):  
Michael Helm ◽  
Daniel Cooke ◽  
Klaus Becker ◽  
Larry Pyeatt ◽  
Nelson Rushton
