A Q-learning based approach to design of intelligent stock trading agents

Author(s): J.W. Lee, E. Hong, J. Park

Electronics, 2020, Vol 9 (9), pp. 1384
Author(s): Yuyu Yuan, Wen Wen, Jincui Yang

In algorithmic trading, an adequate training data set is key to making profits. However, stock trading data at daily granularity cannot meet reinforcement learning's large demand for training samples. To address this problem, we propose a framework named data augmentation based reinforcement learning (DARL), which uses minute-candle data (open, high, low, close) to train the agent; the trained agent is then used to guide daily stock trading. In this way, we increase the number of training instances available by hundreds of times, which substantially improves the reinforcement learning effect. However, not all stocks are suitable for this kind of trading, so we propose an access mechanism based on skewness and kurtosis to select stocks that can be traded properly with this algorithm. In our experiments, we find that proximal policy optimization (PPO) is the most stable algorithm for achieving high risk-adjusted returns, while deep Q-network (DQN) and soft actor-critic (SAC) can beat the market in Sharpe ratio.
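The abstract rests on two quantitative pieces that a short sketch can make concrete: the skewness/kurtosis access check used to admit stocks, and the Sharpe ratio used to compare PPO, DQN, and SAC. The Python snippet below is a minimal illustration, not the paper's implementation; the threshold values and the helper names (passes_access_check, sharpe_ratio) are hypothetical.

    import numpy as np
    from scipy.stats import skew, kurtosis

    def passes_access_check(minute_returns, max_abs_skew=1.0, max_excess_kurt=5.0):
        # Hypothetical filter: admit a stock only if its minute-return
        # distribution is not too asymmetric or too heavy-tailed.
        return (abs(skew(minute_returns)) <= max_abs_skew
                and kurtosis(minute_returns) <= max_excess_kurt)  # excess kurtosis

    def sharpe_ratio(daily_returns, risk_free_rate=0.0, periods_per_year=252):
        # Annualized Sharpe ratio of a daily return series.
        excess = np.asarray(daily_returns) - risk_free_rate / periods_per_year
        return np.sqrt(periods_per_year) * excess.mean() / excess.std(ddof=1)

    # Synthetic stand-in data, for illustration only.
    rng = np.random.default_rng(0)
    minute_rets = rng.normal(0.0, 1e-3, size=60_000)  # ~250 days of minute returns
    daily_rets = rng.normal(5e-4, 1e-2, size=252)     # one year of agent P&L
    print(passes_access_check(minute_rets), round(sharpe_ratio(daily_rets), 2))

Under this reading, a stock whose minute-return distribution is strongly skewed or heavy-tailed would be screened out before training, and the annualized Sharpe ratio is the yardstick for the "beat the market" claim.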


2019, Vol 37 (4)
Author(s): Jagdish Chakole, Manish Kurhekar

Author(s): Jae Won Lee, Jonghun Park, Jangmin O, Jongwoo Lee, Euyseok Hong
