Improving Pairs Trading Strategies via Reinforcement Learning

Author(s):  
Cheng Wang ◽  
Patrik Sandas ◽  
Peter Beling
Complexity ◽  
2019 ◽  
Vol 2019 ◽  
pp. 1-20 ◽  
Author(s):  
Taewook Kim ◽  
Ha Young Kim

Many researchers have tried to optimize pairs trading as the number of opportunities for arbitrage profit has gradually decreased. Pairs trading is a market-neutral strategy: it profits if the spread between the paired assets reverts to its mean within a given trading window; if not, there is a risk of loss. In this study, we propose an optimized pairs-trading strategy using deep reinforcement learning, specifically a deep Q-network (DQN), with various trading and stop-loss boundaries. More specifically, if the spread hits a trading threshold and then reverts to the mean, the agent receives a positive reward. However, if the spread hits a stop-loss threshold, or fails to revert to the mean after hitting a trading threshold, the agent receives a negative reward. The agent is trained to select the optimal pair of discretized trading and stop-loss boundaries for a given spread so as to maximize the expected sum of discounted future profits. Pairs are selected from stocks on the S&P 500 Index using a cointegration test. We compare our proposed method with traditional pairs-trading strategies that use constant trading and stop-loss boundaries, and find that our model trains well and outperforms them.
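
The following is a minimal Python sketch of the reward scheme described in the abstract, not the authors' code. The candidate boundary grids, the window length implied by the input path, and the reward values (+1/-1/0) are illustrative assumptions; the paper discretizes boundaries similarly, but its exact values are not reproduced here. The DQN would learn Q-values over the discrete action set `ACTIONS` of boundary pairs.

```python
import numpy as np

TRADE_BOUNDS = [1.0, 1.5, 2.0]   # candidate trading thresholds (z-score units, assumed)
STOP_BOUNDS = [2.5, 3.0, 3.5]    # candidate stop-loss thresholds (assumed)
ACTIONS = [(t, s) for t in TRADE_BOUNDS for s in STOP_BOUNDS if s > t]

def episode_reward(spread_z, trade_b, stop_b):
    """Reward for one trading window given a z-scored spread path.

    +1 if the spread crosses the trading boundary and then reverts to
    the mean (zero) before the window ends; -1 if it hits the stop-loss
    boundary or is still open (never reverted) at the end of the
    window; 0 if no trade is opened at all.
    """
    in_trade, sign = False, 0
    for z in spread_z:
        if not in_trade:
            if abs(z) >= trade_b:          # spread hits trading boundary: open
                in_trade, sign = True, np.sign(z)
        else:
            if abs(z) >= stop_b:           # stop-loss boundary hit
                return -1.0
            if sign * z <= 0.0:            # spread reverted through the mean
                return +1.0
    return -1.0 if in_trade else 0.0       # still open at window end

# Example: a spread that widens past 1.5 sigma and then mean-reverts.
path = np.array([0.2, 0.8, 1.6, 1.9, 1.2, 0.4, -0.1])
print(episode_reward(path, trade_b=1.5, stop_b=3.0))   # -> 1.0
```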


2020 ◽  
Vol 538 ◽  
pp. 142-158 ◽  
Author(s):  
Xing Wu ◽  
Haolei Chen ◽  
Jianjia Wang ◽  
Luigi Troiano ◽  
Vincenzo Loia ◽  
...  

2021 ◽  
Author(s):  
Fenghui Yu ◽  
Wai-Ki Ching ◽  
Chufang Wu ◽  
Jiawen Gu

2020 ◽  
Vol 34 (02) ◽  
pp. 2128-2135
Author(s):  
Yang Liu ◽  
Qi Liu ◽  
Hongke Zhao ◽  
Zhen Pan ◽  
Chuanren Liu

In recent years, considerable effort has been devoted to developing AI techniques for finance research and applications. For instance, AI techniques (e.g., machine learning) can help traders in quantitative trading (QT) by automating two tasks: market condition recognition and trading strategy execution. However, existing QT methods face challenges such as representing noisy, high-frequency financial data and balancing exploration and exploitation for the trading agent. To address these challenges, we propose an adaptive trading model, iRDPG, in which an intelligent trading agent automatically develops QT strategies. Our model combines deep reinforcement learning (DRL) with imitation learning. Specifically, given the noisy financial data, we formulate the QT process as a Partially Observable Markov Decision Process (POMDP). We also introduce imitation learning to leverage classical trading strategies, which helps balance exploration and exploitation. For realistic simulation, we train our trading agent on minute-frequency data from the real financial market. Experimental results demonstrate that our model extracts robust market features and adapts to different markets.
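
Below is a minimal PyTorch sketch of the two ideas named in the abstract, not the iRDPG implementation itself: (1) a recurrent actor that encodes a window of observations, since a POMDP makes single-bar observations insufficient; and (2) a behavior-cloning term that pulls the actor toward a classical demonstrator strategy. The network sizes, the moving-average demonstrator, and the loss weight `bc_weight` are illustrative assumptions; the paper's actual demonstrator strategies and policy-gradient updates are not reproduced here.

```python
import torch
import torch.nn as nn

class RecurrentActor(nn.Module):
    """GRU actor: maps a history of minute-bar features to a position."""
    def __init__(self, obs_dim, hidden_dim=64):
        super().__init__()
        self.gru = nn.GRU(obs_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, 1)   # position in [-1, 1]

    def forward(self, obs_seq):
        # obs_seq: (batch, time, obs_dim) window of observations
        out, _ = self.gru(obs_seq)
        return torch.tanh(self.head(out[:, -1]))   # act on last hidden state

def demonstrator_action(obs_seq):
    # Stand-in classical strategy (assumed): go long if the latest close
    # (feature 0) is above the window mean, short otherwise.
    close = obs_seq[..., 0]
    return torch.sign(close[:, -1] - close.mean(dim=1)).unsqueeze(-1)

actor = RecurrentActor(obs_dim=5)
obs = torch.randn(32, 30, 5)          # batch of 30-minute feature windows
a_pi = actor(obs)                      # policy actions
a_demo = demonstrator_action(obs)      # demonstrator actions

bc_weight = 0.5                        # imitation weight (assumed)
bc_loss = bc_weight * nn.functional.mse_loss(a_pi, a_demo)
# In full training, this imitation term would be added to the DRL policy
# loss (e.g., a deterministic policy gradient term) before backprop.
bc_loss.backward()
```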


2010 ◽  
Vol 78 ◽  
pp. 114-134 ◽  
Author(s):  
Sayat R. Baronyan ◽  
İ. İlkay Boduroğlu ◽  
Emrah Şener

2016 ◽  
Vol 20 (12) ◽  
pp. 5051-5066 ◽  
Author(s):  
Saeid Fallahpour ◽  
Hasan Hakimian ◽  
Khalil Taheri ◽  
Ehsan Ramezanifar

Computing ◽  
2019 ◽  
Vol 102 (6) ◽  
pp. 1305-1322 ◽  
Author(s):  
Yuming Li ◽  
Pin Ni ◽  
Victor Chang
