Reinforcement Learning for Options Trading

2021 · Vol 11 (23) · pp. 11208
Author(s): Wen Wen, Yuyu Yuan, Jincui Yang

Reinforcement learning has been applied to trading various types of financial assets, such as stocks, futures, and cryptocurrencies. Options, as a distinct kind of derivative, have their own characteristics: a single underlying asset has many option contracts whose price behaviors differ, and the validity period of each contract is relatively short. To apply reinforcement learning to options trading, we propose the options trading reinforcement learning (OTRL) framework. We train the reinforcement learning model on the options' underlying asset data, using candle data at different time intervals. A protective closing strategy is added to the model to prevent unbearable losses. Our experiments demonstrate that proximal policy optimization (PPO) with the protective closing strategy is the most stable algorithm for obtaining high returns, and that both deep Q network (DQN) and soft actor critic (SAC) can exceed the buy-and-hold strategy in options trading. These results verify the effectiveness of the OTRL framework.
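The abstract does not spell out how the protective closing strategy works. The following is a minimal sketch, assuming it acts as a hard stop-loss that overrides the learned policy's action whenever an open option position loses more than a fixed fraction of its entry premium; the names `Position`, `protective_close`, `step`, and the `MAX_DRAWDOWN` threshold are illustrative assumptions, not the authors' implementation.

```python
# Sketch of a protective closing rule layered on top of an RL policy.
# Assumption: the rule forces an exit when the open option contract has
# lost more than MAX_DRAWDOWN of the premium paid at entry.
from dataclasses import dataclass
from typing import Optional

MAX_DRAWDOWN = 0.5  # assumed threshold: close after a 50% premium loss

@dataclass
class Position:
    entry_premium: float    # premium paid when the contract was opened
    current_premium: float  # latest market price of the contract

def protective_close(position: Position) -> bool:
    """Return True when the loss on the open option exceeds the cap."""
    loss = (position.entry_premium - position.current_premium) / position.entry_premium
    return loss >= MAX_DRAWDOWN

def step(policy_action: str, position: Optional[Position]) -> str:
    """Override the learned policy whenever the protective rule fires."""
    if position is not None and protective_close(position):
        return "close"  # forced exit, regardless of the agent's choice
    return policy_action

# Example: the agent wants to hold, but the premium has collapsed 60%.
pos = Position(entry_premium=10.0, current_premium=4.0)
print(step("hold", pos))  # -> "close"
```

Decoupling the safety rule from the policy in this way means it can wrap any of the algorithms mentioned (PPO, DQN, SAC) without retraining.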

2020
Author(s): Ben Lonnqvist, Micha Elsner, Amelia R. Hunt, Alasdair D. F. Clarke

Experiments on the efficiency of human search sometimes reveal large differences between individual participants. We argue that reward-driven, task-specific learning may account for some of this variation. In a computational reinforcement learning model of this process, a wide variety of strategies emerges, despite all simulated participants having the same visual acuity. We conduct a visual search experiment and replicate previous findings that participants' preferences about where to search are highly varied, with a distribution comparable to the simulated results. Thus, task-specific learning is an under-explored mechanism by which large inter-participant differences can arise.
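As a rough illustration of how reward-driven learning alone can produce such variation, the sketch below gives every simulated participant identical detection probabilities (the same "acuity") and lets a simple value-learning rule with softmax choice decide where to search; differing early reward histories lock agents into different preferences. The two-strategy setup and all parameters are assumptions made for illustration, not the authors' model.

```python
# Identical acuity, divergent strategies: a value-learning agent with
# softmax action selection. Early stochastic rewards push value toward
# one search region, which then gets sampled almost exclusively, so
# different random histories yield different stable preferences.
import math
import random

STRATEGIES = ["region_a", "region_b"]          # assumed search regions
P_FIND = {"region_a": 0.8, "region_b": 0.8}    # same acuity everywhere

def run_participant(n_trials=200, lr=0.1, temp=0.2, seed=0):
    rng = random.Random(seed)
    value = {s: 0.0 for s in STRATEGIES}
    for _ in range(n_trials):
        # Softmax choice over current strategy values.
        weights = [math.exp(value[s] / temp) for s in STRATEGIES]
        choice = rng.choices(STRATEGIES, weights=weights)[0]
        reward = 1.0 if rng.random() < P_FIND[choice] else 0.0
        value[choice] += lr * (reward - value[choice])
    # Learned preference: positive favors region_a, negative region_b.
    return value["region_a"] - value["region_b"]

# Same detection probabilities, different reward histories ->
# a spread of preferences across simulated participants.
prefs = [run_participant(seed=s) for s in range(10)]
print([round(p, 2) for p in prefs])
```

Because the softmax concentrates sampling on whichever region happens to be rewarded first, the under-sampled region's value estimate never catches up, so inter-agent variation persists even though the environment is identical for everyone.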

