Option hedging with risk-averse reinforcement learning

2020
Author(s): Edoardo Vittori, Michele Trapletti, Marcello Restelli
2002 · Vol. 44–46 · pp. 951–956
Author(s): Yael Niv, Daphna Joel, Isaac Meilijson, Eytan Ruppin

Author(s): Lorenzo Bisi, Luca Sabbioni, Edoardo Vittori, Matteo Papini, Marcello Restelli

The use of reinforcement learning in algorithmic trading is of growing interest, since it offers the opportunity to make a profit by developing autonomous artificial traders that do not depend on hard-coded rules. In such a framework, keeping uncertainty under control is as important as maximizing expected returns. Risk aversion has been addressed in reinforcement learning through measures related to the distribution of returns. However, in trading it is essential to keep the risk of the portfolio positions under control at the intermediate steps as well. In this paper, we define a novel measure of risk, which we call reward volatility, consisting of the variance of the rewards under the state-occupancy measure. This new risk measure is shown to bound the return variance, so that reducing the former also constrains the latter. We derive a policy gradient theorem with a new objective function that exploits the mean-volatility relationship. Furthermore, we adapt TRPO, the well-known policy gradient algorithm with monotonic improvement guarantees, in a risk-averse manner. Finally, we test the proposed approach in two financial environments using real market data.
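
A minimal sketch of the quantities the abstract names, assuming standard discounted-MDP notation (the paper's exact symbols and normalization conventions may differ): writing $d_{\mu,\pi}$ for the normalized $\gamma$-discounted state-action occupancy measure induced by a policy $\pi$ from an initial state distribution $\mu$, the expected per-step reward and the reward volatility are

\[
J_\pi = \mathbb{E}_{(s,a)\sim d_{\mu,\pi}}\big[r(s,a)\big],
\qquad
\nu_\pi^2 = \mathbb{E}_{(s,a)\sim d_{\mu,\pi}}\big[\big(r(s,a)-J_\pi\big)^2\big].
\]

The bounding claim then takes the form $\sigma_\pi^2 \le \nu_\pi^2/(1-\gamma)^2$, where $\sigma_\pi^2$ is the variance of the discounted return, so reducing reward volatility also constrains return variance, and the mean-volatility trade-off optimized by the policy gradient is of the form $\eta_\pi = J_\pi - \lambda\,\nu_\pi^2$ with a risk-aversion coefficient $\lambda \ge 0$.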

