optimal trade execution
Recently Published Documents


TOTAL DOCUMENTS: 38 (five years: 11)

H-INDEX: 9 (five years: 1)

2021 · Vol 25 (4) · pp. 757-810
Author(s): Julia Ackermann, Thomas Kruse, Mikhail Urusov

Author(s): Claudio Bellani, Damiano Brigo, Alex Done, Eyal Neuman

We compare optimal static and dynamic strategies in trade execution. We consider an optimal trade execution problem in which a trader observes a short-term predictive price signal while trading. When the trader's orders create instantaneous market impact, we show that the transaction costs of optimal adaptive strategies are substantially lower than those of the optimal static strategy. In the same spirit, in the case of transient impact, we show that strategies observing the signal only a finite number of times can dramatically reduce transaction costs and improve on the performance of the optimal static strategy.
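The static-versus-adaptive comparison in the abstract can be illustrated with a stylized sketch under instantaneous (temporary) impact. All quantities here (the horizon N, the impact coefficient eta, the signal alpha) are hypothetical, and the one-shot closed-form solution below is a simplification for illustration, not the authors' formulation:

```python
import numpy as np

rng = np.random.default_rng(0)

N, X = 20, 1.0                    # trading periods, total shares to sell
eta = 0.1                         # temporary (instantaneous) impact coefficient
alpha = rng.normal(0.0, 0.02, N)  # hypothetical per-period predictive price signal
s = np.cumsum(alpha)              # predicted price path relative to the start price

def expected_cost(v):
    """Quadratic impact cost minus revenue captured from predicted price moves."""
    return eta * np.sum(v ** 2) - np.sum(v * s)

# Static strategy: equal slices fixed before trading starts (TWAP-style).
v_static = np.full(N, X / N)

# Signal-aware strategy: closed-form minimiser of expected_cost subject to
# sum(v) == X, from the first-order conditions with a Lagrange multiplier mu.
mu = (2 * eta * X - s.sum()) / N
v_adaptive = (s + mu) / (2 * eta)

print(expected_cost(v_static), expected_cost(v_adaptive))
```

Because the cost is strictly convex and the signal-aware schedule is its exact constrained minimiser, its expected cost can never exceed the static schedule's, mirroring the qualitative claim in the abstract.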


2021 · pp. 1-12
Author(s): Martin Forde, Leandro Sánchez-Betancourt, Benjamin Smith

2021 · Vol 12 (2) · pp. 788-822
Author(s): Julia Ackermann, Thomas Kruse, Mikhail Urusov

2021
Author(s): Ying Chen, Ulrich Horst, Hoang Hai Tran

Author(s): Siyu Lin, Peter A. Beling

In this article, we propose an end-to-end adaptive framework for optimal trade execution based on Proximal Policy Optimization (PPO). We account for time dependencies in the market data using two different neural network architectures: 1) long short-term memory (LSTM) networks, and 2) fully connected networks (FCN) that stack the most recent limit order book (LOB) snapshots as model inputs. The proposed framework makes trade execution decisions directly from level-2 LOB information, such as bid/ask prices and volumes, without the manually designed attributes used in previous research. Furthermore, we use a sparse reward function that gives the agent a reward signal only at the end of each episode, as an indicator of its performance relative to a baseline model, rather than implementation shortfall (IS) or a shaped reward function. The experimental results demonstrate advantages over IS and the shaped reward function in terms of performance and simplicity. The proposed framework outperforms commonly used industry baseline models such as TWAP, VWAP, and AC, as well as several deep reinforcement learning (DRL) models, on most of the 14 US equities in our experiments.
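The sparse reward idea in the abstract — zero reward during the episode, a single end-of-episode signal based on performance relative to a baseline — can be sketched as follows. The function name, the sign-based scoring, and the volume-weighted average price comparison are illustrative assumptions, not the paper's exact definition:

```python
import numpy as np

def sparse_reward(agent_prices, agent_volumes, twap_prices, twap_volumes, done):
    """Sparse episodic reward (illustrative sketch): zero while the episode is
    running, and at the end a signal comparing the agent's volume-weighted
    average execution price against a TWAP baseline (selling: higher is better).
    """
    if not done:
        return 0.0  # no shaped per-step reward, by design
    agent_avg = np.average(agent_prices, weights=agent_volumes)
    twap_avg = np.average(twap_prices, weights=twap_volumes)
    # +1.0 if the agent beat the baseline, -1.0 if it underperformed, 0.0 on a tie.
    return float(np.sign(agent_avg - twap_avg))
```

Deferring the reward to the episode boundary avoids hand-tuning a shaped per-step signal (the simplicity the abstract claims), at the cost of a harder credit-assignment problem for the PPO agent.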

