Machine Learning and Trade Schedule Optimization

2021 ◽  
pp. 519-542
Author(s):  
Robert L. Kissell
Mathematics ◽  
2020 ◽  
Vol 8 (12) ◽  
pp. 2114
Author(s):  
Juan C. Chimal-Eguia ◽  
Julio C. Rangel-Reyes ◽  
Ricardo T. Paez-Hernandez

The infusion times and drug quantities are two primary variables to optimize when designing a therapeutic schedule. In this work, we test and analyze several extensions to the gradient descent equations in an optimal control algorithm conceived for therapy scheduling optimization. The goal is to provide insights into the best strategies to follow in terms of convergence speed when implementing our method in models for dendritic cell immunotherapy. The method gives a pulsed-like control that models a series of bolus injections and aims to minimize a cost function that penalizes tumor size and keeps the tumor below a threshold. Additionally, we introduce a stochastic iteration step in the algorithm, which serves to reduce the number of gradient computations, similar to a stochastic gradient descent scheme in machine learning. Finally, we apply the algorithm to two therapy schedule optimization problems in dendritic cell immunotherapy and contrast our method's stochastic and non-stochastic optimizations.
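The stochastic iteration idea can be illustrated with a minimal sketch: instead of updating every dose component with a full gradient at each iteration, only a random subset of injection times is updated, cutting the number of gradient evaluations per step. The cost here is a toy quadratic stand-in, and all names (`target`, `grad_component`, the step size) are illustrative assumptions, not the authors' actual immunotherapy model.

```python
import random

# Hypothetical ideal bolus-dose profile (illustrative stand-in for the
# tumor-size cost in the paper, not the authors' actual model).
target = [1.0, 0.6, 0.4, 0.3, 0.2]

def grad_component(doses, i):
    """Gradient of the toy quadratic cost with respect to dose i."""
    return 2.0 * (doses[i] - target[i])

def stochastic_descent(doses, step=0.1, iters=500, frac=0.4, seed=0):
    """Gradient descent that updates only a random fraction of the
    injection times per iteration, as in an SGD-style scheme."""
    rng = random.Random(seed)
    doses = list(doses)
    k = max(1, int(frac * len(doses)))
    for _ in range(iters):
        for i in rng.sample(range(len(doses)), k):
            doses[i] -= step * grad_component(doses, i)
    return doses

optimized = stochastic_descent([0.0] * len(target))
```

With a fixed seed the run is deterministic; each dose converges to its target while only ~40% of the gradient components are evaluated per iteration.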


In this paper we present a machine learning technique that can be used in conjunction with multi-period trade schedule optimization in program trading. The technique is based on an artificial neural network (ANN) model that determines a better starting solution for the non-linear optimization routine. This technique yields calculation times that are 30% faster for small baskets (n = 10 stocks), 50% faster for mid-size baskets (n = 100 stocks), and up to 70% faster for large baskets (n ≥ 300 stocks). Unlike many industry approaches that rely on heuristics and numerical approximation, our machine learning approach solves the exact problem and provides a dramatic improvement in calculation time.
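The warm-start mechanism can be sketched in miniature: a solver started from a learned prediction of the optimum needs fewer iterations than one started cold. The quadratic cost, the toy "true" schedule, and the warm-start vector standing in for an ANN's prediction are all illustrative assumptions, not the paper's actual trade-schedule model or network.

```python
def cost_grad(x, opt):
    """Gradient of a toy quadratic schedule cost centered at `opt`."""
    return [2.0 * (xi - oi) for xi, oi in zip(x, opt)]

def optimize(x0, opt, step=0.2, tol=1e-6, max_iters=10000):
    """Plain gradient descent; returns the solution and the iteration
    count needed to drive the gradient below `tol`."""
    x = list(x0)
    for it in range(max_iters):
        g = cost_grad(x, opt)
        if max(abs(gi) for gi in g) < tol:
            return x, it
        x = [xi - step * gi for xi, gi in zip(x, g)]
    return x, max_iters

opt = [0.5, 0.3, 0.2]    # toy "true" optimal schedule
cold = [0.0, 0.0, 0.0]   # naive cold start
warm = [0.45, 0.33, 0.18]  # start predicted by a (hypothetical) ANN

_, iters_cold = optimize(cold, opt)
_, iters_warm = optimize(warm, opt)
```

Because the warm start lands near the optimum, the solver terminates in fewer iterations; the paper's reported speedups come from the same effect applied to its full non-linear program.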


2020 ◽  
Vol 43 ◽  
Author(s):  
Myrthe Faber

Abstract: Gilead et al. state that abstraction supports mental travel, and that mental travel critically relies on abstraction. I propose an important addition to this theoretical framework, namely that mental travel might also support abstraction. Specifically, I argue that spontaneous mental travel (mind wandering), much like data augmentation in machine learning, provides variability in mental content and context necessary for abstraction.


2020 ◽  
Author(s):  
Mohammed J. Zaki ◽  
Wagner Meira, Jr

2020 ◽  
Author(s):  
Marc Peter Deisenroth ◽  
A. Aldo Faisal ◽  
Cheng Soon Ong

Author(s):  
Lorenza Saitta ◽  
Attilio Giordana ◽  
Antoine Cornuejols

Author(s):  
Shai Shalev-Shwartz ◽  
Shai Ben-David

2006 ◽  
Author(s):  
Christopher Schreiner ◽  
Kari Torkkola ◽  
Mike Gardner ◽  
Keshu Zhang
