Portfolio Optimization Under Regime Switching and Transaction Costs: Combining Neural Networks and Dynamic Programs

Author(s):  
Xiaoyue Li ◽  
John M. Mulvey

The contributions of this paper are threefold. First, by combining dynamic programs and neural networks, we provide an efficient numerical method to solve a large multiperiod portfolio allocation problem under a regime-switching market with transaction costs. Second, the performance of our combined method is shown to be close to optimal in a stylized case. To our knowledge, this is the first paper to carry out such a comparison. Last, the superiority of the combined method opens up the possibility for more research on financial applications of generic methods, such as neural networks, provided that solutions to simplified subproblems are available via traditional methods.

The research on combining fast starts with neural networks began about four years ago. We observed that Professor Weinan E's approach for solving systems of differential equations by neural networks performed much better when started close to an optimal solution and could stall if the current iterate was far from an optimal solution; this behavior is common with Newton-based algorithms. As a consequence, we discovered that combining a system of differential equations with a feedforward neural network could substantially improve overall computational performance. In this paper, we follow a similar direction for dynamic portfolio optimization in a regime-switching market with transaction costs, investigating how to improve efficiency by combining dynamic programming with a recurrent neural network. Traditional methods face the curse of dimensionality, whereas the running time of our combined approach grows approximately linearly with the number of risky assets. We find it inspiring to explore the possibilities of combined methods in financial management, and we believe a careful linkage of existing dynamic optimization algorithms and machine learning will be an active research domain going forward.

Relationship of the authors: Professor John M. Mulvey is Xiaoyue Li's doctoral advisor.
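As a rough illustration of the general idea only (not the authors' algorithm), the sketch below trains a small policy network on simulated regime-switching returns with proportional transaction costs, while regularizing the policy toward allocation weights from a simplified dynamic-programming subproblem. The market parameters, the CRRA objective, and the placeholder equal-weight "DP warm start" are all illustrative assumptions.

```python
# Minimal sketch (illustrative, not the paper's method): a policy network maps
# (current weights, regime) to target weights; it is trained on simulated
# regime-switching returns with proportional transaction costs and regularized
# toward weights from a simplified dynamic-programming subproblem (here a
# placeholder equal-weight solution).
import torch
import torch.nn as nn
import torch.nn.functional as F

n_assets, n_regimes, horizon, n_paths = 5, 2, 12, 256
mu = torch.tensor([[0.08, 0.06, 0.05, 0.04, 0.03],          # regime 0 (bull), annual drift / 12
                   [-0.02, 0.00, 0.01, 0.02, 0.02]]) / 12
sigma = torch.tensor([0.15, 0.12, 0.10, 0.08, 0.05]) / 12 ** 0.5
P = torch.tensor([[0.95, 0.05], [0.20, 0.80]])               # regime transition matrix
cost_rate, gamma = 0.002, 5.0                                # proportional cost, CRRA coefficient
dp_weights = torch.full((n_regimes, n_assets), 1.0 / n_assets)  # placeholder DP warm start

policy = nn.Sequential(nn.Linear(n_assets + n_regimes, 32), nn.ReLU(),
                       nn.Linear(32, n_assets), nn.Softmax(dim=-1))
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

for step in range(200):
    regime = torch.zeros(n_paths, dtype=torch.long)
    w = torch.full((n_paths, n_assets), 1.0 / n_assets)
    wealth = torch.ones(n_paths)
    warm_penalty = 0.0
    for t in range(horizon):
        onehot = F.one_hot(regime, n_regimes).float()
        target = policy(torch.cat([w, onehot], dim=1))        # rebalancing decision
        warm_penalty = warm_penalty + (target - dp_weights[regime]).pow(2).mean()
        cost = cost_rate * (target - w).abs().sum(dim=1)      # proportional transaction cost
        r = mu[regime] + sigma * torch.randn(n_paths, n_assets)
        growth = (target * (1 + r)).sum(dim=1) - cost
        wealth = wealth * growth
        w = target * (1 + r) / growth.unsqueeze(1)            # drifted weights next period
        regime = torch.multinomial(P[regime], 1).squeeze(1)   # Markov regime switch
    utility = wealth.pow(1 - gamma) / (1 - gamma)             # CRRA utility of terminal wealth
    loss = -utility.mean() + 0.1 * warm_penalty / horizon     # warm-start regularization
    opt.zero_grad(); loss.backward(); opt.step()
```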

2021 ◽  
Vol 0 (0) ◽  
Author(s):  
Idris Kharroubi ◽  
Thomas Lim ◽  
Xavier Warin

Abstract: We study the approximation of backward stochastic differential equations (BSDEs for short) with a constraint on the gains process. We first discretize the constraint by applying a so-called facelift operator at the times of a grid. We show that this discretely constrained BSDE converges to the continuously constrained one as the mesh of the grid goes to zero. We then focus on the approximation of the discretely constrained BSDE. For that we adopt a machine learning approach: we show that the facelift can be approximated by an optimization problem over a class of neural networks under constraints on the neural network and its derivative. We then derive an algorithm that converges to the discretely constrained BSDE as the number of neurons goes to infinity. We conclude with numerical experiments.
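As a loose illustration of the last ingredient (an optimization over neural networks with constraints on the network and its derivative), the sketch below fits, by penalty terms, a network that stays above a given payoff while keeping its spatial derivative inside a bound. The payoff function, the gradient bound, and the penalty weights are made-up placeholders; this is not the authors' facelift scheme.

```python
# Hypothetical sketch: fit a network v_theta that stays above a payoff phi while
# its derivative is penalized to remain within a bound L, i.e. an optimization
# over neural networks with constraints on the network and its derivative
# (penalty formulation, illustrative values).
import torch
import torch.nn as nn

phi = lambda x: torch.relu(x - 1.0)          # example terminal payoff
L = 0.5                                      # hypothetical gradient bound from the constraint set
net = nn.Sequential(nn.Linear(1, 64), nn.Tanh(), nn.Linear(64, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(2000):
    x = 4.0 * torch.rand(512, 1) - 1.0       # sample points in [-1, 3]
    x.requires_grad_(True)
    v = net(x)
    grad = torch.autograd.grad(v.sum(), x, create_graph=True)[0]
    fit = (v - phi(x)).pow(2).mean()                     # stay close to phi ...
    below = torch.relu(phi(x) - v).pow(2).mean()         # ... but never below it
    slope = torch.relu(grad.abs() - L).pow(2).mean()     # derivative constraint
    loss = fit + 10.0 * below + 10.0 * slope
    opt.zero_grad(); loss.backward(); opt.step()
```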


2002 ◽  
Vol 12 (01) ◽  
pp. 31-43 ◽  
Author(s):  
GARY YEN ◽  
HAIMING LU

In this paper, we propose a genetic-algorithm-based design procedure for multilayer feedforward neural networks. A hierarchical genetic algorithm is used to evolve both the neural network's topology and its weighting parameters. Compared with traditional genetic-algorithm-based designs for neural networks, the hierarchical approach addresses several deficiencies, including the feasibility check highlighted in the literature. A multi-objective cost function is used to optimize the performance and the topology of the evolved neural network simultaneously. In the prediction of the Mackey–Glass chaotic time series, the networks designed by the proposed approach prove to be competitive with, or even superior to, multilayer perceptron networks and radial-basis-function networks trained by traditional learning algorithms. Based upon the chosen cost function, a linear weight combination decision-making approach is applied to derive an approximate Pareto-optimal solution set. Designing a set of neural networks can therefore be viewed as solving a two-objective optimization problem.
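A toy sketch of the general flavour, not the paper's exact encoding or operators: a genome carries "control genes" that switch hidden units on or off and "parameter genes" holding the weights, and individuals are ranked by a linear weight combination of prediction error and network size. The regression target, population size, and mutation rates are illustrative assumptions.

```python
# Hypothetical sketch: hierarchical genome (control genes + parameter genes)
# evolved by a simple GA with a weighted-sum fitness of error and network size.
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (200, 1))
y = np.sin(3 * X[:, 0])                      # toy regression target
H = 12                                       # maximum number of hidden units

def decode_and_predict(genome, X):
    mask, w1, b1, w2 = genome                # control genes + parameter genes
    h = np.tanh(X @ w1 + b1) * mask          # inactive hidden units are switched off
    return h @ w2

def fitness(genome, alpha=0.05):
    err = np.mean((decode_and_predict(genome, X) - y) ** 2)
    size = genome[0].sum() / H               # normalized topology cost
    return err + alpha * size                # linear weight combination of the two objectives

def random_genome():
    return [rng.integers(0, 2, H).astype(float),
            rng.normal(0, 1, (1, H)), rng.normal(0, 1, H), rng.normal(0, 1, H)]

def mutate(genome, p=0.1, scale=0.1):
    mask, w1, b1, w2 = [g.copy() for g in genome]
    flip = rng.random(H) < p
    mask[flip] = 1 - mask[flip]              # mutate control genes (topology)
    for g in (w1, b1, w2):                   # mutate parameter genes (weights)
        g += scale * rng.normal(0, 1, g.shape)
    return [mask, w1, b1, w2]

pop = [random_genome() for _ in range(40)]
for gen in range(100):
    pop.sort(key=fitness)                    # elitist selection
    pop = pop[:20] + [mutate(pop[rng.integers(0, 20)]) for _ in range(20)]
best = min(pop, key=fitness)
print("best fitness:", fitness(best), "active units:", int(best[0].sum()))
```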


2021 ◽  
Author(s):  
Ruslan Chernyshev ◽  
Mikhail Krinitskiy ◽  
Viktor Stepanenko

This work is devoted to the development of neural networks for identifying parameters of the partial differential equations (PDEs) solved in the land surface scheme of the INM RAS Earth System Model (ESM). Atmospheric and climate models are among the most demanding research applications for supercomputing resources, and the spatial resolution and the multitude of physical parameterizations used in ESMs continue to increase. Most parameters are still poorly constrained, and many of them cannot be measured directly. Neural networks therefore look like a promising approach to reducing model calibration time. They are already in wide use for satellite imagery (Lee et al., 2015; Krinitskiy et al., 2018) and for calibrating parameters of land surface models (Sawada, 2019), and they have demonstrated high efficiency in solving conventional problems of mathematical physics (Aarts and van der Veer, 2001; Raissi et al., 2020).

We develop neural networks for optimizing the parameters of a set of nonlinear soil heat and moisture transport equations. Development relied on Python 3 based tools running on GPUs and on the Ascend platform provided by Huawei. Because we use a hybrid approach that combines a neural network with classical thermodynamic equations, the main challenge was to compute the backpropagated gradient of the error function correctly: the model is trained and validated on the same temperature data, while the model output is a heat-equation parameter that is typically not known. The neural network is trained at runtime against a reference thermodynamic model run with prescribed parameters; each subsequent thermodynamic model step is used to fit the network until it reaches the loss-function tolerance (a minimal illustrative sketch of this training loop follows the literature list below).

Literature:

1. Aarts, L.P., van der Veer, P. Neural network method for solving partial differential equations. Neural Processing Letters 14, 261–271 (2001). https://doi.org/10.1023/A:1012784129883
2. Raissi, M., Perdikaris, P., Karniadakis, G. Physics informed deep learning (Part I): Data-driven solutions of nonlinear partial differential equations. arXiv:1711.10561 (2017).
3. Lee, S.J., Ahn, M.-H., Lee, Y. Application of an artificial neural network for a direct estimation of atmospheric instability from a next-generation imager. Adv. Atmos. Sci. 33, 221–232 (2016). https://doi.org/10.1007/s00376-015-5084-9
4. Krinitskiy, M., Verezemskaya, P., Grashchenkov, K., Tilinina, N., Gulev, S., Lazzara, M. Deep convolutional neural networks capabilities for binary classification of polar mesocyclones in satellite mosaics. Atmosphere 9(11), 426 (2018).
5. Sawada, Y. Machine learning accelerates parameter optimization and uncertainty assessment of a land surface model. arXiv:1909.04196 (2019).
6. Pan, S., et al. Evaluation of global terrestrial evapotranspiration using state-of-the-art approaches in remote sensing, machine learning and land surface modeling. Hydrol. Earth Syst. Sci. 24, 1485–1509 (2020).
7. Chaney, N., Herman, J., Ek, M., Wood, E. Deriving global parameter estimates for the Noah land surface model using FLUXNET and machine learning. J. Geophys. Res. Atmos. 121 (2016). https://doi.org/10.1002/2016JD024821
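The sketch below illustrates the hybrid runtime-training idea with made-up grid sizes and a toy soil-temperature profile: a network predicts an unobserved heat-equation parameter (thermal diffusivity) from the current temperature profile, and the loss compares a differentiable finite-difference step of the heat equation with the reference model's next state, so the error gradient is backpropagated through the physics at every step. It is not the INM RAS land surface scheme.

```python
# Hypothetical sketch: online ("runtime") training of a network that infers a
# heat-equation parameter, with the gradient flowing back through a
# differentiable explicit finite-difference step of the heat equation.
import torch
import torch.nn as nn

nz, dz, dt = 20, 0.05, 60.0                  # grid points, layer thickness [m], time step [s]
true_alpha = 5e-7                            # "reference model" diffusivity [m^2/s]
net = nn.Sequential(nn.Linear(nz, 32), nn.Tanh(), nn.Linear(32, 1), nn.Softplus())
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

def heat_step(T, alpha):
    # one explicit finite-difference step of dT/dt = alpha * d2T/dz2, fixed boundaries
    lap = (T[..., 2:] - 2 * T[..., 1:-1] + T[..., :-2]) / dz ** 2
    T_new = T.clone()
    T_new[..., 1:-1] = T[..., 1:-1] + dt * alpha * lap
    return T_new

T = 280.0 + 5.0 * torch.exp(-torch.arange(nz, dtype=torch.float32) * dz / 0.3)
for step in range(500):
    T_ref = heat_step(T, true_alpha).detach()              # reference thermodynamic model step
    alpha_hat = 1e-6 * net((T - 280.0).unsqueeze(0)).squeeze()  # network's parameter estimate
    T_pred = heat_step(T, alpha_hat)                        # physics step with estimated parameter
    loss = (T_pred - T_ref).pow(2).mean()                   # mismatch on temperature data only
    opt.zero_grad(); loss.backward(); opt.step()
    T = T_ref                                               # advance with the reference state
```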


2012 ◽  
Vol 433-440 ◽  
pp. 2808-2816
Author(s):  
Jian Jin Zheng ◽  
You Shen Xia

This paper presents a new interactive neural network for solving constrained multi-objective optimization problems. The constrained multi-objective optimization problem is reformulated into two constrained single-objective optimization problems, and two neural networks are designed to obtain the optimal weight and the optimal solution of the two problems, respectively. The proposed algorithm has low computational complexity and is easy to implement. Moreover, it is successfully applied to the design of digital filters. Computational results illustrate the good performance of the proposed algorithm.
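A stripped-down numerical sketch of the general recipe is given below: a weighted-sum scalarization of two objectives solved by an Euler-discretized projected-gradient dynamical system, the discrete analogue of such continuous-time "neurodynamic" networks. The two quadratic objectives and the box constraint are toy placeholders, not the paper's networks or its digital-filter application.

```python
# Hypothetical sketch: weighted-sum scalarization of two convex objectives over a
# box constraint, solved by Euler steps of a projected-gradient dynamical system.
import numpy as np

A1, b1 = np.diag([2.0, 1.0]), np.array([1.0, 0.0])   # f1(x) = 0.5 x'A1 x - b1'x
A2, b2 = np.diag([1.0, 3.0]), np.array([0.0, 2.0])   # f2(x) = 0.5 x'A2 x - b2'x
lo, hi = np.array([-1.0, -1.0]), np.array([1.0, 1.0])

def project(x):
    # projection onto the box constraint set
    return np.clip(x, lo, hi)

def solve(weight, x0=np.zeros(2), step=0.05, iters=2000):
    x = x0.copy()
    for _ in range(iters):
        grad = weight * (A1 @ x - b1) + (1 - weight) * (A2 @ x - b2)
        x = project(x - step * grad)                  # one Euler step of the projection dynamics
    return x

# sweeping the weight traces out an approximate Pareto front of the two objectives
for w in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(w, solve(w))
```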


2020 ◽  
Vol 2020 ◽  
pp. 1-11
Author(s):  
Zhenyu Yang ◽  
Mingge Zhang ◽  
Guojing Liu ◽  
Mingyu Li

Session-based recommendation methods mainly model sessions as sequences under the assumption that user behaviors are independent and identically distributed, and then mine deep semantic information with deep neural networks. Nevertheless, user behaviors may reflect nonindependent intentions at irregular points in time. For example, users may buy painkillers, books, or clothes for different reasons at different times. However, this has not been taken seriously in previous studies. Therefore, we propose a session recommendation method based on neural differential equations that attempts to predict user behavior forward or backward from any point in time. We use ordinary differential equations to train the graph neural network, allowing prediction forward or backward at any point in time to model the user's nonindependent sessions. We tested the model on four real datasets and found that it achieved the expected results and was superior to existing session-based recommendation methods.
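As a loose illustration of the continuous-time idea only, the sketch below evolves a session's hidden state between irregular interaction timestamps with a learned ODE dh/dt = f(h), integrated by simple Euler steps, updates it at each observed item, and scores candidate items at the end. It omits the graph neural network component of the paper's full model, and the item counts and dimensions are arbitrary.

```python
# Hypothetical sketch: a neural-ODE-style session state evolved between
# irregular event times (Euler integration) and updated at each click.
import torch
import torch.nn as nn

n_items, dim = 1000, 32
item_emb = nn.Embedding(n_items, dim)
f = nn.Sequential(nn.Linear(dim, dim), nn.Tanh(), nn.Linear(dim, dim))  # ODE dynamics dh/dt = f(h)
gru_cell = nn.GRUCell(dim, dim)                                         # jump update at events

def evolve(h, dt, n_steps=4):
    # Euler integration of the hidden state over the inter-event gap dt
    step = dt / n_steps
    for _ in range(n_steps):
        h = h + step * f(h)
    return h

def session_scores(items, times):
    h = torch.zeros(1, dim)
    prev_t = times[0]
    for item, t in zip(items, times):
        h = evolve(h, t - prev_t)                         # drift to the event time
        h = gru_cell(item_emb(torch.tensor([item])), h)   # incorporate the new interaction
        prev_t = t
    return h @ item_emb.weight.T                          # scores for all candidate items

scores = session_scores(items=[3, 17, 42], times=[0.0, 1.5, 7.0])
print(scores.shape)                                       # torch.Size([1, 1000])
```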


IAWA Journal ◽  
2009 ◽  
Vol 30 (1) ◽  
pp. 87-94 ◽  
Author(s):  
Luis García Esteban ◽  
Francisco García Fernández ◽  
Paloma de Palacios de Palacios ◽  
Ruth Moreno Romero ◽  
Nieves Navarro Cano

Neural networks are complex mathematical structures inspired by biological neural networks, capable of learning from examples (training group) and extrapolating knowledge to an unknown sample (testing group). The similarity of wood structure in many species, particularly conifers, means that they cannot be differentiated using traditional methods. Neural networks can be an effective tool for identifying similar species with a high percentage of accuracy. This predictive method was used to differentiate Juniperus cedrus and J. phoenicea var. canariensis, both from the Canary Islands. The anatomical features of their wood are so similar that it is not possible to tell them apart using traditional methods. An artificial neural network was used to determine whether this method could differentiate the two species with a high degree of probability through the biometry of their anatomy. To achieve the differentiation, a feedforward multilayer perceptron network was designed, which attained 98.6% success in the training group and 92.0% success in the testing (unknown) group. The proposed neural network is satisfactory for the desired purpose and enables J. cedrus and J. phoenicea var. canariensis to be differentiated with 92% probability.
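For readers unfamiliar with the workflow, a minimal sketch of a feedforward multilayer perceptron classifying two species from wood-anatomy measurements follows. The feature names and the randomly generated data are placeholders standing in for the paper's real biometric dataset; the architecture is illustrative, not the authors' network.

```python
# Hypothetical sketch: a small MLP classifier on made-up wood-biometry features.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# columns: tracheid length, tracheid diameter, ray height, ray frequency (illustrative)
X_cedrus = rng.normal([2.1, 24.0, 180.0, 6.0], [0.3, 3.0, 30.0, 1.0], (200, 4))
X_phoenicea = rng.normal([1.8, 21.0, 150.0, 7.5], [0.3, 3.0, 30.0, 1.0], (200, 4))
X = np.vstack([X_cedrus, X_phoenicea])
y = np.array([0] * 200 + [1] * 200)          # 0 = J. cedrus, 1 = J. phoenicea var. canariensis

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = make_pipeline(StandardScaler(),
                      MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0))
model.fit(X_tr, y_tr)
print("training accuracy:", model.score(X_tr, y_tr))
print("testing accuracy:", model.score(X_te, y_te))
```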


Author(s):  
YUEHAW KHOO ◽  
JIANFENG LU ◽  
LEXING YING

The curse of dimensionality is commonly encountered in numerical partial differential equations (PDE), especially when uncertainties have to be modelled into the equations as random coefficients. However, very often the variability of physical quantities derived from a PDE can be captured by a few features on the space of coefficient fields. Based on this observation, we propose using a neural network to parameterise the physical quantity of interest as a function of the input coefficients. The representability of such a quantity by a neural network can be justified by viewing the network as performing time evolution to find the solution to the PDE. We further demonstrate the simplicity and accuracy of the approach through notable examples of PDEs in engineering and physics.
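A small, self-contained sketch of the general idea (a network mapping a random coefficient field to a PDE-derived quantity of interest) is given below. The 1-D elliptic model problem, the log-normal coefficient fields, and the network architecture are illustrative assumptions rather than the authors' examples.

```python
# Hypothetical sketch: train a surrogate network mapping a coefficient field of
# -(a(x) u')' = 1, u(0)=u(1)=0 to a scalar quantity of interest (mean of u),
# with training data from a small finite-difference solver.
import numpy as np
import torch
import torch.nn as nn

n, n_samples = 32, 2000
rng = np.random.default_rng(0)

def solve_qoi(a):
    # finite-difference solve with homogeneous Dirichlet boundary conditions
    h = 1.0 / n
    a_face = 0.5 * (a[:-1] + a[1:])                         # coefficient at cell faces
    main = (np.concatenate([a_face, [a[-1]]]) + np.concatenate([[a[0]], a_face])) / h ** 2
    A = np.diag(main) - np.diag(a_face / h ** 2, 1) - np.diag(a_face / h ** 2, -1)
    u = np.linalg.solve(A, np.ones(n))
    return u.mean()                                         # quantity of interest

coeffs = np.exp(rng.normal(0.0, 0.3, (n_samples, n)))       # log-normal coefficient fields
qoi = np.array([solve_qoi(a) for a in coeffs])

X = torch.tensor(coeffs, dtype=torch.float32)
y = torch.tensor(qoi, dtype=torch.float32).unsqueeze(1)
net = nn.Sequential(nn.Linear(n, 64), nn.ReLU(), nn.Linear(64, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for epoch in range(500):
    loss = (net(X) - y).pow(2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
print("mean relative error:", ((net(X) - y).abs() / y.abs()).mean().item())
```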


2019 ◽  
Author(s):  
Shangying Wang ◽  
Kai Fan ◽  
Nan Luo ◽  
Yangxiaolu Cao ◽  
Feilun Wu ◽  
...  

Abstract: Mechanism-based mathematical models are the foundation for diverse applications. It is often critical to explore the massive parametric space of each model. However, for many applications, such as agent-based models, partial differential equations, and stochastic differential equations, this practice can impose a prohibitive computational demand. To overcome this limitation, we present a fundamentally new framework that improves computational efficiency by orders of magnitude. The key concept is to train an artificial neural network using a limited number of simulations generated by a mechanistic model. This number is small enough that the simulations can be completed in a short time frame but large enough to enable reliable training of the neural network. The trained neural network can then be used to explore the system dynamics of a much larger parametric space. We demonstrate this notion by training neural networks to predict self-organized pattern formation and stochastic gene expression. With this framework, we can predict not only the 1-D spatial distribution (for partial differential equation models) and the probability density function (for stochastic differential equation models) of variables of interest with high accuracy, but also novel system dynamics not present in the training sets. We further demonstrate that using an ensemble of neural networks enables a self-contained evaluation of the quality of each prediction. Our work can potentially serve as a platform for faster parametric-space screening of biological models with user-defined objectives.
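The sketch below illustrates the surrogate-plus-ensemble pattern described above: a handful of networks are trained on a limited set of "mechanistic model" runs and then screen a much larger parameter set, with disagreement across the ensemble serving as a self-contained quality score. The toy parameter-to-output map, sample sizes, and reliability threshold are illustrative assumptions, not the paper's models.

```python
# Hypothetical sketch: neural-network surrogate of an expensive mechanistic model,
# with an ensemble used to flag unreliable predictions during parameter screening.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

def mechanistic_model(params):
    # stand-in for an expensive simulation: steady state of a toy gene circuit
    k_syn, k_deg = params.T
    return k_syn / (1.0 + k_deg)

params_train = rng.uniform(0.1, 2.0, (500, 2))        # small, affordable training set
y_train = mechanistic_model(params_train)

ensemble = [MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=s)
            .fit(params_train, y_train) for s in range(5)]

params_screen = rng.uniform(0.1, 2.0, (100000, 2))    # much larger parametric space to screen
preds = np.stack([m.predict(params_screen) for m in ensemble])
mean_pred, spread = preds.mean(axis=0), preds.std(axis=0)
reliable = spread < 0.01                              # hypothetical reliability threshold
print("fraction flagged reliable:", reliable.mean())
```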

