Forecasting the Market Equity Premium: Does Nonlinearity Matter?

2021 ◽  
Vol 13 (5) ◽  
pp. 9
Author(s):  
Anwen Yin

We propose using the nonlinear method of smoothing splines in conjunction with forecast combination to predict the market equity premium. Smoothing splines are flexible enough to capture the possible nonlinear relationship between the equity premium and predictive variables while controlling for complexity, overcoming difficulties often associated with nonlinear methods, such as computational cost, overfitting and poor interpretability. Our empirical results show that, when used with forecast combination, the smoothing spline forecasts consistently outperform many competing methods, such as adaptive combinations, shrinkage estimators and technical indicators, in delivering statistical and economic gains.
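
As a rough illustration of the idea, the sketch below fits a univariate smoothing spline of the premium on each lagged predictor and equal-weights the resulting forecasts; the DataFrame layout and column names are hypothetical, not the paper's data.

```python
# A minimal sketch of the approach, assuming a pandas DataFrame `df` with
# an "equity_premium" column and one column per predictive variable (all
# names here are hypothetical).
import numpy as np
import pandas as pd
from scipy.interpolate import UnivariateSpline

def combined_spline_forecast(df, target="equity_premium"):
    """One-step-ahead forecast: fit a univariate smoothing spline of the
    premium on each lagged predictor, then equal-weight the forecasts."""
    y = df[target].to_numpy()[1:]            # premium at t+1
    forecasts = []
    for col in df.columns.drop(target):
        x = df[col].to_numpy()[:-1]          # predictor at t
        order = np.argsort(x)                # spline fitting needs sorted x
        # s > 0 sets the roughness penalty; a simple n-based heuristic here
        spline = UnivariateSpline(x[order], y[order], s=len(y))
        forecasts.append(float(spline(df[col].to_numpy()[-1])))
    return np.mean(forecasts)                # equal-weight combination
```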

Biometrika ◽  
2020 ◽  
Vol 107 (3) ◽  
pp. 723-735
Author(s):  
Cheng Meng ◽  
Xinlian Zhang ◽  
Jingyi Zhang ◽  
Wenxuan Zhong ◽  
Ping Ma

We consider the problem of approximating smoothing spline estimators in a nonparametric regression model. When applied to a sample of size $n$, the smoothing spline estimator can be expressed as a linear combination of $n$ basis functions, requiring $O(n^3)$ computational time when the number $d$ of predictors is two or more. Such a sizeable computational cost hinders the broad applicability of smoothing splines. In practice, the full-sample smoothing spline estimator can be approximated by an estimator based on $q$ randomly selected basis functions, resulting in a computational cost of $O(nq^2)$. It is known that these two estimators converge at the same rate when $q$ is of order $O\{n^{2/(pr+1)}\}$, where $p\in [1,2]$ depends on the true function and $r > 1$ depends on the type of spline. Such a $q$ is called the essential number of basis functions. In this article, we develop a more efficient basis selection method. By selecting basis functions corresponding to approximately equally spaced observations, the proposed method chooses a set of basis functions with great diversity. The asymptotic analysis shows that the proposed smoothing spline estimator can decrease $q$ to around $O\{n^{1/(pr+1)}\}$ when $d\leq pr+1$. Applications to synthetic and real-world datasets show that the proposed method leads to a smaller prediction error than other basis selection methods.
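
The selection rule lends itself to a very short sketch in the one-dimensional case: take the $q$ observations sitting at approximately equally spaced ranks and use the corresponding basis functions. The function name below is illustrative.

```python
# A 1-D sketch of the space-filling selection rule described above.
import numpy as np

def equally_spaced_basis_indices(x, q):
    """Indices of q observations whose x-values are roughly equally
    spaced over the observed range."""
    order = np.argsort(x)
    ranks = np.linspace(0, len(x) - 1, q).round().astype(int)
    return order[ranks]

rng = np.random.default_rng(0)
x = rng.uniform(0, 1, 1000)
idx = equally_spaced_basis_indices(x, q=30)   # basis-function locations
```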


2016 ◽  
Vol 13 (03) ◽  
pp. 1650009 ◽  
Author(s):  
Kai Xu ◽  
Huan Liu ◽  
Yuheng Du ◽  
Xiangyang Zhu

Humans control dozens of muscles in a coordinated manner to form different hand postures. Such coordination is referred to as a postural synergy. Postural synergies have enabled an anthropomorphic robotic hand with many actuators to be applied as a prosthetic hand controlled by only two to three channels of biological signals. Principal component analysis (PCA) of hand postures has become a popular way to extract postural synergies. However, owing to the linear nature of this method, relatively large errors are often produced when hand postures are reconstructed using these PCA-synthesized synergies. This paper presents a comparative study in which postural synergies are synthesized using both linear and nonlinear methods. Specifically, the Gaussian process latent variable model (GPLVM), a nonlinear dimension reduction method, is implemented to produce nonlinear postural synergies, and hand postures can then be reconstructed from the two-dimensional synergy plane. Computational and experimental verifications show that posture reconstruction errors are greatly reduced using this nonlinear method. The results suggest that nonlinear postural synergies should be considered when applying a dexterous robotic hand as a prosthesis: versatile hand postures could be formed via only two channels of bio-signals.
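
The linear-versus-nonlinear reconstruction contrast can be sketched as below. GPLVM itself is not in scikit-learn, so kernel PCA (with an inverse map) stands in as the nonlinear method purely for illustration; the joint-angle data are random placeholders.

```python
# Compare reconstruction error of a 2-D linear embedding (PCA) against a
# 2-D nonlinear embedding (kernel PCA as a GPLVM stand-in).
import numpy as np
from sklearn.decomposition import PCA, KernelPCA

rng = np.random.default_rng(1)
postures = rng.normal(size=(200, 20))          # (samples, joint angles)

pca = PCA(n_components=2).fit(postures)
lin_recon = pca.inverse_transform(pca.transform(postures))

kpca = KernelPCA(n_components=2, kernel="rbf",
                 fit_inverse_transform=True).fit(postures)
nonlin_recon = kpca.inverse_transform(kpca.transform(postures))

for name, recon in [("linear PCA", lin_recon), ("kernel PCA", nonlin_recon)]:
    print(name, "reconstruction MSE:", np.mean((postures - recon) ** 2))
```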


Author(s):  
K Masood ◽  
M T Mustafa

A smoothing spline-based method and a hyperbolic heat conduction model are applied to regularize the recovery of the initial profile in a two-dimensional parabolic heat conduction model. The ill-posed inverse problem investigated is the recovery of the initial temperature distribution from measurements of the final temperature distribution. A hyperbolic heat conduction model is adopted in place of the parabolic one, and smoothing splines are applied to regularize the recovered initial profile. The proposed procedure is compared graphically with the parabolic model through examples.
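
A minimal sketch of the regularization step, with a synthetic one-dimensional profile standing in for the recovered initial temperature distribution:

```python
# Smooth a noisy recovered initial profile with a smoothing spline; the
# profile here is synthetic, not the output of the hyperbolic model.
import numpy as np
from scipy.interpolate import UnivariateSpline

x = np.linspace(0, 1, 200)
true_profile = np.sin(np.pi * x)               # assumed initial profile
rng = np.random.default_rng(2)
recovered = true_profile + rng.normal(0, 0.1, x.size)  # noisy inverse output

# the smoothing factor s (roughly n * noise variance) trades data fidelity
# against roughness of the regularized profile
smoothed = UnivariateSpline(x, recovered, s=x.size * 0.1 ** 2)(x)
```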


2019 ◽  
Vol 11 (12) ◽  
pp. 50
Author(s):  
Anwen Yin

This paper introduces a two-stage out-of-sample predictive model averaging approach to forecasting the U.S. market equity premium. In the first stage, we combine the break and stable specifications of each candidate model using schemes such as Mallows weights to account for the presence of structural breaks. Next, we combine all previously averaged models with equal weights to address model uncertainty. Our empirical results show that the double-averaged model delivers superior statistical and economic gains relative not only to the historical average but also to simple forecast combination when forecasting the equity premium. Moreover, our approach provides an explicit theory-based link between forecast combination and structural breaks, distinguishing this study from other closely related work.
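
The two stages can be sketched as follows; this simplifies the paper's scheme to a single Mallows-style weight per candidate model, chosen on a grid, and all array names are illustrative.

```python
# Stage 1: per model, a Mallows-style weight combines stable and break
# fits; stage 2: equal-weight average of the stage-1 combinations.
import numpy as np

def mallows_weight(y, fit_stable, fit_break, k_stable, k_break, sigma2):
    """Grid-search the weight on the stable specification minimizing
    SSR(w) + 2 * sigma2 * (effective number of parameters)."""
    grid = np.linspace(0.0, 1.0, 101)
    crit = [np.sum((y - (w * fit_stable + (1 - w) * fit_break)) ** 2)
            + 2.0 * sigma2 * (w * k_stable + (1 - w) * k_break)
            for w in grid]
    return grid[int(np.argmin(crit))]

def two_stage_forecast(y, stable_fits, break_fits, stable_fc, break_fc,
                       k_stable, k_break, sigma2):
    """Stage 1 per candidate model, then a stage-2 equal-weight average."""
    combined = []
    for fs, fb, fcs, fcb in zip(stable_fits, break_fits, stable_fc, break_fc):
        w = mallows_weight(y, fs, fb, k_stable, k_break, sigma2)
        combined.append(w * fcs + (1 - w) * fcb)
    return np.mean(combined)
```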


2001 ◽  
Vol 11 (01) ◽  
pp. 33-41 ◽  
Author(s):  
CARL de BOOR

A self-contained mathematical derivation of the smoothing spline with weighted roughness measure is offered, along with a discussion of the computational details required for its implementation. The introduction of a weighted roughness measure, even if only piecewise constant, provides additional flexibility in the shaping of the smoothing spline, while not materially increasing the computational cost of its construction or use.
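
In the notation of standard spline smoothing, the weighted-roughness objective being derived is presumably of the form

```latex
\min_{f}\; \sum_{i=1}^{n} \bigl(y_i - f(x_i)\bigr)^2
  + \int_{a}^{b} w(t)\,\bigl(f''(t)\bigr)^2\,dt,
```

where $w(t)\ge 0$ is the (possibly piecewise constant) roughness weight; the classical smoothing spline is recovered when $w(t)\equiv\lambda$ is constant.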


Author(s):  
Candida Mwisomba ◽  
Abdi T. Abdalla ◽  
Idrissa Amour ◽  
Florian Mkemwa ◽  
Baraka Maiseli

Compressed sensing allows recovery of image signals from only a portion of the data, a technique that has drastically revolutionized the field of through-the-wall radar imaging (TWRI). This technique can be accomplished through nonlinear methods, including convex programming and greedy iterative algorithms. However, such nonlinear methods increase the computational cost at the sensing and reconstruction stages, limiting the application of TWRI in delicate practical tasks (e.g. military operations and rescue missions) that demand fast response times. Motivated by this limitation, the current work introduces a numerical optimization algorithm, Limited Memory Broyden–Fletcher–Goldfarb–Shanno (LBFGS), into the TWRI framework to lower image reconstruction time. LBFGS, a well-known quasi-Newton algorithm, has traditionally been applied to solve large-scale optimization problems. Despite its potential, this algorithm has not been extensively applied in TWRI. Therefore, guided by LBFGS, we employed the regularized least-squares method with a Euclidean-norm penalty to solve the cost function of the TWRI problem. Simulation results show that our method reduces the computational time by 87% relative to the classical method, even with an increased number of targets or large data volumes. Moreover, the results show that the proposed method remains robust when applied to noisy environments.
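
A sketch of the reconstruction step under this setup: a Euclidean-norm-regularized least-squares cost for a linear measurement model y = A x, minimized with SciPy's L-BFGS-B. The measurement matrix and data are synthetic stand-ins for the TWRI quantities.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
A = rng.normal(size=(200, 400))                 # underdetermined system
x_true = np.zeros(400)
x_true[rng.choice(400, size=5, replace=False)] = 1.0   # sparse scene
y = A @ x_true + 0.01 * rng.normal(size=200)
lam = 0.1                                       # regularization strength

def cost_and_grad(x):
    # 0.5 * ||A x - y||^2 + 0.5 * lam * ||x||^2 and its gradient
    r = A @ x - y
    return 0.5 * r @ r + 0.5 * lam * x @ x, A.T @ r + lam * x

res = minimize(cost_and_grad, np.zeros(400), jac=True, method="L-BFGS-B")
x_hat = res.x                                   # reconstructed image vector
```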


2005 ◽  
Vol 12 (6) ◽  
pp. 979-991 ◽  
Author(s):  
J. Miksovsky ◽  
A. Raidl

We investigated the usability of the method of local linear models (LLM), the multilayer perceptron neural network (MLP NN) and the radial basis function neural network (RBF NN) for constructing temporal and spatial transfer functions between different meteorological quantities, and compared the results both mutually and with those of multiple linear regression (MLR). The tested methods were applied to the short-term prediction of daily mean temperatures and to the downscaling of NCEP/NCAR reanalysis data, using series of daily mean, minimum and maximum temperatures from 25 European stations as predictands. None of the tested nonlinear methods proved distinctly superior to the others, but all nonlinear techniques outperformed linear regression in the majority of cases. We also discuss why the most frequently used nonlinear method, the MLP neural network, may not be the best choice for processing climatic time series: the LLM method or RBF NNs can offer comparable or slightly better performance, and they do not suffer from some of the practical disadvantages of MLPs. Aside from comparing the performance of different methods, we paid attention to geographical and seasonal variations of the results. The forecasting results showed that the nonlinear character of relations between climate variables is clearly apparent over most of Europe, in contrast to rather weak nonlinearity in the Mediterranean and North Africa. No clear large-scale geographical structure of nonlinearity was identified in the case of downscaling. Nonlinearity also appears noticeably stronger in winter than in summer at most locations, for both forecasting and downscaling.
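
The core comparison is easy to reproduce in miniature with scikit-learn; the synthetic series below merely stand in for the station temperature data.

```python
# Multilayer perceptron versus multiple linear regression on held-out data.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(4)
X = rng.normal(size=(2000, 5))                 # lagged predictors
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] + 0.1 * rng.normal(size=2000)
X_tr, X_te, y_tr, y_te = X[:1500], X[1500:], y[:1500], y[1500:]

for name, model in [("MLR", LinearRegression()),
                    ("MLP", MLPRegressor(hidden_layer_sizes=(16,),
                                         max_iter=2000, random_state=0))]:
    model.fit(X_tr, y_tr)
    print(name, "test MSE:", mean_squared_error(y_te, model.predict(X_te)))
```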


2015 ◽  
Author(s):  
Sifan Liu

There is a well-known Bayesian interpretation of function estimation by spline smoothing using a limit of proper normal priors. This limiting prior has the same form as the Partially Informative Normal (PIN) prior introduced in Sun et al. (1999). In this dissertation, we first discuss some properties of PIN. For improper priors, we consider q-vague convergence as the mode of convergence. We then apply these properties to several extensions of smoothing spline problems. We discuss the partial spline model, which combines a nonparametric smoothing-spline component with a linear parametric part, and perform simulation studies and applications to yield curves. Specifically, the Nelson-Siegel (NS) model is used to construct the linear component: the NS partial spline model is used to fit a single yield curve, while partial parallel and non-parallel spline models are used for multiple curves. Finally, we discuss the large-p, small-n regression problem associated with the generalized univariate smoothing spline, together with studies on bin smoothing splines, adaptive smoothing splines and correlated smoothing splines.
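
For concreteness, the NS partial spline model presumably takes a form like the following, with Nelson-Siegel factor loadings (maturity $t_i$, decay parameter $\tau$) as the linear parametric part and a smoothing spline $f$ as the nonparametric part; the exact parameterization is an assumption here.

```latex
y_i = \beta_1 + \beta_2\,\frac{1 - e^{-t_i/\tau}}{t_i/\tau}
    + \beta_3\left(\frac{1 - e^{-t_i/\tau}}{t_i/\tau} - e^{-t_i/\tau}\right)
    + f(t_i) + \varepsilon_i
```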


2011 ◽  
Vol 58-60 ◽  
pp. 547-550
Author(s):  
Di Wu ◽  
Zhao Zheng

In the real world, high-dimensional data are everywhere, yet the natural structure behind them is often characterized by only a few parameters. With the rapid development of computer vision, more and more dimensionality reduction problems arise, which in turn has driven the rapid development of dimensionality reduction algorithms: linear methods such as LPP [1] and NPE [2], and nonlinear methods such as LLE [3] and its improved version, kernel NPE. One particularly simple but effective assumption in face recognition is that samples from the same class lie on a linear subspace, which is why many nonlinear methods perform well only on certain artificial data sets. This paper focuses on NPE and the recently proposed SPP [4], and combines these methods; experiments show that the new method outperforms some classic unsupervised methods.
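
NPE and SPP are not available in scikit-learn, but the linear-versus-nonlinear contrast drawn above can be sketched with PCA against the cited LLE on a standard synthetic manifold:

```python
# Embed a swiss-roll manifold to 2-D with a linear and a nonlinear method.
from sklearn.datasets import make_swiss_roll
from sklearn.decomposition import PCA
from sklearn.manifold import LocallyLinearEmbedding

X, _ = make_swiss_roll(n_samples=1000, random_state=0)
Z_linear = PCA(n_components=2).fit_transform(X)
Z_nonlinear = LocallyLinearEmbedding(n_components=2,
                                     n_neighbors=12).fit_transform(X)
```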


Author(s):  
M. R. Osborne ◽  
Tania Prvan

We consider a generalisation of the stochastic formulation of smoothing splines, and discuss the smoothness properties of the resulting conditional expectation (generalised smoothing spline) and the sensitivity of the numerical algorithms. One application is the calculation of smoothing splines with less than the usual order of continuity at the data points.
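
For orientation, the standard (ungeneralised) stochastic formulation is the Kimeldorf-Wahba correspondence: with observations $y_i = g(t_i) + \varepsilon_i$ and $g$ an integrated Wiener process (with a diffuse prior on its polynomial part), the smoothing spline is the conditional expectation

```latex
\hat f(t) = E\{\, g(t) \mid y_1, \dots, y_n \,\}.
```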

