Factor-augmented forecasting regressions with threshold effects

2021 ◽  
Author(s):  
Yayi Yan ◽  
Tingting Cheng

Abstract: This paper introduces a factor-augmented forecasting regression model in the presence of threshold effects. We consider least squares estimation of the regression parameters and establish asymptotic theory for the estimators of both the slope coefficients and the threshold parameter. Prediction intervals are also constructed for the factor-augmented forecasts. Moreover, we develop a likelihood ratio statistic for tests on the threshold parameter and a sup-Wald statistic for tests of the presence of threshold effects. Simulation results show that the proposed estimation method and testing procedures work well in finite samples. Finally, we demonstrate the usefulness of the proposed model through an application to forecasting stock market returns.
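The least squares idea behind such threshold models can be illustrated with a generic two-regime regression, where the threshold parameter is profiled out by a grid search over observed values of the threshold variable. This is a minimal sketch under that generic setup, not the paper's factor-augmented estimator; all names and the simulated data are illustrative.

```python
import numpy as np

def threshold_ls(y, x, q, trim=0.15):
    """Grid-search least squares for y = x'b1*1{q<=g} + x'b2*1{q>g} + e."""
    n = len(y)
    # Trim the tails of q so each regime keeps enough observations
    grid = np.sort(q)[int(trim * n):int((1 - trim) * n)]
    best = (np.inf, None, None, None)
    for g in grid:
        low = q <= g
        X = np.hstack([x * low[:, None], x * (~low)[:, None]])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta
        ssr = resid @ resid
        if ssr < best[0]:
            k = x.shape[1]
            best = (ssr, g, beta[:k], beta[k:])
    return best  # (SSR, gamma_hat, beta1_hat, beta2_hat)

# Simulated check: the regime switches at gamma = 0
rng = np.random.default_rng(0)
n = 500
x = np.column_stack([np.ones(n), rng.standard_normal(n)])
q = rng.standard_normal(n)
y = np.where(q <= 0.0, x @ [1.0, 0.5], x @ [-1.0, 2.0]) + 0.1 * rng.standard_normal(n)
ssr, g_hat, b1, b2 = threshold_ls(y, x, q)
```

With a clear regime shift, the profiled SSR is minimized at an order statistic of q close to the true threshold, and the regime-specific slopes are recovered by ordinary least squares within each regime.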

2014 ◽  
Vol 891-892 ◽  
pp. 1639-1644 ◽  
Author(s):  
Kazutaka Mukoyama ◽  
Koushu Hanaki ◽  
Kenji Okada ◽  
Akiyoshi Sakaida ◽  
Atsushi Sugeta ◽  
...  

The aim of this study is to develop a statistical method for estimating S-N curves for iron and structural steels from their static mechanical properties. First, S-N data for pure iron and structural steels were extracted from the "Database on fatigue strength of Metallic Materials" published by the Society of Materials Science, Japan (JSMS), and an S-N curve regression model was applied based on the JSMS standard, "Standard Evaluation Method of Fatigue Reliability for Metallic Materials -Standard Regression Method of S-N Curve-". Second, correlations between the regression parameters and static mechanical properties were investigated. The regression parameters showed strong correlations with the static mechanical properties (e.g. fatigue limit E and static tensile strength σB). Using these correlations, the S-N curves for iron and structural steels can be predicted easily from their static mechanical properties.
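A common building block for such S-N regressions is a Basquin-type log-linear fit of stress amplitude against cycles to failure. The sketch below uses that simple form with synthetic data as a stand-in for the JSMS standard regression method referenced above; it is not the standard itself.

```python
import numpy as np

def fit_basquin(stress, cycles):
    """Fit log10(S) = c0 + c1*log10(N) by ordinary least squares."""
    X = np.column_stack([np.ones(len(cycles)), np.log10(cycles)])
    coef, *_ = np.linalg.lstsq(X, np.log10(stress), rcond=None)
    return coef  # (intercept, slope); the slope is the Basquin exponent

# Synthetic S-N data generated from S = 1000 * N^(-0.1)
N = np.array([1e4, 1e5, 1e6, 1e7])
S = 1000.0 * N ** -0.1
c0, c1 = fit_basquin(S, N)
```

Because the fit is linear in log-log coordinates, correlating the two fitted parameters with static properties such as tensile strength is then a routine regression exercise.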


Author(s):  
Joko Susanto

The aim of this research is to test whether decreasing worker productivity results in decreasing nominal wages for production workers below supervisor level. Statistical data from BPS were used in this research. The data consist of the nominal base and overtime wages of production workers below supervisor level, worker productivity, and capital intensity. The research uses regression analysis with OLS estimation, based on a dynamic panel data model. Finally, a redundant-coefficient test was used to drop insignificant regression parameters and obtain a parsimonious model. The results are as follows: (1) decreasing worker productivity does not result in decreasing nominal base wages of production workers below supervisor level; (2) decreasing worker productivity does result in decreasing overtime wages of production workers below supervisor level.
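A dynamic panel data model of this kind regresses the current wage on its own lag plus covariates. The sketch below is a pooled-OLS version on simulated data (no individual effects), purely to illustrate the structure; the variable names are hypothetical and the data are not the BPS data used by the author.

```python
import numpy as np

def pooled_ols(y, X):
    """Pooled OLS over the stacked panel observations."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

# Simulate a panel: wage_it = 0.5*wage_i,t-1 + 0.8*prod_it + noise
rng = np.random.default_rng(1)
n_units, n_periods = 30, 20
wage = np.zeros((n_units, n_periods))
prod = rng.standard_normal((n_units, n_periods))
for t in range(1, n_periods):
    wage[:, t] = 0.5 * wage[:, t - 1] + 0.8 * prod[:, t] + 0.05 * rng.standard_normal(n_units)

# Stack (y_it, [wage_i,t-1, prod_it]) over units i and periods t >= 1
y = wage[:, 1:].ravel()
X = np.column_stack([wage[:, :-1].ravel(), prod[:, 1:].ravel()])
rho_hat, beta_hat = pooled_ols(y, X)
```

With individual fixed effects a lagged dependent variable makes pooled OLS biased (the Nickell bias), which is why dynamic panel studies typically justify or adjust their estimator; the sketch omits fixed effects so plain OLS remains consistent.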


Entropy ◽  
2020 ◽  
Vol 22 (11) ◽  
pp. 1294
Author(s):  
Lijuan Huo ◽  
Jin Seo Cho

This study examined the extreme learning machine (ELM) applied to the Wald test statistic for the model specification of the conditional mean, which we call the WELM testing procedure. The omnibus test statistics available in the literature weakly converge to a Gaussian stochastic process under the null that the model is correct, which makes their application inconvenient. By contrast, the WELM testing procedure is straightforwardly applicable when detecting model misspecification. We applied the WELM testing procedure to the sequential testing procedure formed by a set of polynomial models and estimated an approximate conditional expectation. We then conducted extensive Monte Carlo experiments to evaluate the performance of the sequential WELM testing procedure and verify that it consistently estimates the most parsimonious conditional mean when the set of polynomial models contains a correctly specified model. Otherwise, it consistently rejects all the models in the set.
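The sequential idea, stripped of the ELM machinery, is to fit polynomial models of increasing order and apply a Wald-type test to the newly added coefficient, stopping at the first insignificant extension. This sketch uses a plain homoskedastic OLS Wald statistic on simulated data, not the WELM statistic itself; the 3.84 cutoff is the 5% chi-squared(1) critical value.

```python
import numpy as np

def wald_last_coef(y, X):
    """Homoskedastic OLS Wald statistic for the last coefficient being zero."""
    n, k = X.shape
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = resid @ resid / (n - k)
    var = sigma2 * np.linalg.inv(X.T @ X)
    return beta[-1] ** 2 / var[-1, -1]

def select_order(y, x, max_order=5, crit=3.84):
    """Raise the polynomial order while each added term tests significant."""
    order = 0
    for p in range(1, max_order + 1):
        X = np.vander(x, p + 1, increasing=True)  # columns 1, x, ..., x^p
        if wald_last_coef(y, X) > crit:
            order = p
        else:
            break
    return order

# Data generated from a quadratic conditional mean
rng = np.random.default_rng(2)
x = rng.uniform(-2, 2, 400)
y = 1.0 + 0.5 * x - 1.5 * x ** 2 + 0.2 * rng.standard_normal(400)
order = select_order(y, x)
```

On data with a quadratic conditional mean, the procedure selects order 2 with high probability, mirroring the "most parsimonious correct model" behavior described above.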


2019 ◽  
Vol 12 (3) ◽  
pp. 139 ◽  
Author(s):  
Anders Eriksson ◽  
Daniel P. A. Preve ◽  
Jun Yu

This paper introduces a parsimonious and yet flexible semiparametric model to forecast financial volatility. The new model extends a related linear nonnegative autoregressive model previously used in the volatility literature by way of a power transformation. It is semiparametric in the sense that the distributional and functional form of its error component is partially unspecified. The statistical properties of the model are discussed and a novel estimation method is proposed. Simulation studies validate the new method and suggest that it works reasonably well in finite samples. The out-of-sample forecasting performance of the proposed model is evaluated against a number of standard models, using data on S&P 500 monthly realized volatilities. Some commonly used loss functions are employed to evaluate the predictive accuracy of the alternative models. It is found that the new model generally generates highly competitive forecasts.


2014 ◽  
Vol 31 (4) ◽  
pp. 753-777 ◽  
Author(s):  
Haiqiang Chen ◽  
Ying Fang ◽  
Yingxing Li

This paper considers estimation and inference for varying-coefficient models with nonstationary regressors. We propose a nonparametric estimation method using penalized splines, which achieves the same optimal convergence rate as kernel-based methods but enjoys computational advantages. Utilizing the mixed model representation of penalized splines, we develop a likelihood ratio test statistic for checking the stability of the regression coefficients. We derive both the exact and the asymptotic null distributions of this test statistic. We also demonstrate its optimality by examining its local power performance. These theoretical findings are well supported by simulation studies.
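One standard way to implement a penalized spline, and the source of its mixed-model representation, is a truncated power basis with a ridge penalty on the knot coefficients (the "random effects"), leaving the polynomial part unpenalized. A minimal univariate sketch, with illustrative defaults rather than the paper's varying-coefficient setup:

```python
import numpy as np

def pspline_fit(x, y, n_knots=20, degree=3, lam=0.1):
    """Penalized spline: truncated power basis, ridge penalty on knot terms."""
    knots = np.quantile(x, np.linspace(0, 1, n_knots + 2)[1:-1])
    # Design matrix: polynomial part plus truncated power functions at the knots
    X = np.column_stack([x ** d for d in range(degree + 1)]
                        + [np.maximum(x - k, 0.0) ** degree for k in knots])
    # Penalize only the knot coefficients -- the mixed-model "random effects"
    D = np.diag([0.0] * (degree + 1) + [1.0] * n_knots)
    coef = np.linalg.solve(X.T @ X + lam * D, X.T @ y)
    return X @ coef

rng = np.random.default_rng(3)
x = np.sort(rng.uniform(0, 1, 300))
y_true = np.sin(2 * np.pi * x)
y = y_true + 0.1 * rng.standard_normal(300)
fit = pspline_fit(x, y)
mse = np.mean((fit - y_true) ** 2)
```

The penalized least squares problem is a single linear solve, which is where the computational advantage over kernel smoothing (one local fit per evaluation point) comes from.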


2008 ◽  
Vol 24 (4) ◽  
pp. 829-864 ◽  
Author(s):  
Liangjun Su ◽  
Halbert White

We propose a nonparametric test of conditional independence based on the weighted Hellinger distance between the two conditional densities, f(y|x,z) and f(y|x), which is identically zero under the null. We use the functional delta method to expand the test statistic around the population value and establish asymptotic normality under β-mixing conditions. We show that the test is consistent and has power against alternatives at distance n^{-1/2} h^{-d/4}. The cases for which not all random variables of interest are continuously valued or observable are also discussed. Monte Carlo simulation results indicate that the test behaves reasonably well in finite samples and significantly outperforms some earlier tests for a variety of data generating processes. We apply our procedure to test for Granger noncausality in exchange rates.


2015 ◽  
Vol 32 (6) ◽  
pp. 1434-1482 ◽  
Author(s):  
Meng Huang ◽  
Yixiao Sun ◽  
Halbert White

This paper proposes a nonparametric test for conditional independence that is easy to implement, yet powerful in the sense that it is consistent and achieves n^{-1/2} local power. The test statistic is based on an estimator of the topological “distance” between restricted and unrestricted probability measures corresponding to conditional independence or its absence. The distance is evaluated using a family of Generically Comprehensively Revealing (GCR) functions, such as the exponential or logistic functions, which are indexed by nuisance parameters. The use of GCR functions makes the test able to detect any deviation from the null. We use a kernel smoothing method when estimating the distance. An integrated conditional moment (ICM) test statistic based on these estimates is obtained by integrating out the nuisance parameters. We simulate the critical values using a conditional simulation approach. Monte Carlo experiments show that the test performs well in finite samples. As an application, we test an implication of the key assumption of unconfoundedness in the context of estimating the returns to schooling.


1994 ◽  
Vol 10 (1) ◽  
pp. 70-90 ◽  
Author(s):  
R.M. de Jong ◽  
H.J. Bierens

In this paper, a consistent model specification test is proposed. Several consistent model specification tests have been discussed in the econometrics literature. Those tests are consistent by randomization, display a discontinuity in sample size, or have an asymptotic distribution that depends on the data-generating process and on the model, whereas our test has none of these disadvantages. Our test can be viewed as a conditional moment test as proposed by Newey, but instead of a fixed number of conditional moments, an asymptotically infinite number of moment conditions is employed. The use of an asymptotically infinite number of conditional moments makes it possible to obtain a consistent test. Computation of the test statistic is particularly simple, since in finite samples our statistic is equivalent to a chi-square conditional moment test with a finite number of conditional moments.


Author(s):  
Khaoula Aidi ◽  
Nadeem Shafique Butt ◽  
Mir Masoom Ali ◽  
Mohamed Ibrahim ◽  
Haitham M. Yousof ◽  
...  

A new modified version of the Bagdonavičius-Nikulin goodness-of-fit test statistic is presented for assessing validity under right censoring for the double Burr type X distribution. Maximum likelihood estimation for the censored-data case is used. Simulations via the Barzilai-Borwein algorithm are performed to assess the right-censored estimation method. Another simulation study tests the null hypothesis under the modified version of the Bagdonavičius-Nikulin goodness-of-fit test. Four right-censored data sets are analyzed with the new modified test statistic to check distributional validity.
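The Barzilai-Borwein method mentioned above is a gradient scheme whose step size is chosen from the two most recent iterates rather than by line search. A minimal sketch on a simple quadratic objective (an illustrative target, not the double Burr type X likelihood):

```python
import numpy as np

def barzilai_borwein(grad, x0, n_iter=50, step0=1e-3):
    """Gradient descent with the BB1 step size (s's)/(s'd)."""
    x_prev = np.asarray(x0, dtype=float)
    g_prev = grad(x_prev)
    x = x_prev - step0 * g_prev  # initial plain gradient step
    for _ in range(n_iter):
        g = grad(x)
        if np.linalg.norm(g) < 1e-10:
            break
        s, d = x - x_prev, g - g_prev  # iterate and gradient differences
        step = (s @ s) / (s @ d)       # BB1 step size
        x_prev, g_prev = x, g
        x = x - step * g
    return x

# Minimize f(x) = 0.5 x'Ax - b'x, whose gradient is Ax - b and minimizer A^{-1}b
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
x_star = barzilai_borwein(lambda x: A @ x - b, np.zeros(2))
```

The appeal for likelihood work is that only gradients are needed: no Hessian and no line search, which keeps each simulation replication cheap.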


2022 ◽  
Vol 12 ◽  
Author(s):  
Roman Schefzik ◽  
Leonie Boland ◽  
Bianka Hahn ◽  
Thomas Kirschning ◽  
Holger A. Lindner ◽  
...  

Statistical network analyses have become popular in many scientific disciplines, where an important task is to test for differences between two networks. We describe an overall framework for differential network testing procedures that vary regarding (1) the network estimation method, typically based on specific concepts of association, and (2) the network characteristic employed to measure the difference. Using permutation-based tests, our approach is general and applicable to various overall, node-specific or edge-specific network difference characteristics. The methods are implemented in our freely available R software package DNT, along with an R Shiny application. In a study in intensive care medicine, we compare networks based on parameters representing main organ systems to evaluate the prognosis of critically ill patients in the intensive care unit (ICU), using data from the surgical ICU of the University Medical Centre Mannheim, Germany. We specifically consider both cross-sectional comparisons between a non-survivor and a survivor group and longitudinal comparisons at two clinically relevant time points during the ICU stay: first, at admission, and second, at an event stage prior to death in non-survivors or a matching time point in survivors. The non-survivor and the survivor networks do not significantly differ at the admission stage. However, the organ system interactions of the survivors then stabilize at the event stage, revealing significantly more network edges, whereas those of the non-survivors do not. In particular, the liver appears to play a central role for the observed increased connectivity in the survivor network at the event stage.
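The permutation logic of such differential network tests can be shown in a toy form: build a correlation network per group (an edge where |correlation| exceeds a cutoff), take the difference in edge counts as the test characteristic, and recompute it under shuffled group labels. Everything below is illustrative synthetic data, not the ICU data, and a far simpler setup than the DNT package supports.

```python
import numpy as np

def edge_count(data, cutoff=0.45):
    """Number of undirected edges in the thresholded correlation network."""
    c = np.corrcoef(data, rowvar=False)
    mask = np.abs(c) > cutoff
    return (mask.sum() - len(c)) // 2  # drop the diagonal, count each pair once

def permutation_pvalue(a, b, n_perm=500, seed=0):
    """Permutation p-value for the absolute difference in edge counts."""
    rng = np.random.default_rng(seed)
    observed = abs(edge_count(a) - edge_count(b))
    pooled = np.vstack([a, b])
    n_a = len(a)
    hits = 0
    for _ in range(n_perm):
        idx = rng.permutation(len(pooled))
        diff = abs(edge_count(pooled[idx[:n_a]]) - edge_count(pooled[idx[n_a:]]))
        hits += diff >= observed
    return (hits + 1) / (n_perm + 1)

rng = np.random.default_rng(4)
n, p = 120, 5
group_a = rng.standard_normal((n, p))                        # independent variables
latent = rng.standard_normal((n, 1))
group_b = 0.8 * latent + 0.6 * rng.standard_normal((n, p))   # shared-factor correlations
p_value = permutation_pvalue(group_a, group_b)
```

Here one group's variables load on a common factor while the other's are independent, so the observed edge-count difference is extreme relative to the permutation distribution and the test rejects.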

