innovation variance
Recently Published Documents

TOTAL DOCUMENTS: 30 (five years: 3)
H-INDEX: 6 (five years: 1)

Mathematics, 2021, Vol. 9 (8), pp. 915
Author(s): Mehmet Balcilar, Riza Demirer, Festus V. Bekun

This paper introduces a new methodology to estimate time-varying alphas and betas in conditional factor models, allowing substantial flexibility in a time-varying framework. To circumvent problems associated with previous approaches, we introduce a Bayesian time-varying parameter model in which the innovations of the state equation follow a spike-and-slab mixture distribution. The mixture specifies two states, each occurring with a given probability. In the first state, the innovation variance is set close to zero and the parameters stay relatively constant. In the second state, the innovation variance is large and the change in parameters is normally distributed with mean zero and a given variance. The latent state is specified with a threshold that governs the state change. We allow a separate threshold for each parameter; thus, the parameters may shift in an unsynchronized manner, with the model moving from one state to the other whenever the change in a parameter crosses its threshold. This approach offers great flexibility and nests a plethora of other time-varying model specifications, allowing us to assess whether the betas of conditional factor models evolve gradually over time or display infrequent but large shifts. We apply the proposed methodology to industry portfolios within a five-factor model setting and show that the threshold Capital Asset Pricing Model (CAPM) provides robust beta estimates coupled with smaller pricing errors than the alternative approaches. The results have significant implications for the implementation of smart beta strategies, which rely heavily on the accuracy and stability of factor betas.
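The state equation with spike-and-slab innovations described in the abstract can be illustrated with a small simulation. All parameter values below (state probability, the two innovation standard deviations, the factor model coefficients) are illustrative assumptions, not the paper's estimates:

```python
import numpy as np

rng = np.random.default_rng(0)

T = 500
p_slab = 0.05        # probability of the "slab" (large-shift) state (assumed)
sigma_spike = 1e-4   # near-zero innovation s.d.: beta stays ~constant
sigma_slab = 0.30    # large innovation s.d.: beta makes an infrequent jump

# State equation: beta_t = beta_{t-1} + eta_t, where eta_t ~ N(0, sigma^2)
# and sigma is drawn from the spike-and-slab mixture each period.
beta = np.empty(T)
beta[0] = 1.0
for t in range(1, T):
    sigma = sigma_slab if rng.random() < p_slab else sigma_spike
    beta[t] = beta[t - 1] + rng.normal(0.0, sigma)

# Observation equation of a one-factor conditional model:
# r_t = alpha + beta_t * f_t + eps_t
alpha = 0.01
f = rng.normal(0.0, 1.0, T)                    # factor returns
r = alpha + beta * f + rng.normal(0.0, 0.05, T)

print(beta.shape, r.shape)
```

The simulated beta path stays flat for long stretches and then shifts abruptly, which is exactly the "gradual evolution vs. infrequent large shifts" distinction the paper's threshold specification is designed to detect.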


2017, Vol. 62 (02), pp. 345-361
Author(s): SOO-BIN JEONG, BONG-HWAN KIM, TAE-HWAN KIM, HYUNG-HO MOON

Spurious rejections of the standard Dickey–Fuller (DF) test caused by a single variance break have been reported and some solutions to correct the problem have been proposed in the literature. Kim et al. (2002) put forward a correctly-sized unit root test robust to a single variance break, called the KLN test. However, there can be more than one break in variance in time series data as documented in Zhou and Perron (2008), so allowing only one break can be too restrictive. In this paper, we show that multiple breaks in variance can generate spurious rejections not only by the standard DF test but also by the KLN test. We then propose a bootstrap-based unit root test that is correctly-sized in the presence of multiple breaks in variance. Simulation experiments demonstrate that the proposed test performs well regardless of the number of breaks and the location of the breaks in innovation variance.
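The spurious-rejection phenomenon the paper addresses can be sketched with a small Monte Carlo experiment. The Dickey-Fuller regression is coded directly in numpy; the break location, variance ratio, sample size, and replication count are illustrative assumptions, and this is a demonstration of the problem, not the authors' bootstrap test:

```python
import numpy as np

rng = np.random.default_rng(1)

def df_tstat(y):
    """t-statistic on rho in the DF regression dy_t = c + rho*y_{t-1} + e_t."""
    dy = np.diff(y)
    X = np.column_stack([np.ones(len(dy)), y[:-1]])
    coef = np.linalg.lstsq(X, dy, rcond=None)[0]
    resid = dy - X @ coef
    s2 = resid @ resid / (len(dy) - 2)
    se = np.sqrt(s2 * np.linalg.inv(X.T @ X)[1, 1])
    return coef[1] / se

def random_walk_with_breaks(T, breaks, sigmas, rng):
    """Random walk whose innovation s.d. changes at each break point."""
    sigma = np.full(T, sigmas[0], dtype=float)
    for bp, s in zip(breaks, sigmas[1:]):
        sigma[bp:] = s
    return np.cumsum(sigma * rng.normal(size=T))

n_rep, T, crit = 500, 300, -2.86   # 5% critical value, constant-only DF case
rej_const = sum(df_tstat(np.cumsum(rng.normal(size=T))) < crit
                for _ in range(n_rep))
rej_break = sum(df_tstat(random_walk_with_breaks(T, [60], [5.0, 0.5], rng)) < crit
                for _ in range(n_rep))
print(f"rejection rate, constant variance:   {rej_const / n_rep:.2f}")
print(f"rejection rate, early variance break: {rej_break / n_rep:.2f}")
```

With constant innovation variance the rejection rate sits near the nominal 5%, while an early abrupt drop in variance inflates it substantially even though the unit-root null is true in both cases.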


2016, Vol. 33 (3)
Author(s): Steven Cook

Using local-to-unity detrending, the GLS-based Dickey-Fuller test has been shown to possess higher power than other available unit root tests. As a result, application of this easily implemented test has increased in recent years. In the present study, the finite-sample size and power of the GLS-based Dickey-Fuller test are examined in the presence of breaks in innovation variance under the null. In contrast to the original Dickey-Fuller test, which has been shown to suffer severe distortion in such circumstances, the GLS-based test exhibits robustness to all but the most extreme breaks in variance. The results derived show the GLS-based test to be more robust to variance breaks than other modified Dickey-Fuller tests previously considered in the literature.
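The local-to-unity GLS detrending step can be sketched as follows. This is the standard ERS-style formulation for the constant-only case (quasi-differencing with c̄ = -7), assumed here for illustration rather than taken from the paper's exact implementation:

```python
import numpy as np

def gls_detrend(y, c_bar=-7.0):
    """ERS local-to-unity GLS detrending, constant-only case (c_bar = -7)."""
    T = len(y)
    a = 1.0 + c_bar / T
    # Quasi-difference the series and the constant regressor
    y_a = np.concatenate([[y[0]], y[1:] - a * y[:-1]])
    z_a = np.concatenate([[1.0], np.full(T - 1, 1.0 - a)])
    # GLS estimate of the mean, then detrend the original series
    mu_hat = (z_a @ y_a) / (z_a @ z_a)
    return y - mu_hat

def df_tstat_no_const(y):
    """DF t-statistic with no deterministics: dy_t = rho * y_{t-1} + e_t."""
    dy, ylag = np.diff(y), y[:-1]
    rho = (ylag @ dy) / (ylag @ ylag)
    resid = dy - rho * ylag
    se = np.sqrt((resid @ resid) / (len(dy) - 1) / (ylag @ ylag))
    return rho / se

rng = np.random.default_rng(2)
y = np.cumsum(rng.normal(size=250)) + 10.0   # unit-root series, nonzero mean
t_gls = df_tstat_no_const(gls_detrend(y))
print(round(float(t_gls), 2))
```

The DF-GLS statistic is then compared against its own critical values (about -1.95 at the 5% level for the constant-only case), not the standard Dickey-Fuller ones.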


2016, Vol. 11 (1), pp. 4
Author(s): Linda Anderson

Abstract Objective – Measures of trends in Iowa State University library website visits per student/faculty/staff headcount show decreased use. Analysis was conducted to test for a relationship between this decrease and decreasing graduate/undergraduate enrollment ratios and decreasing visits to a popular digital collection. The purpose was to measure the influence of these factors and to produce an adjusted measure of trend which accounts for these factors. Methods – Website transaction log data and enrollment data were modelled with Box and Jenkins time series analysis methods (regression with ARMA errors). Results – A declining graduate to undergraduate enrollment ratio at Iowa State University explained 23% of the innovation variance of library website visits per headcount over the study period, while visits to a popular digital collection also declined, explaining 34% of the innovation variance. Rolling windows analysis showed that the effect of the graduate/undergraduate ratio increased over the study period, while the effect of digital collection visits decreased. In addition, estimates of website usage by graduate students and undergraduates, after accounting for other factors, matched estimates from a survey. Conclusion – A rolling windows metric of mean change adjusted for changes in demographics and other factors allows for a fairer comparison of year-to-year website usage, while also measuring the change in influence of these factors. Adjusting for these influences provides a baseline for studying the effect of interventions, such as website design changes. Box-Jenkins methods of analysis for time series data can provide a more accurate measure than ordinary regression, demonstrated by estimating undergraduate and graduate website usage to corroborate survey data. While overall website usage is decreasing, it is not clear it is decreasing for all groups. Inferences were made about demographic groups with data that is not tied to individuals, thus alleviating privacy concerns.
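The "regression with ARMA errors" approach can be sketched with a simple Cochrane-Orcutt iteration on simulated data. All variable names and numeric values below are hypothetical stand-ins; the study itself applied Box-Jenkins methods to real transaction-log and enrollment data:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical monthly data: visits per headcount driven by a declining
# graduate/undergraduate enrollment ratio, with AR(1) errors.
n = 120
grad_ratio = np.linspace(0.30, 0.22, n) + rng.normal(0, 0.005, n)
e = np.zeros(n)
for t in range(1, n):
    e[t] = 0.6 * e[t - 1] + rng.normal(0, 0.5)   # AR(1) error structure
visits = 12.0 + 40.0 * grad_ratio + e

# Cochrane-Orcutt: OLS fit, estimate rho from residuals,
# quasi-difference both sides, re-fit to correct for autocorrelation.
X = np.column_stack([np.ones(n), grad_ratio])
beta_ols = np.linalg.lstsq(X, visits, rcond=None)[0]
resid = visits - X @ beta_ols
rho = (resid[:-1] @ resid[1:]) / (resid[:-1] @ resid[:-1])
Xs = X[1:] - rho * X[:-1]
ys = visits[1:] - rho * visits[:-1]
beta_co = np.linalg.lstsq(Xs, ys, rcond=None)[0]
print(round(float(rho), 2), beta_co.round(1))
```

The "innovation variance" the Results section refers to is the variance of the whitened residuals after this kind of correction, which is why the percentages quoted are shares of innovation variance rather than of raw variance.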


2014, Vol. 20 (1), pp. 251-275
Author(s): Giacomo Carboni

Term premia are shown to provide crucial information for discriminating among alternative sources of change in the economy, namely shifts in the variance of structural shocks and in monetary policy. These sources have been identified as competing explanations for time-varying features of major industrial economies during the 1980s and 1990s. Although hardly distinguishable through the lens of standard dynamic stochastic general equilibrium (DSGE) models, lower nonpolicy shock variances and "tighter" monetary policy regimes imply higher and lower term premia, respectively. As a result, moving to tighter monetary policy alone cannot explain the improved U.S. macroeconomic stability in the 1980s and 1990s: term premia would have shifted downward, a fact inconsistent with the evidence of higher premia from the early 1980s onward. Conversely, favorable shifts in nonpolicy innovation variance imply movements in term premia that are at least qualitatively consistent with historical patterns.

