On Mixture GARCH Models: Long, Short Memory and Application in Finance

2021 ◽  
Vol 2 (2) ◽  
pp. 01-07
Author(s):  
Halim Zeghdoudi ◽  
Madjda Amrani

In this work, we study a well-known volatility model, the mixed-memory generalized autoregressive conditional heteroscedasticity (MMGARCH) model, for modeling nonlinear time series. The MMGARCH model mixes two components: a short-memory GARCH and a long-memory GARCH. The main objective of this study is to find the best model among the mixtures we construct (long memory with long memory, short memory with short memory, and short memory with long memory); the existence of a stationary solution is also discussed. Monte Carlo experiments confirm the theoretical findings. In addition, an empirical application of the MMGARCH(1,1) model to the daily DOW and NASDAQ indices illustrates its capabilities; we find that the mixture of APARCH and EGARCH is superior to the other models tested because it produces the smallest errors.
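The two-component idea described in the abstract can be previewed in a few lines. Below is an illustrative convex mixture of two GARCH(1,1) variance recursions: one with low persistence standing in for the short-memory part, and one near-integrated component mimicking the slowly decaying autocorrelation of the long-memory part. All parameter values and the mixing rule are assumptions for illustration, not the MMGARCH specification of the paper.

```python
import numpy as np

def simulate_mixture_garch(n=1000, w=0.5, seed=0):
    """Toy two-component volatility mixture (illustrative only).

    Component A has low persistence and stands in for the short-memory
    GARCH part; component B is near-integrated (alpha + beta = 0.99) and
    mimics long-memory-like persistence. Parameters and the convex
    mixing rule are invented, not the paper's MMGARCH specification.
    """
    rng = np.random.default_rng(seed)
    omega_a, alpha_a, beta_a = 0.05, 0.10, 0.60   # short-memory component
    omega_b, alpha_b, beta_b = 0.01, 0.09, 0.90   # near-integrated component
    ha, hb = 1.0, 1.0                             # initial conditional variances
    r = np.empty(n)
    for t in range(n):
        h = w * ha + (1.0 - w) * hb               # mixed conditional variance
        r[t] = np.sqrt(h) * rng.standard_normal() # simulated return
        ha = omega_a + alpha_a * r[t] ** 2 + beta_a * ha
        hb = omega_b + alpha_b * r[t] ** 2 + beta_b * hb
    return r

returns = simulate_mixture_garch()
print(returns.shape)  # (1000,)
```

Varying the weight `w` interpolates between purely short-memory and persistent volatility dynamics, which is the intuition behind comparing the different mixtures.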

2019 ◽  
Vol 3 (1) ◽  
pp. 243-256
Author(s):  
Peter M. Robinson

Abstract. We discuss developments and future prospects for statistical modeling and inference for spatial data that have long memory. While a number of contributions have been made, the literature is relatively small and scattered compared to the literatures on long memory time series on the one hand, and spatial data with short memory on the other. Thus, over several topics, our discussions frequently begin by surveying relevant work in these areas that might be extended in a long memory spatial setting.


2006 ◽  
Vol 3 (4) ◽  
pp. 1603-1627 ◽  
Author(s):  
W. Wang ◽  
P. H. A. J. M. van Gelder ◽  
J. K. Vrijling ◽  
X. Chen

Abstract. Lo's R/S test (Lo, 1991), the GPH test (Geweke and Porter-Hudak, 1983) and the maximum likelihood estimation method implemented in S-Plus (S-MLE) are evaluated through intensive Monte Carlo simulations for detecting the existence of long memory. It is shown that it is difficult to find an appropriate lag q for Lo's test for different AR and ARFIMA processes, which makes the use of Lo's test very tricky. In general, the GPH test outperforms Lo's test, but in cases with strong autocorrelation (e.g., AR(1) processes with φ=0.97 or even 0.99), the GPH test is totally useless, even for time series of large size. Although the S-MLE method does not provide a statistical test for the existence of long memory, the estimates of d given by S-MLE seem to give a good indication of whether or not long memory is present. Data size has a significant impact on the power of all three methods. Generally, the power of Lo's test and the GPH test increases with data size, and the estimates of d from the GPH test and S-MLE converge as data size increases. According to the results of Lo's R/S test, the GPH test and the S-MLE method, all daily flow series exhibit long memory. The intensity of long memory in daily streamflow processes has only a very weak positive relationship with the scale of the watershed.
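For reference, the GPH method evaluated above rests on a log-periodogram regression at low Fourier frequencies. The sketch below computes only the point estimate of the memory parameter d; the bandwidth choice m = n**0.5 is one common convention (not the only one), and no standard error or test statistic is produced.

```python
import numpy as np

def gph_estimate(x, power=0.5):
    """Geweke-Porter-Hudak log-periodogram estimate of the memory
    parameter d (minimal sketch: point estimate only, bandwidth
    m = n**power, no standard error or test statistic)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    m = int(n ** power)                        # number of low frequencies used
    lam = 2.0 * np.pi * np.arange(1, m + 1) / n
    # periodogram at the first m Fourier frequencies
    fft = np.fft.fft(x - x.mean())
    I = np.abs(fft[1:m + 1]) ** 2 / (2.0 * np.pi * n)
    # regress log I(lam_j) on -2*log(2*sin(lam_j/2)); the slope estimates d
    y = np.log(I)
    z = -2.0 * np.log(2.0 * np.sin(lam / 2.0))
    zc = z - z.mean()
    return float(np.sum(zc * (y - y.mean())) / np.sum(zc ** 2))

rng = np.random.default_rng(1)
d_white = gph_estimate(rng.standard_normal(4096))  # should be close to 0
print(round(d_white, 2))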


2021 ◽  
Vol 0 (0) ◽  
Author(s):  
Jorge Martínez Compains ◽  
Ignacio Rodríguez Carreño ◽  
Ramazan Gençay ◽  
Tommaso Trani ◽  
Daniel Ramos Vilardell

Abstract Johansen's Cointegration Test (JCT) performs remarkably well in finding stable bivariate cointegration relationships. Nonetheless, the JCT is not necessarily designed to detect such relationships in the presence of non-linear patterns such as structural breaks or cycles that fall in the low-frequency portion of the spectrum. Seasonal adjustment procedures might not detect such non-linear patterns, and thus we expose the difficulty of identifying cointegrating relations under the traditional use of the JCT. In several Monte Carlo experiments, we show that wavelets can strengthen the JCT framework more than traditional seasonal adjustment methodologies can, allowing for the identification of hidden cointegrating relationships. Moreover, we confirm these results using seasonally adjusted time series such as US consumption and income, gross national product (GNP) and money supply M1, and GNP and M2.
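The failure mode discussed above can be previewed without any econometrics library: a cointegrated pair has a stationary spread, but a slow cycle hidden in one series leaves the spread dominated by low-frequency variation. A minimal numpy sketch, with all series and parameters invented for illustration (this is not the JCT itself, only the distortion it faces):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 2000
t = np.arange(n)

x = np.cumsum(rng.standard_normal(n))           # I(1) common trend
y = x + rng.standard_normal(n)                  # cointegrated with x (beta = 1)
spread = y - x                                  # stationary: it does not wander
print(np.std(spread) < np.std(x))               # True

# A slow cycle in y (period 500, well inside the low-frequency band)
# contaminates the spread, mimicking the patterns discussed above:
y_cyc = y + 8.0 * np.sin(2.0 * np.pi * t / 500)
print(np.std(y_cyc - x) > 2 * np.std(spread))   # True
```

Filtering out the low-frequency band before testing (the role wavelets play in the paper) would restore a spread close to `spread` above.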


2002 ◽  
Vol 18 (2) ◽  
pp. 278-296 ◽  
Author(s):  
Katsuto Tanaka

The measurement error problem considered in this paper concerns the situation where time series data of various kinds—short memory, long memory, and random walk processes—are contaminated by white noise. We suggest a unified approach to testing for the existence of such noise. It is found that the power of our test depends crucially on the underlying process.


2014 ◽  
Vol 32 (2) ◽  
pp. 431-457 ◽  
Author(s):  
Jiti Gao ◽  
Peter M. Robinson

A semiparametric model is proposed in which a parametric filtering of a nonstationary time series, incorporating fractional differencing with a short memory correction, removes correlation but leaves a nonparametric deterministic trend. Estimates of the memory parameter and other dependence parameters are proposed, and shown to be consistent and asymptotically normally distributed with parametric rate. Tests with standard asymptotics for I(1) and other hypotheses are thereby justified. Estimation of the trend function is also considered. We include a Monte Carlo study of finite-sample performance.
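The fractional-differencing filter at the heart of such models follows directly from the binomial expansion of (1 - L)**d. The sketch below applies a truncated version of that filter only; the short memory correction and the rest of the semiparametric procedure described above are omitted, and the truncation threshold is an arbitrary choice.

```python
import numpy as np

def frac_diff(x, d, thresh=1e-5):
    """Apply the fractional difference filter (1 - L)**d via its binomial
    expansion, truncated once the weights fall below `thresh`. Bare sketch
    of the filtering idea only; the paper's procedure also includes a
    short memory correction, omitted here."""
    w = [1.0]
    k = 1
    while abs(w[-1]) > thresh and k < len(x):
        w.append(-w[-1] * (d - k + 1.0) / k)  # recursion for binomial weights
        k += 1
    w = np.asarray(w)
    # y_t = sum_j w_j * x_{t-j}; truncate the convolution to the sample length
    return np.convolve(np.asarray(x, dtype=float), w, mode="full")[: len(x)]

# d = 1 reduces to the ordinary first difference:
print(frac_diff(np.arange(6.0), 1.0))  # [0. 1. 1. 1. 1. 1.]
```

Non-integer d between 0 and 0.5 produces the slowly decaying weights characteristic of stationary long memory.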


2012 ◽  
Vol 10 (1) ◽  
pp. 49
Author(s):  
Douglas Gomes dos Santos ◽  
Flávio Augusto Ziegelmann

In this paper, we compare semiparametric additive models with GARCH models in terms of their ability to estimate and forecast volatility during crisis periods. Our Monte Carlo studies indicate a better performance for GARCH models when their functional forms do not differ from that of the specified Data Generating Process (DGP). However, if they differ from the DGP, the results suggest the superiority of additive models. Additionally, we perform an empirical application to three selected periods of high volatility in the IBOVESPA returns series, in which both families of models obtain similar results.

