Volatility Fitting Performance of QGARCH(1,1) Model with Student-t, GED, and SGED Distributions

2020 ◽  
Vol 11 (2) ◽  
pp. 97-104
Author(s):  
Didit Budi Nugroho ◽  
Bintoro Ady Pamungkas ◽  
Hanna Arini Parhusip

The research had two objectives. First, it compared the performance of the Generalized Autoregressive Conditional Heteroscedasticity (GARCH(1,1)) and Quadratic GARCH (QGARCH(1,1)) models based on their fit to real data sets. The models assumed that the return errors follow four different distributions: Normal (Gaussian), Student-t, Generalized Error Distribution (GED), and Skew GED (SGED). Maximum likelihood estimation is usually employed in estimating GARCH models, but it may not be easily applied to more complicated ones. Second, it provided two ways to evaluate the considered models. The models were estimated using the Generalized Reduced Gradient (GRG) Non-Linear method in Excel's Solver and the Adaptive Random Walk Metropolis (ARWM) method in the Scilab program. The real data in the empirical study were the Financial Times Stock Exchange Milano Italia Borsa (FTSEMIB) and Stoxx Europe 600 indices over the daily period from January 2000 to December 2017, used to test the conditional variance process and to see whether the estimation methods could adapt to the more complicated models. The analysis shows that the GRG Non-Linear method in Excel's Solver and the ARWM method give close results, indicating good estimation ability. Based on the Akaike Information Criterion (AIC), the QGARCH(1,1) model provides a better fit than the GARCH(1,1) model under each distribution specification. Overall, the QGARCH(1,1) model with the SGED distribution best fits both data sets.
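For readers unfamiliar with the two variance specifications being compared, the sketch below gives one standard parameterization of the GARCH(1,1) and QGARCH(1,1) conditional variance recursions (the quadratic GARCH form of Sentana); the exact specification and parameter constraints used by the authors may differ.

```latex
% GARCH(1,1) conditional variance
\sigma_t^2 = \omega + \alpha\,\varepsilon_{t-1}^2 + \beta\,\sigma_{t-1}^2
% QGARCH(1,1): adds a linear (asymmetry) term in the lagged error
\sigma_t^2 = \omega + \alpha\,\varepsilon_{t-1}^2 + \gamma\,\varepsilon_{t-1} + \beta\,\sigma_{t-1}^2
```

The extra parameter gamma lets negative and positive shocks of equal size move the variance by different amounts, which is the additional flexibility rewarded by the AIC comparison described above.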

Author(s):  
Didit Budi Nugroho ◽  
Anggita M Kusumawati ◽  
Leopoldus R Sasongko

This study compares the volatility-fitting performance of the GARCH(1,1) and EGARCH(1,1) models on exchange rate and stock returns. The models assume four different distributions for the return errors: Normal, Skew-Normal (SN), Alpha-Skew Normal (ASN), and Student-t. The financial asset data used for the comparative analysis are the US Dollar (USD) buying exchange rate over the daily period from January 2010 to December 2017 and the FTSE100 stock index over the daily period from January 2000 to December 2013. The study compares the Generalized Reduced Gradient (GRG) Non-Linear method in Excel's Solver and the Adaptive Random Walk Metropolis (ARWM) method for estimating the models. The results show that the GRG Non-Linear method in Excel's Solver provides estimates similar to those of the ARWM method and does not violate the model constraints. Furthermore, based on the Akaike Information Criterion (AIC) values, both observed data sets provide evidence that the models with the Student-t distribution are the best, followed by the SN distribution, which performs better than the models with the ASN and Normal distributions. The AIC values suggest the EGARCH(1,1) model with the Student-t distribution as the best-fitting model for both observed data sets.
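As a point of reference, Nelson's EGARCH(1,1) models the logarithm of the conditional variance, so positivity constraints on the coefficients are not needed; this is the common textbook form, not a transcription of the authors' exact specification.

```latex
\ln\sigma_t^2 = \omega + \alpha\bigl(|z_{t-1}| - \mathbb{E}|z_{t-1}|\bigr)
              + \gamma\,z_{t-1} + \beta\,\ln\sigma_{t-1}^2,
\qquad z_t = \varepsilon_t/\sigma_t
```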


Complexity ◽  
2021 ◽  
Vol 2021 ◽  
pp. 1-18
Author(s):  
Ehab M. Almetwally ◽  
Mohamed A. H. Sabry ◽  
Randa Alharbi ◽  
Dalia Alnagar ◽  
Sh. A. M. Mubarak ◽  
...  

This paper introduces a novel four-parameter Weibull extension called the Marshall–Olkin alpha power Weibull (MOAPW) distribution. Some statistical properties of the distribution are examined. Based on Type-I and Type-II censored samples, maximum likelihood estimation (MLE), maximum product spacing (MPS), and Bayesian estimation of the MOAPW distribution parameters are discussed. Numerical analysis using real data sets and Monte Carlo simulation is carried out to compare the various estimation methods. The superiority of this novel model over some well-known distributions is demonstrated using two real data sets, and it is shown that the MOAPW model can achieve better fits than other competitive distributions.
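The Marshall–Olkin and alpha power transformations that give the MOAPW family its name are standard CDF-level constructions; the sketch below shows the two building blocks, with a Weibull baseline F(x) = 1 - exp(-(lambda x)^k) in mind. The order of composition and the exact parameterization used in the paper may differ from this reading.

```latex
% Alpha power transform of a baseline CDF F (alpha > 0, alpha \neq 1)
F_{\mathrm{AP}}(x) = \frac{\alpha^{F(x)} - 1}{\alpha - 1}
% Marshall--Olkin transform of a CDF G (tilt parameter theta > 0)
F_{\mathrm{MO}}(x) = \frac{G(x)}{G(x) + \theta\,\bigl(1 - G(x)\bigr)}
```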


Author(s):  
Hisham Mohamed Almongy ◽  
Ehab Mohamed Almetwally ◽  
Amaal Elsayed Mubarak

In this paper, we introduce and study a new four-parameter extension of the Lomax distribution called the Marshall–Olkin alpha power Lomax (MOAPL) distribution. Some statistical properties of this distribution are discussed. Maximum likelihood estimation (MLE), maximum product spacing (MPS), and least squares (LS) methods for the MOAPL distribution parameters are discussed. A numerical study using real data analysis and Monte Carlo simulation is performed to compare the different estimation methods. The superiority of the new model over some well-known distributions is illustrated with physics and economics real data sets. The MOAPL model can produce better fits than some well-known distributions such as the Marshall–Olkin Lomax, alpha power Lomax, Lomax, Marshall–Olkin alpha power exponential, Kumaraswamy-generalized Lomax, exponentiated Lomax, and power Lomax distributions.
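Maximum product spacing, one of the estimation methods compared here, maximizes the geometric mean of the spacings between consecutive ordered CDF values. The hypothetical Python sketch below applies the idea to a plain two-parameter Lomax baseline via scipy.stats.lomax; the four-parameter MOAPL CDF itself is not reproduced.

```python
import numpy as np
from scipy.stats import lomax
from scipy.optimize import minimize

def mps_objective(params, x):
    """Negative mean log-spacing for a plain Lomax(shape, scale) fit.
    Sketches the MPS principle only, not the paper's MOAPL model."""
    c, scale = params
    if c <= 0 or scale <= 0:
        return np.inf
    u = lomax.cdf(np.sort(x), c, scale=scale)
    spacings = np.diff(np.concatenate(([0.0], u, [1.0])))
    spacings = np.clip(spacings, 1e-12, None)  # guard against zero spacings (ties)
    return -np.mean(np.log(spacings))

rng = np.random.default_rng(0)
data = lomax.rvs(2.5, scale=1.0, size=300, random_state=rng)  # synthetic sample
fit = minimize(mps_objective, x0=[1.0, 1.0], args=(data,), method="Nelder-Mead")
print(fit.x)  # MPS estimates of (shape, scale)
```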


Entropy ◽  
2020 ◽  
Vol 23 (1) ◽  
pp. 62
Author(s):  
Zhengwei Liu ◽  
Fukang Zhu

Thinning operators play an important role in the analysis of integer-valued autoregressive models, and the most widely used is binomial thinning. Inspired by the theory of extended Pascal triangles, a new thinning operator named extended binomial is introduced, which generalizes binomial thinning. Compared to the binomial thinning operator, the extended binomial thinning operator has two parameters and is more flexible in modeling. Based on the proposed operator, a new integer-valued autoregressive model is introduced, which can accurately and flexibly capture the dispersion features of count time series. Two-step conditional least squares (CLS) estimation is investigated for the innovation-free case, and conditional maximum likelihood estimation is also discussed. We also obtain the asymptotic property of the two-step CLS estimator. Finally, three overdispersed or underdispersed real data sets are considered to illustrate the superior performance of the proposed model.
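The extended-binomial operator itself is not reproduced here, but the sketch below illustrates the classical special case it generalizes: binomial thinning in an INAR(1) model, together with the conditional least squares step that two-step CLS procedures build on. The Poisson innovation choice and all names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def binomial_thinning(alpha, x):
    """Classical binomial thinning: alpha o x = sum of x Bernoulli(alpha) trials.
    The paper's extended-binomial operator adds a second parameter; only the
    classical special case is sketched here."""
    return rng.binomial(x, alpha)

# Simulate an INAR(1) series X_t = alpha o X_{t-1} + eps_t with Poisson innovations.
alpha_true, lam_true, n = 0.5, 2.0, 1000
x = np.empty(n, dtype=int)
x[0] = rng.poisson(lam_true / (1 - alpha_true))  # draw near the stationary mean
for t in range(1, n):
    x[t] = binomial_thinning(alpha_true, x[t - 1]) + rng.poisson(lam_true)

# Conditional least squares: since E[X_t | X_{t-1}] = alpha * X_{t-1} + lambda,
# CLS reduces to regressing X_t on X_{t-1}.
y, z = x[1:], x[:-1]
alpha_hat = np.cov(z, y, bias=True)[0, 1] / np.var(z)
lam_hat = y.mean() - alpha_hat * z.mean()
print(alpha_hat, lam_hat)
```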


2020 ◽  
Vol 70 (4) ◽  
pp. 953-978
Author(s):  
Mustafa Ç. Korkmaz ◽  
G. G. Hamedani

This paper proposes a new extended Lindley distribution which, based on a mixture distribution structure, has more flexible density and hazard rate shapes than the Lindley and Power Lindley distributions, in order to model real data phenomena with new distributional characteristics. Some of its distributional properties, such as the shapes, moments, quantile function, Bonferroni and Lorenz curves, mean deviations, and order statistics, have been obtained. Characterizations based on two truncated moments, conditional expectation, as well as in terms of the hazard function are presented. Different estimation procedures have been employed to estimate the unknown parameters, and their performances are compared via Monte Carlo simulations. The flexibility and importance of the proposed model are illustrated by two real data sets.
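The mixture structure referred to here is already visible in the base Lindley density, which is a two-component mixture of an exponential and a gamma distribution; the proposed extension builds on this representation. A sketch of the standard form:

```latex
f(x;\theta) = \frac{\theta^2}{1+\theta}\,(1+x)\,e^{-\theta x}
            = p\,\underbrace{\theta e^{-\theta x}}_{\mathrm{Exp}(\theta)}
            + (1-p)\,\underbrace{\theta^2 x e^{-\theta x}}_{\mathrm{Gamma}(2,\theta)},
\qquad p = \frac{\theta}{1+\theta},\quad x>0
```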


2020 ◽  
Vol 9 (1) ◽  
pp. 61-81
Author(s):  
Lazhar BENKHELIFA

A new lifetime model with four positive parameters, called the Weibull Birnbaum-Saunders distribution, is proposed. The proposed model extends the Birnbaum-Saunders distribution and provides great flexibility in modeling data in practice. Some mathematical properties of the new distribution are obtained, including expansions for the cumulative and density functions, moments, generating function, mean deviations, order statistics, and reliability. Estimation of the model parameters is carried out by the maximum likelihood method. A simulation study is presented to show the performance of the maximum likelihood estimates of the model parameters. The flexibility of the new model is examined by applying it to two real data sets.
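For orientation, the Birnbaum-Saunders baseline CDF is given below, together with the widely used Weibull-G generator that such "Weibull" extensions are commonly built from; the generator is offered only as a plausible reading of the construction, not as the paper's exact definition.

```latex
% Birnbaum--Saunders baseline CDF (shape a > 0, scale b > 0)
F_{\mathrm{BS}}(t) = \Phi\!\left[\frac{1}{a}\left(\sqrt{t/b} - \sqrt{b/t}\right)\right],\qquad t>0
% Common Weibull-G generator applied to a baseline CDF F
G(x) = 1 - \exp\!\left\{-\lambda\left[\frac{F(x)}{1-F(x)}\right]^{k}\right\}
```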


2018 ◽  
Vol 8 (1) ◽  
pp. 44
Author(s):  
Lutfiah Ismail Al turk

In this paper, a Nonhomogeneous Poisson Process (NHPP) reliability model based on the two-parameter Log-Logistic (LL) distribution is considered. The essential characteristics of the model are derived and represented graphically. The parameters of the model are estimated by the Maximum Likelihood (ML) and Non-linear Least Squares (NLS) estimation methods for the case of time-domain data. An application to show the flexibility of the considered model is conducted based on five real data sets and three evaluation criteria. We hope this model will serve as an alternative to other useful reliability models for describing real data in the reliability engineering area.
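In NHPP reliability models of this type, the mean value function is typically a scaled CDF of the underlying lifetime distribution; with a log-logistic distribution this gives the form below. It is sketched here as an assumption about the model's structure rather than a quotation from the paper.

```latex
m(t) = a\,F_{\mathrm{LL}}(t) = a\,\frac{(\lambda t)^{\kappa}}{1+(\lambda t)^{\kappa}},
\qquad a,\lambda,\kappa>0,
\quad \text{with intensity } m'(t)
```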


Author(s):  
Dima Waleed Hanna Alrabadi

Purpose: This study aims to utilize the mean–variance optimization framework of Markowitz (1952) and the generalized reduced gradient (GRG) nonlinear algorithm to find the optimal portfolio that maximizes return while keeping risk at a minimum.

Design/methodology/approach: This study applies the portfolio optimization concept of Markowitz (1952) and the GRG nonlinear algorithm to a portfolio consisting of the 30 leading stocks from three different sectors in the Amman Stock Exchange over the period from 2009 to 2013.

Findings: The selected portfolios achieve a monthly return of 5 per cent whilst keeping risk at a minimum. However, if the short-selling constraint is relaxed, the monthly return rises to 9 per cent. Moreover, the GRG nonlinear algorithm makes it possible to construct a portfolio with a Sharpe ratio of 7.4.

Practical implications: The results of this study are vital to both academics and practitioners, specifically Arab and Jordanian investors.

Originality/value: To the best of the author's knowledge, this is the first study in Jordan and in the Arab world that constructs optimum portfolios based on the mean–variance optimization framework of Markowitz (1952) and the GRG nonlinear algorithm.
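Excel's GRG Solver is not scriptable here, but the same constrained mean–variance problem can be sketched with a general-purpose nonlinear solver. The Python snippet below uses scipy's SLSQP in place of GRG and placeholder return and covariance inputs rather than the 30 ASE stocks; it minimizes portfolio variance subject to a full-investment constraint, a target return, and a no-short-selling bound.

```python
import numpy as np
from scipy.optimize import minimize

# Placeholder inputs: replace mu (expected monthly returns) and cov (covariance
# matrix) with estimates from the actual stock data.
rng = np.random.default_rng(2)
n = 5
mu = rng.uniform(0.01, 0.06, n)
a = rng.normal(size=(n, n))
cov = a @ a.T / n * 0.01

target = mu.mean()  # target monthly return; always feasible for the placeholder data

def variance(w):
    return w @ cov @ w

cons = [{"type": "eq", "fun": lambda w: w.sum() - 1.0},     # fully invested
        {"type": "eq", "fun": lambda w: w @ mu - target}]   # hit the target return
bounds = [(0.0, 1.0)] * n          # no short selling; drop bounds to relax it
w0 = np.full(n, 1.0 / n)
res = minimize(variance, w0, method="SLSQP", bounds=bounds, constraints=cons)
print(res.x, np.sqrt(res.fun))     # optimal weights and portfolio risk
```

Dropping the bounds reproduces the relaxed short-selling case discussed in the findings.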


Geophysics ◽  
2009 ◽  
Vol 74 (4) ◽  
pp. J35-J48 ◽  
Author(s):  
Bernard Giroux ◽  
Abderrezak Bouchedda ◽  
Michel Chouteau

We introduce two new traveltime picking schemes developed specifically for crosshole ground-penetrating radar (GPR) applications. The main objective is to automate, at least partially, the traveltime picking procedure and to provide first-arrival times that are closer in quality to those of manual picking approaches. The first scheme is an adaptation of a method based on crosscorrelation of radar traces collated in gathers according to their associated transmitter-receiver angle. A detector is added to isolate the first cycle of the radar wave and to suppress secondary arrivals that might be mistaken for first arrivals. To improve the accuracy of the arrival times obtained from the crosscorrelation lags, a time-rescaling scheme is implemented to resize the radar wavelets to a common time-window length. The second method is based on the Akaike information criterion (AIC) and the continuous wavelet transform (CWT). It is not tied to the restrictive criterion of waveform similarity that underlies crosscorrelation approaches, which is not guaranteed for traces sorted in common ray-angle gathers. It also has the advantage of being fully automated. The performance of the new algorithms is tested on synthetic and real data. In all tests, the approach that adds first-cycle isolation to the original crosscorrelation scheme improves the results. In contrast, the time-rescaling approach brings limited benefits, except when strong dispersion is present in the data. In addition, the performance of crosscorrelation picking schemes degrades for data sets with disparate waveforms despite the high signal-to-noise ratio of the data. In general, the AIC-CWT approach is more versatile and performs well on all data sets. Only with data showing low signal-to-noise ratios is the AIC-CWT superseded by the modified crosscorrelation picker.
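The AIC stage of an AIC-based autopicker is commonly the Maeda-style formulation, which treats the trace as two stationary segments and picks the split point that minimizes the combined AIC. The sketch below implements only that stage on a synthetic trace; the CWT pre-processing and the crosscorrelation pickers described in the abstract are not reproduced, so this should be read as a generic illustration rather than the authors' algorithm.

```python
import numpy as np

def aic_picker(trace):
    """Maeda-style AIC picker:
    AIC(k) = k*log(var(x[:k])) + (N-k-1)*log(var(x[k:])); return argmin over k."""
    x = np.asarray(trace, dtype=float)
    n = len(x)
    aic = np.full(n, np.nan)
    for k in range(2, n - 2):
        v1, v2 = np.var(x[:k]), np.var(x[k:])
        if v1 > 0 and v2 > 0:
            aic[k] = k * np.log(v1) + (n - k - 1) * np.log(v2)
    return int(np.nanargmin(aic))

# Synthetic noisy trace with an arrival at sample 300
rng = np.random.default_rng(3)
trace = np.concatenate([rng.normal(0, 0.05, 300),
                        np.sin(2 * np.pi * 0.05 * np.arange(200)) + rng.normal(0, 0.05, 200)])
print(aic_picker(trace))  # should be close to 300
```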


Endocrinology ◽  
2019 ◽  
Vol 160 (10) ◽  
pp. 2395-2400 ◽  
Author(s):  
David J Handelsman ◽  
Lam P Ly

Hormone assay results below the assay detection limit (DL) can introduce bias into quantitative analysis. Although complex maximum likelihood estimation methods exist, they are not widely used, whereas simple substitution methods are often used ad hoc to replace the undetectable (UD) results with numeric values to facilitate data analysis with the full data set. However, the bias of substitution methods for steroid measurements is not reported. Using a large data set (n = 2896) of serum testosterone (T), DHT, and estradiol (E2) concentrations from healthy men, we created modified data sets with increasing proportions of UD samples (≤40%) to which we applied five different substitution methods (deleting UD samples as missing, or substituting UD samples with DL, DL/√2, DL/2, or 0) to calculate univariate descriptive statistics (mean, SD) or bivariate correlations. For all three steroids and for univariate as well as bivariate statistics, bias increased progressively with increasing proportion of UD samples. Bias was worst when UD samples were deleted or substituted with 0 and least when UD samples were substituted with DL/√2, whereas the other methods (DL or DL/2) displayed intermediate bias. Similar findings were replicated in randomly drawn subsets of 25, 50, and 100 samples. Hence, we propose that in steroid hormone data with ≤40% UD samples, substituting UD with DL/√2 is a simple, versatile, and reasonably accurate method to minimize left-censoring bias, allowing for data analysis with the full data set.
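The substitution rules compared in the paper are easy to reproduce on synthetic data. The sketch below uses a lognormal stand-in for the hormone concentrations (not the authors' data set) and shows how deleting undetectable values or substituting 0, DL, DL/2, or DL/√2 shifts the mean and SD relative to the uncensored values.

```python
import numpy as np

rng = np.random.default_rng(4)
true = rng.lognormal(mean=2.5, sigma=0.5, size=2000)  # synthetic "hormone-like" data
dl = np.quantile(true, 0.20)                          # detection limit leaving 20% undetectable
observed = true.copy()
undetectable = observed < dl

def summarize(substitute):
    """Mean and SD after applying one substitution rule to undetectable values."""
    x = observed.copy()
    if substitute is None:            # delete undetectable samples as missing
        x = x[~undetectable]
    else:
        x[undetectable] = substitute
    return x.mean(), x.std(ddof=1)

for label, sub in [("delete", None), ("0", 0.0), ("DL", dl),
                   ("DL/2", dl / 2), ("DL/sqrt(2)", dl / np.sqrt(2))]:
    m, s = summarize(sub)
    print(f"{label:>10}: mean={m:.2f} sd={s:.2f}  (uncensored mean={true.mean():.2f})")
```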

