Comparison of estimation methods for the density of autoregressive parameter in aggregated AR(1) processes

2021, Vol. 47
Author(s):  
Dmitrij Celov ◽  
Remigijus Leipus ◽  
Virmantas Kvedaras

The article investigates the properties of two alternative disaggregation methods. The first, proposed in Chong (2006), is based on the assumption of a polynomial density for the autoregressive parameter. The second, proposed in Leipus et al. (2006), approximates the density by means of Gegenbauer polynomials. Monte Carlo simulations show that neither method outperforms the other: Chong's method is restricted to the class of polynomial densities, while the second method is not effective in the presence of common innovations. Both methods work correctly under the assumptions proposed in the corresponding articles.
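As a concrete illustration of the aggregation scheme both methods target, the sketch below simulates a panel of independent AR(1) processes whose coefficients are drawn from a density on (0, 1) and averages them cross-sectionally. The Beta density, the sample sizes, and the purely idiosyncratic innovations are illustrative assumptions, not the setups of either paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def aggregate_ar1(n_units=500, n_obs=2000, a=2.0, b=2.0):
    """Aggregated AR(1) panel: each unit j follows an AR(1) with its own
    coefficient phi_j drawn from a density on (0, 1) -- here a Beta(a, b),
    an illustrative choice -- and the aggregate is the cross-sectional mean."""
    phi = rng.beta(a, b, size=n_units)      # random AR coefficients
    x = np.zeros(n_units)
    agg = np.empty(n_obs)
    for t in range(n_obs):
        eps = rng.standard_normal(n_units)  # idiosyncratic innovations
        x = phi * x + eps                   # one AR(1) step per unit
        agg[t] = x.mean()
    return agg

series = aggregate_ar1()
acf1 = np.corrcoef(series[:-1], series[1:])[0, 1]
print(f"lag-1 autocorrelation of the aggregate: {acf1:.3f}")
```

Estimating the mixing density of `phi` from the observed aggregate `series` alone is exactly the disaggregation problem the two compared methods address.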

Author(s):  
Mazen Nassar ◽  
Ahmed Z. Afify ◽  
Mohammed Shakhatreh

This paper addresses the estimation of the unknown parameters of the alpha power exponential distribution (Mahdavi and Kundu, 2017) using nine frequentist estimation methods. We discuss the finite-sample properties of the parameter estimates of the alpha power exponential distribution via Monte Carlo simulations. The potential of the distribution is analyzed by means of two real data sets from the fields of engineering and medicine. Finally, we use the maximum likelihood method to derive the estimates of the distribution parameters under competing risks data and analyze one real data set.
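A minimal sketch of one of the nine approaches, maximum likelihood, assuming the alpha power exponential density f(x) = (ln α / (α − 1)) λ e^(−λx) α^(1 − exp(−λx)) for x > 0, α ≠ 1. The inverse-CDF sampler, starting values, and optimizer choice are illustrative assumptions, not the paper's procedure.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

def ape_sample(alpha, lam, size):
    """Inverse-CDF sampling from the alpha power exponential (APE) law,
    F(x) = (alpha**(1 - exp(-lam*x)) - 1) / (alpha - 1), x > 0, alpha != 1."""
    u = rng.uniform(size=size)
    return -np.log(1.0 - np.log(u * (alpha - 1.0) + 1.0) / np.log(alpha)) / lam

def neg_loglik(theta, x):
    alpha, lam = theta
    if alpha <= 0.0 or alpha == 1.0 or lam <= 0.0:
        return np.inf
    # log-density: log(log(a)/(a-1)) + log(lam) - lam*x + (1 - exp(-lam*x))*log(a)
    return -np.sum(np.log(np.log(alpha) / (alpha - 1.0)) + np.log(lam)
                   - lam * x + (1.0 - np.exp(-lam * x)) * np.log(alpha))

data = ape_sample(alpha=2.0, lam=1.5, size=5000)
fit = minimize(neg_loglik, x0=[1.5, 1.0], args=(data,), method="Nelder-Mead")
print("MLE (alpha, lambda):", fit.x)
```

The eight other estimators compared in the paper (e.g. least squares or percentile-based methods) would replace `neg_loglik` with their own objective functions over the same data.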


2021, pp. 1-30
Author(s):  
Angelo Mele ◽  
Lingjiong Zhu

Abstract We develop approximate estimation methods for exponential random graph models (ERGMs), whose likelihood is proportional to an intractable normalizing constant. The usual approach approximates this constant with Monte Carlo simulations; however, convergence may be exponentially slow. We propose a deterministic method, based on a variational mean-field approximation of the ERGM's normalizing constant. We compute lower and upper bounds for the approximation error for any network size, adapting nonlinear large deviations results. This translates into bounds on the distance between the true likelihood and the mean-field likelihood. Monte Carlo simulations suggest that in practice our deterministic method performs better than our conservative theoretical approximation bounds imply, for a large class of models.
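To give the flavor of the mean-field idea, here is a sketch for a toy edge/two-star ERGM on n nodes: treating the n(n−1)/2 edges as independent Bernoulli(μ) and maximizing the resulting variational lower bound on log Z yields the fixed-point condition μ = σ(β₁ + 2β₂(n−2)μ). The model specification, parameter values, and this particular recursion are illustrative assumptions, not the authors' construction.

```python
import math

def mean_field_ergm(n, beta1, beta2, iters=200):
    """Mean-field fixed point for a toy ERGM, P(G) ~ exp(b1*edges + b2*two_stars).
    Independent-edge (product) measures give the stationarity condition
    mu = sigmoid(b1 + 2*b2*(n-2)*mu); iterate it to convergence, then
    evaluate the variational lower bound on the log normalizing constant."""
    mu = 0.5
    for _ in range(iters):
        mu = 1.0 / (1.0 + math.exp(-(beta1 + 2.0 * beta2 * (n - 2) * mu)))
    m = n * (n - 1) // 2                      # number of possible edges
    entropy = -mu * math.log(mu) - (1 - mu) * math.log(1 - mu)
    # E[edges] = m*mu, E[two-stars] = (n-2)*m*mu^2 under the product measure
    log_z_lb = beta1 * m * mu + beta2 * (n - 2) * m * mu**2 + m * entropy
    return mu, log_z_lb

mu, lz = mean_field_ergm(n=50, beta1=-2.0, beta2=0.01)
print(f"mean-field edge probability: {mu:.4f}, log-Z lower bound: {lz:.2f}")
```

This deterministic fixed point replaces the Monte Carlo estimate of the normalizing constant; the paper's contribution is bounding the gap between this approximation and the true log Z.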


2007, Vol. 11 (2), pp. 851-862
Author(s):  
W. Wang ◽  
P. H. A. J. M. Van Gelder ◽  
J. K. Vrijling ◽  
X. Chen

Abstract. Lo's modified rescaled adjusted range test (R/S test) (Lo, 1991), the GPH test (Geweke and Porter-Hudak, 1983) and two approximate maximum likelihood estimation methods, i.e., Whittle's estimator (W-MLE) and another implemented in S-Plus (S-MLE) based on the algorithm of Haslett and Raftery (1989), are evaluated through intensive Monte Carlo simulations for detecting the existence of long memory. It is shown that it is difficult to find an appropriate lag q for Lo's test for different short-memory autoregressive (AR) and fractionally integrated autoregressive and moving average (ARFIMA) processes, which makes the use of Lo's test very tricky. In general, the GPH test outperforms Lo's test, but in cases where strong short-range dependence exists (e.g., AR(1) processes with φ=0.95 or even 0.99), the GPH test becomes useless, even for time series of large size. On the other hand, the estimates of d given by S-MLE and W-MLE seem to give a good indication of whether or not long memory is present. The simulation results show that data size has a significant impact on the power of all four methods, because the availability of larger samples allows one to inspect the asymptotic properties better. Generally, the power of Lo's test and the GPH test increases with increasing data size, and the estimates of d with the GPH, S-MLE and W-MLE methods converge with increasing data size. If a large enough data set is not available, we should be aware of the possible bias of the estimates. The four methods are applied to daily average discharge series recorded at 31 gauging stations with different drainage areas in eight river basins in Europe, Canada and the USA to detect the existence of long memory. The results show that the presence of long memory in 29 daily series is confirmed by at least three methods, whereas the other two series are indicated to be long-memory processes by two methods. The intensity of long memory in daily streamflow processes has only a very weak positive relationship with the scale of the watershed.
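Of the methods compared, the GPH estimator is the simplest to sketch: regress the log periodogram on log(4 sin²(λ/2)) over the lowest Fourier frequencies; the slope estimates −d. The bandwidth choice m = n^0.5 below is a common illustrative default, not necessarily the one used in the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

def gph_estimate(x, power=0.5):
    """Log-periodogram (GPH) estimate of the memory parameter d: regress the
    log periodogram on log(4 sin^2(freq/2)) over the lowest m = n**power
    Fourier frequencies.  The regression slope equals -d."""
    n = len(x)
    m = int(n ** power)
    k = np.arange(1, m + 1)
    freqs = 2.0 * np.pi * k / n
    # periodogram at the first m nonzero Fourier frequencies
    dft = np.fft.rfft(x - x.mean())[1:m + 1]
    periodogram = (np.abs(dft) ** 2) / (2.0 * np.pi * n)
    regressor = np.log(4.0 * np.sin(freqs / 2.0) ** 2)
    slope = np.polyfit(regressor, np.log(periodogram), 1)[0]
    return -slope

# white noise has no long memory, so d should be close to 0
d_hat = gph_estimate(rng.standard_normal(4096))
print(f"estimated d for white noise: {d_hat:.3f}")
```

Applying the same estimator to a strongly autocorrelated AR(1) series (φ = 0.95) would illustrate the spurious-long-memory problem the abstract describes: the low-frequency periodogram of a near-unit-root process mimics that of a fractionally integrated one.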


Author(s):  
Matthew T. Johnson ◽  
Ian M. Anderson ◽  
Jim Bentley ◽  
C. Barry Carter

Energy-dispersive X-ray spectrometry (EDS) performed at low (≤ 5 kV) accelerating voltages in the SEM has the potential for providing quantitative microanalytical information with a spatial resolution of ∼100 nm. In the present work, EDS analyses were performed on magnesium ferrite spinel [(MgxFe1−x)Fe2O4] dendrites embedded in a MgO matrix, as shown in Fig. 1. The spatial resolution of X-ray microanalysis at conventional accelerating voltages is insufficient for the quantitative analysis of these dendrites, which have widths of the order of a few hundred nanometers, without deconvolution of contributions from the MgO matrix. However, Monte Carlo simulations indicate that the interaction volume for MgFe2O4 is ∼150 nm at 3 kV accelerating voltage and therefore small enough to analyze the dendrites without matrix contributions. Single-crystal {001}-oriented MgO was reacted with hematite (Fe2O3) powder for 6 h at 1450°C in air and furnace cooled. The specimen was then cleaved to expose a clean cross-section suitable for microanalysis.
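The interaction-volume scale can be cross-checked analytically with the Kanaya-Okayama electron range formula (the abstract's ∼150 nm figure comes from Monte Carlo simulation, not from this formula). The density value and the mass-fraction averaging of A and Z for the compound are assumptions for illustration.

```python
def kanaya_okayama_range_um(E_keV, A, Z, rho):
    """Kanaya-Okayama electron range estimate:
    R (micrometres) = 0.0276 * A * E^1.67 / (Z^0.889 * rho),
    with A in g/mol, E in keV, rho in g/cm^3."""
    return 0.0276 * A * E_keV ** 1.67 / (Z ** 0.889 * rho)

# MgFe2O4: mass-fraction-weighted mean A and Z (rho ~ 4.5 g/cm^3 assumed)
elements = [  # (atomic number Z, atomic mass A, atoms per formula unit)
    (12, 24.305, 1),   # Mg
    (26, 55.845, 2),   # Fe
    (8, 15.999, 4),    # O
]
total_mass = sum(a * n for _, a, n in elements)
fractions = [(z, a, a * n / total_mass) for z, a, n in elements]
A_mean = sum(a * f for _, a, f in fractions)
Z_mean = sum(z * f for z, _, f in fractions)

r_um = kanaya_okayama_range_um(E_keV=3.0, A=A_mean, Z=Z_mean, rho=4.5)
print(f"estimated electron range at 3 kV: {r_um * 1000:.0f} nm")
```

The analytic estimate lands at roughly a hundred nanometres, the same order of magnitude as the simulated interaction volume, which is what makes 3 kV analysis of few-hundred-nanometre dendrites plausible.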


1979, Vol. 40 (C7), pp. C7-63–C7-64
Author(s):  
A. J. Davies ◽  
J. Dutton ◽  
C. J. Evans ◽  
A. Goodings ◽  
P.K. Stewart
