Monte Carlo Experiments
Recently Published Documents


TOTAL DOCUMENTS: 202 (five years: 47)

H-INDEX: 27 (five years: 2)

PLoS ONE ◽  
2022 ◽  
Vol 17 (1) ◽  
pp. e0260836
Author(s):  
Daisuke Murakami ◽  
Tomoko Matsui

In the era of open data, Poisson and other count regression models are increasingly important. Yet conventional Poisson regression still has issues with identifiability and computational efficiency; in particular, owing to an identification problem, it can be unstable for small samples with many zeros. Given this, we develop a closed-form inference for an over-dispersed Poisson regression, including Poisson additive mixed models. The approach is derived via a mode-based log-Gaussian approximation. The resulting method is fast, practical, and free from the identification problem. Monte Carlo experiments demonstrate that the estimation error of the proposed method is considerably smaller than that of the closed-form alternatives and as small as that of the usual Poisson regression. For counts with many zeros, our approximation achieves better estimation accuracy than conventional Poisson regression. We obtained similar results for Poisson additive mixed modeling with spatial or group effects. The developed method was applied to COVID-19 data in Japan; the results suggest that the influences of pedestrian density, age, and other factors on the number of cases change over time.
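A minimal sketch of the kind of Monte Carlo experiment this abstract describes, using only the conventional Poisson regression baseline (the authors' closed-form log-Gaussian approximation is not reproduced here); the sample size, coefficients, and zero-heavy design are illustrative assumptions:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n, n_rep = 50, 500                      # small sample, many replications
beta_true = np.array([-2.0, 1.0])       # low intercept -> many zero counts

errors = []
for _ in range(n_rep):
    x = rng.normal(size=n)
    X = sm.add_constant(x)
    y = rng.poisson(np.exp(X @ beta_true))
    try:
        fit = sm.GLM(y, X, family=sm.families.Poisson()).fit()
        errors.append(fit.params - beta_true)
    except Exception:                   # fits can fail when y is almost all zeros
        continue

errors = np.array(errors)
print("RMSE per coefficient:", np.sqrt((errors ** 2).mean(axis=0)))
```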


Stats ◽  
2021 ◽  
Vol 5 (1) ◽  
pp. 26-51
Author(s):  
Paul Doukhan ◽  
Joseph Rynkiewicz ◽  
Yahia Salhi

This article proposes an optimal and robust methodology for model selection. The model of interest is a parsimonious alternative framework, recently introduced in the literature, for modeling the stochastic dynamics of mortality improvement rates. The approach models mortality improvements using a random field specification with a given causal structure, instead of the commonly used factor-based decomposition framework. It captures some well-documented stylized facts of mortality behavior, including dependencies among adjacent cohorts, cohort effects, cross-generation correlations, and the conditional heteroskedasticity of mortality. Such a class of models is a generalization of the now widely used AR-ARCH models for univariate processes. As the framework is general, a simple variant, called the three-level memory model, was investigated and illustrated; however, it is not clear which parameterization is best for specific mortality applications. In this paper, we investigate optimal model choice and parameter selection among candidate models. More formally, we propose a methodology, well suited to such a random field, that selects the best model in the sense that the model is not only correct but also the most economical among all the correct models. Formally, we show that a criterion based on a penalization of the log-likelihood, e.g., the Bayesian Information Criterion, is consistent. Finally, we evaluate the methodology with Monte Carlo experiments as well as on real-world datasets.
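The consistency claim rests on penalized log-likelihood selection. The sketch below illustrates that principle on a plain univariate AR family (not the paper's random-field AR-ARCH class), checking how often BIC recovers the true order; all parameter values are illustrative:

```python
import numpy as np
from statsmodels.tsa.ar_model import AutoReg

rng = np.random.default_rng(1)
n, n_rep, true_p = 400, 200, 2
selected = []
for _ in range(n_rep):
    e = rng.normal(size=n + 100)
    y = np.zeros(n + 100)
    for t in range(2, n + 100):          # true model: AR(2)
        y[t] = 0.5 * y[t - 1] - 0.3 * y[t - 2] + e[t]
    y = y[100:]                          # drop burn-in
    bic = {p: AutoReg(y, lags=p).fit().bic for p in range(1, 6)}
    selected.append(min(bic, key=bic.get))

print("frequency of selecting the true order p = 2:",
      np.mean(np.array(selected) == true_p))
```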


Mathematics ◽  
2021 ◽  
Vol 9 (23) ◽  
pp. 3108
Author(s):  
Ahmed M. T. Abd El-Bar ◽  
Willams B. F. da Silva ◽  
Abraão D. C. Nascimento

In this article, two new families of distributions are proposed: the generalized log-Lindley-G (GLL-G) and its counterpart, the GLL*-G. These families can be justified by their relation to the log-Lindley model, an important model for describing social and economic phenomena. Specific GLL models are introduced and studied. We show that the GLL density can be rewritten as a two-member linear combination of exponentiated G-densities and that, consequently, many of its mathematical properties follow directly, such as moment-based expressions. A maximum likelihood estimation procedure for the GLL parameters is provided, and the behavior of the resulting estimates is evaluated by Monte Carlo experiments. An application to repairable data is presented. The results argue for the use of the exponential law as the basis for the GLL-G family.
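The Monte Carlo assessment of maximum likelihood estimates can be sketched as follows; a two-parameter gamma model stands in as a placeholder, since the GLL-G density itself is not reproduced here. The workflow (simulate, maximize the log-likelihood, summarize bias and RMSE) is the part being illustrated:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import gamma

rng = np.random.default_rng(2)
shape_true, scale_true, n, n_rep = 2.0, 1.5, 200, 300

def neg_loglik(theta, x):
    shape, scale = np.exp(theta)          # log-parametrization keeps params > 0
    return -gamma.logpdf(x, a=shape, scale=scale).sum()

est = []
for _ in range(n_rep):
    x = gamma.rvs(a=shape_true, scale=scale_true, size=n, random_state=rng)
    res = minimize(neg_loglik, x0=np.log([1.0, 1.0]), args=(x,),
                   method="Nelder-Mead")
    est.append(np.exp(res.x))

est = np.array(est)
print("bias:", est.mean(axis=0) - [shape_true, scale_true])
print("RMSE:", np.sqrt(((est - [shape_true, scale_true]) ** 2).mean(axis=0)))
```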


2021 ◽  
Vol 0 (0) ◽  
Author(s):  
Jorge Martínez Compains ◽  
Ignacio Rodríguez Carreño ◽  
Ramazan Gençay ◽  
Tommaso Trani ◽  
Daniel Ramos Vilardell

Abstract Johansen's Cointegration Test (JCT) performs remarkably well in finding stable bivariate cointegration relationships. Nonetheless, the JCT is not necessarily designed to detect such relationships in the presence of non-linear patterns, such as structural breaks or cycles that fall in the low-frequency portion of the spectrum. Seasonal adjustment procedures might not detect such non-linear patterns, and thus we expose the difficulty of identifying cointegrating relations under the traditional use of the JCT. In several Monte Carlo experiments, we show that wavelets can strengthen the JCT framework more than traditional seasonal adjustment methodologies can, allowing the identification of hidden cointegrating relationships. Moreover, we confirm these results using seasonally adjusted time series such as US consumption and income, gross national product (GNP) and money supply M1, and GNP and M2.
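A rough sketch of the wavelet-assisted workflow, assuming the filtering step zeros the wavelet detail bands holding a low-frequency cycle before running Johansen's test. The wavelet family, decomposition level, and choice of bands to zero are illustrative, not the paper's specification:

```python
import numpy as np
import pywt
from statsmodels.tsa.vector_ar.vecm import coint_johansen

rng = np.random.default_rng(3)
n = 512
trend = np.cumsum(rng.normal(size=n))                  # shared stochastic trend
cycle = 3.0 * np.sin(2 * np.pi * np.arange(n) / 64)    # low-frequency cycle
x = trend + rng.normal(size=n)
y = 0.8 * trend + cycle + rng.normal(size=n)           # cycle obscures the link

def remove_cycle(series, wavelet="db4", level=6):
    coeffs = pywt.wavedec(series, wavelet, level=level)
    coeffs[1] = np.zeros_like(coeffs[1])   # detail band, periods ~64-128
    coeffs[2] = np.zeros_like(coeffs[2])   # detail band, periods ~32-64
    return pywt.waverec(coeffs, wavelet)[: len(series)]

raw = coint_johansen(np.column_stack([x, y]), det_order=0, k_ar_diff=1)
filt = coint_johansen(np.column_stack([remove_cycle(x), remove_cycle(y)]),
                      det_order=0, k_ar_diff=1)
print("trace statistics, raw:     ", raw.lr1)    # compare against raw.cvt
print("trace statistics, filtered:", filt.lr1)
```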


2021 ◽  
Vol 2068 (1) ◽  
pp. 012003
Author(s):  
Ayari Samia ◽  
Mohamed Boutahar

Abstract The purpose of this paper is to estimate the dependence function of multivariate extreme-value copulas. Various nonparametric estimators have been developed in the literature under the assumption that the marginal distributions are known. However, this assumption is unrealistic in practice. To overcome the drawbacks of these estimators, we substitute the empirical distribution function for the extreme-value marginal distributions. Monte Carlo experiments are carried out to compare the performance of the Pickands, Deheuvels, Hall-Tajvidi, Zhang and Gudendorf-Segers estimators. The empirical results show that the empirical distribution function improves the estimators' performance across different sample sizes.
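The rank-based substitution the abstract describes can be sketched for the Pickands estimator of the bivariate dependence function A(t); the convention used for A and the simulated dependent pair below are illustrative assumptions (some texts swap the roles of t and 1 - t):

```python
import numpy as np

def pickands_estimator(x, y, t_grid):
    """Pickands estimator of A(t) with empirical (rank-based) margins."""
    n = len(x)
    u = (np.argsort(np.argsort(x)) + 1) / (n + 1)   # empirical CDF values
    v = (np.argsort(np.argsort(y)) + 1) / (n + 1)
    xi, eta = -np.log(u), -np.log(v)                # approx. unit exponential
    return np.array([n / np.minimum(xi / (1 - t), eta / t).sum()
                     for t in t_grid])

rng = np.random.default_rng(4)
z = rng.exponential(size=1000)
x = z + rng.exponential(size=1000)    # positively dependent pair
y = z + rng.exponential(size=1000)
t = np.linspace(0.05, 0.95, 10)
# A(t) = 1 corresponds to independence; A(t) = max(t, 1 - t) to full dependence
print(np.round(pickands_estimator(x, y, t), 3))
```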


2021 ◽  
pp. 008117502110463
Author(s):  
Ryan P. Thombs ◽  
Xiaorui Huang ◽  
Jared Berry Fitzgerald

Modeling asymmetric relationships is an emerging subject of interest among sociologists. York and Light advanced a method to estimate asymmetric models with panel data, which was further developed by Allison. However, little attention has been given to the large-N, large-T case, wherein autoregression, slope heterogeneity, and cross-sectional dependence are important issues to consider. The authors fill this gap by conducting Monte Carlo experiments comparing the bias and power of the fixed-effects estimator to a set of heterogeneous panel estimators. The authors find that dynamic misspecification can produce substantial biases in the coefficients. Furthermore, even when the dynamics are correctly specified, the fixed-effects estimator will produce inconsistent and unstable estimates of the long-run effects in the presence of slope heterogeneity. The authors demonstrate these findings by testing for directional asymmetry in the economic development–CO2 emissions relationship, a key question in macro sociology, using data for 66 countries from 1971 to 2015. The authors conclude with a set of methodological recommendations on modeling directional asymmetry.
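A compact sketch of this kind of Monte Carlo comparison: a dynamic panel with heterogeneous slopes, where pooled fixed effects (FE) is compared to the mean-group (MG) average of unit-by-unit regressions for the average long-run effect. The dimensions echo the 66-country panel, but every parameter value is invented, and this is the textbook heterogeneity-bias setup rather than the authors' full design:

```python
import numpy as np

rng = np.random.default_rng(5)
N, T = 66, 45                                  # large-N, large-T panel

phi = rng.uniform(0.2, 0.8, size=N)            # heterogeneous dynamics
beta = rng.normal(1.0, 0.3, size=N)
theta_true = (beta / (1 - phi)).mean()         # average long-run effect

y, x = np.zeros((N, T)), rng.normal(size=(N, T))
for t in range(1, T):
    y[:, t] = phi * y[:, t - 1] + beta * x[:, t] + rng.normal(size=N)

# pooled FE: within-demean, then OLS of y on (lagged y, x)
Y, Ylag, X = y[:, 1:], y[:, :-1], x[:, 1:]
def demean(a): return a - a.mean(axis=1, keepdims=True)
Z = np.column_stack([demean(Ylag).ravel(), demean(X).ravel()])
b_fe = np.linalg.lstsq(Z, demean(Y).ravel(), rcond=None)[0]
theta_fe = b_fe[1] / (1 - b_fe[0])

# mean group: unit-by-unit OLS, then average the long-run effects
lr = []
for i in range(N):
    Zi = np.column_stack([np.ones(T - 1), Ylag[i], X[i]])
    bi = np.linalg.lstsq(Zi, Y[i], rcond=None)[0]
    lr.append(bi[2] / (1 - bi[1]))
print(f"true {theta_true:.3f}  FE {theta_fe:.3f}  MG {np.mean(lr):.3f}")
```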


2021 ◽  
Vol 12 (3) ◽  
pp. 997-1013
Author(s):  
Pascal Yiou ◽  
Nicolas Viovy

Abstract. Estimating the risk of forest collapse due to extreme climate events is one of the challenges of adapting to climate change. We adapt a concept from ruin theory, which is widely used in econometrics and the insurance industry, to design a growth–ruin model for trees which accounts for climate hazards that can jeopardize tree growth. This model is an elaboration of a classical Cramer–Lundberg ruin model that is used in the insurance industry. The model accounts for the interactions between physiological parameters of trees and the occurrence of climate hazards. The physiological parameters describe interannual growth rates and how trees react to hazards. The hazard parameters describe the probability distributions of the occurrence and intensity of climate events. We focus on a drought–heatwave hazard. The goal of the paper is to determine the dependence of the forest ruin and average growth probability distributions on physiological and hazard parameters. Using extensive Monte Carlo experiments, we show the existence of a threshold in the frequency of hazards beyond which forest ruin becomes certain to occur within a centennial horizon. We also detect a small effect of the strategies used to cope with hazards. This paper is a proof of concept for the quantification of forest collapse under climate change.
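A bare-bones growth-ruin Monte Carlo in the spirit of the model described above: deterministic annual growth, Poisson-arriving hazards with exponentially distributed losses, and ruin defined as the stock reaching zero within a centennial horizon. All rates and magnitudes are illustrative, not the paper's calibration; the point is the threshold behavior in the hazard frequency:

```python
import numpy as np

rng = np.random.default_rng(6)

def ruin_probability(hazard_rate, growth=1.0, loss_mean=8.0,
                     horizon=100, n_rep=5000, initial=20.0):
    ruined = 0
    for _ in range(n_rep):
        stock = initial
        for _ in range(horizon):
            stock += growth                          # annual growth
            n_events = rng.poisson(hazard_rate)      # drought-heatwave events
            stock -= rng.exponential(loss_mean, n_events).sum()
            if stock <= 0:
                ruined += 1
                break
    return ruined / n_rep

for rate in [0.05, 0.1, 0.2, 0.4]:
    print(f"hazard rate {rate:.2f}: P(ruin within 100 y) = "
          f"{ruin_probability(rate):.3f}")
```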


Author(s):  
Jack P. C. Kleijnen ◽  
Wim C. M. van Beers

Kriging or Gaussian process models are popular metamodels (surrogate models or emulators) of simulation models; these metamodels give predictors for input combinations that are not simulated. To validate these metamodels for computationally expensive simulation models, analysts often apply computationally efficient cross-validation. In this paper, we derive new statistical tests for so-called leave-one-out cross-validation. Graphically, we present these tests as scatterplots augmented with confidence intervals that use the estimated variances of the Kriging predictors. To estimate the true variances of these predictors, we might use bootstrapping. Like other statistical tests, our tests, with or without bootstrapping, have type I and type II error probabilities; to estimate these probabilities, we use Monte Carlo experiments. We also use such experiments to investigate statistical convergence. To illustrate the application of our tests, we use (i) an example with two inputs and (ii) the popular borehole example with eight inputs. Summary of Contribution: Simulation models are very popular in operations research (OR) and are also known as computer simulations or computer experiments. A popular topic is design and analysis of computer experiments. This paper focuses on Kriging methods and cross-validation methods applied to simulation models; these methods and models are often applied in OR. More specifically, the paper provides the following: (1) the basic variant of a new statistical test for leave-one-out cross-validation; (2) a bootstrap method for the estimation of the true variance of the Kriging predictor; and (3) Monte Carlo experiments for the evaluation of the consistency of the Kriging predictor, the convergence of the Studentized prediction error to the standard normal variable, and the convergence of the expected experimentwise type I error rate to the prespecified nominal value. The new statistical test is illustrated through examples, including the popular borehole model.
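A minimal sketch of leave-one-out cross-validation for a Kriging metamodel, computing the Studentized prediction errors on which the proposed tests are built. scikit-learn's Gaussian process stands in for the authors' Kriging implementation, and the toy two-input "simulator" is invented:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(7)
X = rng.uniform(0, 1, size=(20, 2))                 # two inputs
y = np.sin(4 * X[:, 0]) + X[:, 1] ** 2              # "expensive" simulator

studentized = []
for i in range(len(y)):                             # leave one point out
    mask = np.arange(len(y)) != i
    gp = GaussianProcessRegressor(
        kernel=ConstantKernel() * RBF([0.2, 0.2]),
        normalize_y=True).fit(X[mask], y[mask])
    mu, sd = gp.predict(X[i:i + 1], return_std=True)
    studentized.append((y[i] - mu[0]) / sd[0])

# under a well-calibrated metamodel these should look roughly standard normal
print("mean %.2f  sd %.2f  max|.| %.2f" % (np.mean(studentized),
      np.std(studentized), np.max(np.abs(studentized))))
```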


2021 ◽  
Vol 2 (2) ◽  
pp. 01-07
Author(s):  
Halim Zeghdoudi ◽  
Madjda Amrani

In this work, we study a well-known volatility model: the mixed-memory generalized autoregressive conditional heteroskedasticity (MMGARCH) model for nonlinear time series. The MMGARCH model has two mixing components, one a short-memory GARCH and the other a long-memory GARCH. The main objective of this work is to find the best mixture among those we construct (long memory with long memory, short memory with short memory, and short memory with long memory). The existence of a stationary solution is also discussed. Monte Carlo experiments confirm the theoretical findings. In addition, an empirical application of the MMGARCH(1, 1) model to the daily DOW and NASDAQ indices illustrates its capabilities; we find that the mixture of APARCH and EGARCH is superior to every other model tested because it produces the smallest errors.
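The model-comparison workflow can be sketched with off-the-shelf volatility models; the `arch` package's GARCH (short memory), EGARCH, and FIGARCH (long memory) stand in here, ranked by BIC, since the MMGARCH mixture itself is not reproduced. The simulated heavy-tailed series is a placeholder for the daily DOW/NASDAQ returns:

```python
import numpy as np
from arch import arch_model

rng = np.random.default_rng(8)
returns = rng.standard_t(df=6, size=2000)   # placeholder for daily % returns

for vol, kw in [("GARCH", {}), ("EGARCH", {"o": 1}), ("FIGARCH", {})]:
    res = arch_model(returns, vol=vol, p=1, q=1, **kw).fit(disp="off")
    print(f"{vol:8s} BIC = {res.bic:.1f}")   # lower BIC = preferred model
```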


Author(s):  
Patrick Minford ◽  
Zhirong Ou ◽  
Zheyi Zhu

Abstract We revisit the evidence on consumer risk-pooling and uncovered interest parity (UIP). Widely used single-equation tests are strongly biased against both. Using the full-model Indirect Inference test, which is unbiased and has Goldilocks power according to Monte Carlo experiments, we find that both the risk-pooling hypothesis and its weaker UIP version are generally accepted as part of a full world DSGE model. The fact that the risk-pooling hypothesis, with its implication of strong cross-border consumer linkage, has passed this test with generally the highest p-value suggests that it deserves serious attention from policy-makers looking for a relevant model with which to discuss international monetary and other business cycle policies.
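The Indirect Inference testing idea can be sketched as follows: fit an auxiliary model to the observed data, refit it on many datasets simulated from the structural model under test, and reject when the observed auxiliary estimate falls in the tails of the simulated distribution (a Wald-type criterion). The AR(1) "structural model" below is purely illustrative, not the authors' world DSGE model:

```python
import numpy as np

rng = np.random.default_rng(9)

def simulate(phi, n):                        # stand-in structural model: AR(1)
    y = np.zeros(n)
    for t in range(1, n):
        y[t] = phi * y[t - 1] + rng.normal()
    return y

def auxiliary(y):                            # auxiliary model: OLS AR(1) slope
    return np.dot(y[:-1], y[1:]) / np.dot(y[:-1], y[:-1])

observed = simulate(phi=0.9, n=400)          # pretend this is the data
a_obs = auxiliary(observed)

# distribution of the auxiliary estimate implied by the null model (phi = 0.9)
sims = np.array([auxiliary(simulate(0.9, 400)) for _ in range(1000)])
wald = (a_obs - sims.mean()) ** 2 / sims.var()        # Wald-type distance
wald_sims = (sims - sims.mean()) ** 2 / sims.var()
print("Indirect Inference p-value:", np.mean(wald_sims >= wald))
```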

