Exact Likelihood Function Forms for an ARFIMA Process

Author(s):  
Jeffrey S. Pai
Nalini Ravishanker
Author(s):  
Antara Dasgupta
Renaud Hostache
RAAJ Ramasankaran
Guy J.-P Schumann
Stefania Grimaldi
...  

Author(s):  
Edward P. Herbst ◽  
Frank Schorfheide

Dynamic stochastic general equilibrium (DSGE) models have become one of the workhorses of modern macroeconomics and are extensively used for academic research as well as forecasting and policy analysis at central banks. This book introduces readers to state-of-the-art computational techniques used in the Bayesian analysis of DSGE models. The book covers Markov chain Monte Carlo techniques for linearized DSGE models, novel sequential Monte Carlo methods that can be used for parameter inference, and the estimation of nonlinear DSGE models based on particle filter approximations of the likelihood function. The theoretical foundations of the algorithms are discussed in depth, and detailed empirical applications and numerical illustrations are provided. The book also gives invaluable advice on how to tailor these algorithms to specific applications and assess the accuracy and reliability of the computations. The book is essential reading for graduate students, academic researchers, and practitioners at policy institutions.
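
As a rough illustration of the particle-filter likelihood approximation mentioned above, the sketch below runs a bootstrap particle filter on a toy linear-Gaussian state-space model rather than an actual DSGE model; the model, parameter values, and function names are illustrative assumptions, not material from the book.

```python
import numpy as np

def bootstrap_pf_loglik(y, n_particles=1000, phi=0.9, sigma_x=1.0, sigma_y=0.5, seed=None):
    """Estimate the log-likelihood of y under the toy state-space model
    x_t = phi * x_{t-1} + N(0, sigma_x^2),  y_t = x_t + N(0, sigma_y^2),
    with a bootstrap particle filter (illustrative sketch only)."""
    rng = np.random.default_rng(seed)
    # initialize particles from the stationary distribution of the AR(1) state
    particles = rng.normal(0.0, sigma_x / np.sqrt(1.0 - phi**2), n_particles)
    loglik = 0.0
    for yt in y:
        # propagate particles through the state transition
        particles = phi * particles + rng.normal(0.0, sigma_x, n_particles)
        # weight particles by the measurement density p(y_t | x_t)
        logw = -0.5 * ((yt - particles) / sigma_y) ** 2 - np.log(sigma_y * np.sqrt(2.0 * np.pi))
        maxw = logw.max()
        w = np.exp(logw - maxw)
        loglik += maxw + np.log(w.mean())            # incremental likelihood contribution
        # multinomial resampling to avoid weight degeneracy
        particles = rng.choice(particles, n_particles, p=w / w.sum())
    return loglik

# usage: simulate data from the same toy model and evaluate the estimator
rng = np.random.default_rng(0)
x, ys = 0.0, []
for _ in range(100):
    x = 0.9 * x + rng.normal(0.0, 1.0)
    ys.append(x + rng.normal(0.0, 0.5))
print(bootstrap_pf_loglik(np.array(ys), seed=1))
```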


Author(s):  
T. V. Oblakova

The paper studies the justification of Pearson's criterion for testing the hypothesis that the population follows a uniform distribution. If the distribution parameters are unknown, estimates of the theoretical frequencies are used [1, 2, 3]. In this case, the quantile of the chi-square distribution, with the number of degrees of freedom reduced by the number of estimated parameters, is used to determine the upper threshold for accepting the main hypothesis [7]. In the case of a uniform law, however, Pearson's criterion does not extend to composite hypotheses, since the likelihood function cannot be differentiated with respect to the parameters, a step required in the proof of the theorem mentioned above [7, 10, 11]. A statistical experiment is proposed in order to study the distribution of the Pearson statistic for samples from a uniform law. The essence of the experiment is that a statistically significant number of samples of the same type is first simulated from a given uniform distribution, the Pearson statistic is then calculated for each sample, and finally the distribution of the resulting set of statistics is studied. Simulation and processing of the samples were performed in Mathcad 15 using the built-in random number generator and array-processing facilities. In all experiments, the hypothesis that the Pearson statistics follow the chi-square law was unambiguously accepted (confidence level 0.95). It is also shown statistically that the number of degrees of freedom need not be corrected in the case of a composite hypothesis: the maximum likelihood estimates of the uniform-law parameters implicitly used in calculating the Pearson statistic do not affect the number of degrees of freedom, which is therefore determined by the number of grouping intervals only.
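
The experiment described above is straightforward to reproduce; the following minimal sketch does so in Python rather than Mathcad 15, with arbitrary choices for the sample size, number of replications, and number of grouping intervals.

```python
import numpy as np
from scipy import stats

def pearson_stat_uniform(sample, k=10):
    """Pearson chi-square statistic for a sample against a uniform law whose
    bounds are replaced by their maximum-likelihood estimates (min and max)."""
    a_hat, b_hat = sample.min(), sample.max()
    edges = np.linspace(a_hat, b_hat, k + 1)
    observed, _ = np.histogram(sample, bins=edges)
    expected = len(sample) / k          # equal expected counts under uniformity
    return ((observed - expected) ** 2 / expected).sum()

rng = np.random.default_rng(42)
n, k, reps = 200, 10, 5000
stats_vals = np.array([pearson_stat_uniform(rng.uniform(2.0, 7.0, n), k) for _ in range(reps)])

# Compare the simulated statistics with the chi-square law with k - 1 degrees of
# freedom, i.e. without the usual correction for the two estimated bounds.
print(stats.kstest(stats_vals, cdf=stats.chi2(df=k - 1).cdf))
```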


2017
Author(s):  
Darren Rhodes

Time is a fundamental dimension of human perception, cognition and action, as the perception and cognition of temporal information are essential for everyday activities and survival. Innumerable studies have investigated the perception of time over the last 100 years, but the neural and computational bases of temporal processing remain unknown. First, we present a brief history of research and methods in time perception, and then discuss the psychophysical approach to time, extant models of time perception, and the inconsistencies between these accounts that this review aims to bridge. Recent work has advocated a Bayesian approach to time perception. This framework has been applied to both duration and perceived timing, where prior expectations about when a stimulus might occur in the future (prior distribution) are combined with current sensory evidence (likelihood function) to generate the perception of temporal properties (posterior distribution). In general, these models predict that the brain uses temporal expectations to bias perception so that stimuli are ‘regularized’, i.e. they appear more like what has been seen before. Evidence for this framework has been found using human psychophysical testing (experimental methods for quantifying behaviour in the perceptual system). Finally, an outlook on how these models can advance future research in temporal perception is discussed.
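
The prior–likelihood combination described above can be made concrete with a toy cue-combination example: a Gaussian prior over an interval duration fused with a Gaussian sensory likelihood yields a posterior whose mean is a reliability-weighted average, biased toward the prior. The numbers below are arbitrary and the sketch is a generic illustration, not any of the specific models reviewed.

```python
import numpy as np

def gaussian_fusion(prior_mean, prior_sd, obs, obs_sd):
    """Combine a Gaussian prior over a temporal quantity with a Gaussian
    likelihood around the observed value; returns the posterior mean and sd.
    The reliability-weighted average pulls the percept toward the prior,
    i.e. the 'regularization' described above."""
    w_prior = 1.0 / prior_sd**2
    w_obs = 1.0 / obs_sd**2
    post_mean = (w_prior * prior_mean + w_obs * obs) / (w_prior + w_obs)
    post_sd = np.sqrt(1.0 / (w_prior + w_obs))
    return post_mean, post_sd

# e.g. expected interval ~500 ms, noisy sensory estimate of 650 ms
print(gaussian_fusion(500.0, 50.0, 650.0, 100.0))  # percept biased toward 500 ms
```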


Author(s):  
Eduardo de Freitas Costa
Silvana Schneider
Giulia Bagatini Carlotto
Tainá Cabalheiro
Mauro Ribeiro de Oliveira Júnior

The dynamics of the wild boar population has become a pressing issue not only for ecological purposes, but also for agricultural and livestock production. Data on wild boar dispersal distances can have a complex structure, including an excess of zeros and right-censored observations, which makes them challenging to model. We therefore propose two zero-inflated right-censored regression models, assuming Weibull and gamma distributions. First, we present the construction of the likelihood function; then we apply both models to simulated datasets, demonstrating that both regression models behave well. The simulation results point to the consistency and asymptotic unbiasedness of the developed methods. Afterwards, we fitted both models to a simulated dataset of wild boar dispersal that includes an excess of zeros, right-censored observations, and two covariates: age and sex. We show that the models are useful for drawing inferences about wild boar dispersal, correctly describing data that mimic a situation in which males disperse farther than females and age has a positive effect on dispersal. These results help overcome some limitations regarding inference in zero-inflated right-censored datasets, especially for wild boar populations. Users are provided with an R function to run the proposed models.
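
A rough sketch of how such a likelihood can be assembled for the Weibull case (ignoring covariates) is given below: structural zeros contribute the zero-inflation probability, observed positive distances contribute the Weibull density, and right-censored distances contribute the survival function. The parameterization, simulated data, and function names are illustrative assumptions, not the authors' R implementation.

```python
import numpy as np
from scipy import stats
from scipy.optimize import minimize

def zi_weibull_loglik(params, y, censored):
    """Log-likelihood of a zero-inflated, right-censored Weibull model.
    params = (logit_p, log_shape, log_scale); y are dispersal distances,
    censored is a boolean array flagging right-censored records."""
    p = 1.0 / (1.0 + np.exp(-params[0]))               # zero-inflation probability
    shape, scale = np.exp(params[1]), np.exp(params[2])
    wb = stats.weibull_min(c=shape, scale=scale)
    zero = (y == 0) & ~censored
    obs = (y > 0) & ~censored
    ll = zero.sum() * np.log(p)                          # structural zeros
    ll += np.sum(np.log1p(-p) + wb.logpdf(y[obs]))       # observed positive distances
    ll += np.sum(np.log1p(-p) + wb.logsf(y[censored]))   # right-censored distances
    return ll

# usage sketch: fit by maximizing the log-likelihood on simulated data
rng = np.random.default_rng(0)
y = np.where(rng.random(500) < 0.3, 0.0, rng.weibull(1.5, 500) * 4.0)
cens = y > 6.0                        # censor long dispersal distances at 6 km
y = np.minimum(y, 6.0)
fit = minimize(lambda th: -zi_weibull_loglik(th, y, cens), x0=np.zeros(3))
print(fit.x)                          # logit(p), log(shape), log(scale)
```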


Author(s):  
Roman Flury
Reinhard Furrer

We discuss the experiences and results of the AppStatUZH team's participation in the comprehensive and unbiased comparison of different spatial approximations conducted in the Competition for Spatial Statistics for Large Datasets. In each sub-competition, we estimated the parameters of the covariance model from a likelihood function and predicted the missing observations with simple kriging. We approximated the covariance model either with covariance tapering or with a compactly supported Wendland covariance function.
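
The two ingredients named above can be sketched compactly: a compactly supported Wendland covariance function and the simple-kriging predictor built from it. The particular Wendland variant (the ψ3,1 form) and all parameter values below are illustrative assumptions rather than the settings used in the competition.

```python
import numpy as np

def wendland_cov(d, range_, sill=1.0):
    """Compactly supported Wendland covariance (psi_{3,1} form):
    C(d) = sill * (1 - d/range)^4_+ * (4 d/range + 1), zero beyond the range."""
    r = d / range_
    return sill * np.where(r < 1.0, (1.0 - r) ** 4 * (4.0 * r + 1.0), 0.0)

def simple_kriging(obs_xy, obs_z, pred_xy, range_, sill=1.0, mean=0.0, nugget=1e-8):
    """Simple kriging with known mean: z_hat = mean + c' C^{-1} (z - mean)."""
    d_obs = np.linalg.norm(obs_xy[:, None, :] - obs_xy[None, :, :], axis=-1)
    d_pred = np.linalg.norm(pred_xy[:, None, :] - obs_xy[None, :, :], axis=-1)
    C = wendland_cov(d_obs, range_, sill) + nugget * np.eye(len(obs_z))
    c = wendland_cov(d_pred, range_, sill)
    weights = np.linalg.solve(C, obs_z - mean)
    return mean + c @ weights

# usage: predict at two locations from five scattered observations
rng = np.random.default_rng(1)
xy = rng.uniform(0, 10, size=(5, 2))
z = np.sin(xy[:, 0]) + 0.1 * rng.normal(size=5)
print(simple_kriging(xy, z, np.array([[5.0, 5.0], [1.0, 2.0]]), range_=4.0))
```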


Author(s):  
Farshad BahooToroody
Saeed Khalaj
Leonardo Leoni
Filippo De Carlo
Gianpaolo Di Bona
...  

Geosynthetics are extensively used to improve the stability of geotechnical structures and slopes in urban areas. Among existing geosynthetics, geotextiles are widely used to reinforce unstable slopes because they facilitate both reinforcement and drainage. Geotextiles have classically been used in embankments to reduce settlement and increase bearing capacity and slope stability. However, several catastrophic events have been reported, including failures of slopes lacking geotextile reinforcement. Many researchers have studied the stability of geotextile-reinforced slopes (GRSs) using different methods (analytical models, numerical simulation, etc.). The presence of source-to-source uncertainty in the gathered data increases the complexity of evaluating the failure risk of GRSs, since the uncertainty varies among sources. Consequently, a sound methodology is needed to manage this complexity. Our study sought to develop an advanced risk-based maintenance (RBM) methodology for prioritizing maintenance operations that accounts for the fluctuations accompanying event data. For this purpose, a hierarchical Bayesian approach (HBA) was applied to estimate the failure probabilities of GRSs. Using Markov chain Monte Carlo simulation of the likelihood function and the prior distribution, the HBA can incorporate the aforementioned uncertainties. The proposed method can be used by urban designers, asset managers, and policymakers to predict mean times to failure, thereby avoiding unnecessary maintenance and safety consequences. To demonstrate the application of the proposed methodology, the performance of nine reinforced slopes was considered. The results indicate an average system failure probability of 2.8 × 10⁻⁵ per hour over the lifespan, which shows that the proposed evaluation method is more realistic than traditional methods.
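
As a loose illustration of the hierarchical Bayesian idea (and not the authors' actual model), the sketch below pools hypothetical failure counts from nine slopes through a gamma-Poisson hierarchy and samples the slope-specific failure rates with a simple Metropolis-within-Gibbs scheme; all data and prior settings are invented for illustration.

```python
import numpy as np
from scipy import stats

# Hypothetical failure counts and exposure times (hours) for nine reinforced slopes.
failures = np.array([1, 0, 2, 1, 0, 3, 1, 0, 1])
hours = np.full(9, 4.0e4)

def hb_gamma_poisson(failures, hours, n_iter=10000, seed=None):
    """Metropolis-within-Gibbs sampler for a hierarchical gamma-Poisson model:
    x_i ~ Poisson(lambda_i * T_i), lambda_i ~ Gamma(alpha, beta), with normal
    priors on the log-hyperparameters u = (log alpha, log beta)."""
    rng = np.random.default_rng(seed)
    u = np.array([0.0, np.log(1e4)])                     # initial log alpha, log beta
    def log_target(u, lam):
        a, b = np.exp(u)
        return (stats.gamma.logpdf(lam, a, scale=1.0 / b).sum()
                + stats.norm.logpdf(u, [0.0, np.log(1e4)], 2.0).sum())
    draws = []
    for _ in range(n_iter):
        a, b = np.exp(u)
        lam = rng.gamma(a + failures, 1.0 / (b + hours))  # conjugate rate update
        prop = u + rng.normal(0.0, 0.1, 2)                # random-walk proposal
        if np.log(rng.random()) < log_target(prop, lam) - log_target(u, lam):
            u = prop
        draws.append(lam.copy())
    return np.mean(draws[n_iter // 2:], axis=0)           # posterior mean rate per slope

print(hb_gamma_poisson(failures, hours, seed=0))
```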


Entropy, 2021, Vol. 23 (3), pp. 312
Author(s):  
Ilze A. Auzina
Jakub M. Tomczak

Many real-life processes are black-box problems, i.e., their internal workings are inaccessible or a closed-form mathematical expression of the likelihood function cannot be defined. For continuous random variables, likelihood-free inference problems can be solved via Approximate Bayesian Computation (ABC). However, an optimal alternative for discrete random variables has yet to be formulated. Here, we aim to fill this research gap. We propose an adjusted population-based MCMC ABC method, re-defining the standard ABC parameters for discrete variables and introducing a novel Markov kernel inspired by differential evolution. We first assess the proposed Markov kernel on a likelihood-based inference problem, namely discovering the underlying diseases in a QMR-DT network, and subsequently assess the entire method on three likelihood-free inference problems: (i) the QMR-DT network with an unknown likelihood function, (ii) learning a binary neural network, and (iii) neural architecture search. The obtained results indicate the high potential of the proposed framework and the superiority of the new Markov kernel.
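
A minimal sketch of a population-based ABC-MCMC sampler for binary parameters with a differential-evolution-style proposal is given below; the XOR-based kernel, the toy simulator, and the tolerance are illustrative assumptions, not necessarily the kernel proposed by the authors.

```python
import numpy as np

def abc_de_mcmc(simulator, y_obs, dim, n_chains=10, n_iter=2000, eps=1, flip_p=0.05, seed=None):
    """Population-based ABC-MCMC over binary parameter vectors.
    Proposal: XOR a chain with the difference (XOR) of two other chains plus
    random bit flips, in the spirit of differential evolution. With a uniform
    prior over {0,1}^dim and a symmetric proposal, acceptance reduces to the
    ABC tolerance check on the simulated data."""
    rng = np.random.default_rng(seed)
    pop = rng.integers(0, 2, size=(n_chains, dim))
    for _ in range(n_iter):
        for i in range(n_chains):
            r1, r2 = rng.choice([j for j in range(n_chains) if j != i], 2, replace=False)
            diff = pop[r1] ^ pop[r2]                          # DE-style binary difference
            flips = (rng.random(dim) < flip_p).astype(int)    # occasional extra bit flips
            prop = pop[i] ^ diff ^ flips
            d = np.sum(simulator(prop, rng) != y_obs)         # discrepancy to observed data
            if d <= eps:                                      # keep proposal if within tolerance
                pop[i] = prop
    return pop

# toy black-box simulator: each observed bit is the parameter bit corrupted with 10% noise
true_theta = np.array([1, 0, 1, 1, 0, 0, 1, 0])
sim = lambda theta, rng: theta ^ (rng.random(theta.size) < 0.1).astype(int)
y_obs = sim(true_theta, np.random.default_rng(3))
print(abc_de_mcmc(sim, y_obs, dim=8, seed=0))
```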

