intractable likelihood
Recently Published Documents


TOTAL DOCUMENTS: 13 (five years: 6)
H-INDEX: 3 (five years: 2)

Author(s):  
Yang Zeng

Abstract Due to its flexibility and feasibility in addressing ill-posed problems, the Bayesian method has been widely used in inverse heat conduction problems (IHCPs). However, in real science and engineering IHCPs, the likelihood function of the Bayesian method is commonly computationally expensive or analytically unavailable. In this study, to circumvent this intractable likelihood function, approximate Bayesian computation (ABC) is extended to IHCPs. In ABC, the high-dimensional observations in the intractable likelihood function are replaced by their low-dimensional summary statistics, so the performance of ABC depends on the selection of summary statistics. In this study, a machine-learning-based ABC (ML-ABC) is proposed to address the complicated selection of summary statistics. The auto-encoder (AE) is a powerful machine learning (ML) framework that can compress the observations into very low-dimensional summary statistics with little information loss. In addition, to accelerate the calculation of the proposed framework, another neural network (NN) is used to construct a mapping between the unknowns and the summary statistics. With this mapping, given arbitrary unknowns, the summary statistics can be obtained efficiently without numerically solving the time-consuming forward problem. Furthermore, an adaptive nested sampling method (ANSM) is developed to further improve the efficiency of sampling. The performance of the proposed method is demonstrated on two IHCP cases.
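To make the ABC mechanics concrete, here is a minimal Python sketch of rejection ABC with low-dimensional summary statistics. The `forward_model` and `encode` functions are illustrative placeholders, not the paper's method: in ML-ABC, `encode` would be a trained auto-encoder and the forward model an expensive IHCP solver (possibly replaced by the NN surrogate mapping).

```python
import numpy as np

rng = np.random.default_rng(0)

def forward_model(theta, n=200):
    # Stand-in for an expensive forward solver: noisy observations
    # whose location depends on the unknown parameter theta.
    return theta + rng.normal(0.0, 1.0, size=n)

def encode(x):
    # Placeholder low-dimensional summary statistics (mean and spread).
    # In ML-ABC this would be the trained auto-encoder's code.
    return np.array([x.mean(), x.std()])

def abc_rejection(x_obs, prior_sample, n_draws=5000, eps=0.2):
    # Basic rejection ABC: keep parameter draws whose simulated
    # summaries land within eps of the observed summaries.
    s_obs = encode(x_obs)
    accepted = []
    for _ in range(n_draws):
        theta = prior_sample()
        s_sim = encode(forward_model(theta))
        if np.linalg.norm(s_sim - s_obs) < eps:
            accepted.append(theta)
    return np.array(accepted)

x_obs = forward_model(2.0)  # synthetic "observed" data (true theta = 2)
posterior = abc_rejection(x_obs, lambda: rng.uniform(0.0, 5.0))
print(posterior.mean(), posterior.size)
```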


2021
Author(s):  
George Karabatsos

Abstract Approximate Bayesian computation (ABC) can provide inferences from the (approximate) posterior distribution based on intractable likelihoods. The quality of ABC inferences relies on the choice of tolerance for the distance between the observed data summary statistics and the pseudo-data summary statistics simulated from the likelihood, used within the context of an algorithm that samples from the approximate posterior. However, the ABC literature does not provide an automatic method to select the best tolerance level for a given dataset, and in ABC practice finding the best tolerance level can be time consuming. This note introduces a fast automatic estimator of the tolerance, based on the parametric bootstrap. Once calculated, the tolerance estimate can be input into any suitable importance sampling or MCMC algorithm to sample from the target approximate posterior distribution. This tolerance estimator is illustrated through ABC analyses of simulated and real datasets involving several intractable likelihood models, including the analysis of a real 23,000-node network dataset involving stochastic search model selection.
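The note's exact estimator is not reproduced here, but the parametric-bootstrap idea can be sketched as follows: simulate pseudo-datasets from a model fitted to the observed data, compute their summary-statistic distances to the observed summaries, and take a low quantile as the tolerance. The Gaussian model, summary choice, and quantile level below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def summaries(x):
    # Illustrative summary statistics.
    return np.array([x.mean(), x.std()])

def bootstrap_tolerance(x_obs, simulate_fitted, n_boot=1000, q=0.05):
    # Distances of bootstrap summaries to the observed summaries;
    # a low quantile of these serves as the ABC tolerance estimate.
    s_obs = summaries(x_obs)
    dists = np.array([
        np.linalg.norm(summaries(simulate_fitted()) - s_obs)
        for _ in range(n_boot)
    ])
    return np.quantile(dists, q)

x_obs = rng.normal(2.0, 1.0, size=200)
mu_hat, sd_hat = x_obs.mean(), x_obs.std()  # fitted (plug-in) parameters
sim = lambda: rng.normal(mu_hat, sd_hat, size=x_obs.size)
eps = bootstrap_tolerance(x_obs, sim)
print(f"estimated ABC tolerance: {eps:.4f}")
```

The resulting `eps` would then be handed to whichever importance sampling or MCMC scheme is used to target the approximate posterior.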


2021
Author(s):  
Ngoc Tran ◽  
Johannes Buck ◽  
Claudia Klüppelberg

Causal inference for extremes aims to discover cause-and-effect relations between large observed values of random variables. This is a fundamental problem in many applications, from finance, engineering risks, and nutrition to hydrology, to name a few. Unique challenges for extreme values are the lack of data and the lack of model smoothness due to the max operator. Existing methods in extreme value statistics for dimensions d ≥ 3 are limited by an intractable likelihood, while techniques for learning Bayesian networks require a large amount of data to fit nonlinear models. This talk showcases the max-linear model and new algorithms for fitting it. Our method performs well on real data, recovering a directed graph for both the Danube and the Lower Colorado with high accuracy purely from extreme measurements. We also compare our method with state-of-the-art algorithms for causal inference in nonlinear models, and outline open problems in hydrology, extreme value statistics, and machine learning.
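As a rough illustration of the model class (not the authors' fitting algorithm), the following sketch simulates a recursive max-linear model on a toy DAG, where each node is the maximum of weighted parent values and an independent heavy-tailed innovation. The graph and edge weights are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy DAG in topological order: 1 -> 2 -> 3 and 1 -> 3.
# parents[v] maps each parent u to its edge weight c_{v,u}.
parents = {1: {}, 2: {1: 0.8}, 3: {1: 0.3, 2: 0.9}}

def simulate_max_linear(n=10000):
    X = {}
    for v in sorted(parents):
        Z = rng.pareto(2.0, size=n)  # heavy-tailed innovation at node v
        contrib = [c * X[u] for u, c in parents[v].items()]
        # X_v = max over weighted parents and the node's own innovation.
        X[v] = np.maximum.reduce(contrib + [Z])
    return X

X = simulate_max_linear()
# Extremes at node 3 are inherited from its ancestors along the DAG,
# which is what makes the causal graph recoverable from large values.
print({v: round(x.max(), 2) for v, x in X.items()})
```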


Test, 2021
Author(s):  
Stefano Cabras ◽  
María Eugenia Castellanos ◽  
Oliver Ratmann

2019, Vol 6 (1), pp. 379-403
Author(s):  
Mark A. Beaumont

Many of the statistical models that could provide an accurate, interesting, and testable explanation for the structure of a data set turn out to have intractable likelihood functions. The method of approximate Bayesian computation (ABC) has become a popular approach for tackling such models. This review gives an overview of the method and the main issues and challenges that are the subject of current research.


2018 ◽  
Author(s):  
Nathan J. Evans

Evidence accumulation models (EAMs) have become the dominant models of rapid decision-making. Several variants of these models have been proposed, ranging from the simple linear ballistic accumulator (LBA) to the more complex leaky competing accumulator (LCA), and further extensions that include time-varying rates of evidence accumulation or decision thresholds. Although applications of the simpler variants have been widespread, applications of the more complex models have been fewer, largely due to their intractable likelihood functions and the computational cost of mass simulation. Here, I present a framework for efficiently fitting complex EAMs, which uses a new, efficient method of simulating these models. I find that the majority of simulation time is taken up by random number generation (RNG) from the normal distribution, needed for the stochastic noise of the differential equation. To reduce this inefficiency, I propose using the well-known computer science concept of "look-up tables" (LUTs) as an approximation to the inverse cumulative distribution function (iCDF) method of RNG, which I call "LUT-iCDF". I show that, when using an appropriately sized LUT, simulations using LUT-iCDF closely match those from the standard RNG method in R. My framework, for which I provide a detailed implementation tutorial, includes C code for 12 different variants of EAMs using the LUT-iCDF method, and should make the implementation of complex EAMs easier and faster.
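The LUT-iCDF idea can be sketched in a few lines: precompute the inverse normal CDF on a uniform grid once, then generate normals by indexing the table with uniform draws instead of calling the exact inverse CDF per draw. The table size and nearest-entry lookup below are illustrative choices, not the paper's exact implementation (which is in C).

```python
import numpy as np
from scipy.stats import norm

TABLE_SIZE = 2**16
# Grid of probabilities, offset inward to avoid the infinite tails
# at p = 0 and p = 1.
_grid = np.linspace(0.5 / TABLE_SIZE, 1.0 - 0.5 / TABLE_SIZE, TABLE_SIZE)
_lut = norm.ppf(_grid)  # inverse normal CDF, computed once

def lut_normal(rng, size):
    # One multiply plus one table gather per draw, replacing a
    # transcendental inverse-CDF evaluation.
    u = rng.random(size)  # uniforms in [0, 1)
    return _lut[(u * TABLE_SIZE).astype(np.intp)]

rng = np.random.default_rng(3)
draws = lut_normal(rng, 1_000_000)
print(draws.mean(), draws.std())  # near 0 and 1 for a large table
```

With a sufficiently large table, the discretization error is small relative to the stochastic noise being simulated, which is the abstract's point about appropriately sized LUTs.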


Author(s):  
Ruocheng Guo ◽  
Jundong Li ◽  
Huan Liu

Copious sequential event data has been generated at an ever-increasing rate in various high-impact domains such as social media and the sharing economy. When events take place in a sequential fashion, an important question arises: "what type of event will happen at what time in the near future?" To answer this question, a class of mathematical models called marked temporal point processes is often exploited, as it can model the timing and properties of events seamlessly in a joint framework. Recently, various recurrent neural network (RNN) models have been proposed to enhance the predictive power of marked temporal point processes. However, existing marked temporal point process models are fundamentally based on the maximum likelihood estimation (MLE) framework for training, and inevitably suffer from problems resulting from the intractable likelihood function. Surprisingly, little attention has been paid to addressing this issue. In this work, we propose INITIATOR, a novel training framework based on noise-contrastive estimation, to resolve this problem. Theoretically, we show that there exists a strong connection between the proposed INITIATOR and exact MLE. Experimentally, the efficacy of INITIATOR is demonstrated against state-of-the-art approaches on several real-world datasets from various areas.
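INITIATOR's full objective for marked temporal point processes is not reproduced here; the toy sketch below only illustrates the underlying noise-contrastive estimation principle, fitting an unnormalized Gaussian (including its log-normalizer) by logistic discrimination of data against samples from a known noise density. All distributions and step sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
data = rng.normal(1.5, 1.0, size=5000)   # "real" samples, true mean 1.5
noise = rng.normal(0.0, 2.0, size=5000)  # samples from a known noise density

def log_q(x):
    # Log-density of the N(0, 2^2) noise distribution.
    return -0.5 * (x / 2.0) ** 2 - np.log(2.0 * np.sqrt(2.0 * np.pi))

def log_model(x, mu, c):
    # Unnormalized model: NCE learns the log-normalizer c as a parameter.
    return -0.5 * (x - mu) ** 2 + c

mu, c = 0.0, 0.0
for _ in range(2000):
    g_data = log_model(data, mu, c) - log_q(data)    # classifier logit on data
    g_noise = log_model(noise, mu, c) - log_q(noise) # classifier logit on noise
    pd = 1.0 / (1.0 + np.exp(g_data))    # sigma(-logit): misclassified data
    pn = 1.0 / (1.0 + np.exp(-g_noise))  # sigma(logit): misclassified noise
    # Gradient ascent on the NCE (logistic discrimination) objective.
    mu += 0.05 * (np.mean(pd * (data - mu)) - np.mean(pn * (noise - mu)))
    c += 0.05 * (np.mean(pd) - np.mean(pn))

# mu should approach 1.5 and c the true log-normalizer -log(sqrt(2*pi)).
print(f"mu = {mu:.3f}, c = {c:.3f}")
```

The appeal for intractable likelihoods is visible even in this toy: the normalizing constant never has to be computed, only discriminated against noise.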


2017, Vol 26 (4), pp. 873-882
Author(s):  
Minh-Ngoc Tran ◽  
David J. Nott ◽  
Robert Kohn
