normalising constant
Recently Published Documents

TOTAL DOCUMENTS: 7 (last five years: 1)
H-INDEX: 4 (last five years: 0)

2014 · Vol. 46 (1) · pp. 279–306
Author(s): Alexandros Beskos, Dan O. Crisan, Ajay Jasra, Nick Whiteley

In this paper we develop a collection of results associated with the analysis of the sequential Monte Carlo (SMC) samplers algorithm, in the context of high-dimensional independent and identically distributed target probabilities. The SMC samplers algorithm can be designed to sample from a single probability distribution, using Monte Carlo to approximate expectations with respect to this law. Given a target density in d dimensions, our results are concerned with d → ∞ while the number of Monte Carlo samples, N, remains fixed. We deduce an explicit bound on the Monte Carlo error for estimates derived using the SMC sampler, and the exact asymptotic relative L₂-error of the estimate of the normalising constant associated with the target. We also establish marginal propagation-of-chaos properties of the algorithm. These results are deduced when the cost of the algorithm is O(Nd²).
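To make the estimator concrete, the sketch below implements a generic SMC sampler with likelihood tempering: particles start from a N(0, I) initial law, are reweighted and resampled through a sequence of tempered densities, and the normalising constant is estimated as the product of the average incremental weights at each temperature. The i.i.d. Gaussian target, the linear temperature ladder, and all settings are illustrative assumptions, not the authors' exact construction.

```python
# A minimal SMC sampler with likelihood tempering: a sketch of the general
# technique, not the exact algorithm analysed in the paper.
import numpy as np

rng = np.random.default_rng(0)

d, N, T = 10, 1000, 50   # dimension, number of particles (fixed), temperature steps
sigma2 = 0.5             # target N(0, sigma2*I); true Z = (2*pi*sigma2)**(d/2)

def log_gamma_target(x):   # unnormalised log target, i.i.d. coordinates
    return -0.5 * np.sum(x * x, axis=-1) / sigma2

def log_q0(x):             # normalised log density of the initial N(0, I) law
    return -0.5 * np.sum(x * x, axis=-1) - 0.5 * d * np.log(2.0 * np.pi)

betas = np.linspace(0.0, 1.0, T + 1)   # tempering schedule from q0 to the target
X = rng.standard_normal((N, d))        # particles drawn from q0
log_Z = 0.0                            # running log normalising-constant estimate

for t in range(1, T + 1):
    # Incremental importance weight between consecutive tempered densities
    # pi_t(x) propto q0(x)**(1 - beta_t) * gamma(x)**beta_t.
    logw = (betas[t] - betas[t - 1]) * (log_gamma_target(X) - log_q0(X))
    m = logw.max()
    log_Z += m + np.log(np.mean(np.exp(logw - m)))   # log of the average weight

    # Multinomial resampling proportional to the weights.
    w = np.exp(logw - m)
    X = X[rng.choice(N, size=N, p=w / w.sum())]

    # One random-walk Metropolis move per particle, invariant for pi_t.
    def log_pi(x, b=betas[t]):
        return (1.0 - b) * log_q0(x) + b * log_gamma_target(x)

    prop = X + 0.5 * rng.standard_normal((N, d))
    accept = np.log(rng.uniform(size=N)) < log_pi(prop) - log_pi(X)
    X[accept] = prop[accept]

true_log_Z = 0.5 * d * np.log(2.0 * np.pi * sigma2)
print(f"SMC estimate of log Z: {log_Z:.3f}   (exact: {true_log_Z:.3f})")
```

One run of this sketch costs O(NTd); if the number of bridging steps T has to grow linearly with d to keep the incremental weights stable, the total cost is O(Nd²), which appears to be the regime the abstract refers to.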




1996 · Vol. 28 (2) · p. 339
Author(s): Francisco Montes, Jorge Mateu

Parameter estimation for a two-dimensional point pattern is difficult because most of the available stochastic models have intractable likelihoods ([2]). An exception is the class of Gibbs or Markov point processes ([1], [5]), where the likelihood typically forms an exponential family and is given explicitly up to a normalising constant. However, that constant is not known analytically, so parameter estimates must be based on approximations ([3], [6], [7]). In this paper we compare the techniques available in the literature for approximating the maximum likelihood estimate (MLE). Two stochastic methods are illustrated in detail: a Newton-Raphson algorithm ([7]) and the Robbins-Monro procedure ([8]). We use a very simple point process model, the Strauss process ([4]), to test and compare these approximations.
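As a concrete illustration of the stochastic-approximation route, the sketch below runs a Robbins-Monro recursion for the interaction parameter gamma of a Strauss process with density proportional to beta^n(x) * gamma^s(x), where s(x) counts pairs of points closer than R. Each iteration simulates a pattern from the current model with a birth-death Metropolis-Hastings sampler and nudges log gamma by the gap between the observed and simulated pair counts. The unit-square window, radius, gain sequence, and sampler length are illustrative assumptions, not the settings of either method compared in the paper.

```python
# Robbins-Monro stochastic approximation for the Strauss interaction
# parameter gamma: an illustrative sketch under assumed settings.
import numpy as np

rng = np.random.default_rng(1)
R, beta = 0.05, 100.0    # interaction radius and (assumed known) intensity parameter

def close_pairs(pts):
    """Sufficient statistic s(x): number of point pairs at distance < R."""
    if len(pts) < 2:
        return 0
    diff = pts[:, None, :] - pts[None, :, :]
    dist = np.sqrt((diff ** 2).sum(-1))
    return int((dist[np.triu_indices(len(pts), 1)] < R).sum())

def n_close(u, pts):
    """Number of existing points within distance R of location u."""
    if len(pts) == 0:
        return 0
    return int((np.sqrt(((pts - u) ** 2).sum(-1)) < R).sum())

def simulate_strauss(gamma, steps=4000):
    """Birth-death Metropolis-Hastings sampler on the unit square."""
    pts = rng.uniform(size=(rng.poisson(beta), 2))   # start from a Poisson pattern
    for _ in range(steps):
        n = len(pts)
        if rng.uniform() < 0.5 or n == 0:            # birth proposal
            u = rng.uniform(size=2)
            lam = beta * gamma ** n_close(u, pts)    # Papangelou conditional intensity
            if rng.uniform() < lam / (n + 1):
                pts = np.vstack([pts, u])
        else:                                        # death proposal
            i = rng.integers(n)
            keep = np.delete(pts, i, axis=0)
            lam = beta * gamma ** n_close(pts[i], keep)
            if rng.uniform() < n / lam:
                pts = keep
    return pts

# "Observed" pattern: here simulated from a known gamma so the estimate can be checked.
gamma_true = 0.3
s_obs = close_pairs(simulate_strauss(gamma_true))

# Robbins-Monro: solve E_gamma[s(X)] = s_obs on the log scale, gains a_k = a0 / k.
log_g, a0 = np.log(0.8), 0.02
for k in range(1, 51):
    s_sim = close_pairs(simulate_strauss(np.exp(log_g)))
    log_g += (a0 / k) * (s_obs - s_sim)
    log_g = min(log_g, 0.0)                          # keep gamma <= 1: repulsive regime

print(f"Robbins-Monro estimate of gamma: {np.exp(log_g):.3f}   (truth: {gamma_true})")
```

The recursion exploits the exponential-family form mentioned in the abstract: the MLE solves E_gamma[s(X)] = s(x_obs), so the intractable normalising constant never has to be evaluated, only patterns simulated from the current model.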

