Extended Laplace principle for empirical measures of a Markov chain

2019 ◽  
Vol 51 (01) ◽  
pp. 136-167 ◽  
Author(s):  
Stephan Eckstein

We consider discrete-time Markov chains with Polish state space. The large deviations principle for empirical measures of a Markov chain can equivalently be stated in Laplace principle form, which builds on the convex dual pair of relative entropy (or Kullback–Leibler divergence) and the cumulant generating functional f ↦ ln ∫ exp(f). Following the approach by Lacker (2016) in the independent and identically distributed case, we generalize the Laplace principle to a greater class of convex dual pairs. We present in depth one application arising from this extension, which includes large deviation results and a weak law of large numbers for certain robust Markov chains (similar to Markov set chains), where we model robustness via the first Wasserstein distance. The setting and proof of the extended Laplace principle are based on the weak convergence approach to large deviations by Dupuis and Ellis (2011).
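
For reference, the classical dual pair mentioned here can be written out explicitly. This is standard background rather than the paper's exact statement; I denotes the rate function of the empirical measures Lₙ:

```latex
% Relative entropy (Kullback–Leibler divergence), for \nu \ll \mu:
R(\nu \,\|\, \mu) = \int \ln \frac{d\nu}{d\mu} \, d\nu ,
\qquad R(\nu \,\|\, \mu) = +\infty \ \text{otherwise}.

% Convex duality with the cumulant generating functional:
\ln \int e^{f} \, d\mu = \sup_{\nu} \Bigl\{ \int f \, d\nu - R(\nu \,\|\, \mu) \Bigr\},
\qquad
R(\nu \,\|\, \mu) = \sup_{f} \Bigl\{ \int f \, d\nu - \ln \int e^{f} \, d\mu \Bigr\}.

% Laplace principle form of the LDP for the empirical measures L_n,
% for every bounded continuous F:
\lim_{n \to \infty} \frac{1}{n} \ln \mathbb{E}\bigl[ e^{\, n F(L_n)} \bigr]
  = \sup_{\nu} \bigl\{ F(\nu) - I(\nu) \bigr\}.
```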

2020 ◽  
Vol 52 (1) ◽  
pp. 61-101
Author(s):  
Daniel Lacker

This work is devoted to a vast extension of Sanov's theorem, in Laplace principle form, based on alternatives to the classical convex dual pair of relative entropy and cumulant generating functional. The abstract results give rise to a number of probabilistic limit theorems and asymptotics. For instance, widely applicable non-exponential large deviation upper bounds are derived for empirical distributions and averages of independent and identically distributed samples under minimal integrability assumptions, notably accommodating heavy-tailed distributions. Other interesting manifestations of the abstract results include new results on the rate of convergence of empirical measures in Wasserstein distance, uniform large deviation bounds, and variational problems involving optimal transport costs, as well as an application to error estimates for approximate solutions of stochastic optimization problems. The proofs build on the Dupuis–Ellis weak convergence approach to large deviations as well as the duality theory for convex risk measures.
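
The "alternatives to the classical convex dual pair" can be read through the duality theory for convex risk measures that the abstract cites. Schematically, and up to sign conventions:

```latex
% A convex functional \rho and its penalty \alpha form a dual pair
% generalizing (ln \int e^f d\mu, R(\cdot \,\|\, \mu)):
\rho(f) = \sup_{\nu} \Bigl\{ \int f \, d\nu - \alpha(\nu) \Bigr\},
\qquad
\alpha(\nu) = \sup_{f} \Bigl\{ \int f \, d\nu - \rho(f) \Bigr\}.

% The classical Sanov/Laplace setting is the special case
\rho(f) = \ln \int e^{f} \, d\mu, \qquad \alpha(\nu) = R(\nu \,\|\, \mu).
```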


1990 ◽  
Vol 27 (1) ◽  
pp. 44-59 ◽  
Author(s):  
James A. Bucklew ◽  
Peter Ney ◽  
John S. Sadowsky

Importance sampling is a Monte Carlo simulation technique in which the simulation distribution is different from the true underlying distribution. In order to obtain an unbiased Monte Carlo estimate of the desired parameter, simulated events are weighted to reflect their true relative frequency. In this paper, we consider the estimation via simulation of certain large deviations probabilities for time-homogeneous Markov chains. We first demonstrate that when the simulation distribution is also a homogeneous Markov chain, the estimator variance will vanish exponentially as the sample size n tends to ∞. We then prove that the estimator variance is asymptotically minimized by the same exponentially twisted Markov chain which arises in large deviation theory, and, furthermore, that this optimum is unique among uniformly recurrent homogeneous Markov chain simulation distributions.
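
To make the exponential-twisting idea concrete, here is a minimal sketch in the i.i.d. Gaussian special case, where the optimally twisted law is explicit; the paper's setting twists the Markov transition kernel instead, and all parameter choices below are illustrative:

```python
import numpy as np

# Illustrative i.i.d. special case: estimate P(S_n / n >= a) for X_i ~ N(0, 1),
# sampling instead from the exponentially twisted law N(theta, 1).
rng = np.random.default_rng(0)

def twisted_estimate(n, a, m=100_000):
    theta = a                      # variance-minimizing tilt for this event
    psi = theta ** 2 / 2           # cumulant generating function of N(0, 1)
    s = rng.normal(theta, 1.0, size=(m, n)).sum(axis=1)
    weights = np.exp(-theta * s + n * psi)   # likelihood ratio dP/dQ per path
    return float(np.mean(weights * (s / n >= a)))

# Naive Monte Carlo would need roughly 1/P samples to see the rare event
# even once; the twisted estimator's relative error stays controlled in n.
print(twisted_estimate(n=50, a=0.5))
```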


Entropy ◽  
2018 ◽  
Vol 20 (8) ◽  
pp. 573 ◽  
Author(s):  
Rodrigo Cofré ◽  
Cesar Maldonado ◽  
Fernando Rosas

We consider the maximum entropy Markov chain inference approach to characterize the collective statistics of neuronal spike trains, focusing on the statistical properties of the inferred model. To find the maximum entropy Markov chain, we use the thermodynamic formalism, which provides insightful connections with statistical physics and thermodynamics from which large deviations properties arise naturally. We provide the computational neuroscience community with an accessible introduction to the maximum entropy Markov chain inference problem and to large deviations theory, avoiding some technicalities while preserving the core ideas and intuitions. We review large deviations techniques useful in spike train statistics to describe properties of accuracy and convergence in terms of sample size. We use these results to study the statistical fluctuations of correlations, distinguishability, and irreversibility of maximum entropy Markov chains. We illustrate these applications using simple examples where the large deviation rate function is explicitly obtained for maximum entropy models of relevance in this field.
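
One concrete point of contact between the thermodynamic formalism and large deviations is that the scaled cumulant generating function of an additive observable of a finite Markov chain is the log of the Perron eigenvalue of a tilted transition matrix. A minimal sketch, where the chain and observable are illustrative rather than taken from the paper:

```python
import numpy as np

def scgf(P, g, t):
    """Log Perron eigenvalue of the tilted matrix P_t(x, y) = P(x, y) e^{t g(y)}."""
    Pt = P * np.exp(t * g)[None, :]
    return float(np.log(np.max(np.abs(np.linalg.eigvals(Pt)))))

# Illustrative two-state chain; g counts visits to state 1 (a "spike").
P = np.array([[0.9, 0.1],
              [0.4, 0.6]])
g = np.array([0.0, 1.0])
print([round(scgf(P, g, t), 3) for t in np.linspace(-2.0, 2.0, 9)])
# The Legendre transform of t -> scgf(P, g, t) is the large deviation
# rate function of the empirical average (1/n) * sum_k g(X_k).
```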


2000 ◽  
Vol 128 (3) ◽  
pp. 561-569 ◽  
Author(s):  
Neil O'Connell

Sanov's theorem states that the sequence of empirical measures associated with a sequence of i.i.d. random variables satisfies the large deviation principle (LDP) in the weak topology with rate function given by relative entropy. We present a derivative which allows one to establish LDPs for symmetric functions of many i.i.d. random variables under the conditions that (i) a law of large numbers holds whatever the underlying distribution and (ii) the functions are uniformly Lipschitz. The heuristic (of the title) is that the LDP follows from (i) provided the functions are ‘sufficiently smooth’. As an application, we obtain large deviations results for the stochastic bin-packing problem.
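
In schematic form, and only schematic (the paper's precise hypotheses are (i) and (ii) above), the heuristic reads:

```latex
% If F_n(X_1, \dots, X_n) \to f(\nu) a.s. whenever the X_i are i.i.d. with
% law \nu, and the F_n are uniformly Lipschitz, then for suitable sets A,
\frac{1}{n} \ln \mathbb{P}\bigl( F_n(X_1, \dots, X_n) \in A \bigr)
  \;\longrightarrow\;
  - \inf \bigl\{ H(\nu \,\|\, \mu) : f(\nu) \in A \bigr\},
% where \mu is the true common law and H is relative entropy.
```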


1988 ◽  
Vol 25 (1) ◽  
pp. 106-119 ◽  
Author(s):  
Richard Arratia ◽  
Pricilla Morris ◽  
Michael S. Waterman

A derivation of a law of large numbers for the highest-scoring matching subsequence is given. Let Xₖ, Yₖ be i.i.d. letters drawn from a finite alphabet S with common distribution q = (q(i))ᵢ∊S, and let v = (v(i))ᵢ∊S be a sequence of non-negative real numbers assigned to the letters of S. Using a scoring system similar to that of the game Scrabble, the score of a word w = i₁ ⋯ iₘ is defined to be V(w) = v(i₁) + ⋯ + v(iₘ). Let Vₙ denote the value of the highest-scoring matching contiguous subsequence between X₁X₂ ⋯ Xₙ and Y₁Y₂ ⋯ Yₙ. In this paper, we show that Vₙ/(K log n) → 1 a.s., where K ≡ K(q, v). The method employed here involves ‘stuttering’ the letters to construct a Markov chain and applying previous results for the length of the longest matching subsequence. An explicit form is given for β ∊ Pr(S), where β(i) denotes the proportion of letter i found in the highest-scoring word. A similar treatment for Markov chains is also included. Implicit in these results is a large-deviation result for the additive functional H ≡ Σ_{n<τ} v(Xₙ), for a Markov chain stopped at the hitting time τ of some state. We give this large-deviation result explicitly, for Markov chains in discrete time and in continuous time.
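
The quantity Vₙ is computable by a standard dynamic program over common contiguous words, which makes the K log n growth easy to observe numerically. A small sketch; the alphabet, scores, and distribution are illustrative, not from the paper:

```python
import numpy as np

def best_matching_score(x, y, v):
    # If x[i] == y[j], the best common contiguous word ending at (i, j)
    # extends the one ending at (i-1, j-1) by the letter score v[x[i]].
    best = 0.0
    prev = [0.0] * (len(y) + 1)
    for i in range(1, len(x) + 1):
        cur = [0.0] * (len(y) + 1)
        for j in range(1, len(y) + 1):
            if x[i - 1] == y[j - 1]:
                cur[j] = prev[j - 1] + v[x[i - 1]]
                best = max(best, cur[j])
        prev = cur
    return best

rng = np.random.default_rng(1)
q = [0.5, 0.3, 0.2]            # illustrative letter distribution on S = {0, 1, 2}
v = {0: 1.0, 1: 2.0, 2: 5.0}   # illustrative Scrabble-like letter scores
n = 500
x = rng.choice(3, size=n, p=q)
y = rng.choice(3, size=n, p=q)
print(best_matching_score(x, y, v) / np.log(n))  # approaches K(q, v) as n grows
```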


Filomat ◽  
2018 ◽  
Vol 32 (2) ◽  
pp. 473-487 ◽  
Author(s):  
A. Haseena ◽  
M. Suvinthra ◽  
N. Annapoorani

A Freidlin–Wentzell type large deviation principle is derived for a class of Itô type stochastic integrodifferential equations driven by a finite number of multiplicative noises of Gaussian type. The weak convergence approach is used here to prove the Laplace principle and, equivalently, the large deviation principle.
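
The engine behind the weak convergence approach is a variational representation for functionals of the driving noise; for Brownian motion it is the Boué–Dupuis formula, stated here in its standard finite-dimensional form rather than the paper's setting:

```latex
% For bounded measurable F of a Brownian motion W on [0, T]:
-\ln \mathbb{E}\bigl[ e^{-F(W)} \bigr]
  = \inf_{u} \, \mathbb{E}\Bigl[ \tfrac{1}{2} \int_0^T \lVert u_t \rVert^2 \, dt
      + F\Bigl( W + \int_0^{\cdot} u_s \, ds \Bigr) \Bigr],
% with the infimum over progressively measurable controls u.
% Laplace-principle limits follow by analyzing nearly optimal controls under
% weak convergence, avoiding exponential probability estimates altogether.
```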


2020 ◽  
Vol 26 (2) ◽  
pp. 309-314
Author(s):  
Zhenxia Liu ◽  
Yurong Zhu

We continue our investigation on general large deviation principles (LDPs) for longest runs. Previously, a general LDP for the longest success run in a sequence of independent Bernoulli trials was derived in [Z. Liu and X. Yang, A general large deviation principle for longest runs, Statist. Probab. Lett. 110 (2016), 128–132]. In the present note, we establish a general LDP for the longest success run in a two-state (success or failure) Markov chain, which recovers the previous result in the aforementioned paper. The main new ingredient is to implement suitable estimates of the distribution function of the longest success run recently established in [Z. Liu and X. Yang, On the longest runs in Markov chains, Probab. Math. Statist. 38 (2018), no. 2, 407–428].
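
For intuition on the scaling, the longest success run is easy to simulate. A minimal sketch for the two-state chain; the transition probabilities and the fair-coin comparison are illustrative:

```python
import numpy as np

def longest_success_run(n, p_ss, p_fs, rng):
    # Two-state chain on {0 = failure, 1 = success}:
    # P(success -> success) = p_ss, P(failure -> success) = p_fs.
    state, run, best = 0, 0, 0
    for _ in range(n):
        state = int(rng.random() < (p_ss if state == 1 else p_fs))
        run = run + 1 if state == 1 else 0
        best = max(best, run)
    return best

rng = np.random.default_rng(7)
n = 100_000
samples = [longest_success_run(n, 0.5, 0.5, rng) for _ in range(20)]
# With p_ss = p_fs the chain degenerates to i.i.d. fair coin flips, where the
# longest run concentrates near log2(n); the LDP governs log-scale deviations.
print(np.mean(samples), np.log2(n))
```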

