Large deviations for the leaves in some random trees

2009 ◽  
Vol 41 (3) ◽  
pp. 845-873 ◽  
Author(s):  
Wlodek Bryc ◽  
David Minda ◽  
Sunder Sethuraman

Large deviation principles and related results are given for a class of Markov chains associated to the ‘leaves’ in random recursive trees and preferential attachment random graphs, as well as the ‘cherries’ in Yule trees. In particular, the method of proof, combining analytic and Dupuis–Ellis-type path arguments, allows for an explicit computation of the large deviation pressure.
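The leaf count of a random recursive tree evolves as a simple Markov chain: when node k+1 attaches uniformly to one of the first k nodes, the count increases by one exactly when it attaches to an internal node (attaching to a leaf swaps roles and leaves the count unchanged). As a hedged illustration, not the paper's argument, a minimal Monte Carlo sketch with our own parameter choices:

```python
import random

def leaf_count(n, seed=0):
    """Simulate the leaf-count Markov chain of a random recursive tree on n nodes.

    Node k+1 attaches uniformly to one of nodes 1..k. Attaching to a leaf
    turns that leaf internal while creating a new leaf (count unchanged);
    attaching to an internal node creates a fresh leaf (count + 1).
    """
    rng = random.Random(seed)
    leaves = 1  # tree on 2 nodes: the child is the unique leaf
    for k in range(2, n):
        if rng.random() >= leaves / k:  # attached to an internal node
            leaves += 1
    return leaves
```

The leaf fraction concentrates near 1/2; the paper quantifies the exponentially small probability of atypical fractions.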


2006 ◽  
Vol 06 (04) ◽  
pp. 487-520 ◽  
Author(s):  
FUQING GAO ◽  
JICHENG LIU

We prove large deviation principles for solutions of small perturbations of SDEs in Hölder norms and Sobolev norms, where the SDEs have non-Markovian coefficients. As an application, we obtain a large deviation principle for solutions of anticipating SDEs in terms of (r, p) capacities on the Wiener space.
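The paper treats small perturbations of SDEs with non-Markovian coefficients in Hölder and Sobolev norms; far more modestly, the small-perturbation mechanism itself can be seen in a toy Euler–Maruyama experiment, where the probability of a large excursion collapses as the noise parameter ε shrinks. All model choices below are ours, not the paper's:

```python
import math
import random

def exit_probability(eps, n_paths=1000, n_steps=100, T=1.0, thresh=0.5, seed=1):
    """Euler-Maruyama estimate of P(sup_{t<=T} |X_t| > thresh) for the toy
    small-noise SDE  dX = -X dt + sqrt(eps) dW,  X_0 = 0."""
    rng = random.Random(seed)
    dt = T / n_steps
    hits = 0
    for _ in range(n_paths):
        x = 0.0
        for _ in range(n_steps):
            x += -x * dt + math.sqrt(eps * dt) * rng.gauss(0.0, 1.0)
            if abs(x) > thresh:
                hits += 1
                break
    return hits / n_paths
```

As ε decreases, the estimated excursion probability drops sharply, consistent with an exponential decay rate in 1/ε.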


2020 ◽  
Vol 26 (2) ◽  
pp. 309-314
Author(s):  
Zhenxia Liu ◽  
Yurong Zhu

We continue our investigation of general large deviation principles (LDPs) for longest runs. Previously, a general LDP for the longest success run in a sequence of independent Bernoulli trials was derived in [Z. Liu and X. Yang, A general large deviation principle for longest runs, Statist. Probab. Lett. 110 (2016), 128–132]. In the present note, we establish a general LDP for the longest success run in a two-state (success or failure) Markov chain, which recovers the previous result in the aforementioned paper. The main new ingredient is to implement suitable estimates of the distribution function of the longest success run recently established in [Z. Liu and X. Yang, On the longest runs in Markov chains, Probab. Math. Statist. 38 (2018), no. 2, 407–428].
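For intuition, the longest success run in a two-state Markov chain is easy to simulate; typical values grow like log n / log(1/p), where p is the success-to-success probability, and the LDP controls the much rarer runs of linear length. A minimal Monte Carlo sketch (parameter choices are ours, not the paper's):

```python
import random

def longest_success_run(n, p_ss, p_fs, seed=0):
    """Longest run of successes in a length-n two-state Markov chain.

    p_ss = P(success -> success), p_fs = P(failure -> success).
    The initial state is drawn roughly, which does not affect the asymptotics.
    """
    rng = random.Random(seed)
    state = rng.random() < p_fs
    run = best = 0
    for _ in range(n):
        if state:
            run += 1
            best = max(best, run)
        else:
            run = 0
        state = rng.random() < (p_ss if state else p_fs)
    return best
```

A "sticky" chain (large p_ss) produces markedly longer runs than a fair i.i.d. sequence of the same length, reflecting the different rate functions.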


1990 ◽  
Vol 27 (1) ◽  
pp. 44-59 ◽  
Author(s):  
James A. Bucklew ◽  
Peter Ney ◽  
John S. Sadowsky

Importance sampling is a Monte Carlo simulation technique in which the simulation distribution is different from the true underlying distribution. In order to obtain an unbiased Monte Carlo estimate of the desired parameter, simulated events are weighted to reflect their true relative frequency. In this paper, we consider the estimation via simulation of certain large deviations probabilities for time-homogeneous Markov chains. We first demonstrate that when the simulation distribution is also a homogeneous Markov chain, the estimator variance will vanish exponentially as the sample size n tends to ∞. We then prove that the estimator variance is asymptotically minimized by the same exponentially twisted Markov chain which arises in large deviation theory, and furthermore, this optimization is unique among uniformly recurrent homogeneous Markov chain simulation distributions.
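The exponential twist is easiest to see in the i.i.d. special case (a Markov chain whose transition rows are identical). As a hedged sketch, not the paper's construction, here is a tilted-Bernoulli importance sampler for a tail probability that crude Monte Carlo would almost never hit; all parameter choices below are ours:

```python
import math
import random

def is_tail_estimate(n, p, a, n_sims=20000, seed=0):
    """Importance-sampling estimate of P(S_n >= a*n), S_n a sum of n
    Bernoulli(p) variables, with a > p.

    Samples from the exponentially twisted measure Bernoulli(a), the change
    of measure suggested by large deviation theory, and reweights each
    sample by its likelihood ratio to keep the estimator unbiased."""
    rng = random.Random(seed)
    lw1 = math.log(p / a)              # per-success log likelihood ratio
    lw0 = math.log((1 - p) / (1 - a))  # per-failure log likelihood ratio
    total = 0.0
    for _ in range(n_sims):
        s = sum(rng.random() < a for _ in range(n))
        if s >= a * n:
            total += math.exp(s * lw1 + (n - s) * lw0)
    return total / n_sims
```

Under the twist the rare event becomes typical, so the estimate agrees with the exact binomial tail to within a few percent, while naive sampling from Bernoulli(p) would see the event only with exponentially small frequency.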


2019 ◽  
Vol 51 (01) ◽  
pp. 136-167 ◽  
Author(s):  
Stephan Eckstein

We consider discrete-time Markov chains with Polish state space. The large deviations principle for empirical measures of a Markov chain can equivalently be stated in Laplace principle form, which builds on the convex dual pair of relative entropy (or Kullback–Leibler divergence) and the cumulant generating functional f ↦ ln ∫ exp(f). Following the approach by Lacker (2016) in the independent and identically distributed case, we generalize the Laplace principle to a greater class of convex dual pairs. We present in depth one application arising from this extension, which includes large deviation results and a weak law of large numbers for certain robust Markov chains—similar to Markov set chains—where we model robustness via the first Wasserstein distance. The setting and proof of the extended Laplace principle are based on the weak convergence approach to large deviations by Dupuis and Ellis (2011).
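The convex dual pair at the heart of the Laplace principle is the Donsker–Varadhan variational formula ln ∫ e^f dν = sup_μ {∫ f dμ − D(μ‖ν)}, with the supremum attained at the Gibbs tilt of ν by f. On a finite space this can be checked numerically; a minimal sketch (the example values are our own, not from the paper):

```python
import math

def log_laplace(nu, f):
    """Cumulant generating functional ln ∫ e^f dν on a finite space."""
    return math.log(sum(n * math.exp(fx) for n, fx in zip(nu, f)))

def kl(mu, nu):
    """Relative entropy D(mu || nu)."""
    return sum(m * math.log(m / n) for m, n in zip(mu, nu) if m > 0)

def tilted(nu, f):
    """Maximizer of mu -> ∫ f dmu - D(mu || nu): the Gibbs tilt of nu by f."""
    w = [n * math.exp(fx) for n, fx in zip(nu, f)]
    z = sum(w)
    return [x / z for x in w]

nu = [0.5, 0.3, 0.2]
f = [1.0, -0.5, 2.0]
mu_star = tilted(nu, f)
lhs = log_laplace(nu, f)
rhs = sum(m * fx for m, fx in zip(mu_star, f)) - kl(mu_star, nu)
# lhs equals rhs up to floating point; any other mu gives a strictly
# smaller value of the variational expression.
```

Substituting the tilt into the objective makes the two terms telescope to ln Z, which is why the duality is exact rather than merely an inequality.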


1997 ◽  
Vol 34 (3) ◽  
pp. 753-766 ◽  
Author(s):  
Neil O'Connell

In this paper we describe how the joint large deviation properties of traffic streams are altered when the traffic passes through a shared buffer according to a FCFS service policy with stochastic service capacity. We also consider the stationary case, proving large deviation principles for the state of the system in equilibrium and for departures from an equilibrium system.
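The setting can be caricatured in discrete time by a Lindley recursion: two streams feed a shared FCFS buffer whose service capacity is itself random. The toy simulation below is our own sketch, not the paper's model; the empirical frequency of large queue lengths decays roughly geometrically, as a large deviation principle for the buffer would predict:

```python
import random

def simulate_shared_buffer(n, seed=0):
    """Discrete-time sketch of two streams sharing a FCFS buffer with random
    service capacity; returns queue-length samples via the Lindley recursion
    Q_{k+1} = max(Q_k + A_k - C_k, 0).

    Arrival and capacity distributions are toy choices of our own: two
    Bernoulli(0.3) streams (mean load 0.6) against mean capacity 1.0."""
    rng = random.Random(seed)
    q, samples = 0, []
    for _ in range(n):
        a = (rng.random() < 0.3) + (rng.random() < 0.3)  # two input streams
        c = rng.choice([0, 1, 1, 2])                      # random capacity
        q = max(q + a - c, 0)
        samples.append(q)
    return samples
```

Since the mean load is below the mean capacity, the queue is stable and empties repeatedly, while excursions to large levels become exponentially rare in the level.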


Entropy ◽  
2018 ◽  
Vol 20 (8) ◽  
pp. 573 ◽  
Author(s):  
Rodrigo Cofré ◽  
Cesar Maldonado ◽  
Fernando Rosas

We consider the maximum entropy Markov chain inference approach to characterize the collective statistics of neuronal spike trains, focusing on the statistical properties of the inferred model. To find the maximum entropy Markov chain, we use the thermodynamic formalism, which provides insightful connections with statistical physics and thermodynamics from which large deviations properties arise naturally. We provide an accessible introduction to the maximum entropy Markov chain inference problem and large deviations theory to the community of computational neuroscience, avoiding some technicalities while preserving the core ideas and intuitions. We review large deviations techniques useful in spike train statistics to describe properties of accuracy and convergence in terms of sampling size. We use these results to study the statistical fluctuation of correlations, distinguishability, and irreversibility of maximum entropy Markov chains. We illustrate these applications using simple examples where the large deviation rate function is explicitly obtained for maximum entropy models of relevance in this field.
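A concrete, classical instance of the thermodynamic formalism is the Parry measure: the maximal-entropy Markov chain compatible with a constraint graph, built from the Perron eigenpair of the adjacency matrix. The sketch below (our own toy example, not the paper's inference procedure) computes it for the golden-mean shift, where the entropy rate is the logarithm of the golden ratio:

```python
import math

def parry_chain(A, iters=200):
    """Maximal-entropy (Parry) Markov chain on the graph with 0/1 adjacency
    matrix A, via power iteration for the Perron eigenpair (lam, v):
    P_ij = A_ij * v_j / (lam * v_i), with entropy rate ln(lam)."""
    n = len(A)
    v = [1.0] * n
    lam = 1.0
    for _ in range(iters):
        w = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
        lam = max(w)
        v = [x / lam for x in w]
    P = [[A[i][j] * v[j] / (lam * v[i]) for j in range(n)] for i in range(n)]
    return P, math.log(lam)

# Golden-mean shift: binary sequences with no two consecutive 1s.
A = [[1, 1], [1, 0]]
P, h = parry_chain(A)
# h approaches ln((1 + sqrt(5)) / 2), and each row of P sums to 1.
```

The same eigenvector construction underlies the tilted transfer operators that produce large deviation rate functions for Markov chains, which is why it fits the spike-train setting so naturally.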


Author(s):  
Nikolai Leonenko ◽  
Claudio Macci ◽  
Barbara Pacchiarotti

We consider a class of tempered subordinators, namely a class of subordinators whose one-dimensional marginal tempered distributions belong to a family studied in [3]. The main contribution of this paper is a non-central moderate deviations result. More precisely, we mean a class of large deviation principles that fill the gap between the (trivial) weak convergence of some non-Gaussian identically distributed random variables to their common law and the convergence of some other related random variables to a constant. Some other minor results concern large deviations for the inverses of the tempered subordinators considered in this paper; in some results, these inverse processes appear as random time-changes of other independent processes.

