On the almost sure convergence for the joint version of maxima and minima of stationary sequences

2019 ◽  
Vol 154 ◽  
pp. 108540 ◽  
Author(s):  
Zacarias Panga ◽  
Luísa Pereira
2012 ◽  
Vol 2012 ◽  
pp. 1-14 ◽  
Author(s):  
Przemyslaw Matula ◽  
Iwona Stepien

We study the weak convergence of products of sums of stationary sequences of associated random variables to the log-normal law. The almost sure version of this result is also presented. The theorems obtained extend and generalize results known so far for independent or associated random variables.
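The classical i.i.d. prototype of this result (a products-of-partial-sums central limit theorem for positive random variables) can be checked numerically. The sketch below is a hypothetical Monte Carlo illustration, assuming i.i.d. Exp(1) summands (so μ = σ = 1) rather than the associated-sequence setting of the paper: the normalised statistic T_n = n^(-1/2) · Σ_{k=1}^n log(S_k / k) is approximately N(0, 2), so exp(T_n) is approximately log-normal.

```python
import math
import random

def product_of_sums_stat(n, rng):
    """T_n = (1/sqrt(n)) * sum_{k=1}^n log(S_k / k) for Exp(1) samples
    (mu = sigma = 1, so the normalising factor mu/(sigma*sqrt(n)) = 1/sqrt(n))."""
    s, total = 0.0, 0.0
    for k in range(1, n + 1):
        s += rng.expovariate(1.0)          # X_k ~ Exp(1), S_k = X_1 + ... + X_k
        total += math.log(s / k)           # log(S_k / (k * mu))
    return total / math.sqrt(n)

rng = random.Random(0)
samples = [product_of_sums_stat(500, rng) for _ in range(200)]
mean = sum(samples) / len(samples)
var = sum((t - mean) ** 2 for t in samples) / (len(samples) - 1)
# A log-normal limit of the form e^{sqrt(2) N(0,1)} corresponds to
# T_n being approximately N(0, 2) for large n.
print(round(mean, 2), round(var, 2))
```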


2011 ◽  
Vol 48 (02) ◽  
pp. 366-388 ◽  
Author(s):  
Eckhard Schlemm

We consider the first passage percolation problem on the random graph with vertex set ℕ × {0, 1}, edges joining vertices at a Euclidean distance equal to unity, and independent exponential edge weights. We provide a central limit theorem for the first passage times l_n between the vertices (0, 0) and (n, 0), thus extending earlier results about the almost-sure convergence of l_n / n as n → ∞. We use generating function techniques to compute the n-step transition kernels of a closely related Markov chain which can be used to explicitly calculate the asymptotic variance in the central limit theorem.
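The model can be simulated directly. The sketch below is an illustration only: it computes the first passage time l_n on a finite segment of the ladder graph ℕ × {0, 1} with Exp(1) edge weights by Dijkstra's algorithm, and reports the ratio l_n / n whose almost-sure convergence the earlier results establish.

```python
import heapq
import random

def ladder_passage_time(n, rng):
    """First passage time from (0, 0) to (n, 0) on the ladder graph
    {0..n} x {0, 1} with i.i.d. Exp(1) edge weights on rungs and rails."""
    def neighbours(v):
        i, j = v
        out = [(i, 1 - j)]                 # rung edge
        if i > 0:
            out.append((i - 1, j))         # rail edge to the left
        if i < n:
            out.append((i + 1, j))         # rail edge to the right
        return out

    weights = {}
    def w(u, v):
        e = (min(u, v), max(u, v))         # sample each edge weight once
        if e not in weights:
            weights[e] = rng.expovariate(1.0)
        return weights[e]

    dist = {(0, 0): 0.0}
    heap = [(0.0, (0, 0))]
    while heap:
        d, u = heapq.heappop(heap)
        if u == (n, 0):
            return d
        if d > dist.get(u, float("inf")):
            continue
        for v in neighbours(u):
            nd = d + w(u, v)
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return float("inf")

rng = random.Random(1)
passage = {n: ladder_passage_time(n, rng) for n in (200, 400)}
ratios = {n: passage[n] / n for n in passage}
print(ratios)  # l_n / n should stabilise near its almost-sure limit
```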


2021 ◽  
Vol 172 ◽  
pp. 109045 ◽
Author(s):  
Luca Pratelli ◽  
Pietro Rigo

2021 ◽  
Vol 36 ◽  
Author(s):  
Sergio Valcarcel Macua ◽  
Ian Davies ◽  
Aleksi Tukiainen ◽  
Enrique Munoz de Cote

We propose a fully distributed actor-critic architecture, named diffusion distributed actor-critic (Diff-DAC), with application to multitask reinforcement learning (MRL). During the learning process, agents communicate their value and policy parameters to their neighbours, diffusing the information across a network of agents with no need for a central station. Each agent can only access data from its local task, but aims to learn a common policy that performs well for the whole set of tasks. The architecture is scalable, since the computational and communication cost per agent depends on the number of neighbours rather than the overall number of agents. We derive Diff-DAC from duality theory and provide novel insights into the actor-critic framework, showing that it is actually an instance of the dual-ascent method. We prove almost sure convergence of Diff-DAC to a common policy under general assumptions that hold even for deep neural network approximations. For more restrictive assumptions, we also prove that this common policy is a stationary point of an approximation of the original problem. Numerical results on multitask extensions of common continuous control benchmarks demonstrate that Diff-DAC stabilises learning and has a regularising effect that induces higher performance and better generalisation properties than previous architectures.
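The diffusion mechanism described above can be sketched in isolation. The toy example below is not Diff-DAC itself: it replaces the actor-critic updates with hypothetical quadratic local objectives f_i(θ) = (θ − c_i)²/2, keeping only the adapt-then-combine pattern in which each agent takes a local gradient step on its own task and then averages parameters with its ring neighbours. With no central station, the agents nevertheless agree on (approximately) the minimiser of the sum of the local objectives, the average of the c_i.

```python
# Minimal diffusion (adapt-then-combine) sketch on a ring of agents.
# Each agent i has a hypothetical local quadratic objective
# f_i(theta) = 0.5 * (theta - c[i])**2; the common minimiser of the sum
# is the average of the c[i]. This illustrates the parameter-diffusion
# step only, not the actor-critic machinery of the paper.

def diffusion_optimise(c, steps=5000, lr=0.01):
    n = len(c)
    theta = [0.0] * n
    for _ in range(steps):
        # adapt: local gradient step on each agent's own task
        theta = [t - lr * (t - ci) for t, ci in zip(theta, c)]
        # combine: average parameters with ring neighbours (uniform weights)
        theta = [(theta[(i - 1) % n] + theta[i] + theta[(i + 1) % n]) / 3
                 for i in range(n)]
    return theta

c = [1.0, 2.0, 3.0, 4.0, 5.0]
theta = diffusion_optimise(c)
# All agents end close to the network-wide optimum, the mean of c (3.0).
print([round(t, 3) for t in theta])
```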


1988 ◽  
Vol 104 (2) ◽  
pp. 371-381 ◽  
Author(s):  
Paul Deheuvels ◽  
Erich Haeusler ◽  
David M. Mason

In this note we characterize those sequences k_n such that the Hill estimator of the tail index based on the k_n upper order statistics of a sample of size n from a Pareto-type distribution is strongly consistent.
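The Hill estimator itself is simple to state in code. The sketch below is an illustration, assuming exact Pareto data with tail index α = 2 (so γ = 1/α = 0.5) and the common intermediate choice k_n = ⌊√n⌋, which lies inside the classical consistency regime k_n → ∞, k_n / n → 0; the paper characterises exactly which sequences k_n give strong consistency.

```python
import math
import random

def hill_estimator(sample, k):
    """Hill estimator of the tail index gamma = 1/alpha based on the
    k upper order statistics of the sample."""
    xs = sorted(sample)
    top = xs[-k:]                        # the k largest observations
    threshold = xs[-k - 1]               # the (k+1)-th largest observation
    return sum(math.log(x / threshold) for x in top) / k

rng = random.Random(2)
alpha = 2.0                              # Pareto tail: P(X > x) ~ x^(-alpha)
n = 20000
sample = [rng.paretovariate(alpha) for _ in range(n)]
gamma_hat = hill_estimator(sample, k=int(n ** 0.5))
print(round(gamma_hat, 3))               # should be close to 1/alpha = 0.5
```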


2012 ◽  
Vol 12 (01) ◽  
pp. 1150004 ◽
Author(s):  
RICHARD C. BRADLEY

In an earlier paper by the author, as part of the construction of a counterexample to the central limit theorem under certain strong mixing conditions, a formula was given that shows how the continuous spectral density function of a strictly stationary sequence with mean zero and finite second moments changes if the observations are "randomly spread out" in a particular way, with independent "nonnegative geometric" numbers of zeros inserted in between. In this paper, that formula is generalized to the class of weakly stationary, mean-zero, complex-valued random sequences with arbitrary spectral measure.
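The "random spreading" construction can be written down directly. The sketch below is a hypothetical illustration of that operation only (inserting an independent, nonnegative-geometric number of zeros after each observation) and does not reproduce the paper's spectral-density formula.

```python
import random

def randomly_spread(seq, p, rng):
    """Insert an independent Geometric(p)-distributed number of zeros
    (supported on {0, 1, 2, ...}) after each observation, as in the
    'random spreading' construction described in the abstract."""
    out = []
    for x in seq:
        out.append(x)
        gaps = 0
        while rng.random() > p:          # nonnegative geometric: P(G = k) = p (1-p)^k
            gaps += 1
        out.extend([0.0] * gaps)
    return out

rng = random.Random(3)
seq = [1.0, -2.0, 3.0, -4.0]
spread = randomly_spread(seq, p=0.5, rng=rng)
# The nonzero entries of the spread sequence recover the original order.
print(spread)
```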

