Convergence in Probability: Recently Published Documents

TOTAL DOCUMENTS: 100 (five years: 14)
H-INDEX: 13 (five years: 1)

Author(s): Alicja Dembczak-Kołodziejczyk, Anna Lytova

Given [Formula: see text], we study two classes of large random matrices of the form [Formula: see text], where for every [Formula: see text], [Formula: see text] are iid copies of a random variable [Formula: see text], and [Formula: see text], [Formula: see text] are two (not necessarily independent) sets of independent random vectors having different covariance matrices and generating well-concentrated bilinear forms. We consider two main asymptotic regimes as [Formula: see text]: a standard one, where [Formula: see text], and a slightly modified one, where [Formula: see text] and [Formula: see text] while [Formula: see text] for some [Formula: see text]. Assuming that the vectors [Formula: see text] and [Formula: see text] are normalized and isotropic "on average", we prove convergence in probability of the empirical spectral distributions of [Formula: see text] and [Formula: see text] to a version of the Marchenko–Pastur law and to the so-called effective medium spectral distribution, respectively. In particular, choosing normalized Rademacher random variables as [Formula: see text], in the modified regime one can obtain shifted semicircle and semicircle laws. We also apply our results to certain classes of matrices with block structures, which were studied in [G. M. Cicuta, J. Krausser, R. Milkus and A. Zaccone, Unifying model for random matrix theory in arbitrary space dimensions, Phys. Rev. E 97(3) (2018) 032113, MR3789138; M. Pernici and G. M. Cicuta, Proof of a conjecture on the infinite dimension limit of a unifying model for random matrix theory, J. Stat. Phys. 175(2) (2019) 384–401, MR3968860].
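
The limiting object mentioned above can be visualized numerically. As a minimal, hedged illustration, the Python sketch below compares the empirical spectral distribution of a classical sample-covariance-type matrix (iid standard Gaussian entries, not the paper's two-class construction; all parameter choices are ours) with the Marchenko–Pastur density of ratio c = m/n.

import numpy as np
import matplotlib.pyplot as plt

# Toy check: ESD of (1/n) X X^T versus the Marchenko-Pastur density.
m, n = 1000, 2000                      # dimension and sample size, c = m/n = 0.5
c = m / n
X = np.random.randn(m, n)              # iid N(0,1) entries (normalized, isotropic)
eigvals = np.linalg.eigvalsh(X @ X.T / n)

lam_minus, lam_plus = (1 - np.sqrt(c)) ** 2, (1 + np.sqrt(c)) ** 2
grid = np.linspace(lam_minus, lam_plus, 400)
mp_density = np.sqrt((lam_plus - grid) * (grid - lam_minus)) / (2 * np.pi * c * grid)

plt.hist(eigvals, bins=60, density=True, alpha=0.5, label="empirical spectrum")
plt.plot(grid, mp_density, label="Marchenko-Pastur, c = %.2f" % c)
plt.legend()
plt.show()

Increasing m and n while keeping c fixed makes the histogram hug the limiting curve, which is the convergence in probability of the empirical spectral distribution in its simplest setting.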


2021, pp. 418-437
Author(s): James Davidson

This chapter looks in detail at proofs of the weak law of large numbers (convergence in probability) using the technique of establishing convergence in Lp‐norm. The extension to a proof of almost‐sure convergence is given, and then special results for martingale differences, mixingales, and approximable processes. These results are proved in array notation to allow general forms of heterogeneity.
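
The key step behind this technique is worth recording explicitly. For a generic sequence $$X_n$$ with candidate limit $$X$$ (our notation, not the chapter's array notation), Markov's inequality gives, for every $$\varepsilon > 0$$ and $$p \ge 1$$,

$$ \Pr\big(|X_n - X| > \varepsilon\big) \;\le\; \frac{\mathbb{E}\,|X_n - X|^p}{\varepsilon^p}, $$

so convergence in Lp-norm, i.e. $$\mathbb{E}\,|X_n - X|^p \to 0$$, immediately yields convergence in probability, and an Lp law of large numbers delivers the weak law.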


2021, pp. 400-417
Author(s): James Davidson

The modes of convergence introduced in Chapter 12 are studied in detail. Conditions for almost-sure convergence are derived via the Borel–Cantelli lemma. Convergence in probability is then contrasted with almost-sure convergence, and a number of results for the convergence of transformed series are given. Convergence in Lp-norm is introduced as a sufficient condition for convergence in probability. Examples are given, and the chapter concludes with a preliminary look at the laws of large numbers.
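
For reference, the Borel–Cantelli route to almost-sure convergence rests on the following standard sufficient condition (stated for a generic sequence, not in the chapter's array notation): if for every $$\varepsilon > 0$$

$$ \sum_{n=1}^{\infty} \Pr\big(|X_n - X| > \varepsilon\big) < \infty, $$

then the event $$|X_n - X| > \varepsilon$$ occurs only finitely often with probability one, hence $$X_n \to X$$ almost surely. Convergence in probability only requires each individual term of this series to vanish, which is why it is the weaker mode.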


Author(s): Federico Maddanu

Abstract: The estimation of the long memory parameter d is a widely discussed issue in the literature. The harmonically weighted (HW) process was recently introduced for long memory time series with an unbounded spectral density at the origin. In contrast to the better-known fractionally integrated process, the HW approach does not require estimation of the d parameter, yet it may be just as able to capture long memory as the fractionally integrated model if the sample size is not too large. Our contribution is a generalization of the HW model, termed the generalized harmonically weighted (GHW) process, which allows for an unbounded spectral density at $$k \ge 1$$ frequencies away from the origin. Convergence in probability of the Whittle estimator is established for the GHW process, along with a discussion of simulation methods. Fit and forecast performance is evaluated via an empirical application to paleoclimatic data. Our main conclusion is that the above generalization is able to model long memory as well as its classical competitor, the fractionally differenced Gegenbauer process, does. In addition, the GHW process does not require estimation of the memory parameter, simplifying the issue of how to disentangle long memory from a (moderately persistent) short memory component. This leads to a clear advantage of our formulation over the fractional long memory approach.
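
For orientation, the Whittle estimator referred to above is, in its generic form, the minimizer of a frequency-domain approximation to the Gaussian negative log-likelihood. Writing $$I_n(\lambda_j)$$ for the periodogram at the Fourier frequencies $$\lambda_j = 2\pi j/n$$ and $$f_\theta$$ for the candidate spectral density (here that of the GHW process, whose exact form is given in the paper), one takes

$$ \hat{\theta} \;=\; \arg\min_{\theta} \sum_{j=1}^{\lfloor (n-1)/2 \rfloor} \left\{ \log f_{\theta}(\lambda_j) + \frac{I_n(\lambda_j)}{f_{\theta}(\lambda_j)} \right\}. $$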


2021, Vol. 53 (1), pp. 81-106
Author(s): Christopher King

Abstract: A shared ledger is a record of transactions that can be updated by any member of a group of users. The notion of independent and consistent record-keeping in a shared ledger is important for blockchain and more generally for distributed ledger technologies. In this paper we analyze a stochastic model for the shared ledger known as the tangle, which was devised as the basis for the IOTA cryptocurrency. The model is a random directed acyclic graph, and its growth is described by a non-Markovian stochastic process. We first prove ergodicity of the stochastic process, and then derive a delay differential equation for the fluid model which describes the tangle at high arrival rate. We prove convergence in probability of the tangle process to the fluid model, and also prove global stability of the fluid model. The convergence proof relies on martingale techniques.
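
A minimal, stylized simulation can make the fluid behaviour of the tip count concrete. The Python sketch below assumes the standard uniform-random-tip-selection tangle with Poisson arrivals of rate lam and visibility delay h (each new transaction approves two tips chosen from the state it saw h time units earlier); it is a toy version for illustration only, not the paper's model in full, and the names and parameter values (lam, h, n_tx) are ours. In this regime the tip count is expected to hover near 2*lam*h.

import random

random.seed(0)
lam, h, n_tx = 20.0, 1.0, 3000   # arrival rate, visibility delay, number of transactions

txs = []    # each transaction: {"t": issue time, "approved_at": earliest approval time}
t = 0.0
for _ in range(n_tx):
    t += random.expovariate(lam)
    # Tips as seen at time t - h: issued by then and not yet approved by then.
    visible_tips = [x for x in txs if x["t"] <= t - h and x["approved_at"] > t - h]
    chosen = random.sample(visible_tips, 2) if len(visible_tips) >= 2 else visible_tips
    for x in chosen:
        x["approved_at"] = min(x["approved_at"], t)
    txs.append({"t": t, "approved_at": float("inf")})

final_tips = [x for x in txs if x["t"] <= t - h and x["approved_at"] > t - h]
print("visible tips at the end:", len(final_tips), " (rough fluid prediction:", 2 * lam * h, ")")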


Author(s): Dimitra Antonopoulou, Ľubomír Baňas, Robert Nürnberg, Andreas Prohl

Abstract: We consider the stochastic Cahn–Hilliard equation with additive noise term $$\varepsilon^\gamma g\,\dot{W}$$ ($$\gamma > 0$$) that scales with the interfacial width parameter $$\varepsilon$$. We verify strong error estimates for a gradient-flow-structure-inheriting time-implicit discretization, where $$\varepsilon^{-1}$$ only enters polynomially; the proof is based on higher-moment estimates for iterates, and a (discrete) spectral estimate for its deterministic counterpart. For $$\gamma$$ sufficiently large, convergence in probability of iterates towards the deterministic Hele–Shaw/Mullins–Sekerka problem in the sharp-interface limit $$\varepsilon \rightarrow 0$$ is shown. These convergence results are partly generalized to a fully discrete finite element based discretization. We complement the theoretical results by computational studies to provide practical evidence concerning the effect of noise (depending on its 'strength' $$\gamma$$) on the geometric evolution in the sharp-interface limit. For this purpose we compare the simulations with those from a fully discrete finite element numerical scheme for the (stochastic) Mullins–Sekerka problem. The computational results indicate that the limit for $$\gamma \ge 1$$ is the deterministic problem, and for $$\gamma = 0$$ we obtain agreement with a (new) stochastic version of the Mullins–Sekerka problem.
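
For readers less familiar with the model, the equation in question can be written, in a commonly used schematic form with a generic double-well potential $$F$$ (our statement; the precise assumptions are in the paper), as

$$ \mathrm{d}u = \Delta w\,\mathrm{d}t + \varepsilon^{\gamma} g\,\mathrm{d}W, \qquad w = -\varepsilon \Delta u + \varepsilon^{-1} F'(u), \qquad F(u) = \tfrac14 (u^2-1)^2, $$

so the noise intensity $$\varepsilon^{\gamma}$$ is tied to the interfacial width $$\varepsilon$$: the larger $$\gamma$$, the faster the noise vanishes in the sharp-interface limit $$\varepsilon \rightarrow 0$$, consistent with the deterministic Hele–Shaw/Mullins–Sekerka limit obtained for $$\gamma$$ sufficiently large.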


2021, Vol. 147 (3), pp. 553-578
Author(s): Dominic Breit, Alan Dodgson

Abstract: We study stochastic Navier–Stokes equations in two dimensions with respect to periodic boundary conditions. The equations are perturbed by a nonlinear multiplicative stochastic forcing with linear growth (in the velocity) driven by a cylindrical Wiener process. We establish convergence rates for a finite-element based space-time approximation with respect to convergence in probability (where the error is measured in the $$L^\infty_t L^2_x \cap L^2_t W^{1,2}_x$$-norm). Our main result provides linear convergence in space and convergence of order (almost) 1/2 in time. This improves earlier results from Carelli and Prohl (SIAM J Numer Anal 50(5):2467–2496, 2012), where the convergence rate in time is only (almost) 1/4. Our approach is based on a careful analysis of the pressure function using a stochastic pressure decomposition.
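
Schematically, the system under study is the two-dimensional stochastic Navier–Stokes system on the torus, which in a generic form (our notation; the exact assumptions on the noise coefficient are in the paper) reads

$$ \mathrm{d}u = \big[\nu \Delta u - (u\cdot\nabla)u - \nabla \pi\big]\,\mathrm{d}t + \Phi(u)\,\mathrm{d}W, \qquad \operatorname{div} u = 0, $$

with $$\Phi$$ of linear growth in the velocity $$u$$ and $$W$$ a cylindrical Wiener process. The quoted rates then bound the finite-element/time-stepping error for $$u$$ in the $$L^\infty_t L^2_x \cap L^2_t W^{1,2}_x$$-norm, in the sense of convergence in probability.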


Author(s): Giuseppe Cavaliere, Heino Bohn Nielsen, Anders Rahbek

While often simple to implement in practice, applying the bootstrap in econometric modeling of economic and financial time series requires establishing its validity. Establishing bootstrap asymptotic validity relies on verifying often nonstandard regularity conditions. In particular, bootstrap versions of classic convergence in probability and in distribution, and hence of laws of large numbers and central limit theorems, are critical ingredients. Crucially, these depend on the type of bootstrap applied (e.g., the wild or the independently and identically distributed (i.i.d.) bootstrap) and on the underlying econometric model and data. The regularity conditions, and their implications for possible improvements in (empirical) size and power of bootstrap-based tests relative to standard asymptotic tests, can be illustrated by simulations.
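
As a concrete, hedged illustration of the kind of bootstrap scheme involved, the following Python sketch runs a generic wild bootstrap with Rademacher multipliers for a one-sample mean test; it is a stylized example of ours, not one taken from the article, and the sample size and data-generating process are arbitrary.

import numpy as np

rng = np.random.default_rng(0)

def t_stat(y, mu0):
    # Studentized statistic for H0: E[y] = mu0.
    return np.sqrt(len(y)) * (y.mean() - mu0) / y.std(ddof=1)

# Arbitrary heteroskedastic-looking sample generated under the null (mean zero).
n = 200
y = rng.standard_t(df=5, size=n) * (1.0 + 0.5 * rng.random(n))
mu0 = 0.0
t_obs = t_stat(y, mu0)

# Wild bootstrap: recentre the data and flip signs with Rademacher multipliers,
# imposing the null in the bootstrap world.
B = 999
centred = y - y.mean()
t_boot = np.empty(B)
for b in range(B):
    v = rng.choice([-1.0, 1.0], size=n)
    t_boot[b] = t_stat(mu0 + centred * v, mu0)

p_value = (1 + np.sum(np.abs(t_boot) >= np.abs(t_obs))) / (B + 1)
print("observed t:", round(t_obs, 3), " wild-bootstrap p-value:", round(p_value, 3))

Bootstrap validity in this setting means that the bootstrap distribution of the studentized statistic converges (in probability) to the same limit as the sampling distribution of the original statistic, so the bootstrap p-value has the right size asymptotically.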


2020, Vol. 52 (2), pp. 491-522
Author(s): Guus Balkema, Natalia Nolde

Abstract: Large samples from a light-tailed distribution often have a well-defined shape. This paper examines the implications of the assumption that there is a limit shape. We show that the limit shape determines the upper quantiles for a large class of random variables. These variables may be described loosely as continuous homogeneous functionals of the underlying random vector. They play an important role in evaluating risk in a multivariate setting. The paper also looks at various coefficients of tail dependence and at the distribution of the scaled sample points for large samples. The paper assumes convergence in probability rather than almost sure convergence. This results in an elegant theory. In particular, there is a simple characterization of domains of attraction.
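
A quick numerical illustration of the limit-shape phenomenon in its best-known instance may help: for a standard bivariate Gaussian sample, the point cloud scaled by $$\sqrt{2 \log n}$$ fills the unit disc. The Python sketch below (our example, not taken from the paper) checks that the largest scaled norm settles near 1 as n grows, in the sense of convergence in probability.

import numpy as np

rng = np.random.default_rng(1)
for n in (10**3, 10**4, 10**5, 10**6):
    pts = rng.standard_normal((n, 2))
    # For a light-tailed (Gaussian) sample, the cloud scaled by sqrt(2 log n)
    # fills the unit disc, so the largest scaled norm should settle near 1.
    scaled_max = np.linalg.norm(pts, axis=1).max() / np.sqrt(2 * np.log(n))
    print(n, round(float(scaled_max), 3))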


Mathematics, 2020, Vol. 8 (4), p. 572
Author(s): Edmondo Trentin

A soft-constrained neural network for density estimation (SC-NN-4pdf) has recently been introduced to tackle the issues arising from the application of neural networks to density estimation problems (in particular, the satisfaction of the second Kolmogorov axiom). Although the SC-NN-4pdf has been shown to outperform parametric and non-parametric approaches (from both the machine learning and the statistics areas) over a variety of univariate and multivariate density estimation tasks, no clear rationale behind its performance has been put forward so far. Neither has there been any analysis of the fundamental theoretical properties of the SC-NN-4pdf. This paper narrows the gaps, delivering a formal statement of the class of density functions that can be modeled to any degree of precision by SC-NN-4pdfs, as well as a proof of asymptotic convergence in probability of the SC-NN-4pdf training algorithm under mild conditions for a popular class of neural architectures. These properties of the SC-NN-4pdf lay the groundwork for understanding the strong estimation capabilities that SC-NN-4pdfs have only exhibited empirically so far.
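
Although the paper's exact training objective is not reproduced here, the 'soft constraint' idea can be summarized schematically (our notation, with $$\lambda$$ a penalty weight, $$\ell$$ a data-fit loss evaluated at the training points, and the integral computed numerically over the support): alongside the data-fit term, the network output $$\hat{f}_{\theta}$$ is penalized for violating the normalization required by the second Kolmogorov axiom,

$$ \mathcal{L}(\theta) \;=\; \sum_{i=1}^{m} \ell\big(\hat{f}_{\theta}(x_i)\big) \;+\; \lambda \left( \int \hat{f}_{\theta}(x)\,\mathrm{d}x - 1 \right)^{2}. $$

The convergence-in-probability result stated in the paper then concerns the behaviour of the trained estimator as the sample size grows.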

