# Law of Large Numbers: Recently Published Documents

## TOTAL DOCUMENTS

1347
(FIVE YEARS 167)

## H-INDEX

43
(FIVE YEARS 3)

Author(s):
S. Bowong ◽
A. Emakoua ◽
E. Pardoux
Keyword(s):

2022 ◽
Vol 77 (1) ◽
Author(s):
Karol Baron ◽
Rafał Kapica
Keyword(s):

Abstract: Assume $(\Omega, \mathscr{A}, P)$ is a probability space, $X$ is a compact metric space with the $\sigma$-algebra $\mathscr{B}$ of all its Borel subsets, and $f: X \times \Omega \rightarrow X$ is $\mathscr{B} \otimes \mathscr{A}$-measurable and contractive in mean. We consider the sequence of iterates of $f$ defined on $X \times \Omega^{\mathbb{N}}$ by $f^0(x, \omega) = x$ and $f^n(x, \omega) = f\big(f^{n-1}(x, \omega), \omega_n\big)$ for $n \in \mathbb{N}$, and its weak limit $\pi$. We show that if $\psi: X \rightarrow \mathbb{R}$ is continuous, then for every $x \in X$ the sequence $\big(\frac{1}{n}\sum_{k=1}^n \psi\big(f^k(x, \cdot)\big)\big)_{n \in \mathbb{N}}$ converges almost surely to $\int_X \psi \, d\pi$. In fact, we are focusing on the case where the metric space is complete and separable.
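The convergence asserted above can be checked numerically on a toy instance. The sketch below is an illustrative assumption, not the paper's example: it takes $X = [0, 1]$ and $f(x, \omega) = (x + \omega)/2$ with $\omega$ uniform on $[0, 1]$, which is contractive in mean with Lipschitz constant $1/2$, and estimates the Cesàro average of a continuous $\psi$ along one trajectory of iterates:

```python
import numpy as np

# Toy instance of the setting above (an illustrative assumption): X = [0, 1],
# f(x, omega) = (x + omega) / 2 with omega ~ Uniform(0, 1). The map is
# contractive in mean, and the stationary law pi of the iterates has mean 1/2,
# so for psi(x) = x the Cesaro averages (1/n) sum_{k=1..n} psi(f^k(x, .))
# should converge almost surely to 1/2.

rng = np.random.default_rng(0)

def iterate(x0, n, rng):
    """Trajectory f^1(x0, .), ..., f^n(x0, .) along one omega-sequence."""
    xs = np.empty(n)
    x = x0
    for k in range(n):
        x = (x + rng.random()) / 2.0  # one application of f
        xs[k] = x
    return xs

psi = lambda x: x  # a continuous test function
avg = psi(iterate(x0=0.0, n=200_000, rng=rng)).mean()
# avg lands close to the stationary mean 1/2.
```

Changing `x0` leaves the limit unchanged, consistent with the theorem holding for every $x \in X$.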

2022 ◽
Vol 5 (1) ◽
pp. 63-88
Author(s):
Gizem Kodak ◽
Gökhan Kara ◽
Murat Yıldız ◽
Aydın Şalcı
Keyword(s):

In this study, maritime accidents that occurred in the Strait of Istanbul over a 10-year period were evaluated in terms of ship-based risk factors. The frequency analysis was performed using the R programming language in RStudio. In this context, the accident data obtained from the Ministry of Transport and Infrastructure Main Search and Rescue Coordination Center were matched with the ship information accessed from the Türk Loydu database. Thus, the ship-origin risk factors to be used within the scope of the study were determined, and 10 different criteria were included in the analysis. These are ship length, ship breadth, ship draught, ship age, ship DWT, turning point, turning radius, L/B ratio, B/T ratio, and number of propellers. The process of creating the data set was completed by spatially filtering the data and classifying the ship-based causes of accidents. The variables were examined with frequency analysis from the perspective of the Law of Large Numbers. With the results obtained, optimum characteristics based on ship-origin risk factors were revealed for each ship type that will pass through the Strait.
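As a rough sketch of the frequency-analysis step (the records and field names below are hypothetical placeholders, since the Türk Loydu-matched data set is not reproduced here, and the study itself used R rather than Python), one of the ten criteria can be tabulated like this:

```python
from collections import Counter

# Hypothetical accident records -- placeholder values only; the real data set
# (Main Search and Rescue Coordination Center records matched with Turk Loydu
# ship particulars) is not public here. Field names mirror two of the ten
# criteria listed in the abstract.
accidents = [
    {"ship_type": "tanker",  "length_m": 180},
    {"ship_type": "cargo",   "length_m": 120},
    {"ship_type": "tanker",  "length_m": 200},
    {"ship_type": "fishing", "length_m": 25},
    {"ship_type": "cargo",   "length_m": 140},
]

def frequency_table(records, key, bins=None):
    """Frequency analysis of one criterion, optionally binned into ranges."""
    if bins is None:
        return Counter(r[key] for r in records)
    def bucket(value):
        for lo, hi in bins:
            if lo <= value < hi:
                return f"{lo}-{hi}"
        return f">={bins[-1][1]}"
    return Counter(bucket(r[key]) for r in records)

by_type = frequency_table(accidents, "ship_type")
by_length = frequency_table(accidents, "length_m",
                            bins=[(0, 100), (100, 150), (150, 250)])
```

With enough accident records, such tables are what lets the Law of Large Numbers justify reading the observed frequencies as estimates of underlying accident probabilities per criterion.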

2021 ◽
Author(s):
Philip Naveen
Keyword(s):

Deep-learning models are trained using backpropagation, and the activation function within hidden layers is a critical component in minimizing loss in deep neural networks. The Rectified Linear Unit (ReLU) has been the dominant activation function for the past decade. Swish and Mish are newer activation functions that have been shown to yield better results than ReLU under specific circumstances. Phish is a novel activation function proposed here. It is a composite function defined as f(x) = x·tanh(GELU(x)), with no apparent discontinuities in its derivative on the domain observed. Four generalized networks were constructed using Phish, Swish, Sigmoid, and tanh, with SoftMax as the output function. Using images from the MNIST and CIFAR-10 databanks, these networks were trained to minimize sparse categorical cross-entropy. A large-scale cross-validation was simulated using stochastic Markov chains to account for the law of large numbers for the probability values. Statistical tests support the research hypothesis that Phish can outperform the other activation functions in classification. Future experiments would involve testing Phish in unsupervised learning algorithms and comparing it to more activation functions.
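For reference, the proposed composite function can be written down directly. This is a minimal sketch assuming the common tanh approximation of GELU, which the abstract does not specify:

```python
import numpy as np

# Sketch of the Phish activation from the abstract, f(x) = x * tanh(GELU(x)).
# Assumption: GELU is taken in its widely used tanh approximation,
#   GELU(x) ~= 0.5 * x * (1 + tanh(sqrt(2/pi) * (x + 0.044715 * x**3))),
# since the abstract does not say which GELU variant was used.

def gelu(x):
    return 0.5 * x * (1.0 + np.tanh(np.sqrt(2.0 / np.pi) * (x + 0.044715 * x**3)))

def phish(x):
    return x * np.tanh(gelu(x))

# phish(0) = 0; for large positive x, GELU(x) -> x and tanh -> 1, so phish
# approaches the identity, much like ReLU and Swish in that regime, while
# staying smooth through the origin.
```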

2021 ◽
pp. 1-3
Author(s):
Calvin Wooyoung Chin
Keyword(s):

Author(s):
Anna Erschler ◽
Tianyi Zheng
Keyword(s):

Abstract: We prove the law of large numbers for the drift of random walks on the two-dimensional lamplighter group, under the assumption that the random walk has a finite $(2+\epsilon)$-moment. This result is in contrast with classical examples of abelian groups, where the displacement after $n$ steps, normalised by its mean, does not concentrate, and the limiting distribution of the normalised $n$-step displacement admits a density whose support is $[0, \infty)$. We study further examples of groups, some with random walks satisfying the LLN for drift and others where such a concentration phenomenon does not hold, and we study the relation of this property to the asymptotic geometry of groups.
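The "classical abelian" side of the contrast is easy to see numerically. The simulation below is an illustration under stated assumptions, not the paper's computation: for the simple random walk on $\mathbb{Z}^2$ it estimates the spread of the normalised displacement $|S_n| / \mathbb{E}|S_n|$ (Euclidean norm as a stand-in for the word metric), which stays bounded away from zero however large $n$ is:

```python
import numpy as np

# Simple random walk on the abelian group Z^2 (illustrative assumption:
# Euclidean displacement, uniform nearest-neighbour steps). The normalised
# displacement |S_n| / E|S_n| does NOT concentrate: its standard deviation
# stays near the non-zero value of the limiting 2D-Gaussian-norm ratio,
# unlike the lamplighter drift, which the paper shows obeys an LLN.

rng = np.random.default_rng(1)
steps = np.array([(1, 0), (-1, 0), (0, 1), (0, -1)])

def displacements(n, trials, rng):
    """Euclidean |S_n| for `trials` independent n-step walks on Z^2."""
    idx = rng.integers(0, 4, size=(trials, n))
    endpoints = steps[idx].sum(axis=1)       # S_n for each trial
    return np.linalg.norm(endpoints, axis=1)

d = displacements(n=5_000, trials=1_000, rng=rng)
ratio = d / d.mean()      # empirical |S_n| / E|S_n|
spread = ratio.std()      # stays around 0.5, not shrinking with n
```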

2021 ◽
Vol 4 ◽
pp. 1659-1703
Author(s):
Michele Ancona ◽
Thomas Letendre
Keyword(s):

Author(s):
Fred Espen Benth ◽
Dennis Schroers ◽
Almut E.D. Veraart
Keyword(s):

2021 ◽
Vol 2021 ◽
pp. 1-5
Author(s):
Stefan Tappe
Keyword(s):

We provide a permutation invariant version of the strong law of large numbers for exchangeable sequences of random variables. The proof consists of a combination of the Komlós–Berkes theorem, the usual strong law of large numbers for exchangeable sequences, and de Finetti’s theorem.
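A small numerical illustration of both ingredients (not part of the proof; the latent parameter and sample size are arbitrary choices): an exchangeable sequence built via de Finetti's representation, whose averages converge to the conditional mean and are unchanged by permuting the terms:

```python
import numpy as np

# de Finetti-style construction of an exchangeable (but not i.i.d.) sequence:
# draw a latent Theta once, then Bernoulli(Theta) draws given Theta.
# The SLLN for exchangeable sequences gives x-bar_n -> Theta almost surely;
# permutation invariance holds because each n-term average is a symmetric
# function of x_1, ..., x_n.
rng = np.random.default_rng(2)

theta = rng.uniform()                            # latent de Finetti parameter
x = (rng.random(100_000) < theta).astype(float)  # exchangeable 0/1 sequence

avg = x.mean()                                   # close to theta

perm = rng.permutation(x.size)
avg_permuted = x[perm].mean()                    # identical to avg
```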