common probability
Recently Published Documents


TOTAL DOCUMENTS: 37 (five years: 8)
H-INDEX: 5 (five years: 1)

2021 · Vol 11 (1) · pp. 225–242
Author(s): Peter Bugiel, Stanisław Wędrychowicz, Beata Rzepka

Abstract: Asymptotic properties of the sequences (a) $\{P^{j}\}_{j=1}^{\infty}$ and (b) $\{j^{-1}\sum_{i=0}^{j-1}P^{i}\}_{j=1}^{\infty}$ are studied for $g \in G = \{f \in L^{1}(I) : f \geq 0 \text{ and } \|f\| = 1\}$, where $P : L^{1}(I) \to L^{1}(I)$ is a Markov operator defined by $Pf := \int P_{y}f\,dp(y)$ for $f \in L^{1}$; $\{P_{y}\}_{y \in Y}$ is the family of Frobenius–Perron operators associated with a family $\{\varphi_{y}\}_{y \in Y}$ of nonsingular Markov maps defined on a subset $I \subseteq \mathbb{R}^{d}$; and the index $y$ runs over a probability space $(Y, \Sigma(Y), p)$. The asymptotic properties of the sequences (a) and (b) of the Markov operator $P$ are closely connected with those of the sequence of random vectors $x_{j} = \varphi_{\xi_{j}}(x_{j-1})$ for $j = 1, 2, \ldots$, where $\{\xi_{j}\}_{j=1}^{\infty}$ is a sequence of $Y$-valued independent random elements with common probability distribution $p$. An operator-theoretic analogue of Rényi's condition is introduced for the family $\{P_{y}\}_{y \in Y}$ of Frobenius–Perron operators. It is proved that, under some additional assumptions, this condition implies the $L^{1}$-convergence of the sequences (a) and (b) to a unique $g_{0} \in G$. The general result is applied to some families $\{\varphi_{y}\}_{y \in Y}$ of smooth Markov maps in $\mathbb{R}^{d}$.
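The random iteration $x_{j} = \varphi_{\xi_{j}}(x_{j-1})$ can be simulated directly. The sketch below uses a hypothetical family of tent-type maps on $I = [0,1]$ indexed by a finite set $Y$ (the specific maps, index set, and distribution are illustrative assumptions, not taken from the paper); the empirical density of the trajectory approximates the invariant density $g_{0}$ to which the Cesàro averages converge.

```python
import random

def make_tent(a):
    """Tent-type Markov map on [0, 1] with peak at a (0 < a < 1).

    Hypothetical example family; the paper treats general smooth
    Markov maps in R^d.
    """
    def phi(x):
        return x / a if x <= a else (1.0 - x) / (1.0 - a)
    return phi

random.seed(0)
Y = [0.3, 0.5, 0.7]          # finite index set (assumption)
p = [0.2, 0.5, 0.3]          # common distribution p of the xi_j
maps = {y: make_tent(y) for y in Y}

# Simulate x_j = phi_{xi_j}(x_{j-1}) with i.i.d. xi_j ~ p.
x = 0.123
trajectory = []
for _ in range(100_000):
    y = random.choices(Y, weights=p, k=1)[0]
    x = maps[y](x)
    trajectory.append(x)

# Histogram of the trajectory: an empirical stand-in for g0.
bins = 20
hist = [0] * bins
for t in trajectory:
    hist[min(int(t * bins), bins - 1)] += 1
# Scale so the piecewise-constant density integrates to 1.
density = [bins * h / len(trajectory) for h in hist]
```

The `density` values form a step-function approximation of the invariant density; under the paper's Rényi-type condition, refining the simulation corresponds to the $L^{1}$-convergence of the averaged iterates.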


2021 · Vol 53 (1) · pp. 133–161
Author(s): Krzysztof Burdzy, Soumik Pal

Abstract: We prove the sharp bound for the probability that two experts who have access to different information, represented by different $\sigma$-fields, will give radically different estimates of the probability of an event. This is relevant when one combines predictions from various experts in a common probability space to obtain an aggregated forecast. The optimizer for the bound is explicitly described. This paper was originally titled ‘Contradictory predictions’.
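The setting can be made concrete on a small common probability space. In the toy model below (entirely illustrative; it does not reproduce the paper's sharp bound), each expert observes one coordinate of two fair coin tosses, so their $\sigma$-fields are generated by different observations, and each reports the conditional probability of the event given what they saw.

```python
from itertools import product
from fractions import Fraction

# Common probability space: two fair coin tosses, uniform measure.
omega = list(product("HT", repeat=2))
prob = {w: Fraction(1, 4) for w in omega}

# Event A: at least one head.
A = {w for w in omega if "H" in w}

def estimate(w, coord):
    """P(A | expert's observation), where the expert sees coordinate
    `coord` of the outcome w.  The expert's sigma-field is generated
    by that coordinate."""
    cell = [v for v in omega if v[coord] == w[coord]]
    return sum(prob[v] for v in cell if v in A) / sum(prob[v] for v in cell)

# Largest disagreement between the two experts over all outcomes.
max_gap = max(abs(estimate(w, 0) - estimate(w, 1)) for w in omega)
```

Here the experts disagree by at most 1/2: seeing a head pins $P(A)$ to 1, while seeing a tail leaves it at 1/2. The paper's result bounds how large such disagreements can be in general.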


Author(s): Wenzhang Zhuge, Chenping Hou, Xinwang Liu, Hong Tao, Dongyun Yi

Incomplete multi-view clustering has attracted attention from diverse fields. Most existing methods factorize the data to learn a unified representation linearly. Their performance may degrade when the relations between the unified representation and the data of different views are nonlinear. Moreover, they need post-processing on the unified representations to extract the clustering indicators, which separates the consensus learning from the subsequent clustering. To address these issues, in this paper we propose a Simultaneous Representation Learning and Clustering (SRLC) method. Concretely, SRLC constructs similarity matrices to measure the relations between pairs of instances, and simultaneously learns low-dimensional representations of the present instances on each view and a common probability label matrix. Thus, nonlinear information can be reflected by these representations, and the clustering results can be obtained from the label matrix directly. An efficient iterative algorithm with guaranteed convergence is presented for the optimization. Experiments on several datasets demonstrate the advantages of the proposed approach.
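The key idea of reading cluster indicators directly from a row-stochastic probability label matrix can be sketched with a minimal alternating scheme. The code below is not the authors' SRLC algorithm (their updates involve similarity matrices across incomplete views); it only illustrates, on toy single-view data, how a label matrix whose rows lie on the probability simplex yields cluster assignments by a per-row argmax with no post-processing step.

```python
import math
import random

random.seed(1)

def soft_labels(X, centers, beta=5.0):
    """Row-stochastic label matrix: F[i][c] proportional to
    exp(-beta * ||x_i - m_c||^2); each row sums to 1."""
    F = []
    for x in X:
        w = [math.exp(-beta * sum((xi - mi) ** 2 for xi, mi in zip(x, m)))
             for m in centers]
        s = sum(w)
        F.append([v / s for v in w])
    return F

def update_centers(X, F, k):
    """Weighted means of the data under the current soft assignments."""
    d = len(X[0])
    centers = []
    for c in range(k):
        wsum = sum(F[i][c] for i in range(len(X)))
        centers.append([sum(F[i][c] * X[i][j] for i in range(len(X))) / wsum
                        for j in range(d)])
    return centers

# Two well-separated 2-D blobs (toy data).
X = [[random.gauss(0, 0.1), random.gauss(0, 0.1)] for _ in range(20)] + \
    [[random.gauss(3, 0.1), random.gauss(3, 0.1)] for _ in range(20)]

centers = [X[0], X[-1]]          # crude initialization
for _ in range(10):              # alternate label-step and center-step
    F = soft_labels(X, centers)
    centers = update_centers(X, F, k=2)

# Clustering indicators come straight from the label matrix:
# argmax per row, with no separate post-processing (e.g. k-means).
labels = [row.index(max(row)) for row in F]
```

The contrast with factorization-based methods is the last line: because `F` is already a probability label matrix, the consensus representation and the clustering indicators are produced by the same optimization loop.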

