positive distributions
Recently Published Documents

TOTAL DOCUMENTS: 14 (five years: 2)
H-INDEX: 3 (five years: 0)

Symmetry, 2021, Vol 13 (5), pp. 908
Author(s): Perla Celis, Rolando de la Cruz, Claudio Fuentes, Héctor W. Gómez

We introduce a new class of distributions called the epsilon-positive family, which can be viewed as a generalization of the distributions with positive support. The construction of the epsilon-positive family is motivated by the ideas behind the generation of skew distributions using symmetric kernels. This new class of distributions has as special cases the exponential, Weibull, log-normal, log-logistic and gamma distributions, and it provides an alternative for analyzing reliability and survival data. An interesting feature of the epsilon-positive family is that it can be viewed as a finite scale mixture of positive distributions, facilitating the derivation and implementation of EM-type algorithms to obtain maximum likelihood estimates (MLE) with (un)censored data. We illustrate the flexibility of this family for analyzing censored and uncensored data using two real examples. One of them was previously discussed in the literature; the second consists of a new application to modeling recidivism data for a group of inmates released from Chilean prisons during 2007. The results show that this new family of distributions fits the data better than some common alternatives, such as the exponential distribution.
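To make the scale-mixture view concrete, here is a minimal sketch in Python. It assumes the equal-weight two-component form f_eps(x) = (1/2)[(1+eps)^(-1) f(x/(1+eps)) + (1-eps)^(-1) f(x/(1-eps))], which is one natural reading of the construction described in the abstract; the function names and the exponential kernel are ours, not the paper's.

```python
import numpy as np

def eps_positive_pdf(x, base_pdf, eps):
    # Assumed form: equal-weight two-component scale mixture of a
    # positive-support pdf, with scales (1 + eps) and (1 - eps), 0 <= eps < 1.
    s1, s2 = 1.0 + eps, 1.0 - eps
    return 0.5 * (base_pdf(x / s1) / s1 + base_pdf(x / s2) / s2)

def sample_eps_positive(base_sampler, eps, size, rng):
    # Draw from the mixture: pick a scale with probability 1/2, then rescale.
    scales = rng.choice([1.0 + eps, 1.0 - eps], size=size)
    return scales * base_sampler(size)

rng = np.random.default_rng(0)
expo_pdf = lambda x: np.exp(-x) * (x >= 0)        # exponential(1) kernel
draws = sample_eps_positive(lambda n: rng.exponential(size=n), 0.5, 100_000, rng)

x = np.linspace(0.0, 25.0, 5001)
print(np.trapz(eps_positive_pdf(x, expo_pdf, 0.5), x))  # ~1.0: valid density
print(draws.mean())                                     # ~1.0: E[S] * E[X0] = 1
```

Because the two scales average to one, the epsilon-exponential keeps the unit mean of its kernel while spreading mass toward both shorter and longer durations, which is what gives the family its extra flexibility for survival data.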


2020, Vol 14 (2), pp. 2600-2652
Author(s): Jan-Christian Hütter, Cheng Mao, Philippe Rigollet, Elina Robeva

2018, Vol 175, pp. 07037
Author(s): Lorenzo Luis Salcedo

Complex weights that are beyond a straightforward importance-sampling treatment, as required in Monte Carlo calculations, appear in physics; this is the well-known sign problem. The complex Langevin approach amounts to effectively constructing a positive distribution on the complexified manifold that reproduces the expectation values of the observables through their analytical extension. Here we discuss the direct construction of such positive distributions, paying attention to their localization on the complexified manifold. Explicit localized representations are obtained for complex probabilities defined on Abelian and non-Abelian groups. The viability and performance of a complex version of the heat bath method, based on such representations, are analyzed.
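The following toy illustration (our own example, not the paper's group-valued heat-bath construction) shows what a localized positive representation means in the simplest case: for the complex Gaussian weight rho(x) proportional to exp(-(x - ia)^2/2), sampling x from a real standard Gaussian and evaluating analytic observables at the shifted point x + ia reproduces the complex-weight expectation values.

```python
import numpy as np

a = 0.7                        # imaginary shift of the complex Gaussian weight
rng = np.random.default_rng(1)

# Positive representation localized on the line y = a: sample a real
# Gaussian and analytically continue the observable to x + i*a.
x = rng.standard_normal(200_000)
obs = lambda z: z**2           # analytic observable O(z) = z^2
mc = obs(x + 1j * a).mean()

# Direct quadrature with the complex weight rho(x) ~ exp(-(x - i a)^2 / 2).
t = np.linspace(-12, 12, 20001)
rho = np.exp(-(t - 1j * a) ** 2 / 2)
exact = np.trapz(rho * obs(t), t) / np.trapz(rho, t)

print(mc)      # ~ 1 - a^2, up to Monte Carlo noise
print(exact)   # ~ 1 - a^2
```

Both numbers come out near 1 - a^2 = 0.51, because shifting the integration contour turns the complex weight into an ordinary Gaussian; the delta-like concentration on y = a is the kind of localization the abstract refers to.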


2014, Vol 24 (3)
Author(s): Leon Cohen

We examine the construction of joint probabilities for non-commuting observables. We show that there are indications in standard quantum mechanics that imply the existence of conditional expectation values, which in turn implies the existence of a joint distribution. We also argue that the uncertainty principle has no bearing on the existence of joint distributions but only constrains the marginal distributions. In addition, we show that within classical probability theory there are mathematical quantities that are similar to quantum mechanical wave functions. This is shown by generalising a theorem of Khinchin on the necessary and sufficient conditions for a function to be a characteristic function.
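As a numerical sketch of the Khinchin-type construction alluded to above (our setup, assuming the standard statement that phi(t) = integral of g(theta) * conj(g(theta - t)) d(theta) is a characteristic function whenever g has unit L2 norm, since the Fourier transform of phi is |g-hat|^2 >= 0, exactly the role a wave function plays in quantum mechanics):

```python
import numpy as np

# Discretize an arbitrary L2 function g and normalize it like a wave function.
n, dth = 4096, 0.01
theta = (np.arange(n) - n // 2) * dth
g = np.exp(-theta**2) * np.exp(1j * 3 * theta)   # any square-integrable g works
g /= np.sqrt(np.sum(np.abs(g) ** 2) * dth)       # enforce  integral |g|^2 = 1

# Autocorrelation via FFT gives phi on the grid of lags t_k = k * dth.
G = np.fft.fft(g)
phi = np.fft.ifft(np.abs(G) ** 2) * dth

print(abs(phi[0]))                # phi(0) = 1, as a characteristic fn. requires
spec = np.fft.fft(phi).real       # proportional to |g-hat|^2: a density
print(spec.min() > -1e-12)        # True: nonnegative spectrum (Bochner)
```

The check confirms phi(0) = 1 and that phi is positive-definite, so |g-hat|^2 is the probability density whose characteristic function is phi, mirroring the quantum-mechanical relation between a wave function and its momentum distribution.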


Author(s): Radu Mutihac

Numerical methods commonly employed to convert experimental data into interpretable images and spectra rely either on straightforward transforms, such as the Fourier transform (FT), or on more elaborate emerging classes of transforms, like wavelets (Meyer, 1993; Mallat, 2000), wedgelets (Donoho, 1996), ridgelets (Candes, 1998), and so forth. Yet experimental data are incomplete and noisy due to the limiting constraints of digital data recording and the finite acquisition time. The pitfall of most transforms is that imperfect data are transferred directly into the transform domain along with the signals of interest. The traditional approach to data processing in the transform domain is to ignore any imperfections in the data, set unmeasured data points to zero, and then proceed as if the data were perfect. By contrast, the maximum entropy (ME) principle proceeds from the frequency domain to the space (time) domain. ME techniques are used in data analysis mostly to reconstruct positive distributions, such as images and spectra, from blurred, noisy, and/or corrupted data. ME methods may be developed on axiomatic foundations based on the probability calculus, which has a special status as the only internally consistent language of inference (Skilling, 1989; Daniell, 1994). Within this framework, positive distributions ought to be assigned probabilities derived from their entropy.

Bayesian statistics provides a unifying and self-consistent framework for data modeling. Bayesian modeling deals naturally with uncertainty in the data, which is handled by marginalization when predicting other variables. Data overfitting and poor generalization are alleviated by incorporating the principle of Occam's razor, which controls model complexity and sets a preference for simple models (MacKay, 1992). Bayesian inference satisfies the likelihood principle (Berger, 1985) in the sense that inferences depend only on the probabilities assigned to the data that were actually measured, not on the properties of admissible data that were never acquired.

Artificial neural networks (ANNs) can be conceptualized as highly flexible non-linear models for multivariate regression and multiclass classification. However, over-flexible ANNs may discover non-existent correlations in data. Bayesian decision theory provides the means to infer how much model flexibility is warranted by the data, and it suppresses the tendency to find spurious structure in data. Any probabilistic treatment of images depends on knowledge of the point spread function (PSF) of the imaging equipment and on assumptions about noise, image statistics, and prior knowledge. By contrast, the neural approach only requires relevant training examples in which the true scenes are known, irrespective of our inability or bias to express prior distributions. Trained ANNs are a much faster means of image restoration, especially in the case of strong implicit priors in the data, nonlinearity, and nonstationarity. The most remarkable work in Bayesian neural modeling was carried out by MacKay (1992, 2003) and Neal (1994, 1996), who set up the theoretical framework of Bayesian learning for adaptive models.
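A minimal sketch of ME reconstruction of a positive spectrum from blurred, noisy data, in the spirit described above: the Gaussian blur kernel, the noise level, and the fixed regularization weight alpha are illustrative choices of ours, not the chapter's, and the entropy term is the standard Skilling form relative to a flat model m.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d
from scipy.optimize import minimize

rng = np.random.default_rng(2)

# Ground truth: a positive "spectrum" with two peaks; data are blurred + noisy.
n = 100
grid = np.arange(n)
truth = np.exp(-0.5 * ((grid - 30) / 3) ** 2) \
      + 0.6 * np.exp(-0.5 * ((grid - 65) / 5) ** 2)
blur = lambda f: gaussian_filter1d(f, sigma=4, mode="constant")
sigma_noise = 0.02
data = blur(truth) + sigma_noise * rng.standard_normal(n)

m = np.full(n, truth.sum() / n)   # flat prior model for the entropy
alpha = 1.0                       # regularization weight (hand-picked here)

def objective(u):
    f = np.exp(u)                 # positivity enforced by the parametrization
    chi2 = np.sum((blur(f) - data) ** 2) / sigma_noise**2
    entropy = np.sum(f - m - f * np.log(f / m))   # Skilling's entropy, max at f = m
    return 0.5 * chi2 - alpha * entropy           # trade misfit against entropy

res = minimize(objective, np.log(m), method="L-BFGS-B")
f_me = np.exp(res.x)
misfit = float(np.sum((blur(f_me) - data) ** 2) / sigma_noise**2)
print(misfit)   # data misfit after reconstruction (order n when alpha is well chosen)
```

In a full ME treatment, alpha is not hand-picked but inferred (e.g., by the Bayesian evidence, as in the MacKay line of work cited above); the sketch only shows the core trade-off between data misfit and entropy of the positive reconstruction.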


1995, Vol 47 (5), pp. 749-759
Author(s): Yu. G. Kondrat'ev, L. Streit, W. Westerkamp
