Joint Probability Distributions
Recently Published Documents


TOTAL DOCUMENTS: 93 (FIVE YEARS: 15)

H-INDEX: 15 (FIVE YEARS: 1)

2021 ◽  
pp. 152700252110558
Author(s):  
Franklin G. Mixon ◽  
Richard J. Cebula

Prior research uses the collapse of Soviet-style communism in 1991 as a de facto experimental framework within which to examine the impact of prospective benefits on the motivation of athletes to succeed in the Olympic Games. Prior to the collapse, successful Soviet Bloc Olympians were provided extraordinary living conditions and lifestyles. These rewards evaporated with the demise of the Soviet Union in 1991, subsequently resulting in relatively poorer Olympic performances by Soviet Bloc athletes. The current study extends earlier work by investigating the impact of appropriability on the supply of innovation, examining the frequency of eponymous skills in women's gymnastics before and during the transition to a new market-based economic order. Our central hypothesis is that following the dissolution of the communist governments of the Soviet Bloc and its satellites, the supply of innovation in the form of eponymous skills in women's gymnastics from these countries has fallen. Frequency distributions of eponymous skills in women's gymnastics both prior to and after the dissolution of these communist regimes support this hypothesis, as do results from goodness-of-fit tests and stochastic dominance analysis of joint probability distributions.
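
A minimal sketch of the two statistical checks mentioned above, applied to hypothetical pre- and post-dissolution counts of new eponymous skills; the data and year ranges below are illustrative assumptions, not figures from the study:

```python
import numpy as np
from scipy import stats

# Hypothetical yearly counts of new eponymous skills originated by
# Soviet Bloc gymnasts (illustrative values, not data from the paper).
pre_dissolution  = np.array([4, 6, 5, 7, 3, 6])   # e.g., 1985-1990
post_dissolution = np.array([2, 1, 3, 0, 2, 1])   # e.g., 1992-1997

# Goodness-of-fit: do post-dissolution counts still follow the
# pre-dissolution frequency distribution (rescaled to the same total)?
expected = pre_dissolution / pre_dissolution.sum() * post_dissolution.sum()
chi2, p = stats.chisquare(f_obs=post_dissolution, f_exp=expected)
print(f"chi-square = {chi2:.2f}, p = {p:.4f}")

# First-order stochastic dominance check on the empirical CDFs: if the
# pre-era CDF lies everywhere at or below the post-era CDF, the pre-era
# distribution dominates (stochastically larger innovation counts).
grid = np.arange(0, max(pre_dissolution.max(), post_dissolution.max()) + 1)
cdf_pre  = np.array([(pre_dissolution  <= g).mean() for g in grid])
cdf_post = np.array([(post_dissolution <= g).mean() for g in grid])
print("pre-era dominates post-era:", bool(np.all(cdf_pre <= cdf_post)))
```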


2020 ◽  
Vol 29 (08) ◽  
pp. 2050060
Author(s):  
M. Gazdzicki ◽  
M. I. Gorenstein ◽  
O. Savchuk ◽  
L. Tinti

Properties of basic statistical ensembles in the Cell Model are discussed. The simplest version of the model, with a fixed total number of particles, is considered. The microcanonical ensembles of distinguishable and indistinguishable particles, with and without a limit on the maximum number of particles in a single cell, are discussed. The joint probability distributions of particle multiplicities in cells are derived for the different ensembles, and their second moments are calculated. Results in the infinite-volume limit are also obtained. These results resemble those in the statistical physics of bosons, fermions, and boltzmannions.
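
To make the ensemble comparison concrete, here is a brute-force sketch (not the paper's derivation) that enumerates the microstates of a small system, builds the joint occupation distribution under each counting rule, and computes a second moment; the weighting conventions are standard textbook assumptions:

```python
import itertools
from math import factorial

def cell_model_joint(N, M, distinguishable=True, max_per_cell=None):
    """Joint probability P(n_1, ..., n_M) of cell occupation numbers for a
    microcanonical Cell Model with N particles in M equal cells.
    Distinguishable particles carry multinomial weights (boltzmannion-like);
    indistinguishable particles weight each composition equally (boson-like);
    max_per_cell imposes a fermion-like occupancy limit."""
    weights = {}
    for occ in itertools.product(range(N + 1), repeat=M):
        if sum(occ) != N:
            continue
        if max_per_cell is not None and max(occ) > max_per_cell:
            continue
        if distinguishable:
            w = factorial(N)
            for n in occ:
                w //= factorial(n)
        else:
            w = 1
        weights[occ] = w
    Z = sum(weights.values())
    return {occ: w / Z for occ, w in weights.items()}

# Second moments <n1^2> for N = 4 particles in M = 2 cells:
for label, kwargs in [("distinguishable", {"distinguishable": True}),
                      ("indistinguishable", {"distinguishable": False}),
                      ("limit 3 per cell", {"distinguishable": False,
                                            "max_per_cell": 3})]:
    joint = cell_model_joint(4, 2, **kwargs)
    m2 = sum(occ[0] ** 2 * p for occ, p in joint.items())
    print(f"{label:>18}: <n1^2> = {m2:.3f}")
```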


2020 ◽  
Author(s):  
John Ferguson ◽  
Maurice O'Connell ◽  
Martin O'Donnell

Abstract Background: Eide and Gefeller introduced the concepts of sequential and average attributable fractions as methods to partition disease risk among differing exposures. In particular, sequential attributable fractions are interpreted in terms of the incremental reduction in disease prevalence associated with removing a particular risk factor from the population, after other risk factors have already been removed. Both concepts are clearly causal entities, but they are not usually estimated within a causal inference framework. Methods: We propose causal definitions of sequential and average attributable fractions using the potential outcomes framework. To estimate these quantities in practice, we model exposure-exposure and exposure-disease interrelationships using a causal Bayesian network, assuming no unobserved variables. This allows us to model not only the direct impact of removing a risk factor on disease, but also the indirect impact through its effect on the prevalence of causally downstream risk factors, which is typically ignored when calculating sequential and average attributable fractions. The procedure for calculating sequential attributable fractions involves repeated applications of Pearl's do-operator over a fitted Bayesian network, followed by simulation from the resulting joint probability distributions. Results: The methods are applied to the INTERSTROKE study, which was designed to quantify the disease burden attributable to the major risk factors for stroke. The resulting sequential and average population attributable fractions (PAFs) are compared with results from a prior approach to estimating sequential PAFs, which uses a single logistic model and does not properly account for differing causal pathways. Conclusions: In contrast to estimation using a single regression model, the proposed approaches allow consistent estimation of sequential, joint, and average PAFs under general causal structures.
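
As a rough illustration of the estimation idea (not the authors' code, and not the INTERSTROKE data), the sketch below encodes a toy causal network smoking -> hypertension -> stroke with made-up probabilities, emulates Pearl's do-operator by forcing a risk factor off while letting downstream nodes respond, and computes a sequential PAF by Monte Carlo:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200_000

def stroke_prevalence(do_smoking=None, do_hypertension=None):
    """Simulate a toy causal Bayesian network smoking -> hypertension -> stroke
    (all probabilities are hypothetical). Passing do_* = 0 emulates Pearl's
    do-operator: the node is forced off regardless of its parents, and
    causally downstream nodes respond to the intervention."""
    smoking = (rng.random(N) < 0.30) if do_smoking is None \
        else np.full(N, bool(do_smoking))
    p_hyp = np.where(smoking, 0.45, 0.25)
    hypertension = (rng.random(N) < p_hyp) if do_hypertension is None \
        else np.full(N, bool(do_hypertension))
    p_stroke = 0.02 + 0.03 * smoking + 0.05 * hypertension
    return (rng.random(N) < p_stroke).mean()

p_obs = stroke_prevalence()
p_no_smoke = stroke_prevalence(do_smoking=0)                    # remove smoking
p_neither = stroke_prevalence(do_smoking=0, do_hypertension=0)  # then hypertension

paf_smoking = (p_obs - p_no_smoke) / p_obs
seq_paf_hypertension = (p_no_smoke - p_neither) / p_obs
print(f"PAF(smoking) = {paf_smoking:.3f}")
print(f"sequential PAF(hypertension | smoking removed) = {seq_paf_hypertension:.3f}")
```

Note that forcing smoking off also lowers hypertension prevalence through the network; this is precisely the indirect pathway that, per the abstract, a single-regression approach fails to capture.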


Entropy ◽  
2020 ◽  
Vol 22 (3) ◽  
pp. 357 ◽  
Author(s):  
Nicholas Carrara ◽  
Kevin Vanslette

Using first principles from inference, we design a set of functionals for the purposes of ranking joint probability distributions with respect to their correlations. Starting with a general functional, we impose its desired behavior through the Principle of Constant Correlations (PCC), which constrains the correlation functional to behave in a consistent way under statistically independent inferential transformations. The PCC guides us in choosing the appropriate design criteria for constructing the desired functionals. Since the derivations depend on a choice of partitioning the variable space into n disjoint subspaces, the general functional we design is the n-partite information (NPI), of which the total correlation and mutual information are special cases. These functionals are thus found to be uniquely capable of determining whether a certain class of inferential transformations, ρ →∗ ρ′, preserve, destroy, or create correlations. This provides conceptual clarity by ruling out other possible global correlation quantifiers. Finally, the derivation and results allow us to quantify non-binary notions of statistical sufficiency. Our results express what percentage of the correlations is preserved under a given inferential transformation or variable mapping.
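
As a concrete special case of such a functional, the sketch below computes the total correlation (the NPI with each variable in its own partition, reducing to the mutual information for a bipartition) of small discrete joint distributions; the example distributions are illustrative:

```python
import numpy as np

def total_correlation(p):
    """Total correlation TC(X1,...,Xk) = sum_i H(X_i) - H(X1,...,Xk) for a
    joint pmf given as a k-dimensional array. For a bipartition (k = 2)
    this is exactly the mutual information, a special case of the NPI."""
    p = np.asarray(p, dtype=float)
    H = lambda q: -np.sum(q[q > 0] * np.log(q[q > 0]))  # Shannon entropy, nats
    marginal_H = sum(
        H(p.sum(axis=tuple(j for j in range(p.ndim) if j != i)))
        for i in range(p.ndim))
    return marginal_H - H(p.ravel())

# Perfectly correlated bits: TC = H(X) = log 2.
p_corr = np.array([[0.5, 0.0], [0.0, 0.5]])
# Independent bits: TC = 0, and a transformation acting on either
# variable alone (statistically independent of the other) keeps it at 0.
p_ind = np.outer([0.5, 0.5], [0.5, 0.5])
print(total_correlation(p_corr), np.log(2))  # both ~0.6931
print(total_correlation(p_ind))              # ~0.0
```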


Entropy ◽  
2020 ◽  
Vol 22 (3) ◽  
pp. 298
Author(s):  
Noboru Watanabe

It has been shown that joint probability distributions of quantum systems generally do not exist; the key to resolving this concern is the compound state invented by Ohya. The Ohya compound state, constructed from the Schatten decomposition (i.e., one-dimensional orthogonal projections) of the input state, expresses the correlation between the states of the input and output systems. In 1983, Ohya formulated the quantum mutual entropy by applying this compound state. Since this mutual entropy satisfies the fundamental inequality, one may say that it represents the amount of information correctly transmitted from the input system through the channel to the output system, and it may play an important role in discussing the efficiency of information transfer in quantum systems. Since the Ohya compound state is a separable state, it is important to look more carefully into entangled compound states. This paper is intended as an investigation of the construction of the entangled compound state, and a hybrid entangled compound state is introduced. The purpose of this paper is to examine the validity of the compound states used to construct quantum mutual entropy-type complexities. It seems reasonable to suppose that the quantum mutual entropy-type complexity defined using the entangled compound state is not useful for discussing the efficiency of information transmission from the initial system to the final system.
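
A numerical sketch of the separable Ohya compound state and the mutual entropy it induces, under an assumed qubit depolarizing channel; the channel, the input state, and the relative-entropy form of the mutual entropy used here are standard illustrative choices, not constructions from this paper:

```python
import numpy as np
from scipy.linalg import logm

def rel_entropy(a, b):
    """Quantum relative entropy S(a||b) = Tr a (log a - log b), in nats."""
    return float(np.real(np.trace(a @ (logm(a) - logm(b)))))

def channel(rho, lam=0.3):
    """Qubit depolarizing channel; an illustrative stand-in for Lambda."""
    return (1.0 - lam) * rho + lam * np.eye(2) / 2.0

# Input state and its Schatten (spectral) decomposition rho = sum_k l_k E_k.
rho = np.array([[0.7, 0.2], [0.2, 0.3]])
lams, vecs = np.linalg.eigh(rho)
projs = [np.outer(vecs[:, k], vecs[:, k].conj()) for k in range(2)]

# Ohya compound state on input (x) output: sum_k l_k E_k (x) Lambda(E_k).
# It is separable by construction, being a mixture of product states.
compound = sum(l * np.kron(E, channel(E)) for l, E in zip(lams, projs))
print("trace of compound state:", round(np.trace(compound).real, 6))

# Ohya mutual entropy, I(rho; Lambda) = sum_k l_k S(Lambda(E_k) || Lambda(rho)).
I = sum(l * rel_entropy(channel(E), channel(rho)) for l, E in zip(lams, projs))
print(f"quantum mutual entropy: {I:.4f} nats")
```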


2019 ◽  
Vol 19 (12) ◽  
pp. 2795-2809 ◽  
Author(s):  
Andreia F. S. Ribeiro ◽  
Ana Russo ◽  
Célia M. Gouveia ◽  
Patrícia Páscoa ◽  
Carlos A. L. Pires

Abstract. Extreme weather events, such as droughts, have been increasingly affecting the agricultural sector, with several socio-economic consequences. A growing economy requires improved assessments of drought-related impacts in agriculture, particularly under a climate that is getting drier and warmer. This work proposes a probabilistic model that is intended to contribute to agricultural drought risk management in rainfed cropping systems. Our methodology is based on a bivariate copula approach using elliptical and Archimedean copulas, the application of which is quite recent in agrometeorological studies. We use copulas to model joint probability distributions describing the amount of dependence between drought conditions and crop yield anomalies. We then use the established copula models to simulate pairs of yield anomalies and drought hazard, preserving their dependence structure, in order to estimate the probability of crop loss. In a first step, we analyse the probability of crop loss without distinguishing the class of drought, and in a second step we compare the probability of crop loss under drought and non-drought conditions. The results indicate that, in general, Archimedean copulas provide the best statistical fits of the joint probability distributions, suggesting dependence among extreme values of rainfed cereal yield anomalies and drought indicators. Moreover, the estimated conditional probabilities suggest that when drought conditions fall below moderate thresholds, the risk of crop loss increases by between 32.53 % (cluster 1) and 32.6 % (cluster 2) in the case of wheat and between 31.63 % (cluster 1) and 55.55 % (cluster 2) in the case of barley. From an operational point of view, these results can contribute to the decision-making process in agricultural practices.
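
A simplified sketch of the pipeline described above, using a Gaussian (elliptical) copula fitted via normal scores on synthetic data; the paper's analysis also fits Archimedean families to real drought indices and yield series, so the data, drought threshold, and copula choice below are illustrative assumptions:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Synthetic observations standing in for (drought index, yield anomaly)
# pairs, e.g. SPEI vs. detrended cereal yield (illustrative data only).
n = 300
drought = rng.normal(size=n)
yield_anom = 0.7 * drought + rng.normal(scale=0.7, size=n)

# Fit a Gaussian copula: map each margin to normal scores via ranks,
# then estimate their correlation (the copula parameter).
to_scores = lambda x: stats.norm.ppf(stats.rankdata(x) / (len(x) + 1))
r = np.corrcoef(to_scores(drought), to_scores(yield_anom))[0, 1]

# Simulate from the fitted copula, preserving the dependence structure,
# and map back to the data scale through the empirical quantiles.
m = 100_000
z = rng.multivariate_normal([0, 0], [[1, r], [r, 1]], size=m)
u = stats.norm.cdf(z)
sim_d = np.quantile(drought, u[:, 0])
sim_y = np.quantile(yield_anom, u[:, 1])

# Conditional risk of crop loss (negative yield anomaly) under
# moderate-or-worse drought (index below -1; threshold hypothetical).
p_loss_drought = np.mean(sim_y[sim_d < -1.0] < 0)
p_loss_normal = np.mean(sim_y[sim_d >= -1.0] < 0)
print(f"P(loss | drought) = {p_loss_drought:.2f}, "
      f"P(loss | no drought) = {p_loss_normal:.2f}")
```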

