An information integration account for causal generalizations in uncertain causal networks

2022 ◽  
Author(s):  
Moyun Wang

In reasoning about common-cause networks, given that a cause generates one effect, people often need to infer how likely the cause is to generate another effect. This causal generalization question has not been systematically investigated in previous research. We propose the information integration account for causal generalizations in uncertain causal networks with dichotomized continuous variables. It predicts that causal generalization is a joint function of the conditional probabilities of causal links and of cause strength, as indicated by the proportion of collateral effects that are present. Two experiments investigated causal generalizations in uncertain causal networks with and without probability distributions, respectively. In the presence of probability distributions, conditional probability and cause strength jointly affected causal generalization; in their absence, causal generalization depended only on cause strength. The overall response pattern favors the information integration account over alternative accounts.
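As a toy illustration of the account's core claim, information integration is often modelled as weighted averaging; a minimal sketch (the weights, inputs, and function name are illustrative assumptions, not the paper's fitted model):

```python
# Hypothetical weighted-averaging reading of the information integration
# account: the generalization judgment combines the causal link's
# conditional probability with cause strength (proportion of collateral
# effects present). The weight w = 0.5 is an arbitrary illustration.
def generalization_judgment(link_prob, cause_strength, w=0.5):
    """Judged probability that the cause generates another effect."""
    return w * link_prob + (1 - w) * cause_strength

j = generalization_judgment(0.8, 0.6)  # equal weights -> 0.7
```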

Author(s):  
Ulpu Leijala ◽  
Jan-Victor Björkqvist ◽  
Milla M. Johansson ◽  
Havu Pellikka ◽  
Lauri Laakso ◽  
...  

Abstract. Tools for estimating probabilities of flooding hazards caused by the simultaneous effect of sea level and waves are needed for the secure planning of densely populated coastal areas that are strongly vulnerable to climate change. In this paper we present a method for combining location-specific probability distributions of three different components: (1) long-term mean sea level change, (2) short-term sea level variations, and (3) wind-generated waves. We apply the method in two locations in the Helsinki Archipelago to obtain run-up level estimates representing the joint effect of the still water level and the wave run-up. These estimates for the present, 2050 and 2100 are based on field measurements and mean sea level scenarios. In the case of our study locations, the significant locational variability of the wave conditions leads to a difference in the safe building levels of up to one meter. The rising mean sea level in the Gulf of Finland and the uncertainty related to the associated scenarios contribute significantly to the run-up levels for the year 2100. We also present a sensitivity test of the method and discuss its applicability to other coastal regions. Our approach allows for determining different building levels based on the acceptable risks for various infrastructure, thus reducing building costs while maintaining necessary safety margins.
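A minimal sketch of the general idea of combining component distributions, assuming independence between components and hypothetical binned probability masses (the paper's location-specific method is more involved):

```python
# Illustrative only: PMFs of two flood components on a common level grid.
# The bins and probabilities are invented for the example.
p_still = [0.1, 0.3, 0.4, 0.15, 0.05]   # still water level PMF
p_runup = [0.2, 0.5, 0.2, 0.08, 0.02]   # wave run-up PMF

def convolve(p, q):
    """PMF of the sum of two independent binned variables."""
    out = [0.0] * (len(p) + len(q) - 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            out[i + j] += pi * qj
    return out

p_total = convolve(p_still, p_runup)
# probability that the combined level reaches bin k or higher
exceed = [sum(p_total[k:]) for k in range(len(p_total))]
```

The exceedance curve is what a planner would read a safe building level from, given an acceptable risk.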


Author(s):  
Robert H. Swendsen

The theory of probability developed in Chapter 3 for discrete random variables is extended to continuous probability distributions in order to treat the continuous momentum variables; the components of the momenta of the particles in an ideal gas are continuous variables. The Dirac delta function is introduced as a convenient tool for transforming continuous random variables, in analogy with the use of the Kronecker delta for discrete random variables. The properties of the Dirac delta function that are needed in statistical mechanics are presented and explained. The addition of two continuous random numbers is given as a simple example. An application of Bayesian probability is given to illustrate its significance.
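The addition example can be written compactly with the Dirac delta function acting as a sifting kernel:

```latex
% Density of the sum Z = X + Y of two independent continuous random variables
p_Z(z) = \int_{-\infty}^{\infty}\! dx \int_{-\infty}^{\infty}\! dy\,
         p_X(x)\, p_Y(y)\, \delta(z - x - y)
       = \int_{-\infty}^{\infty} p_X(x)\, p_Y(z - x)\, dx ,
```

i.e. the familiar convolution of the two densities, obtained by integrating out the delta function.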


Symmetry ◽  
2020 ◽  
Vol 12 (7) ◽  
pp. 1099 ◽  
Author(s):  
Peter Adam ◽  
Vladimir A. Andreev ◽  
Margarita A. Man’ko ◽  
Vladimir I. Man’ko ◽  
Matyas Mechler

In view of the probabilistic quantizer–dequantizer operators introduced, the qubit states (spin-1/2 particle states, two-level atom states) realizing the irreducible representation of the SU(2) symmetry group are identified with probability distributions (including the conditional ones) of classical-like dichotomic random variables. The dichotomic random variables are spin-1/2 particle projections m = ±1/2 onto three perpendicular directions in space. The invertible maps of qubit density operators onto fair probability distributions are constructed. In the suggested probability representation of quantum states, the Schrödinger and von Neumann equations for the state vectors and density operators are presented in explicit forms of the linear classical-like kinetic equations for the probability distributions of random variables. The star-product and quantizer–dequantizer formalisms are used to study the qubit properties; such formalisms are discussed for photon tomographic probability distribution and its correspondence to the Heisenberg–Weyl symmetry properties.
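The invertible map can be sketched in the standard way: with p1, p2, p3 the probabilities of measuring spin projection m = +1/2 along the x, y and z directions, the Bloch-vector components are 2p_i - 1 and the density matrix follows (the input values below are illustrative):

```python
# Reconstruct a qubit density matrix from the three spin-projection
# probabilities p1, p2, p3 (probabilities of m = +1/2 along x, y, z).
# rho = (I + sum_i (2 p_i - 1) sigma_i) / 2, written out entrywise.
def qubit_from_probs(p1, p2, p3):
    off = complex(p1 - 0.5, -(p2 - 0.5))      # (x - i y) / 2
    return [[complex(p3, 0.0), off],
            [off.conjugate(), complex(1.0 - p3, 0.0)]]

rho = qubit_from_probs(0.5, 0.5, 1.0)   # spin-up along z: [[1, 0], [0, 0]]
```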


2017 ◽  
Vol 17 (7&8) ◽  
pp. 541-567
Author(s):  
Imdad S.B. Sardharwalla ◽  
Sergii Strelchuk ◽  
Richard Jozsa

We define and study a new type of quantum oracle, the quantum conditional oracle, which provides oracle access to the conditional probabilities associated with an underlying distribution. Amongst other properties, we (a) obtain highly efficient quantum algorithms for identity testing, equivalence testing and uniformity testing of probability distributions; (b) study the power of these oracles for testing properties of Boolean functions, and obtain an algorithm for checking whether an n-input, m-output Boolean function is balanced or ε-far from balanced; and (c) give an algorithm, requiring Õ(n/ε) queries, for testing whether an n-dimensional quantum state is maximally mixed or not.


2016 ◽  
Vol 23 (02) ◽  
pp. 1650011 ◽  
Author(s):  
Luigi Accardi ◽  
Andrei Khrennikov ◽  
Masanori Ohya ◽  
Yoshiharu Tanaka ◽  
Ichiro Yamato

Recently a novel quantum information formalism, quantum adaptive dynamics, was developed and applied to modelling information processing by bio-systems, including cognitive phenomena: from molecular biology (glucose-lactose metabolism for E. coli bacteria, epigenetic evolution) to cognition and psychology. From the foundational point of view, quantum adaptive dynamics describes the mutual adapting of the information states of two interacting systems (physical or biological), as well as the adapting of co-observations performed by the systems. In this paper we apply this formalism to model unconscious inference: the process of transition from sensation to perception. The paper combines theory and experiment. Statistical data collected in an experimental study on recognition of a particular ambiguous figure, the Schröder stairs, support the viability of the quantum(-like) model of unconscious inference, including the modelling of biases generated by rotation contexts. From the probabilistic point of view, we study (for concrete experimental data) the problem of the contextuality of probability, i.e. its dependence on experimental contexts. Mathematically, contextuality leads to non-Kolmogorovness: probability distributions generated by various rotation contexts cannot be treated in the Kolmogorovian framework. At the same time they can be embedded in a “big Kolmogorov space” as conditional probabilities. However, such a Kolmogorov space has too complex a structure, and the operational quantum formalism in the form of quantum adaptive dynamics simplifies the modelling essentially.
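The non-Kolmogorovness at issue can be detected as a violation of the classical formula of total probability across contexts; a minimal numerical sketch (the numbers are invented for illustration, not the study's data):

```python
# Classical (Kolmogorovian) prediction: P(B) = sum_i P(A_i) P(B | A_i).
# If the directly measured P(B) deviates from this sum, the context-dependent
# probabilities cannot live in a single Kolmogorov space.
p_context = {"up": 0.55, "down": 0.45}        # P(rotation-context outcome)
p_percept_given = {"up": 0.7, "down": 0.3}    # P(percept | context outcome)
p_percept_observed = 0.62                     # P(percept) measured directly

p_percept_classical = sum(p_context[c] * p_percept_given[c] for c in p_context)
delta = p_percept_observed - p_percept_classical  # nonzero -> contextuality
```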


2011 ◽  
Vol 26 (4) ◽  
pp. 564-571 ◽  
Author(s):  
Thomas N. Nipen ◽  
Greg West ◽  
Roland B. Stull

Abstract. A statistical postprocessing method for improving probabilistic forecasts of continuous weather variables, given recent observations, is presented. The method updates an existing probabilistic forecast by incorporating observations reported in the time since model initialization. As such, it provides updated short-range probabilistic forecasts at an extremely low computational cost. The method models the time sequence of cumulative distribution function (CDF) values corresponding to the observations as a first-order Markov process. Verifying CDF values are highly correlated in time, and their changes in time are modeled probabilistically by a transition function. The effect of the method is that the spread of the probabilistic forecasts for the first few hours after an observation has been made is considerably narrower than that of the original forecast. The updated probability distributions widen back toward the original forecast at longer lead times, as the effect of the recent observation diminishes. The method is tested on probabilistic forecasts produced by an operational ensemble forecasting system. It significantly improves the ignorance score and the continuous ranked probability score of the probabilistic forecasts for the first few hours after an observation has been made. The mean absolute error of the median of the probability distribution is also shown to be improved.
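A minimal sketch of the idea, assuming a Gaussian original forecast and a first-order autoregressive model for the probit of the verifying CDF value (the transition model, parameters, and function names are illustrative assumptions, not the authors' exact formulation):

```python
from statistics import NormalDist
import math

nd = NormalDist()
rho = 0.9                      # assumed hourly autocorrelation of probit-PIT
pit_obs = 0.85                 # CDF value of the most recent observation
z0 = nd.inv_cdf(pit_obs)

def updated_quantiles(mu, sigma, lead_hours, probs=(0.1, 0.5, 0.9)):
    """Quantiles of the updated forecast, given the original Gaussian
    forecast N(mu, sigma^2) at this lead time. The probit of the verifying
    CDF value is modelled as AR(1), so its conditional distribution given
    the last observation is N(r * z0, 1 - r^2) with r = rho**lead."""
    r = rho ** lead_hours
    spread = math.sqrt(1.0 - r ** 2)
    return [mu + sigma * (r * z0 + spread * nd.inv_cdf(p)) for p in probs]

# Shortly after the observation the spread is narrow and the median is
# pulled toward the (high) recent observation...
q1 = updated_quantiles(10.0, 2.0, lead_hours=1)
# ...while far ahead the update relaxes back to the original forecast.
q24 = updated_quantiles(10.0, 2.0, lead_hours=24)
```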


Psihologija ◽  
2009 ◽  
Vol 42 (1) ◽  
pp. 107-120
Author(s):  
Strahinja Dimitrijevic ◽  
Aleksandar Kostic ◽  
Petar Milin

The aim of the present study is to establish criteria for the optimal size of a corpus that can provide stable conditional probabilities of morphological and/or syntagmatic types. The optimal corpus size is defined as the smallest sample that generates a probability distribution equal to the distribution derived from a large sample with stable probabilities; the latter distribution we refer to as the 'target distribution'. In order to establish these criteria we varied the sample size, the word sequence size (bigrams and trigrams), the sampling procedure (randomly chosen words vs. continuous text) and the position of the target word in a sequence. The distributions of conditional probabilities derived from smaller samples were correlated with the target distributions, and the sample size at which the distribution reaches maximal correlation (r = 1) with the target distribution was taken as optimal. The research was done on the Corpus of Serbian Language. For bigrams the optimal sample size under random word selection is 65,000 words, and 281,000 words for trigrams. In contrast, continuous text sampling requires much larger samples to reach stability: 810,000 words for bigrams and 868,000 words for trigrams. The factors that caused these differences remain unclear and need additional empirical investigation.
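The conditional probabilities in question can be estimated from bigram counts; a minimal sketch with a toy token sequence (the sample text and tokenization are assumptions, not the Serbian corpus):

```python
from collections import Counter

# Estimate P(w2 | w1) from bigram counts: count each adjacent pair, then
# normalize by how often w1 appears in first position.
def bigram_conditionals(tokens):
    pair_counts = Counter(zip(tokens, tokens[1:]))
    first_counts = Counter(tokens[:-1])
    return {(w1, w2): c / first_counts[w1]
            for (w1, w2), c in pair_counts.items()}

sample = "the cat sat on the mat the cat ran".split()
probs = bigram_conditionals(sample)
# "the" occurs 3 times in first position, twice followed by "cat",
# so P("cat" | "the") = 2/3
```

Stability is then assessed by recomputing this distribution at increasing sample sizes and correlating each with the target distribution.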


Author(s):  
MICHAEL J. MARKHAM ◽  
PAUL C. RHODES

The desire to use Causal Networks as Expert Systems even when the causal information is incomplete and/or when non-causal information is available has led researchers to look into the possibility of utilising Maximum Entropy. If this approach is taken, the known information is supplemented by maximising entropy to provide a unique initial probability distribution which would otherwise have been a consequence of the known information and the independence relationships implied by the network. Traditional maximising techniques can be used if the constraints are linear but the independence relationships give rise to non-linear constraints. This paper extends traditional maximising techniques to incorporate those types of non-linear constraints that arise from the independence relationships and presents an algorithm for implementing the extended method. Maximising entropy does not involve the concept of "causal" information. Consequently, the extended method will accept any mutually consistent set of conditional probabilities and expressions of independence. The paper provides a small example of how this property can be used to provide complete causal information, for use in a causal network, when the known information is incomplete and not in a causal form.
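For purely linear constraints the traditional machinery suffices; a minimal sketch (illustrative, not the paper's extended algorithm) that maximizes entropy over a finite set subject to a single mean constraint, by bisecting on the Lagrange multiplier of the exponential-family solution:

```python
import math

# Maximum-entropy distribution over a finite support subject to the linear
# constraint E[X] = target_mean. The solution has the form
# p_i proportional to exp(lam * x_i); we bisect on lam, since the resulting
# mean is monotone increasing in lam. The support and target are illustrative.
values = [0, 1, 2, 3]
target_mean = 1.2

def maxent_dist(lam):
    weights = [math.exp(lam * v) for v in values]
    z = sum(weights)
    return [w / z for w in weights]

lo, hi = -50.0, 50.0
for _ in range(200):                   # bisection on the multiplier
    lam = (lo + hi) / 2
    p = maxent_dist(lam)
    mean = sum(v * pi for v, pi in zip(values, p))
    if mean < target_mean:
        lo = lam
    else:
        hi = lam
```

Independence constraints, as the abstract notes, are products of probabilities and hence non-linear, which is exactly what the paper's extended method handles.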

