1970
Vol 8 (5)
pp. 317-320
Author(s):
Arthur I. Schulman
Gordon Z. Greenberg

1973
Vol 37 (3)
pp. 771-776
Author(s):
Thomas G. Titus

Signal-detection models of recognition memory assume that S's decision as to whether or not he recognizes a stimulus is a function of a criterion value. In selecting his criterion, S takes into consideration the a priori probability of an old item and the costs and rewards of a hit or false alarm. In the present experiment, Ss were given feedback during recognition testing in an effort to determine whether it would aid S in selecting his criterion. The results showed that the feedback improved recognition performance by significantly reducing the number of false alarm errors. Evidence was presented to support the claim that S's criterion was affected by this manipulation.
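The criterion placement described above is usually formalized as a likelihood-ratio cutoff; the following is the textbook signal-detection expression (not a formula stated in this abstract) showing how the a priori probability of an old item and the payoffs enter:

```latex
% Standard ideal-observer criterion from signal-detection theory (not taken
% from this paper); costs C and values V are positive magnitudes, and S
% responds "old" whenever the likelihood ratio exceeds beta_opt.
\[
  \beta_{\mathrm{opt}}
    = \frac{P(\text{new})}{P(\text{old})}
      \cdot
      \frac{V_{\text{correct rejection}} + C_{\text{false alarm}}}
           {V_{\text{hit}} + C_{\text{miss}}}
\]
```

On this reading, feedback gives S information that can move the adopted criterion closer to the optimum, which is consistent with the reported reduction in false-alarm errors.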


Author(s):  
Lawrence Sklar

In statistical mechanics, causation appears at the micro-level as the postulation that the full state of a system at one time can be specified by the dynamical state of all its micro-constituents (the positions and momenta of the molecules in a gas or, alternatively, their wave function at that time), and that this state, following the laws of dynamics (classical or quantum), generates the future dynamical state of the system characterized in these micro-constituent terms. What, then, is ‘non-causal’ about explanations in statistical mechanics? This article explores two issues: the peculiar ‘transcendental’ nature of explanation in equilibrium statistical mechanics, and the need to introduce some a priori probability posit over the initial conditions of systems in non-equilibrium theory.


2016
Vol 208
pp. 325-332
Author(s):
Lei Wang
Xu Zhao
Yuncai Liu

2019
Vol 84 (02)
pp. 497-516
Author(s):
Wolfgang Merkle
Liang Yu

Let an oracle be called low for prefix-free complexity on a set in case access to the oracle improves the prefix-free complexities of the members of the set at most by an additive constant. Let an oracle be called weakly low for prefix-free complexity on a set in case the oracle is low for prefix-free complexity on an infinite subset of the given set. Furthermore, let an oracle be called low and weakly low, respectively, for prefix-free complexity along a sequence in case the oracle is low and weakly low, respectively, for prefix-free complexity on the set of initial segments of the sequence. Our two main results are the following characterizations. An oracle is low for prefix-free complexity if and only if it is low for prefix-free complexity along some sequence if and only if it is low for prefix-free complexity along all sequences. An oracle is weakly low for prefix-free complexity if and only if it is weakly low for prefix-free complexity along some sequence if and only if it is weakly low for prefix-free complexity along almost all sequences. As a tool for proving these results, we show that prefix-free complexity differs from its expected value with respect to an oracle chosen uniformly at random by at most an additive constant, and that similar results hold for related notions such as a priori probability. Furthermore, we demonstrate that on every infinite set almost all oracles are weakly low but not low for prefix-free complexity, while by Shoenfield absoluteness there is an infinite set on which uncountably many oracles are low for prefix-free complexity. Finally, we obtain no-gap results, introduce weakly low reducibility, or WLK-reducibility for short, and show that all its degrees except the greatest one are countable.
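For readers who prefer symbols, the abstract's definitions can be restated as follows (a direct transcription into standard notation; the symbols A, S, X, K^A, c are introduced here only for illustration):

```latex
% Restatement of the abstract's definitions; K^A is prefix-free complexity
% relative to oracle A, X|n the length-n initial segment of X.
% Requires amssymb for \upharpoonright and \mathbb.
\[
  A \text{ is low for } K \text{ on } S
    \;\Longleftrightarrow\;
    \exists c \;\forall x \in S :\; K(x) \le K^{A}(x) + c .
\]
\[
  A \text{ is weakly low for } K \text{ on } S
    \;\Longleftrightarrow\;
    A \text{ is low for } K \text{ on some infinite } S' \subseteq S .
\]
\[
  A \text{ is (weakly) low for } K \text{ along } X
    \;\Longleftrightarrow\;
    A \text{ is (weakly) low for } K \text{ on }
    \{\, X \upharpoonright n : n \in \mathbb{N} \,\}.
\]
```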


2002
Vol 14 (1)
pp. 21-41
Author(s):
Marco Saerens
Patrice Latinne
Christine Decaestecker

It sometimes happens (for instance in case control studies) that a classifier is trained on a data set that does not reflect the true a priori probabilities of the target classes on real-world data. This may have a negative effect on the classification accuracy obtained on the real-world data set, especially when the classifier's decisions are based on the a posteriori probabilities of class membership. Indeed, in this case, the trained classifier provides estimates of the a posteriori probabilities that are not valid for this real-world data set (they rely on the a priori probabilities of the training set). Applying the classifier as is (without correcting its outputs with respect to these new conditions) on this new data set may thus be suboptimal. In this note, we present a simple iterative procedure for adjusting the outputs of the trained classifier with respect to these new a priori probabilities without having to refit the model, even when these probabilities are not known in advance. As a by-product, estimates of the new a priori probabilities are also obtained. This iterative algorithm is a straightforward instance of the expectation-maximization (EM) algorithm and is shown to maximize the likelihood of the new data. Thereafter, we discuss a statistical test that can be applied to decide if the a priori class probabilities have changed from the training set to the real-world data. The procedure is illustrated on different classification problems involving a multilayer neural network, and comparisons with a standard procedure for a priori probability estimation are provided. Our original method, based on the EM algorithm, is shown to be superior to the standard one for a priori probability estimation. Experimental results also indicate that the classifier with adjusted outputs always performs better than the original one in terms of classification accuracy, when the a priori probability conditions differ from the training set to the real-world data. The gain in classification accuracy can be significant.
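As a rough illustration of the iterative adjustment described above, here is a minimal sketch (not the authors' code); it assumes the trained classifier's posterior estimates on the new data and the training-set priors are available as NumPy arrays, and all names are illustrative:

```python
import numpy as np

def adjust_to_new_priors(posteriors, priors_train, n_iter=100, tol=1e-6):
    """EM-style re-estimation of class priors on new, unlabeled data.

    posteriors   : (N, C) array of p_train(class | x_i) produced by a
                   classifier trained under priors_train.
    priors_train : (C,) array of training-set class priors.
    Returns the estimated new priors and the adjusted posteriors.
    """
    posteriors = np.asarray(posteriors, dtype=float)
    priors_train = np.asarray(priors_train, dtype=float)
    priors_new = priors_train.copy()
    for _ in range(n_iter):
        # E-step: re-weight each posterior by the ratio of current to
        # training priors, then renormalize per sample.
        adjusted = posteriors * (priors_new / priors_train)
        adjusted = adjusted / adjusted.sum(axis=1, keepdims=True)
        # M-step: new prior estimate = mean adjusted posterior per class.
        updated = adjusted.mean(axis=0)
        if np.abs(updated - priors_new).max() < tol:
            break
        priors_new = updated
    return priors_new, adjusted

# Illustrative usage with synthetic numbers (two classes, balanced
# training priors, shifted real-world class distribution).
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    fake_posteriors = rng.dirichlet([4.0, 1.0], size=2000)  # stand-in outputs
    new_priors, new_posteriors = adjust_to_new_priors(
        fake_posteriors, priors_train=np.array([0.5, 0.5]))
    print("estimated new priors:", new_priors)
```

Each pass re-weights the classifier's posteriors by the ratio of the current prior estimates to the training priors and renormalizes (E-step), then re-estimates the priors as the mean of the adjusted posteriors (M-step), which is the EM structure the abstract describes; no refitting of the classifier is needed.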

