Bayes, Rationality, and Rashionality

Author(s): Vsevolod Kapatsinski

This chapter reviews the main ideas of Bayesian approaches to learning and compares them to associationist approaches. It discusses Bayesian criticisms of associationist learning theory, in particular the argument that associative models fail to represent confidence in a belief and to update that confidence with experience. The chapter discusses whether updating confidence is necessary to capture entrenchment, suspicious coincidence, and category variability effects. The evidence is argued to be somewhat inconclusive at present, as simulated annealing can often suffice. Furthermore, where the data do suggest confidence updating, the updating they suggest may be non-normative, contrary to the Bayesian notion of the learner as an ideal observer. Following Kruschke, learned selective attention is argued to explain many ways in which human learning departs from that of the ideal observer, most crucially the weakness of backward relative to forward blocking. Other departures from the ideal observer may be due to biological organisms taking into account factors other than belief accuracy. Finally, generative and discriminative learning models are compared. Generative models are argued to be particularly likely when active learning is a possibility and when reversing the observed mappings may be required.
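To make the contrast concrete, here is a minimal sketch (our illustration, not from the chapter) of the update schemes at issue for a single belief about an outcome rate: a fixed-rate delta rule (Rescorla-Wagner), which carries no confidence term; an annealed delta rule, whose decaying learning rate mimics entrenchment without representing confidence; and a conjugate Bayesian update, in which the posterior variance is an explicit, shrinking confidence.

```python
# Toy contrast of three belief-update rules on a stream of binary outcomes.
# All names and values here are illustrative, not from the chapter.

def rescorla_wagner(obs, alpha=0.1):
    v = 0.5
    for x in obs:
        v += alpha * (x - v)            # fixed-rate error correction
    return v

def annealed_delta(obs):
    v = 0.5
    for n, x in enumerate(obs, start=1):
        v += (1.0 / (n + 1)) * (x - v)  # decaying rate ~ simulated annealing
    return v

def beta_bernoulli(obs, a=1.0, b=1.0):
    for x in obs:
        a, b = a + x, b + (1 - x)       # conjugate update of the full posterior
    mean = a / (a + b)
    var = a * b / ((a + b) ** 2 * (a + b + 1))  # shrinking variance = confidence
    return mean, var

data = [1, 1, 0, 1, 1, 1, 0, 1]
print(rescorla_wagner(data), annealed_delta(data), beta_bernoulli(data))
```

On this toy stream the annealed rule tracks the Bayesian posterior mean closely even though it never represents confidence, which illustrates why the two accounts can be hard to tell apart empirically.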

2020, Vol. 2020 (16), pp. 41-1–41-7
Author(s): Orit Skorka, Paul J. Kane

Many of the metrics developed for informational imaging are useful in automotive imaging, since many of the tasks – for example, object detection and identification – are similar. This work discusses sensor characterization parameters for the Ideal Observer SNR model and elaborates on the noise power spectrum. It presents cross-correlation analysis results for matched-filter detection of a tribar pattern in sets of resolution-target images captured with three image sensors over a range of illumination levels. Lastly, the work compares the cross-correlation data to predictions made by the Ideal Observer model and demonstrates good agreement between the two methods in the relative evaluation of detection capabilities.
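As a schematic illustration of the detection step (our sketch, not the authors' code; the template, noise level, and offsets are made up), the following correlates a tribar template with a noisy scene and takes the correlation peak as the detection:

```python
# Matched-filter detection by cross-correlation: a known tribar template
# is correlated with a noisy image; the peak response marks the detection.
import numpy as np
from scipy.signal import correlate2d  # assumes SciPy is available

rng = np.random.default_rng(0)

# Hypothetical tribar template: three vertical bars on a dark background.
template = np.zeros((15, 15))
template[2:13, 2:4] = template[2:13, 6:8] = template[2:13, 10:12] = 1.0

# Scene: template embedded at a known offset, plus white noise whose
# variance stands in for the sensor's noise power at one light level.
scene = rng.normal(0.0, 0.5, size=(64, 64))
scene[20:35, 30:45] += template

# Zero-mean matched filter; the correlation peak estimates the location.
mf = template - template.mean()
response = correlate2d(scene, mf, mode="same")
peak = np.unravel_index(np.argmax(response), response.shape)
print("detected near", peak)  # close to the template's center, (27, 37)
```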


Author(s): Masoumeh Zareapoor, Jie Yang

Image-to-image translation aims to learn a mapping from a source domain to a target domain. However, three main challenges are associated with this problem and need to be dealt with: the lack of paired datasets, multimodality, and diversity. Convolutional neural networks (CNNs), despite their strong performance in many computer vision tasks, fail to capture the hierarchy of spatial relationships between different parts of an object and thus do not form the ideal representative model we are looking for. This article presents a new variation of generative models that aims to remedy this problem. We use a trainable transformer, which explicitly allows the spatial manipulation of data within training. This differentiable module can be augmented into the convolutional layers of the generative model, and it allows the generated distributions to be altered freely for image-to-image translation. To reap the benefits of the proposed module in a generative model, our architecture incorporates a new loss function to facilitate effective end-to-end generative learning for image-to-image translation. The proposed model is evaluated through comprehensive experiments on image synthesis and image-to-image translation, along with comparisons with several state-of-the-art algorithms.
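A trainable transformer of this kind is a differentiable resampling module; below is a rough sketch of how such a block can sit between convolutional layers of a generator (PyTorch; the layer sizes and localization network are our assumptions, not the paper's architecture):

```python
# A spatial-transformer-style block: a small network regresses an affine
# matrix, and the feature map is differentiably resampled under it.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpatialTransformer(nn.Module):
    def __init__(self, channels):
        super().__init__()
        # Localization net: predicts 6 affine parameters per sample.
        self.loc = nn.Sequential(
            nn.AdaptiveAvgPool2d(8),
            nn.Flatten(),
            nn.Linear(channels * 64, 32),
            nn.ReLU(),
            nn.Linear(32, 6),
        )
        # Initialize to the identity transform so training starts stable.
        self.loc[-1].weight.data.zero_()
        self.loc[-1].bias.data.copy_(torch.tensor([1., 0., 0., 0., 1., 0.]))

    def forward(self, x):
        theta = self.loc(x).view(-1, 2, 3)
        grid = F.affine_grid(theta, x.size(), align_corners=False)
        return F.grid_sample(x, grid, align_corners=False)

feats = torch.randn(4, 16, 32, 32)   # feature maps inside a generator
out = SpatialTransformer(16)(feats)  # same shape, spatially warped
print(out.shape)                     # torch.Size([4, 16, 32, 32])
```

Because both the grid generation and the sampling are differentiable, gradients from the translation loss flow through the warp, which is what lets the module be trained end-to-end inside the generator.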


2015, Vol. 114 (6), pp. 3076–3096
Author(s): Ryan M. Peters, Phillip Staibano, Daniel Goldreich

The ability to resolve the orientation of edges is crucial to daily tactile and sensorimotor function, yet the means by which edge perception occurs is not well understood. Primate cortical area 3b neurons have diverse receptive field (RF) spatial structures that may participate in edge orientation perception. We evaluated five candidate RF models for macaque area 3b neurons, previously recorded while an oriented bar contacted the monkey's fingertip. We used a Bayesian classifier to assign each neuron a best-fit RF structure. We generated predictions for human performance by implementing an ideal observer that optimally decoded stimulus-evoked spike counts in the model neurons. The ideal observer predicted a saturating reduction in bar orientation discrimination threshold with increasing bar length. We tested 24 humans on an automated, precision-controlled bar orientation discrimination task and observed performance consistent with that predicted. We next queried the ideal observer to discover the RF structure and number of cortical neurons that best matched each participant's performance. Human perception was matched with a median of 24 model neurons firing throughout a 1-s period. The 10 lowest-performing participants were fit with RFs lacking inhibitory sidebands, whereas 12 of the 14 higher-performing participants were fit with RFs containing inhibitory sidebands. Participants whose discrimination improved as bar length increased to 10 mm were fit with longer RFs; those who performed well on the 2-mm bar, with narrower RFs. These results suggest plausible RF features and computational strategies underlying tactile spatial perception and may have implications for perceptual learning.
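The decoding step can be illustrated with a toy version of the ideal observer (hypothetical firing rates, not the recorded area 3b data): given Poisson spike counts from a few model neurons, it picks the orientation with the higher log likelihood.

```python
# Simplified ideal-observer decoding of Poisson spike counts for a
# two-orientation discrimination. Rates are illustrative placeholders.
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(1)

# Mean spike counts over a 1-s window for two orientations (A vs. B),
# for a small population of model neurons.
rates = np.array([[12.0, 8.0],
                  [5.0, 9.0],
                  [20.0, 14.0]])  # shape: (neuron, orientation)

def ideal_observer_accuracy(rates, n_trials=5000):
    correct = 0
    for _ in range(n_trials):
        true = rng.integers(2)
        counts = rng.poisson(rates[:, true])     # stimulus-evoked counts
        # Log likelihood of the observed counts under each orientation.
        ll = poisson.logpmf(counts[:, None], rates).sum(axis=0)
        correct += int(np.argmax(ll) == true)
    return correct / n_trials

print(ideal_observer_accuracy(rates))  # well above chance with these rates
```

Scaling the number of model neurons up or down in such a decoder is the kind of query the authors used to find the population size (a median of 24 neurons) matching each participant.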


2001
Author(s): Hongbin Zhang, Eric Clarkson, Harrison H. Barrett

2015, Vol. 15 (12), pp. 1341
Author(s): Steven Shimozaki, Eleanor Swan, Claire Hutchinson, Jaspreet Mahal

2020, Vol. 34 (04), pp. 5620–5627
Author(s): Murat Sensoy, Lance Kaplan, Federico Cerutti, Maryam Saleki

Deep neural networks are often ignorant about what they do not know and overconfident when they make uninformed predictions. Some recent approaches quantify classification uncertainty directly by training the model to output high uncertainty for data samples that lie close to class boundaries or outside the training distribution. These approaches use an auxiliary data set during training to represent out-of-distribution samples. However, selecting or creating such an auxiliary data set is non-trivial, especially for high-dimensional data such as images. In this work, we develop a novel neural network model that expresses both aleatoric and epistemic uncertainty and can thereby distinguish decision-boundary regions of the feature space from out-of-distribution regions. To this end, variational autoencoders and generative adversarial networks are incorporated to automatically generate out-of-distribution exemplars for training. Through extensive analysis, we demonstrate that on well-known data sets the proposed approach provides better uncertainty estimates for in-distribution samples, out-of-distribution samples, and adversarial examples than state-of-the-art approaches, including recent Bayesian approaches for neural networks and anomaly detection methods.
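One common way to read out such uncertainties, in the evidential style this line of work builds on, is to treat non-negative network outputs as Dirichlet evidence; the sketch below (our illustration, not the paper's full model) shows how low total evidence signals epistemic uncertainty.

```python
# Dirichlet-based uncertainty readout: per-class "evidence" outputs
# parameterize a Dirichlet whose spread separates confident predictions
# from near-boundary or out-of-distribution inputs. Values are made up.
import numpy as np

def dirichlet_uncertainty(evidence):
    """evidence: non-negative per-class outputs, shape (n_classes,)."""
    alpha = evidence + 1.0   # Dirichlet parameters
    s = alpha.sum()
    probs = alpha / s        # expected class probabilities
    k = len(alpha)
    epistemic = k / s        # high when total evidence is low
    return probs, epistemic

# In-distribution-looking input: ample evidence for one class.
print(dirichlet_uncertainty(np.array([40.0, 1.0, 1.0])))
# Out-of-distribution-looking input: almost no evidence for any class.
print(dirichlet_uncertainty(np.array([0.2, 0.1, 0.3])))
```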

