A Computational Model of Human Perception With Prior Expectation: Bayesian Integration and Efficient Coding

Author(s):  
Hideyoshi Yanagisawa

Human perception of sensory stimuli is affected by prior prediction of the sensory experience. For example, the perceived weight of an object changes depending on the weight predicted from the object's apparent size. We call such psychological phenomena the expectation effect. The expectation effect is a key factor in explaining the gap between physical variables and their perception. In this paper, we propose a novel computational model of human perception that incorporates the expectation effect. We hypothesized that a perceived physical variable is estimated through Bayesian integration of a prior prediction and the sensory likelihood of the physical variable. We applied the efficient coding hypothesis to shape the sensory likelihood. We formalized the expectation effect as a function of three factors: expectation error (the difference between predicted and actual physical variables), prediction uncertainty (the variance of the prior distribution), and external noise (the variance of the noise distribution convolved with the likelihood). Using the model, we conducted computer simulations to analyze two opposite patterns of the expectation effect, assimilation and contrast. The simulations revealed that 1) the pattern of the expectation effect shifted from assimilation to contrast as the expectation error increased, 2) prediction uncertainty decreased the extent of the expectation effect, and 3) external noise increased assimilation.
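The precision-weighted fusion at the core of such a model can be sketched as follows. This is an illustrative Gaussian-conjugate sketch, not the paper's implementation: the full model additionally shapes the likelihood via efficient coding (which can produce the contrast pattern), and all parameter values here are assumptions.

```python
def posterior_estimate(predicted, actual, prior_var, noise_var):
    """Precision-weighted Bayesian fusion of a Gaussian prior
    (centered on the predicted value, variance = prediction uncertainty)
    and a Gaussian likelihood (centered on the actual value,
    broadened by external noise)."""
    w = (1.0 / prior_var) / (1.0 / prior_var + 1.0 / noise_var)
    return w * predicted + (1.0 - w) * actual

# Expectation effect = perceived minus actual value.
actual, predicted = 1.0, 1.5  # expectation error = 0.5
perceived = posterior_estimate(predicted, actual, prior_var=0.2, noise_var=0.2)
effect = perceived - actual   # positive -> assimilation toward the prediction
```

Under these Gaussian assumptions the sketch reproduces two of the reported trends: raising `prior_var` (more prediction uncertainty) shrinks the effect, while raising `noise_var` (more external noise) strengthens assimilation.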

2017
Author(s):  
Jan Clemens ◽  
Nofar Ozeri-Engelhard ◽  
Mala Murthy

To faithfully encode complex stimuli, sensory neurons should correct, via adaptation, for stimulus properties that corrupt pattern recognition. Here, we investigate sound intensity adaptation in the Drosophila auditory system, which is largely devoted to processing courtship song. Mechanosensory neurons (JONs) in the antenna are sensitive not only to sound-induced antennal vibrations but also to wind or gravity, which affect the antenna's mean position. Song pattern recognition therefore requires adaptation to antennal position (stimulus mean) in addition to sound intensity (stimulus variance). We discover fast variance adaptation in Drosophila JONs, which corrects for background noise over the behaviorally relevant intensity range. We determine where mean and variance adaptation arise and how they interact. A computational model explains our results using a sequence of subtractive and divisive adaptation modules, interleaved by rectification. These results lay the foundation for identifying the molecular and biophysical implementation of adaptation to the statistics of natural sensory stimuli.
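The described cascade of subtractive and divisive modules interleaved by rectification can be sketched roughly as below. The running-average filters, window lengths, and test signal are illustrative assumptions, not the fitted model from the paper.

```python
import numpy as np

def adapt(signal, mean_win=50, var_win=10, eps=1e-6):
    """Illustrative adaptation cascade: subtract a running mean
    (mean/position adaptation), rectify, then divide by a running
    intensity estimate (variance/gain adaptation)."""
    mean_est = np.convolve(signal, np.ones(mean_win) / mean_win, mode="same")
    centered = signal - mean_est              # subtractive stage
    rectified = np.maximum(centered, 0.0)     # rectification
    gain = np.convolve(rectified, np.ones(var_win) / var_win, mode="same")
    return rectified / (gain + eps)           # divisive stage

# A quiet and a loud rendition of the same pattern yield similar outputs,
# i.e., the response becomes approximately intensity-invariant.
t = np.linspace(0.0, 1.0, 1000)
pattern = np.sin(2 * np.pi * 20 * t)
quiet, loud = adapt(0.1 * pattern), adapt(10.0 * pattern)
```

The point of the sketch is the module ordering: subtraction removes the slowly varying mean (antennal position), rectification makes the divisive gain estimate well defined, and division normalizes away overall intensity.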


2020
Author(s):  
Marc Aurel Schnabel ◽  
Mitra Homolja ◽  
Sayyed Maghool

Despite theoretical evidence that the visual properties of a space can manipulate inhabitants' emotions, a gap in knowledge exists for empirical studies in controlled environments. Interdisciplinary studies at the intersection of architecture, psychology, and neuroscience can provide robust guidelines and criteria for designers to engineer emotions. Because the field is new, the theoretical framework for such studies is not well established. Consequently, this paper presents a systematic literature review to find and synthesize recent relevant studies at this intersection. Based on these findings, we will rigorously investigate the impact of other visuo-spatial stimuli on emotions. According to theories of emotion, the manipulation of emotions is linked to oscillations in physiological responses caused by exposure to sensory stimuli. Moreover, there is a consensus that human perception is action-oriented. Therefore, our review focuses on studies that employ biosensors as subjects move through physical or virtual environments.


2019
Author(s):  
Samuel Eckmann ◽  
Lukas Klimmasch ◽  
Bertram E. Shi ◽  
Jochen Triesch

The development of vision during the first months of life is an active process that comprises the learning of appropriate neural representations and the learning of accurate eye movements. While it has long been suspected that the two learning processes are coupled, there is still no widely accepted theoretical framework describing this joint development. Here we propose a computational model of the development of active binocular vision to fill this gap. The model is based on a new formulation of the Active Efficient Coding theory, which proposes that eye movements, as well as stimulus encoding, are jointly adapted to maximize the overall coding efficiency. Under healthy conditions, the model self-calibrates to perform accurate vergence and accommodation eye movements. It exploits disparity cues to deduce the direction of defocus, which leads to coordinated vergence and accommodation responses. In a simulated anisometropic case, where the refraction power of the two eyes differs, an amblyopia-like state develops, in which the foveal region of one eye is suppressed due to inputs from the other eye. After correcting for refractive errors, the model can only reach healthy performance levels if receptive fields are still plastic, in line with findings on a critical period for binocular vision development. Overall, our model offers a unifying conceptual framework for understanding the development of binocular vision.

Significance Statement

Brains must operate in an energy-efficient manner. The efficient coding hypothesis states that sensory systems achieve this by adapting neural representations to the statistics of sensory input signals. Importantly, however, these statistics are shaped by the organism’s behavior and how it samples information from the environment. Therefore, optimal performance requires jointly optimizing neural representations and behavior, a theory called Active Efficient Coding.
Here we test the plausibility of this theory by proposing a computational model of the development of binocular vision. The model explains the development of accurate binocular vision under healthy conditions. In the case of refractive errors, however, the model develops an amblyopia-like state and suggests conditions for successful treatment.


2020
Author(s):  
André Krügel ◽  
Lars Oliver Martin Rothkegel ◽  
Ralf Engbert

In an influential theoretical model, human sensorimotor control is achieved by a Bayesian decision process that combines noisy sensory information with learned prior knowledge (Wolpert & Landy, 2012). A ubiquitous signature of prior knowledge and Bayesian integration in human perception and motor behavior is the frequently observed bias towards an average stimulus magnitude (i.e., a central-tendency bias, range effect, or regression-to-the-mean effect). In the domain of eye movements, however, there is a recent controversy about whether a range effect exists in the saccadic system at all (Gillen, Weiler, & Heath, 2013; Nuthmann, Vitu, Engbert, & Kliegl, 2016). Here we argue that the existence of a range effect is linked to the availability of prior knowledge for saccade control. We present results from two prosaccade experiments, both of which employ an informative prior structure (i.e., a non-uniform Gaussian distribution of saccade target distances). Our results demonstrate the validity of Bayesian integration in saccade control, which generates a range effect in saccades. According to Bayesian integration principles, the saccadic range effect depends on the availability of prior knowledge and varies in size as a function of the reliability of the prior and the sensory likelihood.
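The central-tendency bias predicted by Bayesian integration can be sketched as a precision-weighted estimate of target distance. This is an illustrative Gaussian sketch; the prior mean, variances, and distances below are hypothetical values, not parameters fitted to the experiments.

```python
def saccade_amplitude(target_dist, prior_mean, prior_var, sensory_var):
    """Bayesian estimate of saccade target distance: a noisy sensory
    measurement is combined with a Gaussian prior learned from the
    distribution of target distances. The weight on the prior grows
    with sensory noise and shrinks with prior variance."""
    w_prior = sensory_var / (sensory_var + prior_var)
    return w_prior * prior_mean + (1.0 - w_prior) * target_dist

# Range effect: amplitudes regress toward the prior mean (here 8 deg),
# so near targets are overshot and far targets undershot.
near = saccade_amplitude(4.0, prior_mean=8.0, prior_var=4.0, sensory_var=1.0)
mid = saccade_amplitude(8.0, prior_mean=8.0, prior_var=4.0, sensory_var=1.0)
far = saccade_amplitude(12.0, prior_mean=8.0, prior_var=4.0, sensory_var=1.0)
```

Raising `sensory_var` relative to `prior_var` strengthens the regression toward the mean, matching the abstract's claim that the effect size depends on the reliability of the prior and the sensory likelihood.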


1986
Vol 5 (3)
pp. 104-112
Author(s):  
Yukio Inukai ◽  
Hideto Taya ◽  
Hisao Miyano ◽  
Hiroshi Kuriyama

Subjective ratings of pure tones at low and infrasonic frequencies (3–40 Hz) were obtained on a set of semantic-differential-type scales and were analysed by factor analysis. From the results, it was concluded that there are three main factors in the human response to the stimulus sound: 1) sound pressure, 2) vibration, and 3) loudness. To predict human responses from the physical variables of the sound stimuli, prediction equations were derived for each of the three factors. Equal-sensation contours for the factors were also obtained. From these results, a new method for evaluating the psychological effects is proposed, one that considers the multidimensional aspects of human perception at low and infrasonic frequencies.


2020
Vol 2020
pp. 1-26
Author(s):  
Mariana Antonia Aguiar-Furucho ◽  
Francisco Javier Ropero Peláez

Several research studies point to the fact that sensory and cognitive reductions such as cataracts, deafness, macular degeneration, or even the lack of activity after job retirement precede the onset of Alzheimer’s disease. To simulate the earlier stages of Alzheimer’s disease, which manifest in sensory cortices, we used a computational model of the koniocortex, the first cortical stage that processes sensory information. The architecture and physiology of the modeled koniocortex resemble those of its cerebral counterpart, and the model is capable of continuous learning. This model allows one to analyze the initial phases of Alzheimer’s disease by “aging” the artificial koniocortex through synaptic pruning, modification of acetylcholine and GABA-A signaling, and reduction of sensory stimuli, among other processes. The computational model shows that during aging, a GABA-A deficit followed by a reduction in sensory stimuli leads to a dysregulation of neural excitability, which in the biological brain is associated with hypermetabolism, one of the earliest symptoms of Alzheimer’s disease.

