Opposing effects of selectivity and invariance in peripheral vision

2021, Vol 12 (1)
Author(s): Corey M. Ziemba, Eero P. Simoncelli

Sensory processing necessitates discarding some information in service of preserving and reformatting more behaviorally relevant information. Sensory neurons seem to achieve this by responding selectively to particular combinations of features in their inputs, while averaging over or ignoring irrelevant combinations. Here, we expose the perceptual implications of this tradeoff between selectivity and invariance, using stimuli and tasks that explicitly reveal their opposing effects on discrimination performance. We generate texture stimuli with statistics derived from natural photographs, and ask observers to perform two different tasks: discrimination between images drawn from families with different statistics, and discrimination between image samples with identical statistics. For both tasks, the performance of an ideal observer improves with stimulus size. In contrast, humans become better at family discrimination but worse at sample discrimination. We demonstrate through simulations that these behaviors arise naturally in an observer model that relies on a common set of physiologically plausible local statistical measurements for both tasks.
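As an illustration of the tradeoff the abstract describes, the sketch below implements a toy observer that pools a handful of simple local statistics (mean, variance, neighboring-pixel correlations) over image patches. These statistics, the patch size, and the distance-based decision are illustrative assumptions, not the authors' actual measurement set. Pooling over a larger image sharpens estimates of family-level statistics while discarding the sample-specific detail needed to tell apart images with identical statistics.

```python
import numpy as np

def local_statistics(image, patch=16):
    """Pool simple local statistics (mean, variance, horizontal and
    vertical neighbor correlations) over non-overlapping patches."""
    h, w = image.shape
    feats = []
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            p = image[i:i + patch, j:j + patch]
            feats.append([
                p.mean(),
                p.var(),
                np.corrcoef(p[:, :-1].ravel(), p[:, 1:].ravel())[0, 1],
                np.corrcoef(p[:-1, :].ravel(), p[1:, :].ravel())[0, 1],
            ])
    # Averaging over patches implements the invariance: family-level
    # statistics are kept, sample-specific detail is discarded.
    return np.mean(feats, axis=0)

def statistic_distance(img_a, img_b, patch=16):
    """Distance between pooled statistics; large values suggest the two
    images come from different texture families."""
    return np.linalg.norm(local_statistics(img_a, patch) -
                          local_statistics(img_b, patch))

# Toy usage: two noise "textures" drawn with different variances.
rng = np.random.default_rng(0)
img1 = rng.normal(0, 1.0, (128, 128))
img2 = rng.normal(0, 2.0, (128, 128))
print(statistic_distance(img1, img2))
```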

2020, Vol 2020 (16), pp. 41-1–41-7
Author(s): Orit Skorka, Paul J. Kane

Many of the metrics developed for informational imaging are useful in automotive imaging, since many of the tasks – for example, object detection and identification – are similar. This work discusses sensor characterization parameters for the Ideal Observer SNR model, and elaborates on the noise power spectrum. It presents cross-correlation analysis results for matched-filter detection of a tribar pattern in sets of resolution target images that were captured with three image sensors over a range of illumination levels. Lastly, the work compares the cross-correlation data to predictions made by the Ideal Observer model and demonstrates good agreement between the two methods in the relative evaluation of detection capabilities.
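A minimal sketch of matched-filter detection by cross-correlation, in the spirit of the analysis described above. The bar target, noise level, and peak-to-background detection statistic are invented for illustration; this is not the paper's Ideal Observer SNR formulation.

```python
import numpy as np
from scipy.signal import correlate2d

def matched_filter_snr(image, template):
    """Cross-correlate a noisy image with a known target template and
    return a simple detection statistic: the peak correlation divided by
    the standard deviation of the correlation map with the peak removed."""
    t = template - template.mean()
    xcorr = correlate2d(image - image.mean(), t, mode='same')
    peak = xcorr.max()
    background = np.delete(xcorr.ravel(), xcorr.argmax()).std()
    return peak / background

# Toy usage: a tribar-like target of three vertical bars in Gaussian noise.
rng = np.random.default_rng(1)
target = np.zeros((15, 15))
target[:, 2::5] = 1.0                     # three bright bars, 5-pixel pitch
scene = rng.normal(0, 0.5, (64, 64))
scene[20:35, 20:35] += target             # embed the target in the scene
print(matched_filter_snr(scene, target))
```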


2018
Author(s): Abdellah Fourtassi, Michael C. Frank

Identifying a spoken word in a referential context requires both the ability to integrate multimodal input and the ability to reason under uncertainty. How do these tasks interact with one another? We study how adults identify novel words under joint uncertainty in the auditory and visual modalities, and we propose an ideal observer model of how cues in these modalities are combined optimally. Model predictions are tested in four experiments in which recognition takes place under various sources of uncertainty. We found that participants use both auditory and visual cues to recognize novel words. When the signal is not distorted with environmental noise, participants weight the auditory and visual cues optimally, that is, according to the relative reliability of each modality. In contrast, when one modality has noise added to it, human perceivers systematically prefer the unperturbed modality to a greater extent than the optimal model does. This work extends the literature on perceptual cue combination to the case of word recognition in a referential context. In addition, this context offers a link to the study of multimodal information in word meaning learning.
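The optimal weighting referred to above is typically the standard inverse-variance (reliability-weighted) rule for combining Gaussian cues. The snippet below shows that generic rule only, not the paper's specific word-recognition model.

```python
def combine_cues(mu_a, var_a, mu_v, var_v):
    """Reliability-weighted (inverse-variance) combination of an auditory
    and a visual estimate: the standard optimal cue-combination rule for
    independent Gaussian cues."""
    w_a = (1 / var_a) / (1 / var_a + 1 / var_v)   # weight on the auditory cue
    mu = w_a * mu_a + (1 - w_a) * mu_v            # combined estimate
    var = 1 / (1 / var_a + 1 / var_v)             # combined (reduced) variance
    return mu, var

# Toy usage: the noisier visual cue receives less weight.
print(combine_cues(mu_a=0.2, var_a=0.5, mu_v=1.0, var_v=2.0))
```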


2016
Author(s): Adrian E Radillo, Alan Veliz-Cuba, Kresimir Josic, Zachary Kilpatrick

In a constantly changing world, animals must account for environmental volatility when making decisions. To appropriately discount older, irrelevant information, they need to learn the rate at which the environment changes. We develop an ideal observer model capable of inferring the present state of the environment along with its rate of change. Key to this computation is updating the posterior probability of all possible changepoint counts. This computation can be challenging, as the number of possibilities grows rapidly with time. However, we show how the computations can be simplified in the continuum limit by a moment closure approximation. The resulting low-dimensional system can be used to infer the environmental state and change rate with accuracy comparable to the ideal observer. The approximate computations can be performed by a neural network model via a rate-correlation-based plasticity rule. We thus show how optimal observers accumulate evidence in changing environments, and map this computation to reduced models that perform inference using plausible neural mechanisms.
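A minimal sketch of the kind of changepoint-count computation described, for a two-state environment with Gaussian observations. The discrete-time form, the Beta(1, 1) prior on the hazard rate, and the toy parameters are illustrative assumptions rather than the paper's exact equations.

```python
import numpy as np

def update_posterior(post, likelihoods, n):
    """One step of a discrete-time ideal-observer update over
    (state, changepoint count). post[s, a] is the probability that the
    current state is s and that a change points have occurred so far;
    likelihoods[s] is proportional to p(observation | state s). A Beta(1, 1)
    prior on the hazard rate gives a predictive change probability of
    (a + 1) / (n + 2) after n observations."""
    n_states, n_counts = post.shape
    new = np.zeros_like(post)
    for a in range(n_counts):
        h_change = (a + 1) / (n + 2)
        new[:, a] += post[:, a] * (1 - h_change)
        if a + 1 < n_counts:
            # For two states, a change point moves mass to the other state.
            new[::-1, a + 1] += post[:, a] * h_change
    new *= likelihoods[:, None]
    return new / new.sum()

# Toy usage: two states emitting Gaussian observations with means -1 and +1.
rng = np.random.default_rng(2)
T, means, sigma = 200, np.array([-1.0, 1.0]), 1.0
post = np.zeros((2, T + 1))
post[:, 0] = 0.5
state = 0
for n in range(T):
    if rng.random() < 0.05:                  # true hazard rate: 0.05
        state = 1 - state
    x = rng.normal(means[state], sigma)
    lik = np.exp(-(x - means) ** 2 / (2 * sigma ** 2))
    post = update_posterior(post, lik, n)
print("P(state = +1):", post[1].sum())
```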


Perception, 1996, Vol 25 (1_suppl), pp. 2-2
Author(s): A J Ahumada

Letting external noise rather than internal noise limit discrimination performance allows information to be extracted about the observer's stimulus classification rule. A perceptual classification image is the correlation over trials between the noise amplitude at a spatial location and the observer's responses. If, for example, the observer followed the rule of the ideal observer, the response correlation image would be an estimate of the ideal observer filter, the difference between the two unmasked images being discriminated. Perceptual classification images were estimated for a Vernier discrimination task. The display screen had 48 pixels deg^-1 horizontally and vertically. The no-offset image had a dark horizontal line of 4 pixels, a 1 pixel space, and 4 more dark pixels. In the offset image, the second line was one pixel higher. Classification images were based on 1600 discrimination trials with the line contrast adjusted to keep the error rate near 25%. Unlike the ideal observer filter (a horizontal dipole), the observer's perceptual classification images are strongly oriented. Fourier transforms of the classification images had a peak amplitude near 1 cycle deg^-1 and an orientation near 25 deg. The spatial spread is much greater than image blur predicts, and probably indicates the spatial position uncertainty in the task.
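A small sketch of how a perceptual classification image of this kind can be computed: correlate, pixel by pixel across trials, the external noise with the observer's binary response. The simulated template observer, image size, and trial count below are made up for illustration; they are not Ahumada's stimuli or data.

```python
import numpy as np

def classification_image(noise_fields, responses):
    """Estimate a perceptual classification image: for each pixel, the
    correlation across trials between the external noise amplitude and the
    observer's binary response (+1 / -1)."""
    noise_fields = np.asarray(noise_fields, dtype=float)   # (trials, h, w)
    responses = np.asarray(responses, dtype=float)         # (trials,)
    r = responses - responses.mean()
    n = noise_fields - noise_fields.mean(axis=0)
    cov = (n * r[:, None, None]).mean(axis=0)
    return cov / (noise_fields.std(axis=0) * responses.std() + 1e-12)

# Toy usage: a simulated observer that weights a small horizontal-bar template.
rng = np.random.default_rng(3)
template = np.zeros((16, 16))
template[8, 4:12] = 1.0
noise = rng.normal(0, 1, (2000, 16, 16))
resp = np.sign((noise * template).sum(axis=(1, 2)) + rng.normal(0, 2, 2000))
ci = classification_image(noise, resp)
print(ci.shape, ci[8, 4:12].mean(), ci[0, :4].mean())   # template pixels stand out
```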


1997, Vol 104 (3), pp. 524-553
Author(s): Gordon E. Legge, Timothy S. Klitz, Bosco S. Tjan

2015, Vol 114 (2), pp. 808-817
Author(s): Nicolaas A. J. Puts, Ashley D. Harris, Deana Crocetti, Carrie Nettles, Harvey S. Singer, ...

Tourette Syndrome (TS) is characterized by the presence of chronic tics. Individuals with TS often report difficulty with ignoring (habituating to) tactile sensations, and some patients perceive that this contributes to a “premonitory urge” to tic. Although such difficulties are common, the physiological basis of impaired tactile processing in TS, and indeed of tics themselves, remains poorly understood. It has been well established that GABAergic processing plays an important role in shaping the neurophysiological response to tactile stimulation. Furthermore, multiple lines of evidence suggest that a deficit in GABAergic transmission may contribute to symptoms found in TS. In this study, GABA-edited magnetic resonance spectroscopy (MRS) was combined with a battery of vibrotactile tasks to investigate the role of GABA and atypical sensory processing in children with TS. Our results show reduced primary sensorimotor cortex (SM1) GABA concentration in children with TS compared with healthy control subjects (HC), as well as patterns of impaired performance on tactile detection and adaptation tasks, consistent with altered GABAergic function. Moreover, in children with TS, SM1 GABA concentration correlated with motor tic severity, linking the core feature of TS directly to in vivo brain neurochemistry. The typical correlation between GABA and frequency discrimination performance seen in HC was absent in TS. These data suggest that reduced GABA concentration may contribute to both motor tics and sensory impairments in children with TS. Understanding the mechanisms of altered sensory processing in TS may provide a foundation for novel interventions to alleviate these symptoms.


2020
Author(s): Rotem Ruach, Shai Yellinek, Eyal Itskovits, Alon Zaslaver

Efficient navigation based on chemical cues is an essential feature shared by all animals. These cues may be encountered in complex spatio-temporal patterns and with intensities that vary by orders of magnitude. Nevertheless, sensory neurons accurately extract the relevant information from such perplexing signals. Here, we show how a single sensory neuron in C. elegans worms can cell-autonomously encode complex stimulus patterns composed of instantaneous sharp changes and of slowly-changing continuous gradients. This encoding relies on a simple negative feedback in the GPCR signaling pathway, in which TAX-6/Calcineurin plays a key role in mediating the feedback inhibition. Crucially, this negative feedback pathway supports several important coding features that underlie an efficient navigation strategy, including exact adaptation and adaptation to the magnitude of the gradient’s first derivative. A simple mathematical model accurately captured the fine neural dynamics of both wild-type and tax-6 mutant animals, further highlighting how the calcium-dependent activity of TAX-6/Calcineurin dictates GPCR inhibition and response dynamics. As GPCRs are ubiquitously expressed in all sensory neurons, this mechanism may be a universal solution for efficient cell-autonomous coding of external stimuli.
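The exact-adaptation property mentioned above is the hallmark of integral negative feedback. The sketch below is a generic integral-feedback model, not the authors' fitted model: a fast response variable driven by the stimulus, and a slow feedback variable (standing in here for the calcineurin-mediated inhibition) that integrates the response. Constant inputs then produce only transient responses, while a ramp produces a sustained response proportional to its slope.

```python
import numpy as np

def simulate(stimulus, dt=0.01, tau_x=0.2, k=2.0):
    """Minimal integral-feedback model: x responds quickly to the stimulus
    minus an inhibitory feedback y, and y slowly integrates x. Integral
    feedback gives exact adaptation: x returns to baseline for any
    constant input, regardless of its level."""
    x, y = 0.0, 0.0
    out = []
    for s in stimulus:
        x += dt * (s - y - x) / tau_x   # fast activation minus feedback
        y += dt * k * x                 # slow feedback integrates activity
        out.append(x)
    return np.array(out)

# Toy usage: a step gives a transient response that adapts away, while the
# later slow ramp gives a sustained response proportional to its slope.
t = np.arange(0, 20, 0.01)
stim = np.where(t < 5, 0.0, 1.0) + np.where(t > 10, 0.1 * (t - 10), 0.0)
x = simulate(stim)
print(x[int(4 / 0.01)], x[int(5.2 / 0.01)], x[int(15 / 0.01)])
```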


2018
Author(s): T. Meindertsma, N.A. Kloosterman, A.K. Engel, E.J. Wagenmakers, T.H. Donner

Learning the statistical structure of the environment is crucial for adaptive behavior. Human and non-human decision-makers seem to track such structure through a process of probabilistic inference, which enables predictions about behaviorally relevant events. Deviations from such predictions cause surprise, which in turn helps improve inference. Surprise about the timing of behaviorally relevant sensory events drives phasic responses of neuromodulatory brainstem systems, which project to the cerebral cortex. Here, we developed a computational model-based magnetoencephalography (MEG) approach for mapping the resulting cortical transients across space, time, and frequency in the human brain (N=28, 17 female). We used a Bayesian ideal observer model to learn the statistics of the timing of changes in a simple visual detection task. This model yielded quantitative trial-by-trial estimates of temporal surprise. The model-based surprise variable predicted trial-by-trial variations in reaction time more strongly than the externally observable interval timings alone. Trial-by-trial variations in surprise were negatively correlated with the power of cortical population activity measured with MEG. This surprise-related power suppression occurred transiently around the behavioral response, specifically in the beta frequency band. It peaked in parietal and prefrontal cortices, remote from the motor cortical suppression of beta power related to overt report (button press) of change detection. Our results indicate that surprise about sensory event timing transiently suppresses ongoing beta-band oscillations in association cortex. This transient suppression of frontal beta-band oscillations might reflect an active reset triggered by surprise, and is in line with the idea that beta oscillations help maintain cognitive sets.

Significance statement
The brain continuously tracks the statistical structure of the environment to anticipate behaviorally relevant events. Deviations from such predictions cause surprise, which in turn drives neural activity in subcortical brain regions that project to the cerebral cortex. We used magnetoencephalography in humans to map out surprise-related modulations of cortical population activity across space, time, and frequency. Surprise was elicited by variable timing of visual stimulus changes requiring a behavioral response. Surprise was quantified by means of an ideal observer model. Surprise predicted behavior as well as a transient suppression of beta frequency band oscillations in frontal cortical regions. Our results are in line with conceptual accounts that have linked neural oscillations in the beta band to the maintenance of cognitive sets.
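A toy stand-in for the surprise computation described above: Shannon surprise, -log p, of each observed change time under a running, Dirichlet-smoothed histogram of past intervals. The histogram learner, bin count, and gamma-distributed intervals are illustrative assumptions, not the paper's Bayesian ideal observer.

```python
import numpy as np

def timing_surprise(intervals, n_bins=20, t_max=10.0, alpha=1.0):
    """Trial-by-trial Shannon surprise about event timing under a simple
    learned interval distribution: a Dirichlet-smoothed histogram of the
    intervals seen so far, updated after every trial."""
    edges = np.linspace(0, t_max, n_bins + 1)
    counts = np.full(n_bins, alpha)               # Dirichlet pseudo-counts
    out = []
    for t in intervals:
        b = int(np.clip(np.searchsorted(edges, t) - 1, 0, n_bins - 1))
        p = counts[b] / counts.sum()              # predicted probability
        out.append(-np.log(p))                    # surprise = -log p
        counts[b] += 1                            # then learn from this trial
    return np.array(out)

# Toy usage: interval statistics shift partway through, so surprise rises.
rng = np.random.default_rng(4)
ints = np.concatenate([rng.gamma(2.0, 1.0, 100), rng.gamma(6.0, 1.0, 20)])
s = timing_surprise(np.clip(ints, 1e-3, 9.99))
print(s[:100].mean(), s[100:].mean())
```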


2019
Author(s): Florian A. Dehmelt, Rebecca Meier, Julian Hinz, Takeshi Yoshimatsu, Clara A. Simacek, ...

Many animals have large visual fields, and sensory circuits may sample those regions of visual space most relevant to behaviours such as gaze stabilisation and hunting. Despite this, relatively small displays are often used in vision neuroscience. To sample stimulus locations across most of the visual field, we built a spherical stimulus arena with 14,848 independently controllable LEDs, measured the optokinetic response gain of immobilised zebrafish larvae, and related behaviour to previously published retinal photoreceptor densities. We measured tuning to steradian stimulus size and spatial frequency, and show it to be independent of visual field position. However, zebrafish react most strongly and consistently to lateral, nearly equatorial stimuli, consistent with previously reported higher spatial densities of red, green and blue photoreceptors in the central retina. Upside-down experiments suggest further extra-retinal processing. Our results demonstrate that motion vision circuits in zebrafish are anisotropic, and preferentially monitor areas with putative behavioural relevance.

Author summary
The visual system of larval zebrafish mirrors many features present in the visual system of other vertebrates, including its ability to mediate optomotor and optokinetic behaviour. Although the presence of such behaviours and some of the underlying neural correlates have been firmly established, previous experiments did not consider the large visual field of zebrafish, which covers more than 160° for each eye. Given that different parts of the visual field likely carry unequal amounts of behaviourally relevant information for the animal, this raises the question of whether optic flow is integrated across the entire visual field or just parts of it, and how this shapes behaviour such as the optokinetic response. We constructed a spherical LED arena to present visual stimuli almost anywhere across their visual field, while tracking horizontal eye movements. By displaying moving gratings on this LED arena, we demonstrate that the optokinetic response, one of the most prominent visually induced behaviours of zebrafish, indeed strongly depends on stimulus location and stimulus size, as well as on other parameters such as the spatial and temporal frequency of the gratings. This location dependence is consistent with areas of high retinal photoreceptor density, though evidence suggests further extra-retinal processing.
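As a small illustration of the behavioural read-out used here, optokinetic response gain is commonly computed as the ratio of slow-phase eye velocity to stimulus velocity, after discarding the fast resetting phases. The sketch below uses a simple velocity threshold and a synthetic eye trace; both are assumptions for illustration, not the authors' analysis pipeline.

```python
import numpy as np

def okr_gain(eye_pos, stim_velocity, dt=0.01, saccade_thresh=100.0):
    """Optokinetic response gain: median slow-phase eye velocity divided by
    stimulus velocity. Fast phases (saccade-like resets) are excluded with a
    simple velocity threshold (deg/s)."""
    eye_vel = np.gradient(eye_pos, dt)
    slow = np.abs(eye_vel) < saccade_thresh
    return np.median(eye_vel[slow]) / stim_velocity

# Toy usage: eye tracks at 8 deg/s against a 10 deg/s grating, with resets.
dt, stim_v = 0.01, 10.0
t = np.arange(0, 10, dt)
eye = 8.0 * t % 15.0                  # slow phases interrupted by fast resets
print(okr_gain(eye, stim_v, dt))      # about 0.8 once resets are discarded
```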


2021
Author(s): Garry Kong, David Aagten-Murphy, Jessica MV McMaster, Paul M Bays

Our knowledge about objects in our environment reflects an integration of current visual input with information from preceding gaze fixations. Such a mechanism may reduce uncertainty, but requires the visual system to determine which information obtained in different fixations should be combined or kept separate. To investigate the basis of this decision, we conducted three experiments. Participants viewed a stimulus in their peripheral vision, then made a saccade that shifted the object into the opposite hemifield. During the saccade, the object underwent changes of varying magnitude in two feature dimensions (Experiment 1: color and location, Experiments 2 and 3: color and orientation). Participants reported whether they detected any change and estimated one of the post-saccadic features. Integration of pre-saccadic with post-saccadic input was observed as a bias in estimates towards the pre-saccadic feature value. In all experiments, pre-saccadic bias weakened as the magnitude of the transsaccadic change in the estimated feature increased. Changes in the other feature, despite having a similar probability of detection, had no effect on integration. Results were quantitatively captured by an observer model where the decision whether to integrate information from sequential fixations is made independently for each feature and coupled to awareness of a feature change.
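A toy version of the kind of observer model described in the abstract, for a single feature: noisy pre- and post-saccadic measurements are integrated by reliability weighting only when their difference stays below a detection criterion. The Gaussian noise levels, the fixed criterion, and the specific numbers are illustrative assumptions, not the fitted model from the paper.

```python
import numpy as np

def transsaccadic_estimate(pre, post, noise_pre, noise_post, criterion, rng):
    """Toy observer for one feature: compare noisy pre- and post-saccadic
    measurements; if their difference stays below the criterion the change
    goes unnoticed and the two are integrated by reliability weighting,
    otherwise only the post-saccadic measurement is used."""
    m_pre = pre + rng.normal(0, noise_pre)
    m_post = post + rng.normal(0, noise_post)
    detected = abs(m_post - m_pre) > criterion
    if detected:
        return m_post, detected
    w = noise_post ** 2 / (noise_pre ** 2 + noise_post ** 2)   # weight on pre
    return w * m_pre + (1 - w) * m_post, detected

# Toy usage: small changes are mostly integrated, so estimates are biased
# towards the pre-saccadic value; large changes are detected and the bias
# largely disappears.
rng = np.random.default_rng(5)
for change in (2.0, 30.0):
    est, det = zip(*[transsaccadic_estimate(0.0, change, 5.0, 5.0, 15.0, rng)
                     for _ in range(5000)])
    print(change, np.mean(est) - change, np.mean(det))
```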

