Separable Influences of Reward on Visual Processing and Choice

2021 ◽  
Vol 33 (2) ◽  
pp. 248-262
Author(s):  
Alireza Soltani ◽  
Mohsen Rakhshan ◽  
Robert J. Schafer ◽  
Brittany E. Burrows ◽  
Tirin Moore

Primate vision is characterized by constant, sequential processing and selection of visual targets to fixate. Although expected reward is known to influence both processing and selection of visual targets, similarities and differences between these effects remain unclear mainly because they have been measured in separate tasks. Using a novel paradigm, we simultaneously measured the effects of reward outcomes and expected reward on target selection and sensitivity to visual motion in monkeys. Monkeys freely chose between two visual targets and received a juice reward with varying probability for eye movements made to either of them. Targets were stationary apertures of drifting gratings, causing the end points of eye movements to these targets to be systematically biased in the direction of motion. We used this motion-induced bias as a measure of sensitivity to visual motion on each trial. We then performed different analyses to explore effects of objective and subjective reward values on choice and sensitivity to visual motion to find similarities and differences between reward effects on these two processes. Specifically, we used different reinforcement learning models to fit choice behavior and estimate subjective reward values based on the integration of reward outcomes over multiple trials. Moreover, to compare the effects of subjective reward value on choice and sensitivity to motion directly, we considered correlations between each of these variables and integrated reward outcomes on a wide range of timescales. We found that, in addition to choice, sensitivity to visual motion was also influenced by subjective reward value, although the motion was irrelevant for receiving reward. Unlike choice, however, sensitivity to visual motion was not affected by objective measures of reward value. Moreover, choice was determined by the difference in subjective reward values of the two options, whereas sensitivity to motion was influenced by the sum of values. Finally, models that best predicted visual processing and choice used sets of estimated reward values based on different types of reward integration and timescales. Together, our results demonstrate separable influences of reward on visual processing and choice, and point to the presence of multiple brain circuits for the integration of reward outcomes.
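
To make the modeling approach concrete, below is a minimal sketch (not the authors' actual fitting procedure) of a reinforcement-learning account in which subjective values are updated from reward outcomes, choice depends on the difference between the two values, and sensitivity to motion scales with their sum. The learning rate alpha, inverse temperature beta, and scaling constant k are illustrative assumptions.

```python
import numpy as np

def simulate_session(rewards_t1, rewards_t2, alpha=0.2, beta=5.0, k=0.5, seed=0):
    """Sketch of a value-based account: subjective values v1, v2 are updated from
    reward outcomes; choice follows a softmax on the value difference, while
    sensitivity to motion is modeled as scaling with the value sum."""
    rng = np.random.default_rng(seed)
    v1, v2 = 0.5, 0.5
    choices, sensitivities = [], []
    for r1, r2 in zip(rewards_t1, rewards_t2):
        p_choose_1 = 1.0 / (1.0 + np.exp(-beta * (v1 - v2)))  # choice: value difference
        chose_1 = rng.random() < p_choose_1
        sensitivities.append(k * (v1 + v2))                    # motion sensitivity: value sum (assumed linear)
        if chose_1:
            v1 += alpha * (r1 - v1)                            # update only the chosen option
        else:
            v2 += alpha * (r2 - v2)
        choices.append(chose_1)
    return np.array(choices), np.array(sensitivities)
```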


Perception ◽  
1997 ◽  
Vol 26 (7) ◽  
pp. 823-830 ◽  
Author(s):  
Lothar Spillmann ◽  
Stuart Anstis ◽  
Anne Kurtenbach ◽  
Ian Howard

A random-dot field undergoing counterphase flicker paradoxically appears to move in the same direction as head and eye movements, ie opposite to the optic-flow field. The effect is robust and occurs over a wide range of flicker rates and pixel sizes. The phenomenon can be explained by reversed phi motion caused by apparent pixel movement between successive retinal images. The reversed motion provides a positive feedback control of the display, whereas under normal conditions retinal signals provide a negative feedback. This altered polarity invokes self-sustaining eye movements akin to involuntary optokinetic nystagmus.


2010 ◽  
Vol 22 (5) ◽  
pp. 1312-1332 ◽  
Author(s):  
Samat Moldakarimov ◽  
Maxim Bazhenov ◽  
Terrence J. Sejnowski

Perceiving and identifying an object is improved by prior exposure to the object. This perceptual priming phenomenon is accompanied by reduced neural activity. But whether suppression of neuronal activity with priming is responsible for the improvement in perception is unclear. To address this problem, we developed a rate-based network model of visual processing. In the model, decreased neural activity following priming was due to stimulus-specific sharpening of representations taking place in the early visual areas. Representation sharpening led to decreased interference of representations in higher visual areas that facilitated selection of one of the competing representations, thereby improving recognition. The model explained a wide range of psychophysical and physiological data observed in priming experiments, including antipriming phenomena, and predicted two functionally distinct stages of visual processing.
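
A minimal sketch of the representation-sharpening idea described above (the Gaussian tuning widths, the two-object readout, and the contrast measure are illustrative assumptions, not the published model): sharper early responses after priming produce less total activity yet overlap less with a competing object template, so the higher-area readout separates the alternatives more cleanly.

```python
import numpy as np

def early_response(stimulus, prefs, width):
    """'Early' visual area: Gaussian population response; a smaller width mimics
    the sharper (and overall weaker) representation after priming."""
    return np.exp(-((prefs - stimulus) ** 2) / (2 * width ** 2))

def readout_contrast(early, prefs, feat_a=0.0, feat_b=10.0, template_width=12.0):
    """'Higher' visual area: two object units read the early response through
    Gaussian templates; a larger contrast between their inputs means less
    interference and easier selection of the correct representation."""
    w_a = np.exp(-((prefs - feat_a) ** 2) / (2 * template_width ** 2))
    w_b = np.exp(-((prefs - feat_b) ** 2) / (2 * template_width ** 2))
    in_a, in_b = early @ w_a, early @ w_b
    return (in_a - in_b) / (in_a + in_b)

prefs = np.linspace(-40, 40, 81)                  # preferred feature values (arbitrary units)
unprimed = early_response(0.0, prefs, width=15.0)
primed = early_response(0.0, prefs, width=6.0)    # sharper tuning after priming
print(unprimed.sum(), primed.sum())               # total activity drops with priming
print(readout_contrast(unprimed, prefs), readout_contrast(primed, prefs))  # selection contrast rises
```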


1997 ◽  
Vol 14 (2) ◽  
pp. 323-338 ◽  
Author(s):  
Vincent P. Ferrera ◽  
Stephen G. Lisberger

As a step toward understanding the mechanism by which targets are selected for smooth-pursuit eye movements, we examined the behavior of the pursuit system when monkeys were presented with two discrete moving visual targets. Two rhesus monkeys were trained to select a small moving target identified by its color in the presence of a moving distractor of another color. Smooth-pursuit eye movements were quantified in terms of the latency of the eye movement and the initial eye acceleration profile. We have previously shown that the latency of smooth pursuit, which is normally around 100 ms, can be extended to 150 ms or shortened to 85 ms depending on whether there is a distractor moving in the opposite or same direction, respectively, relative to the direction of the target. We have now measured this effect for a 360 deg range of distractor directions, and distractor speeds of 5–45 deg/s. We have also examined the effect of varying the spatial separation and temporal asynchrony between target and distractor. The results indicate that the effect of the distractor on the latency of pursuit depends on its direction of motion, and its spatial and temporal proximity to the target, but depends very little on the speed of the distractor. Furthermore, under the conditions of these experiments, the direction of the eye movement that is emitted in response to two competing moving stimuli is not a vectorial combination of the stimulus motions, but is solely determined by the direction of the target. The results are consistent with a competitive model for smooth-pursuit target selection and suggest that the competition takes place at a stage of the pursuit pathway that is between visual-motion processing and motor-response preparation.
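
As a toy illustration of the competitive (winner-take-all) account favored above: the sketch assumes the initial pursuit direction follows the selected target alone, while latency varies with the angle between target and distractor motion. The 85 ms and 150 ms endpoints come from the abstract; the cosine interpolation between them is an assumption.

```python
import numpy as np

def predicted_pursuit(target_dir_deg, distractor_dir_deg, same_ms=85.0, opposite_ms=150.0):
    """Winner-take-all sketch: pursuit direction equals the target's direction
    (no vector averaging of target and distractor motion), while latency grows
    with the angular conflict between target and distractor directions."""
    conflict = 0.5 * (1.0 - np.cos(np.deg2rad(target_dir_deg - distractor_dir_deg)))  # 0 = same, 1 = opposite
    latency_ms = same_ms + (opposite_ms - same_ms) * conflict
    return target_dir_deg, latency_ms

print(predicted_pursuit(0, 0))     # same-direction distractor: shortest latency
print(predicted_pursuit(0, 180))   # opposite-direction distractor: longest latency
```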


2020 ◽  
Author(s):  
Han Zhang

Mind-wandering (MW) is ubiquitous and is associated with reduced performance across a wide range of tasks. Recent studies have shown that MW can be related to changes in gaze parameters. In this dissertation, I explored the link between eye movements and MW in three different contexts that involve complex cognitive processing: visual search, scene perception, and reading comprehension. Study 1 examined how MW affects visual search performance, particularly the ability to suppress salient but irrelevant distractors during visual search. Study 2 used a scene-encoding task to examine how MW affects the way eye movements change over time and their relationship with scene content. Study 3 examined how MW affects readers’ ability to detect semantic incongruities in the text and revise their understanding as needed while reading jokes. All three studies showed that MW was associated with decreased task performance at the behavioral level (e.g., response time, recognition, and recall). Eye-tracking further showed that these behavioral costs can be traced to deficits in specific cognitive processes. The final chapter of this dissertation explored whether there are context-independent eye movement features of MW. MW manifests itself in different ways depending on task characteristics. In tasks that require extensive sampling of the stimuli (e.g., reading and scene viewing), MW was related to a global reduction in visual processing. But this was not the case for the search task, which involved speeded, simple visual processing. MW was instead related to increased looking time on the target after it was already located. MW affects the coupling between cognitive effort and task demands, but the nature of this decoupling depends on the specific features of particular tasks.


2021 ◽  
Vol 125 (5) ◽  
pp. 1552-1576
Author(s):  
David Souto ◽  
Dirk Kerzel

People’s eyes are directed at objects of interest with the aim of acquiring visual information. However, processing this information is constrained in capacity, requiring task-driven and salience-driven attentional mechanisms to select a few among the many available objects. A wealth of behavioral and neurophysiological evidence has demonstrated that visual selection and the motor selection of saccade targets rely on shared mechanisms. This coupling supports the premotor theory of visual attention put forth more than 30 years ago, which postulates visual selection as a necessary stage in motor selection. In this review, we examine the extent to which the coupling of visual and motor selection observed with saccades is replicated during ocular tracking. Ocular tracking combines catch-up saccades and smooth pursuit to foveate a moving object. We find evidence that ocular tracking requires visual selection of the speed and direction of the moving target, but the position of the motion signal may not coincide with the position of the pursuit target. Further, visual and motor selection can be spatially decoupled when pursuit is initiated (open-loop pursuit). We propose that a main function of coupled visual and motor selection is to serve the coordination of catch-up saccades and pursuit eye movements. A simple race-to-threshold model is proposed to explain the variable coupling of visual selection during pursuit, catch-up saccades, and regular saccades, while generating testable predictions. We discuss pending issues, such as disentangling visual selection from preattentive visual processing and response selection, and pinpointing the visual selection mechanisms, which have begun to be addressed in the neurophysiological literature.
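
A minimal race-to-threshold sketch in the spirit of the model mentioned above (the number of accumulators, drift rates, noise level, and threshold are all illustrative assumptions): independent accumulators rise toward a common bound, and whichever crosses first determines which target is selected and when.

```python
import numpy as np

rng = np.random.default_rng(0)

def race_to_threshold(drift_rates, threshold=1.0, noise_sd=0.05, dt=0.001, max_t=2.0):
    """Independent accumulators race to a common threshold; the first to cross
    determines which target is selected and the selection time."""
    x = np.zeros(len(drift_rates))
    t = 0.0
    while t < max_t:
        x += np.asarray(drift_rates) * dt + noise_sd * np.sqrt(dt) * rng.standard_normal(x.size)
        t += dt
        crossed = np.flatnonzero(x >= threshold)
        if crossed.size:
            return int(crossed[x[crossed].argmax()]), t
    return None, max_t

winner, rt = race_to_threshold([1.2, 0.8])   # e.g., stronger drive for the pursuit target
```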


Perception ◽  
1997 ◽  
Vol 26 (1_suppl) ◽  
pp. 124-124
Author(s):  
H Deubel ◽  
S Shimojo ◽  
I Paprotta

Previous research has demonstrated that visual attention is focused on the movement target, both before saccadic eye movements and before manual reaching, allowing for spatially selective object recognition (Deubel and Schneider, 1996 Vision Research 36 1827 – 1837; Deubel, Schneider, and Paprotta, 1996 Perception Supplement, 13 – 19). Here we study the illusory line motion effect (Hikosaka et al, 1993 Vision Research 33 1219 – 1240) in a dual-task paradigm to further investigate the coupling of attention and movement target selection. Subjects were presented with a display containing two potential movement targets (small circles). When one of the circles flashed, they performed a reaching movement with the unseen hand to the other stimulus; movements were registered with a Polhemus FastTrack system. At an SOA that was varied between 0 and 1000 ms after the movement cue, a line appeared and connected both stimuli. After the reaching movement, subjects indicated the perceived direction of line motion. In a second experiment, saccadic eye movements were studied instead of reaching movements. The data show that for short SOAs the subjects reported illusory line motion away from the cue location, indicating that attention is automatically drawn to the cue. For longer SOAs, but well before movement onset, the illusory motion effect inverted, providing evidence for an attention shift to the movement target. The findings were very similar for manual reaching and for saccadic eye movements. The results confirm the hypothesis that the preparation of a goal-directed movement requires the attentional selection of the movement target. We discuss the assumption of a unitary attention mechanism which selects an object for visual processing, and simultaneously provides the information necessary for goal-directed motor action such as saccades, pointing, and grasping.


2019 ◽  
Vol 121 (2) ◽  
pp. 646-661 ◽  
Author(s):  
Marie E. Bellet ◽  
Joachim Bellet ◽  
Hendrikje Nienborg ◽  
Ziad M. Hafed ◽  
Philipp Berens

Saccades are ballistic eye movements that rapidly shift gaze from one location of visual space to another. Detecting saccades in eye movement recordings is important not only for studying the neural mechanisms underlying sensory, motor, and cognitive processes, but also as a clinical and diagnostic tool. However, automatically detecting saccades can be difficult, particularly when such saccades are generated in coordination with other tracking eye movements, like smooth pursuits, or when the saccade amplitude is close to eye tracker noise levels, like with microsaccades. In such cases, labeling by human experts is required, but this is a tedious task prone to variability and error. We developed a convolutional neural network to automatically detect saccades at human-level accuracy and with minimal training examples. Our algorithm surpasses the state of the art according to common performance metrics and could facilitate studies of neurophysiological processes underlying saccade generation and visual processing. NEW & NOTEWORTHY Detecting saccades in eye movement recordings can be a difficult task, but it is a necessary first step in many applications. We present a convolutional neural network that can automatically identify saccades with human-level accuracy and with minimal training examples. We show that our algorithm performs better than other available algorithms by comparing performance on a wide range of data sets. We offer an open-source implementation of the algorithm as well as a web service.
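
As a rough sketch of this kind of approach (the layer sizes, kernel widths, and velocity-channel input below are assumptions, not the published architecture), a small 1-D convolutional network can map eye-velocity traces to per-sample saccade probabilities, which can then be thresholded into detected saccade episodes.

```python
import torch
import torch.nn as nn

class SaccadeNet(nn.Module):
    """Hypothetical 1-D CNN sketch: takes horizontal/vertical eye-velocity traces
    (channels) and outputs a per-sample probability that a saccade is ongoing."""
    def __init__(self, in_channels=2, hidden=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(in_channels, hidden, kernel_size=9, padding=4),
            nn.ReLU(),
            nn.Conv1d(hidden, hidden, kernel_size=9, padding=4),
            nn.ReLU(),
            nn.Conv1d(hidden, 1, kernel_size=1),   # per-sample saccade logit
        )

    def forward(self, x):                          # x: (batch, channels, time)
        return torch.sigmoid(self.net(x)).squeeze(1)

# e.g., 2 velocity channels sampled at 1 kHz for 1 s
probs = SaccadeNet()(torch.randn(1, 2, 1000))      # (1, 1000) per-sample probabilities
```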


1974 ◽  
Vol 71 (2-3) ◽  
pp. 209-214 ◽  
Author(s):  
Robert H. Wurtz ◽  
Charles W. Mohler

2009 ◽  
Vol 1 (3) ◽  
Author(s):  
Ozgur E. Akman ◽  
Richard A. Clement ◽  
David S. Broomhead ◽  
Sabira Mannan ◽  
Ian Moorhead ◽  
...  

The selection of fixation targets involves a combination of top-down and bottom-up processing. The role of bottom-up processing can be enhanced by using multistable stimuli because their constantly changing appearance seems to depend predominantly on stimulus-driven factors. We used this approach to investigate whether visual processing models based on V1 need to be extended to incorporate specific computations attributed to V4. Eye movements of 8 subjects were recorded during free viewing of the Marroquin pattern in which illusory circles appear and disappear. Fixations were concentrated on features arranged in concentric rings within the pattern. Comparison with simulated fixation data demonstrated that the saliency of these features can be predicted with appropriate weighting of lateral connections in existing V1 models.
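
For concreteness, here is a loose sketch of a V1-style saliency computation with adjustable lateral weighting (the Sobel-based oriented energy, Gaussian surround pooling, and the w_lateral weight are illustrative assumptions, not the specific models tested in the study); fixation candidates are taken as peaks of the resulting map.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def v1_saliency(image, n_orient=4, surround_sigma=8.0, w_lateral=1.0):
    """Minimal V1-style saliency sketch: oriented contrast energy with lateral
    (surround) suppression; fixation candidates are the peaks of the map."""
    gy, gx = sobel(image, axis=0), sobel(image, axis=1)
    mag, ang = np.hypot(gx, gy), np.arctan2(gy, gx)
    saliency = np.zeros_like(image, dtype=float)
    for k in range(n_orient):
        theta = k * np.pi / n_orient
        energy = mag * np.cos(ang - theta) ** 2              # crude oriented energy
        surround = gaussian_filter(energy, surround_sigma)   # pooled same-orientation context
        response = energy / (1.0 + w_lateral * surround)     # lateral (surround) suppression
        saliency = np.maximum(saliency, response)
    return saliency

# e.g., predicted fixation candidate = location of the saliency peak
img = np.random.rand(128, 128)
peak = np.unravel_index(np.argmax(v1_saliency(img)), img.shape)
```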

