Act Quickly, Decide Later: Long-latency Visual Processing Underlies Perceptual Decisions but Not Reflexive Behavior

2011, Vol 23 (12), pp. 3734-3745
Author(s): Jacob Jolij, H. Steven Scholte, Simon van Gaal, Timothy L. Hodgson, Victor A. F. Lamme

Humans largely guide their behavior by their visual representation of the world. Recent studies have shown that visual information can trigger behavior within 150 msec, suggesting that visually guided responses to external events in fact precede conscious awareness of those events. But is this view correct? Using a texture discrimination task, we show that the brain relies on long-latency visual processing to guide perceptual decisions. Decreasing stimulus saliency leads to selective changes in the long-latency visually evoked potential components that reflect scene segmentation, and these latency changes are accompanied by nearly equal changes in simple RTs and points of subjective simultaneity. Furthermore, we find a strong correlation between individual RTs and the latencies of the scene-segmentation-related components in the visually evoked potentials, showing that the processes underlying these late brain potentials are critical in triggering a response. However, using the same texture stimuli in an antisaccade task, we find that reflexive, yet erroneous, prosaccades, but not antisaccades, can be triggered by earlier visual processes. In other words, the brain can act quickly but decides late. The discrepancy between our results and earlier findings suggesting that action precedes conscious awareness can be explained by assuming that task demands determine whether a fast, unconscious representation or a slower, conscious one is used to initiate a visually guided response.

2020
Author(s): Amy Chow, Andrew E. Silva, Katelyn Tsang, Gabriel Ng, Cindy Ho, et al.

Abnormal visual experience during an early critical period of visual cortex development can lead to a neurodevelopmental disorder of vision called amblyopia. A key feature of amblyopia is interocular suppression, whereby information from the amblyopic eye is blocked from conscious awareness when both eyes are open. Suppression of the amblyopic eye is thought to occur at an early stage of visual processing and to be absolute. Using a binocular rivalry paradigm, we demonstrate that suppressed visual information from the amblyopic eye remains available for binocular integration and can influence overall perception of stimuli. This finding reveals that suppressed visual information continues to be represented within the brain even when it is blocked from conscious awareness by chronic pathological suppression. These results have direct implications for the clinical management of amblyopia.


Author(s): Martin V. Butz, Esther F. Kutter

While bottom-up visual processing is important, the brain integrates this information with top-down, generative expectations from very early on in the visual processing hierarchy. Indeed, the brain should not be viewed as a classification system but as a generative one, which perceives something by integrating sensory evidence with learned, predictive knowledge about that thing. The generative models involved continuously produce expectations over time, across space, and from abstract encodings to more concrete ones. Bayesian information processing is the key to understanding how such information integration must work computationally, at least approximately, in the brain. Bayesian networks in the form of graphical models allow the modularization of information and the factorization of interactions, which can strongly improve the efficiency of generative models. The resulting generative models produce state estimates in the form of probability densities, which are well suited to integrating multiple sources of information, including top-down and bottom-up ones. A hierarchical neural visual processing architecture illustrates this point further. Finally, some well-known visual illusions are shown, and the resulting percepts are explained in terms of generative, information-integrating perceptual processes that in all cases combine top-down prior knowledge and expectations about objects and environments with the available bottom-up visual information.
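To make the described integration concrete: for Gaussian beliefs, fusing a top-down expectation with bottom-up sensory evidence has a simple closed form, a precision-weighted average. The sketch below is a minimal illustration under that Gaussian assumption; the numbers and the function name are ours, not the chapter's.

```python
# Minimal sketch: Bayesian fusion of a top-down prior with bottom-up
# sensory evidence, assuming both are 1-D Gaussians so the posterior
# is available in closed form. All values are illustrative.

def integrate_gaussian(prior_mean, prior_var, obs_mean, obs_var):
    """Precision-weighted fusion of a Gaussian prior and likelihood."""
    prior_prec = 1.0 / prior_var          # precision = 1 / variance
    obs_prec = 1.0 / obs_var
    post_var = 1.0 / (prior_prec + obs_prec)
    post_mean = post_var * (prior_prec * prior_mean + obs_prec * obs_mean)
    return post_mean, post_var

# Expectation: object at position 0.0 (fairly certain, variance 1.0).
# Noisy sensory evidence: position 2.0 (less certain, variance 4.0).
mean, var = integrate_gaussian(0.0, 1.0, 2.0, 4.0)
print(f"posterior: mean={mean:.2f}, variance={var:.2f}")
# posterior: mean=0.40, variance=0.80
# The estimate is pulled toward the more reliable source, and the
# posterior variance is lower than either input: integration reduces
# uncertainty.
```

The same precision weighting generalizes to the graphical models mentioned above: each factor contributes evidence in proportion to its reliability, which is the sense in which strong top-down expectations can dominate weak sensory input.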


Author(s): Daniel Tomsic, Julieta Sztarker

Decapod crustaceans, in particular semiterrestrial crabs, are highly visual animals that rely heavily on visual information. Their responsiveness to moving visual stimuli, with behavioral displays that can be easily and reliably elicited in the laboratory, together with their sturdiness under experimental manipulation and the accessibility of their nervous system for intracellular electrophysiological recordings in the intact animal, makes decapod crustaceans excellent experimental subjects for investigating the neurobiology of visually guided behaviors. Investigations of crustaceans have elucidated the general structure of their eyes and some of their specializations, the anatomical organization of the main brain areas involved in visual processing and their retinotopic mapping of visual space, and the morphology, physiology, and stimulus-feature preferences of a number of well-identified classes of neurons, with emphasis on motion-sensitive elements. This anatomical and physiological knowledge, together with the results of behavioral experiments in the laboratory and the field, is revealing the neural circuits and computations involved in important visual behaviors, as well as the substrates and mechanisms underlying visual memories in decapod crustaceans.


2021
Author(s): Kimberly Reinhold, Arbora Resulaj, Massimo Scanziani

The behavioral state of a mammal impacts how the brain responds to visual stimuli as early as in the dorsolateral geniculate nucleus of the thalamus (dLGN), the primary relay of visual information to the cortex. A clear example of this is the markedly stronger response of dLGN neurons to higher temporal frequencies of the visual stimulus in alert as compared to quiescent animals. The dLGN receives strong feedback from the visual cortex, yet whether this feedback contributes to these state-dependent responses to visual stimuli is poorly understood. Here we show that in mice, silencing cortico-thalamic feedback abolishes state-dependent differences in the response of dLGN neurons to visual stimuli. This holds true for dLGN responses to both temporal and spatial features of the visual stimulus. These results reveal that the state-dependent shift of the response to visual stimuli in an early stage of visual processing depends on cortico-thalamic feedback.


Author(s): Martin V. Butz, Esther F. Kutter

This chapter addresses primary visual perception, detailing how visual information comes about and, as a consequence, which visual properties provide particularly useful information about the environment. The brain extracts this information systematically, and it also separates redundant and complementary aspects of visual information to improve the effectiveness of visual processing. Computationally, image smoothing, edge detectors, and motion detectors must be at work. These need to be applied in a convolutional manner across the fixated area, computations that are predestined to be solved by cortical columnar structures in the brain. At the next level, the extracted information must be integrated in order to segment and detect object structures. The brain solves this highly challenging problem by incorporating top-down expectations and by integrating complementary aspects of visual information, such as light reflections, texture information, line convergence, shadows, and depth information. In conclusion, the need to integrate top-down visual expectations to form complete and stable percepts is made explicit.
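As a concrete illustration of the convolutional computations listed above, here is a small sketch of smoothing followed by oriented edge detection applied uniformly across a toy image. The Gaussian and Sobel kernels are standard textbook choices, not kernels specified in the chapter.

```python
import numpy as np
from scipy.signal import convolve2d

# Sketch: image smoothing followed by oriented derivative (edge)
# filters, applied convolutionally across the whole image, the kind
# of uniform local computation the text assigns to cortical columns.

gauss = np.array([[1, 2, 1],
                  [2, 4, 2],
                  [1, 2, 1]], dtype=float) / 16.0   # smoothing kernel

sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)       # vertical-edge detector
sobel_y = sobel_x.T                                  # horizontal-edge detector

# Toy image: dark left half, bright right half, i.e. one vertical edge.
image = np.zeros((8, 8))
image[:, 4:] = 1.0

smoothed = convolve2d(image, gauss, mode="same", boundary="symm")
gx = convolve2d(smoothed, sobel_x, mode="same", boundary="symm")
gy = convolve2d(smoothed, sobel_y, mode="same", boundary="symm")
edge_strength = np.hypot(gx, gy)   # gradient magnitude per pixel

print(edge_strength.round(2))      # strongest responses around columns 3-4
```

A motion detector follows the same pattern with a third, temporal dimension: the same small kernel is slid over every location, so one local computation is reused everywhere.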


2005, Vol 17 (8), pp. 1341-1352
Author(s): Joseph B. Hopfinger, Anthony J. Ries

Recent studies have generated debate regarding whether reflexive attention mechanisms are triggered in a purely automatic stimulus-driven manner. Behavioral studies have found that a nonpredictive “cue” stimulus will speed manual responses to subsequent targets at the same location, but only if that cue is congruent with actively maintained top-down settings for target detection. When a cue is incongruent with top-down settings, response times are unaffected, and this has been taken as evidence that reflexive attention mechanisms were never engaged in those conditions. However, manual response times may mask effects on earlier stages of processing. Here, we used event-related potentials to investigate the interaction of bottom-up sensory-driven mechanisms and top-down control settings at multiple stages of processing in the brain. Our results dissociate sensory-driven mechanisms that automatically bias early stages of visual processing from later mechanisms that are contingent on top-down control. An early enhancement of target processing in the extrastriate visual cortex (i.e., the P1 component) was triggered by the appearance of a unique bright cue, regardless of top-down settings. The enhancement of visual processing was prolonged, however, when the cue was congruent with top-down settings. Later processing in posterior temporal-parietal regions (i.e., the ipsilateral invalid negativity) was triggered automatically when the cue consisted of the abrupt appearance of a single new object. However, in cases where more than a single object appeared during the cue display, this stage of processing was contingent on top-down control. These findings provide evidence that visual information processing is biased at multiple levels in the brain, and the results distinguish automatically triggered sensory-driven mechanisms from those that are contingent on top-down control settings.


2020
Author(s): Sanjeev Nara, Mikel Lizarazu, Craig G Richter, Diana C Dima, Mathieu Bourguignon, et al.

Predictive processing has been proposed as a fundamental cognitive mechanism that accounts for how the brain interacts with the external environment via its sensory modalities. The brain processes external information about the content (i.e., “what”) and timing (i.e., “when”) of environmental stimuli to update an internal generative model of the world around it. However, the interaction between “what” and “when” has received very little attention in vision. In this magnetoencephalography (MEG) study, we investigate how the processing of feature-specific information (i.e., “what”) is affected by temporal predictability (i.e., “when”). In line with previous findings, we observed a suppression of evoked neural responses in the visual cortex for predictable stimuli. Interestingly, temporal uncertainty enhanced this expectation suppression effect, suggesting that in temporally uncertain scenarios the neurocognitive system relies more on internal representations and invests fewer resources in integrating bottom-up information. Indeed, temporal decoding analysis indicated that visual features are encoded by the neural system for a shorter period when temporal uncertainty is higher, consistent with visual information being maintained for less time when stimulus onset is unpredictable than when it is predictable. These findings highlight the visual system's greater reliance on internal expectations when the temporal dynamics of the external environment are less predictable.
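For orientation, temporal decoding of the kind reported here typically means fitting a classifier independently at each time point of the epoch and tracking how long stimulus features remain decodable. The sketch below runs on synthetic stand-in data, not the authors' MEG recordings; all shapes, values, and thresholds are illustrative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for MEG epochs: trials x sensors x time points.
rng = np.random.default_rng(0)
n_trials, n_sensors, n_times = 120, 30, 50
X = rng.standard_normal((n_trials, n_sensors, n_times))
y = rng.integers(0, 2, n_trials)           # two stimulus feature classes

# Inject a class-dependent signal on a few sensors between samples
# 15 and 30, so the feature is only transiently decodable, mimicking
# a limited maintenance window.
X[y == 1, :5, 15:30] += 0.8

# Fit and cross-validate one classifier per time point.
scores = np.array([
    cross_val_score(LogisticRegression(max_iter=1000),
                    X[:, :, t], y, cv=5).mean()
    for t in range(n_times)
])
print("peak accuracy:", scores.max().round(2))
print("time points decodable above threshold:", np.where(scores > 0.6)[0])
```

Comparing how long the accuracy curve stays above chance between conditions is what licenses statements such as "features are encoded for a shorter time period" under higher temporal uncertainty.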


2021
Author(s): Rajwant Sandhu

Multi-modal integration often results in one modality dominating sensory perception. Such dominance is influenced by task demands, processing efficiency, and training. I assessed modality dominance between auditory and visual processing in a paradigm that controlled for the first two factors while manipulating the third. In a uni-modal task, auditory and visual processing were first equated for each individual participant. Before and after training, participants completed a bimodal selective attention task in which the relationship between relevant and irrelevant information and the task-relevant modality changed across trials. Training in one modality was provided between the pre- and post-training tasks. Training resulted in non-specific speeding post-training. Before training, visual information affected auditory responding more than vice versa; this pattern reversed after training, implying visual dominance before training and auditory dominance after it. The results suggest that modality dominance is flexible and is influenced by experimental design and participant abilities. Research should continue to uncover the factors that lead to sensory dominance by one modality.


1996, Vol 82 (1), pp. 67-75
Author(s): Kathleen M. Scarvie, Angela O. Ballantyne, Doris A. Trauner

Infantile nephropathic cystinosis is a genetic metabolic disorder in which the amino acid cystine accumulates in various organs, including the kidney, cornea, thyroid, and brain. Despite normal intellect, individuals with cystinosis may have specific impairments in the processing of visual information. To examine further the specific types of visual processing deficits found in individuals with cystinosis, we administered the Developmental Test of Visual-Motor Integration to 26 children with cystinosis (4 to 16 yr. old) and 26 matched controls. The cystinosis group achieved a significantly lower standard score, raw score, and mean ceiling than the control group. Qualitative analyses showed that size and rotation errors were more prevalent in the cystinosis group than in the control group. Correlational analyses showed that, with advancing age, the cystinosis subjects tended to fall further behind their chronological age. Our data, together with the findings of previous studies, suggest that the visuospatial difficulties in children with cystinosis may be due to inadequate perception or processing of visually presented information. Furthermore, the increasing discrepancy with age may reflect a progressive cognitive impairment, possibly resulting from cystine accumulation in the brain over time.


2010, Vol 103 (4), pp. 1988-2001
Author(s): Lorenzo Guerrasio, Julie Quinet, Ulrich Büttner, Laurent Goffart

When primates maintain their gaze directed toward a visual target (visual fixation), their eyes display a combination of miniature fast and slow movements. An involvement of the cerebellum in visual fixation is indicated by the severe gaze instabilities observed in patients with cerebellar lesions. Recent studies in non-human primates have identified a cerebellar structure, the fastigial oculomotor region (FOR), as a major cerebellar output nucleus with projections to oculomotor regions in the brain stem. Unilateral inactivation of the FOR leads to dysmetric visually guided saccades and to an offset in gaze direction when the animal fixates a visual target. However, the nature of this fixation offset is not fully understood. In the present work, we analyze the inactivation-induced effects on fixation, adopting a novel technique to describe the generation of saccades when a target is being fixated (fixational saccades). We show that the offset results from a combination of impaired saccade accuracy and an altered encoding of the foveal target position. Because these two impairments are independent, we propose that they are mediated by different projections of the FOR to the brain stem, in particular to the deep superior colliculus and the pontomedullary reticular formation. Our study demonstrates that the oculomotor cerebellum, through activity in the FOR, regulates both the amplitude of fixational saccades and the position toward which the eyes must be directed, suggesting an involvement in the acquisition of visual information from the fovea.

