Distance estimation from monocular cues in an ethological visuomotor task

2021 ◽  
Author(s):  
Philip R L Parker ◽  
Eliott T T Abe ◽  
Natalie T Beatie ◽  
Emmalyn S P Leonard ◽  
Dylan M Martins ◽  
...  

In natural contexts, sensory processing and motor output are closely coupled, which is reflected in the fact that many brain areas contain both sensory and movement signals. However, standard reductionist paradigms decouple sensory decisions from their natural motor consequences, and head-fixation prevents the natural sensory consequences of self-motion. In particular, movement through the environment provides a number of depth cues beyond stereo vision that are poorly understood. To study the integration of visual processing and motor output in a naturalistic task, we investigated distance estimation in freely moving mice. We found that mice use vision to accurately jump across a variable gap, thus directly coupling a visual computation to its corresponding ethological motor output. Monocular eyelid suture did not affect performance, indicating that mice can use cues that do not depend on binocular disparity and stereo vision. Under monocular conditions, mice performed more vertical head movements, consistent with the use of motion parallax cues, and optogenetic suppression of primary visual cortex impaired task performance. Together, these results show that mice can use monocular cues, relying on visual cortex, to accurately judge distance. Furthermore, this behavioral paradigm provides a foundation for studying how neural circuits convert sensory information into ethological motor output.
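The motion-parallax cue implicated here has a simple geometric form: for a translating observer, the angular velocity of a stationary point's retinal image scales inversely with its distance. The sketch below illustrates that relationship; it is not the authors' analysis code, and the function name and numerical values are hypothetical.

```python
import numpy as np

def distance_from_parallax(head_speed_m_s, image_speed_rad_s, eccentricity_deg=90.0):
    """Estimate the distance to a stationary target from motion parallax.

    For an observer translating at speed v, a stationary point at distance d
    and eccentricity theta from the direction of translation sweeps across the
    retina at omega = v * sin(theta) / d, so d = v * sin(theta) / omega.
    """
    theta = np.deg2rad(eccentricity_deg)
    return head_speed_m_s * np.sin(theta) / image_speed_rad_s

# Hypothetical example: a 0.10 m/s vertical head movement producing 38 deg/s of
# image motion on a target straight ahead implies a gap of roughly 0.15 m.
gap_m = distance_from_parallax(0.10, np.deg2rad(38.0))
print(f"estimated gap distance: {gap_m:.2f} m")
```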

2020 ◽  
Author(s):  
E Zamboni ◽  
VG Kemper ◽  
NR Goncalves ◽  
K Jia ◽  
VM Karlaftis ◽  
...  

Adapting to the statistics of the environment by reducing brain responses to repetitive sensory information is key for efficient information processing. Yet, the fine-scale computations that support this adaptive processing in the human brain remain largely unknown. Here, we capitalize on the sub-millimetre resolution afforded by ultra-high field imaging to examine BOLD-fMRI signals across cortical depth and discern competing hypotheses about the brain mechanisms (feedforward vs. feedback) that mediate adaptive visual processing. We demonstrate suppressive recurrent processing within visual cortex, as indicated by a stronger BOLD decrease in superficial than in middle and deeper layers for gratings that were repeatedly presented at the same orientation. Further, we show dissociable connectivity mechanisms for adaptive processing: enhanced feedforward connectivity within visual cortex, and feedback occipito-parietal connectivity reflecting top-down influences on visual processing. Our findings provide evidence for a circuit of local recurrent and feedback interactions that mediates rapid brain plasticity for adaptive information processing.
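As an illustration of the depth-resolved comparison described above, a simple repetition-suppression index can be computed per cortical-depth bin. The sketch below assumes a particular index definition and hypothetical BOLD values; it is not the study's analysis pipeline.

```python
import numpy as np

def repetition_suppression_index(bold_repeated, bold_nonrepeated):
    """Per-depth suppression index: (non-repeated - repeated) / (non-repeated + repeated).
    Positive values indicate weaker BOLD responses to repeated orientations."""
    rep = np.asarray(bold_repeated, dtype=float)
    non = np.asarray(bold_nonrepeated, dtype=float)
    return (non - rep) / (non + rep)

# Hypothetical mean BOLD (% signal change) in deep, middle, superficial bins
repeated    = np.array([1.10, 1.30, 1.35])
nonrepeated = np.array([1.20, 1.45, 1.80])
print(repetition_suppression_index(repeated, nonrepeated))
# A larger index in the superficial bin would mirror the reported pattern of
# stronger suppression in superficial than in middle and deeper layers.
```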


2006 ◽  
Vol 96 (2) ◽  
pp. 775-784 ◽  
Author(s):  
Koji Inui ◽  
Ryusuke Kakigi

We previously examined the cortical processing in response to somatosensory, auditory and noxious stimuli, using magnetoencephalography in humans. Here, we performed a similar analysis of the processing in the human visual cortex for comparative purposes. After flash stimuli applied to the right eye, activations were found in eight cortical areas: the left medial occipital area around the calcarine fissure (primary visual cortex, V1), the left dorsomedial area around the parietooccipital sulcus (DM), the ventral (MOv) and dorsal (MOd) parts of the middle occipital area of bilateral hemispheres, the left temporo-occipito-parietal cortex corresponding to human MT/V5 (hMT), and the ventral surface of the medial occipital area (VO) of the bilateral hemispheres. The mean onset latencies of each cortical activity were (in ms): 27.5 (V1), 31.8 (DM), 32.8 (left MOv), 32.2 (right MOv), 33.4 (left MOd), 32.3 (right MOd), 37.8 (hMT), 46.9 (left VO), and 46.4 (right VO). Therefore, the cortico-cortical connection time of visual processing at the early stage was 4–6 ms, which is very similar to the time delay between sequential activations in somatosensory and auditory processing. In addition, the activities in V1, MOd, DM, and hMT showed a similar biphasic waveform with a reversal of polarity after 10 ms, which is a common activation profile of the cortical activity for somatosensory, auditory, and pain-evoked responses. These results suggest similar mechanisms of the serial cortico-cortical processing of sensory information among all sensory areas of the cortex.
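A quick way to see the 4–6 ms figure quoted above is to take differences between the reported onset latencies. The small sketch below tabulates latencies relative to V1; the dictionary keys are shorthand introduced here.

```python
# Onset latencies (ms) quoted above; differences relative to V1 for the
# early-stage areas (DM, MOv, MOd) fall in the 4-6 ms range.
onset_ms = {
    "V1": 27.5, "DM": 31.8, "MOv_left": 32.8, "MOv_right": 32.2,
    "MOd_left": 33.4, "MOd_right": 32.3, "hMT": 37.8,
    "VO_left": 46.9, "VO_right": 46.4,
}

for area, latency in onset_ms.items():
    if area != "V1":
        print(f"V1 -> {area}: {latency - onset_ms['V1']:.1f} ms")
```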


2000 ◽  
Vol 84 (6) ◽  
pp. 2984-2997 ◽  
Author(s):  
Per Jenmalm ◽  
Seth Dahlstedt ◽  
Roland S. Johansson

Most objects that we manipulate have curved surfaces. We have analyzed how subjects during a prototypical manipulatory task use visual and tactile sensory information for adapting fingertip actions to changes in object curvature. Subjects grasped an elongated object at one end using a precision grip and lifted it while instructed to keep it level. The principal load of the grasp was tangential torque due to the location of the center of mass of the object in relation to the horizontal grip axis joining the centers of the opposing grasp surfaces. The curvature strongly influenced the grip forces required to prevent rotational slips. Likewise, the curvature influenced the rotational yield of the grasp that developed under the tangential torque load due to the viscoelastic properties of the fingertip pulps. Subjects scaled the grip forces parametrically with object curvature for grasp stability. Moreover, in a curvature-dependent manner, subjects twisted the grasp around the grip axis by a radial flexion of the wrist to keep the desired object orientation despite the rotational yield. To adapt these fingertip actions to object curvature, subjects could use both vision and tactile sensibility integrated with predictive control. During combined blindfolding and digital anesthesia, however, the motor output failed to predict the consequences of the prevailing curvature. Subjects used vision to identify the curvature for efficient feedforward retrieval of grip force requirements before executing the motor commands. Digital anesthesia caused little impairment of grip force control when subjects had vision available, but the adaptation of the twist became delayed. Visual cues about the form of the grasp surface obtained before contact were used to scale the grip force, whereas the scaling of the twist depended on visual cues related to object movement. Thus subjects apparently relied on different visuomotor mechanisms for adaptation of grip force and grasp kinematics. In contrast, blindfolded subjects used tactile cues about the prevailing curvature obtained after contact with the object for feedforward adaptation of both grip force and twist. We conclude that humans use both vision and tactile sensibility for feedforward parametric adaptation of grip forces and grasp kinematics to object curvature. Normal control of the twist action, however, requires digital afferent input, and different visuomotor mechanisms support the control of the grasp twist and the grip force. This differential use of vision may have a bearing on the two-stream model of human visual processing.
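The mechanical constraint behind the grip-force scaling is compact: the tangential torque equals the object's weight times the lever arm from its centre of mass to the grip axis, and the grip force must be large enough for rotational friction at the grasp surfaces to resist that torque. The sketch below illustrates this relationship; the effective rotational-friction term and all numerical values are assumptions for illustration, not quantities from the study.

```python
G = 9.81  # gravitational acceleration, m/s^2

def tangential_torque(mass_kg, com_offset_m):
    """Torque about the horizontal grip axis when the object's centre of mass
    sits com_offset_m away from that axis."""
    return mass_kg * G * com_offset_m

def min_grip_force(torque_nm, rotational_friction_m):
    """Grip force at which rotational friction at the two grasp surfaces just
    balances the tangential torque load. rotational_friction_m is an effective
    'resistive torque per newton of grip force' term (units: m); it shrinks as
    surface curvature increases because the contact area becomes smaller."""
    return torque_nm / (2.0 * rotational_friction_m)

# Hypothetical values: 0.3 kg object, centre of mass 0.04 m from the grip axis.
T = tangential_torque(0.3, 0.04)                        # ~0.12 N*m
print(min_grip_force(T, rotational_friction_m=0.010))   # flatter surface
print(min_grip_force(T, rotational_friction_m=0.004))   # more curved surface -> more force
```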


2021 ◽  
pp. 1-14
Author(s):  
Jie Huang ◽  
Paul Beach ◽  
Andrea Bozoki ◽  
David C. Zhu

Background: Postmortem studies of brains with Alzheimer’s disease (AD) not only find amyloid-beta (Aβ) and neurofibrillary tangles (NFT) in the visual cortex, but also reveal temporally sequential changes in AD pathology from higher-order association areas to lower-order areas and then the primary visual area (V1) with disease progression. Objective: This study investigated the effect of AD severity on visual functional networks. Methods: Eight severe AD (SAD) patients, 11 mild/moderate AD (MAD) patients, and 26 healthy senior (HS) controls underwent a resting-state fMRI (rs-fMRI) scan and a task fMRI scan of viewing face photos. A resting-state visual functional connectivity (FC) network and a face-evoked visual-processing network were identified for each group. Results: For the HS, the identified group-mean face-evoked visual-processing network in the ventral pathway started from V1 and ended within the fusiform gyrus. In contrast, the resting-state visual FC network was mainly confined within the visual cortex. AD disrupted these two functional networks in a similar, severity-dependent manner: the more severe the cognitive impairment, the greater the reduction in network connectivity. For the face-evoked visual-processing network, MAD disrupted and reduced activation mainly in the higher-order visual association areas, with SAD further disrupting and reducing activation in the lower-order areas. Conclusion: These findings provide a functional corollary to the canonical view of the temporally sequential advancement of AD pathology through visual cortical areas. The association of the disruption of functional networks, especially the face-evoked visual-processing network, with AD severity suggests a potential predictor or biomarker of AD progression.
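For readers unfamiliar with how a resting-state FC network is derived, the sketch below shows the standard building block, pairwise correlation of ROI time courses, plus one simple group-comparable summary. The ROI set, summary measure, and data are placeholders rather than the authors' pipeline.

```python
import numpy as np

def functional_connectivity(roi_timeseries):
    """Resting-state functional connectivity as pairwise Pearson correlations
    between ROI time courses. roi_timeseries: (n_timepoints, n_rois)."""
    return np.corrcoef(roi_timeseries.T)

def mean_network_connectivity(fc_matrix):
    """Mean off-diagonal connectivity, a simple summary that could be compared
    across groups (e.g., HS vs. MAD vs. SAD)."""
    off_diag = fc_matrix[~np.eye(fc_matrix.shape[0], dtype=bool)]
    return off_diag.mean()

# Hypothetical data: 200 rs-fMRI volumes, 12 visual-cortex ROIs
rng = np.random.default_rng(0)
fc = functional_connectivity(rng.standard_normal((200, 12)))
print(mean_network_connectivity(fc))
```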


1996 ◽  
Vol 8 (6) ◽  
pp. 603-625 ◽  
Author(s):  
Pieter R. Roelfsema ◽  
Andreas K. Engel ◽  
Peter König ◽  
Wolf Singer

Recent experimental results in the visual cortex of cats and monkeys have suggested an important role for synchronization of neuronal activity on a millisecond time scale. Synchronization has been found to occur selectively between neuronal responses to related image components. This suggests that not only the firing rates of neurons but also the relative timing of their action potentials is used as a coding dimension. Thus, a powerful relational code would be available, in addition to the rate code, for the representation of perceptual objects. This could alleviate difficulties in the simultaneous representation of multiple objects. In this article we present a set of theoretical arguments and predictions concerning the mechanisms that could group neurons responding to related image components into coherently active aggregates. Synchrony is likely to be mediated by synchronizing connections; we introduce the concept of an interaction skeleton to refer to the subset of synchronizing connections that are rendered effective by a particular stimulus configuration. If the image is segmented into objects, these objects can typically be segmented further into their constituent parts. The synchronization behavior of neurons that represent the various image components may accurately reflect this hierarchical clustering. We propose that the range of synchronizing interactions is a dynamic parameter of the cortical network, so that the grain of the resultant grouping process may be adapted to the actual behavioral requirements. It can be argued that different aspects of purposeful behavior rely on separable processes by which sensory input is transformed into adjustments of motor activity. Indeed, neurophysiological evidence has suggested separate processing streams originating in the primary visual cortex for object identification and sensorimotor coordination. However, such a separation calls for a mechanism that avoids interference effects in the presence of multiple objects, or when multiple motor programs are simultaneously prepared. In this article we suggest that synchronization between responses of neurons in both the visual cortex and in areas that are involved in response selection and execution might allow for a selective routing of sensory information to the appropriate motor program.
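Millisecond-scale synchrony of the kind discussed here is commonly quantified by counting near-coincident spikes and comparing the count against rate-matched surrogates. The sketch below is one minimal, illustrative implementation (jitter surrogates, hypothetical parameter values), not a method taken from the article.

```python
import numpy as np

def coincidences(spikes_a_ms, spikes_b_ms, window_ms=5.0):
    """Count spikes in train A that have a spike in train B within +/- window_ms."""
    b = np.sort(spikes_b_ms)
    count = 0
    for t in spikes_a_ms:
        i = np.searchsorted(b, t)
        before = i > 0 and t - b[i - 1] <= window_ms
        after = i < len(b) and b[i] - t <= window_ms
        count += before or after
    return count

def excess_synchrony(spikes_a_ms, spikes_b_ms, window_ms=5.0, jitter_ms=50.0,
                     n_surrogates=100, seed=0):
    """Observed coincidences minus the mean over jittered surrogates: a simple
    proxy for millisecond-scale synchrony beyond what firing rates predict."""
    rng = np.random.default_rng(seed)
    observed = coincidences(spikes_a_ms, spikes_b_ms, window_ms)
    surrogate = np.mean([
        coincidences(spikes_a_ms,
                     spikes_b_ms + rng.uniform(-jitter_ms, jitter_ms, len(spikes_b_ms)),
                     window_ms)
        for _ in range(n_surrogates)
    ])
    return observed - surrogate
```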


2009 ◽  
Vol 102 (6) ◽  
pp. 3469-3480 ◽  
Author(s):  
H. M. Van Ettinger-Veenstra ◽  
W. Huijbers ◽  
T. P. Gutteling ◽  
M. Vink ◽  
J. L. Kenemans ◽  
...  

It is well known that parts of a visual scene are prioritized for visual processing, depending on the current situation. How the CNS moves this focus of attention across the visual image is largely unknown, although there is substantial evidence that preparation of an action is a key factor. Our results support the view that direct corticocortical feedback connections from frontal oculomotor areas to the visual cortex are responsible for the coupling between eye movements and shifts of visuospatial attention. Functional magnetic resonance imaging (fMRI)–guided transcranial magnetic stimulation (TMS) was applied to the frontal eye fields (FEFs) and the intraparietal sulcus (IPS). A single pulse was delivered 60, 30, or 0 ms before a discrimination target was presented at, or next to, the goal of a saccade that was being prepared. Results showed that the known enhancement of discrimination performance at locations to which eye movements are being prepared was further increased by early TMS over the FEF contralateral to the eye movement direction, whereas TMS over the IPS resulted in a general performance increase. The current findings indicate that the FEF affects selective visual processing within the visual cortex itself through direct feedback projections.


1982 ◽  
Vol 4 (3) ◽  
pp. 81-98

An evoked potential (EP) is the electrical response of the CNS to an external stimulus. Each EP may be represented as a sequence of waves, the amplitude and latency of which reflect the conduction and processing of sensory information through the CNS. Visual, auditory, and somatosensory EPs are used clinically in pediatrics. Visual evoked potentials are the responses to a stroboscopic light flash recorded from the scalp over the occipital cortex, near the primary visual cortex. The occipital potential originates in the retina. This study can be used to assess the functional integrity of the visual system. Visual acuity can be assessed using refractive correction to enhance the amplitude of the recorded response in small children.
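The averaging step that turns raw EEG into an evoked potential can be written in a few lines. The sketch below is a generic illustration (sampling rate, window lengths, and function name are assumptions), not a clinical recording protocol.

```python
import numpy as np

def flash_vep(eeg, flash_onsets_s, fs_hz, pre_s=0.1, post_s=0.4):
    """Average occipital-channel EEG epochs time-locked to stroboscopic flashes.
    eeg: 1-D array (single scalp channel); flash_onsets_s: flash times in seconds.
    Averaging cancels activity that is not time-locked to the stimulus, leaving
    the visual evoked potential. Returns (time_axis_s, averaged_epoch)."""
    pre, post = int(pre_s * fs_hz), int(post_s * fs_hz)
    epochs = [eeg[i - pre:i + post]
              for i in (int(round(t * fs_hz)) for t in flash_onsets_s)
              if i - pre >= 0 and i + post <= len(eeg)]
    return np.arange(-pre, post) / fs_hz, np.mean(epochs, axis=0)
```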


NeuroImage ◽  
2012 ◽  
Vol 63 (3) ◽  
pp. 1464-1477 ◽  
Author(s):  
Andreas A. Ioannides ◽  
Vahe Poghosyan ◽  
Lichan Liu ◽  
George A. Saridis ◽  
Marco Tamietto ◽  
...  

2017 ◽  
Vol 117 (1) ◽  
pp. 388-402 ◽  
Author(s):  
Michael A. Cohen ◽  
George A. Alvarez ◽  
Ken Nakayama ◽  
Talia Konkle

Visual search is a ubiquitous visual behavior, and efficient search is essential for survival. Different cognitive models have explained the speed and accuracy of search based either on the dynamics of attention or on the similarity of item representations. Here, we examined the extent to which performance on a visual search task can be predicted from the stable representational architecture of the visual system, independent of attentional dynamics. Participants performed a visual search task with 28 conditions reflecting different pairs of categories (e.g., searching for a face among cars, a body among hammers, etc.). The time it took participants to find the target item varied as a function of category combination. In a separate group of participants, we measured the neural responses to these object categories when items were presented in isolation. Using representational similarity analysis, we then examined whether the similarity of neural responses across different subdivisions of the visual system had the requisite structure needed to predict visual search performance. Overall, we found strong brain/behavior correlations across most of the higher-level visual system, including both the ventral and dorsal pathways, when considering both macroscale sectors and smaller mesoscale regions. These results suggest that visual search for real-world object categories is well predicted by the stable, task-independent architecture of the visual system. NEW & NOTEWORTHY Here, we ask which brain regions have neural response patterns that correlate with behavioral performance in a visual processing task. We found that the representational structure across all of high-level visual cortex has the requisite structure to predict behavior. Furthermore, when directly comparing different neural regions, we found that they all had highly similar category-level representational structures. These results point to a ubiquitous and uniform representational structure in high-level visual cortex underlying visual object processing.
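A minimal sketch of the representational-similarity logic described above, assuming hypothetical inputs: build a neural dissimilarity matrix from category-level response patterns, then correlate pairwise dissimilarity with mean search times for the corresponding target/distractor pairs. This is illustrative, not the authors' analysis code.

```python
import numpy as np
from scipy.stats import spearmanr

def neural_dissimilarity(category_patterns):
    """category_patterns: (n_categories, n_voxels) mean response per category.
    Returns an (n_categories, n_categories) matrix of 1 - Pearson r."""
    return 1.0 - np.corrcoef(category_patterns)

def brain_behavior_correlation(category_patterns, pair_indices, mean_search_rt_s):
    """Correlate the neural dissimilarity of each target/distractor category pair
    with the mean search time for that pair. More dissimilar pairs are expected
    to be found faster, so the correlation should be negative."""
    rdm = neural_dissimilarity(category_patterns)
    neural = np.array([rdm[i, j] for i, j in pair_indices])
    rho, p = spearmanr(neural, mean_search_rt_s)
    return rho, p
```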


2018 ◽  
Author(s):  
Andreea Lazar ◽  
Chris Lewis ◽  
Pascal Fries ◽  
Wolf Singer ◽  
Danko Nikolić

Sensory exposure alters the response properties of individual neurons in primary sensory cortices. However, it remains unclear how these changes affect stimulus encoding by populations of sensory cells. Here, recording from populations of neurons in cat primary visual cortex, we demonstrate that visual exposure enhances stimulus encoding and discrimination. We find that repeated presentation of brief, high-contrast shapes results in a stereotyped, biphasic population response consisting of a short-latency transient, followed by a late and extended period of reverberatory activity. Visual exposure selectively improves the stimulus specificity of the reverberatory activity, by increasing the magnitude and decreasing the trial-to-trial variability of the neuronal response. Critically, this improved stimulus encoding is distributed across the population and depends on precise temporal coordination. Our findings provide evidence for the existence of an exposure-driven optimization process that enhances the encoding power of neuronal populations in early visual cortex, thus potentially benefiting simple readouts at higher stages of visual processing.
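"Stimulus encoding by the population" is often operationalized as the cross-validated accuracy of a linear decoder applied to spike counts in a given response window. The sketch below illustrates that idea; it is not the authors' analysis, and the variable names are placeholders.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def window_decoding_accuracy(spike_counts, stimulus_labels):
    """Cross-validated accuracy of a linear decoder applied to population spike
    counts from one response window.
    spike_counts: (n_trials, n_neurons); stimulus_labels: (n_trials,)."""
    decoder = LogisticRegression(max_iter=1000)
    return cross_val_score(decoder, spike_counts, stimulus_labels, cv=5).mean()

# Comparing this accuracy for the transient vs. the reverberatory window, before
# and after visual exposure, would quantify the exposure-driven improvement in
# stimulus encoding described above.
```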

