Memory-guided saccades show effect of a perceptual illusion whereas visually guided saccades do not

2018 · Vol 119 (1) · pp. 62-72
Author(s): Delphine Massendari, Matteo Lisi, Thérèse Collins, Patrick Cavanagh

The double-drift stimulus (a drifting Gabor with orthogonal internal motion) generates a large discrepancy between its physical and perceived path. Surprisingly, saccades directed to the double-drift stimulus land along the physical, and not the perceived, path (Lisi M, Cavanagh P. Curr Biol 25: 2535–2540, 2015). We asked whether memory-guided saccades exhibited the same dissociation from perception. Participants were asked to keep their gaze centered on a fixation dot while the double-drift stimulus moved back and forth on a linear path in the periphery. The offset of the fixation dot was the go signal to make a saccade to the target. In the visually guided saccade condition, the Gabor kept moving on its trajectory after the go signal but was removed once the saccade began. In the memory conditions, the Gabor disappeared before or at the same time as the go signal (0- to 1,000-ms delay) and participants made a saccade to its remembered location. The results showed that visually guided saccades again targeted the physical rather than the perceived location. However, memory saccades, even with a 0-ms delay, had landing positions shifted toward the perceived location. Our results show that memory- and visually guided saccades are based on different spatial information. NEW & NOTEWORTHY We compared the effect of a perceptual illusion on two types of saccades, visually guided vs. memory-guided, and found that whereas visually guided saccades were almost unaffected by the perceptual illusion, memory-guided saccades exhibited a strong effect of the illusion. Our result is the first evidence in the literature to show that visually and memory-guided saccades use different spatial representations.
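For readers unfamiliar with the stimulus, the sketch below illustrates the double-drift geometry under a simple, hypothetical model of the illusion (the illusion_gain parameter and the specific speeds are assumptions for illustration, not the authors' stimulus code): the envelope follows a linear physical path, while the orthogonal internal motion is partially accumulated into the modeled perceived position.

```python
import numpy as np

def double_drift_paths(duration=1.0, dt=0.005, envelope_speed=2.0,
                       internal_speed=4.0, illusion_gain=0.5):
    """Simulate physical vs. (modeled) perceived path of a double-drift Gabor.

    The envelope moves vertically (the physical path); the internal grating
    drifts horizontally. The perceived path is modeled as the physical path
    plus a fraction (illusion_gain, hypothetical) of the accumulated
    internal-motion displacement.
    """
    t = np.arange(0.0, duration, dt)
    physical = np.column_stack([np.zeros_like(t), envelope_speed * t])   # x, y in deg
    drift = np.column_stack([internal_speed * t, np.zeros_like(t)])      # orthogonal drift
    perceived = physical + illusion_gain * drift
    return t, physical, perceived

t, phys, perc = double_drift_paths()
# The modeled perceived path is tilted away from the physical (vertical) path:
angle = np.degrees(np.arctan2(perc[-1, 0], perc[-1, 1]))
print(f"perceived path tilted ~{angle:.1f} deg away from the physical path")
```

Increasing illusion_gain tilts the modeled perceived path further from the physical path; this growing discrepancy is what the two saccade conditions do or do not follow.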

2004 · Vol 91 (6) · pp. 2628-2648
Author(s): Melanie T. Wyder, Dino P. Massoglia, Terrence R. Stanford

This study examines the influence of behavioral context on the activity of visuomotor neurons in primate central thalamus. Neurons that combine information about sensory stimuli and their behavioral relevance are thought to contribute to the decision mechanisms that link specific stimuli to specific responses. We reported in a previous study that neurons in central thalamus carry spatial information throughout the instructed delay period of a visually guided delayed saccade task. The goal of the current study was to determine whether the delay-period activity of thalamic neurons is modulated by behavioral context. Single neurons were evaluated during performance of visually guided and memory-guided variants of a saccadic choice task in which a cue designated the response field stimulus as the target of a rewarded saccade or as an irrelevant distracter. The relative influence of the physical stimulus and context on delay-period activity suggested a minimum of 3 neural groups. Some neurons signaled the locations of visible stimuli regardless of behavioral relevance. Other neurons preferentially signaled the locations of current saccadic goals and did so even in the absence of the physical stimulus. A third group signaled only the locations of currently visible saccadic goals. For the latter 2 groups, activity was the product of both stimulus and context, suggesting that central thalamic neurons play a role in the context-dependent linkage of sensory signals and saccadic commands. More generally, these data suggest that the anatomical substrate of sensorimotor decision making may include the cortico-subcortical loops for which central thalamus serves as the penultimate synapse.


2003 · Vol 90 (3) · pp. 2029-2052
Author(s): Melanie T. Wyder, Dino P. Massoglia, Terrence R. Stanford

This study investigates the visuomotor properties of several nuclei within primate central thalamus. These nuclei, which might be considered components of an oculomotor thalamus (OcTh), are found within and at the borders of the internal medullary lamina. These nuclei have extensive anatomical links to numerous cortical and subcortical visuomotor areas including the frontal eye fields, supplementary eye fields, prefrontal cortex, posterior parietal cortex, caudate, and substantia nigra pars reticulata. Previous single-unit recordings have shown that neurons in OcTh respond during self-paced spontaneous saccades and to visual stimuli in the absence of any specific behavioral requirement, but a thorough account of the activity of these areas in association with voluntary, goal-directed movement is lacking. We recorded activity from single neurons in primate central thalamus during performance of a visually guided delayed saccade task. The sample consisted primarily of neurons from the centrolateral and paracentral intralaminar nuclei and paralaminar regions of the ventral anterior and ventral lateral nuclei. Neurons responsive to sensory, delay, and motor phases of the task were observed in each region, with many neurons modulated during multiple task periods. Across the population, variation in the quality and timing of saccade-contingent activity suggested participation in functions ranging from generating a saccade (presaccadic) to registering its consequences (e.g., efference copy). Finally, many neurons were found to carry spatial information during the delay period, suggesting a role for central thalamus in higher-order aspects of visuomotor control.


2006 · Vol 96 (2) · pp. 813-825
Author(s): Yoram Gutfreund, Eric I. Knudsen

Auditory neurons in the owl’s external nucleus of the inferior colliculus (ICX) integrate information across frequency channels to create a map of auditory space. This study describes a powerful, sound-driven adaptation of unit responsiveness in the ICX and explores the implications of this adaptation for sensory processing. Adaptation in the ICX was analyzed by presenting lightly anesthetized owls with sequential pairs of dichotic noise bursts. Adaptation occurred even in response to weak, threshold-level sounds and remained strong for more than 100 ms after stimulus offset. Stimulation by one range of sound frequencies caused adaptation that generalized across the entire broad range of frequencies to which these units responded. Identical stimuli were used to test adaptation in the lateral shell of the central nucleus of the inferior colliculus (ICCls), which provides input directly to the ICX. Compared with ICX adaptation, adaptation in the ICCls was substantially weaker, shorter lasting, and far more frequency specific, suggesting that part of the adaptation observed in the ICX was attributable to processes resident to the ICX. The sharp tuning of ICX neurons to space, along with their broad tuning to frequency, allows ICX adaptation to preserve a representation of stimulus location, regardless of the frequency content of the sound. The ICX is known to be a site of visually guided auditory map plasticity. ICX adaptation could play a role in this cross-modal plasticity by providing a short-term memory of the representation of auditory localization cues that could be compared with later-arriving, visual–spatial information from bimodal stimuli.
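A toy sketch of the paired-burst logic described here (illustrative only; the time constant and suppression depth are hypothetical, not values reported in the study): adaptation can be summarized as the response to the second burst relative to the first, recovering with the inter-stimulus interval.

```python
import math

def paired_burst_adaptation(isi_ms, tau_ms=100.0, max_suppression=0.8):
    """Response to the second noise burst relative to the first.

    Models recovery from adaptation as an exponential function of the
    inter-stimulus interval (ISI); tau_ms and max_suppression are hypothetical
    illustration values, not parameters reported in the study.
    """
    return 1.0 - max_suppression * math.exp(-isi_ms / tau_ms)

for isi in (25, 50, 100, 200, 400):
    print(f"ISI {isi:>3} ms -> relative response {paired_burst_adaptation(isi):.2f}")
```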


2015 · Vol 114 (6) · pp. 3211-3219
Author(s): J. J. Tramper, W. P. Medendorp

It is known that the brain uses multiple reference frames to code spatial information, including eye-centered and body-centered frames. When we move our body in space, these internal representations are no longer in register with external space, unless they are actively updated. Whether the brain updates multiple spatial representations in parallel, or whether it restricts its updating mechanisms to a single reference frame from which other representations are constructed, remains an open question. We developed an optimal integration model to simulate the updating of visual space across body motion in multiple or single reference frames. To test this model, we designed an experiment in which participants had to remember the location of a briefly presented target while being translated sideways. The behavioral responses were in agreement with a model that uses a combination of eye- and body-centered representations, weighted according to the reliability with which the target location is stored and updated in each reference frame. Our findings suggest that the brain simultaneously updates multiple spatial representations across body motion. Because both representations are kept in sync, they can be optimally combined to provide a more precise estimate of visual locations in space than single-frame updating mechanisms could.
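A minimal sketch of the reliability-weighted combination the model describes, assuming standard inverse-variance weighting (the numbers are made up; this is not the authors' implementation):

```python
import numpy as np

def fuse_estimates(x_eye, var_eye, x_body, var_body):
    """Inverse-variance weighted fusion of two location estimates.

    Each estimate is weighted by its reliability (1 / variance); the fused
    variance is smaller than either single-frame variance, which is the
    sense in which combining frames beats single-frame updating.
    """
    w_eye, w_body = 1.0 / var_eye, 1.0 / var_body
    x_fused = (w_eye * x_eye + w_body * x_body) / (w_eye + w_body)
    var_fused = 1.0 / (w_eye + w_body)
    return x_fused, var_fused

# Example: eye-centered estimate degraded by updating noise after a translation
x, v = fuse_estimates(x_eye=2.3, var_eye=4.0, x_body=1.8, var_body=1.0)
print(x, v)   # fused estimate lies closer to the more reliable (body) estimate
```

Because the fused variance is always smaller than either input variance, keeping both frames in register yields a more precise location estimate than relying on a single updated frame.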


1997 · Vol 8 (3) · pp. 224-230
Author(s): Rick O. Gilmore, Mark H. Johnson

The extent to which infants combine visual (i.e., retinal position) and nonvisual (eye or head position) spatial information in planning saccades relates to the issue of what spatial frame or frames of reference influence early visually guided action. We explored this question by testing infants from 4 to 6 months of age on the double-step saccade paradigm, which has shown that adults combine visual and eye position information into an egocentric (head- or trunk-centered) representation of saccade target locations. In contrast, our results imply that infants depend on a simple retinocentric representation at age 4 months, but by 6 months use egocentric representations more often to control saccade planning. Shifts in the representation of visual space for this simple sensorimotor behavior may index maturation in cortical circuitry devoted to visual spatial processing in general.
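A small worked example of why the double-step paradigm separates the two representations (the coordinates are hypothetical; this is a sketch of the paradigm's logic, not analysis code): after the first saccade, a purely retinocentric plan replays the second target's stored retinal vector, whereas an egocentric plan compensates for the first eye displacement.

```python
import numpy as np

# Hypothetical screen (egocentric) positions of the two flashed targets,
# both presented before any eye movement (fixation at the origin).
target1 = np.array([10.0, 0.0])   # deg
target2 = np.array([10.0, 10.0])  # deg

# Retinal vector of target 2 encoded at presentation (eyes still at fixation):
retinal2 = target2.copy()

# After the first saccade the eyes sit at target1.
eye_after_first = target1

# Retinocentric plan: replay the stored retinal vector -> lands in the wrong place.
retinocentric_landing = eye_after_first + retinal2

# Egocentric plan: account for the first eye displacement -> lands on target2.
egocentric_landing = eye_after_first + (retinal2 - eye_after_first)

print("retinocentric landing:", retinocentric_landing)  # [20. 10.]  (error)
print("egocentric landing:   ", egocentric_landing)     # [10. 10.]  (correct)
```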


2019
Author(s): Soyoun Kim, Dajung Jung, Sébastien Royer

Place cells exhibit spatially selective firing fields and collectively map the continuum of positions in environments; how such a network pattern develops with experience remains unclear. Here, we recorded putative granule cells (GCs) and mossy cells (MCs) from the dentate gyrus (DG) over 27 days as mice repetitively ran through a sequence of objects fixed onto a treadmill belt. We observed a progressive transformation of GC spatial representations, from a sparse encoding of object locations and periodic spatial intervals to increasingly more single, evenly dispersed place fields, while MCs showed little transformation and preferentially encoded object locations. A competitive learning model of the DG reproduced the GC transformations via the progressive integration of landmark-vector cell and grid cell inputs and required MC-mediated feedforward inhibition to evenly distribute GC representations, suggesting that GCs progressively encode conjunctions of objects and spatial information via competitive learning, while MCs help homogenize the GC spatial representations.
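A generic winner-take-all competitive-learning sketch of the kind of mechanism the model invokes (a minimal illustration under simplified assumptions, not the authors' DG model): units compete for inputs, and each winner's weights move toward the patterns it captures, gradually specializing the population across the input space.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy inputs: noisy samples around three fixed "landmark" patterns (stand-ins
# for the objects on the belt), embedded in a 20-dimensional input space.
n_inputs, n_units, lr = 20, 8, 0.05
landmarks = rng.random((3, n_inputs))
W = rng.random((n_units, n_inputs))              # random initial weights
W /= np.linalg.norm(W, axis=1, keepdims=True)

for _ in range(3000):
    x = landmarks[rng.integers(3)] + rng.normal(0, 0.05, n_inputs)
    x /= np.linalg.norm(x)
    winner = np.argmax(W @ x)                    # unit with the strongest response wins
    W[winner] += lr * (x - W[winner])            # move the winner's weights toward the input
    W[winner] /= np.linalg.norm(W[winner])       # keep weight vectors normalized

# Each landmark direction should now be closely matched by at least one unit.
L = landmarks / np.linalg.norm(landmarks, axis=1, keepdims=True)
print(np.max(W @ L.T, axis=0))                   # cosine similarities, typically > 0.9
```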


2017 · Vol 26 (03) · pp. 1750015
Author(s): Sotiris Batsakis, Ilias Tachmazidis, Grigoris Antoniou

Representation of temporal and spatial information for the Semantic Web often involves qualitatively defined information (i.e., information described using natural language terms such as “before” or “overlaps”), since precise dates or coordinates are not always available. This work proposes several temporal representations for time points and intervals and spatial topological representations in ontologies by means of OWL properties and reasoning rules in SWRL. All representations are fully compliant with existing Semantic Web standards and W3C recommendations. Although qualitative representations for temporal interval and point relations and for spatial topological relations exist, this is the first work to propose representations combining qualitative and quantitative information for the Semantic Web. In addition, several existing and proposed approaches are compared using different reasoners, and experimental results are presented in detail. The proposed approach is applied to topological relations (RCC5 and RCC8), supporting both qualitative and quantitative (i.e., coordinate-based) spatial relations. Experimental results illustrate that reasoning performance differs greatly between different representations and reasoners. To the best of our knowledge, this is the first such experimental evaluation of both qualitative and quantitative Semantic Web temporal and spatial representations. Finally, querying performance using SPARQL is evaluated. Evaluation results demonstrate that extracting qualitative relations from quantitative representations using reasoning rules, and then querying the qualitative relations instead of directly querying the quantitative representations, increases performance at query time.
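A toy, plain-Python sketch of the central idea of deriving qualitative relations from quantitative values (the paper itself uses OWL properties, SWRL rules, and SPARQL; the interval names below are invented): Allen-style relations such as "before" and "overlaps" are computed once from numeric endpoints, so that queries can ask for the qualitative relation directly rather than re-comparing coordinates at query time.

```python
def allen_relation(a_start, a_end, b_start, b_end):
    """Classify a small subset of Allen's interval relations from endpoints."""
    if a_end < b_start:
        return "before"
    if b_end < a_start:
        return "after"
    if a_start == b_start and a_end == b_end:
        return "equals"
    if a_start < b_start < a_end < b_end:
        return "overlaps"
    if b_start < a_start < b_end < a_end:
        return "overlapped-by"
    return "other"  # meets, during, starts, finishes, ... omitted in this sketch

# Precompute qualitative relations from quantitative timestamps, then query them.
intervals = {"meeting": (9, 10), "lunch": (12, 13), "workshop": (9.5, 12.5)}
relations = {(x, y): allen_relation(*intervals[x], *intervals[y])
             for x in intervals for y in intervals if x != y}
print(relations[("meeting", "lunch")])      # before
print(relations[("meeting", "workshop")])   # overlaps
```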


eLife · 2020 · Vol 9
Author(s): Hannah S Wirtshafter, Matthew A Wilson

The lateral septum (LS), which is innervated by the hippocampus, is known to represent spatial information. However, the details of place representation in the LS, and whether this place information is combined with reward signaling, remain unknown. We simultaneously recorded from CA1 and the caudodorsal lateral septum of rats during a rewarded navigation task and compared spatial firing in the two areas. While place cells are less numerous in the LS than in the hippocampus, they are similar to hippocampal place cells in field size and number of fields per cell, but with field shape and field-center distributions that are more skewed toward reward. Spike cross-correlations between the hippocampus and LS are greatest for cells that have reward-proximate place fields, suggesting a role for the LS in relaying task-relevant hippocampal spatial information to downstream areas, such as the VTA.
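A generic sketch of the binned spike-train cross-correlation referred to here (the surrogate data and parameter choices are illustrative, not the authors' analysis pipeline):

```python
import numpy as np

def cross_correlogram(spikes_a, spikes_b, bin_size=0.01, max_lag=0.2, duration=None):
    """Binned cross-correlation between two spike trains (times in seconds)."""
    if duration is None:
        duration = max(spikes_a.max(), spikes_b.max()) + bin_size
    edges = np.arange(0.0, duration + bin_size, bin_size)
    a, _ = np.histogram(spikes_a, edges)
    b, _ = np.histogram(spikes_b, edges)
    a = a - a.mean()
    b = b - b.mean()
    n_lags = int(max_lag / bin_size)
    lags = np.arange(-n_lags, n_lags + 1)
    # Correlate a[t] with b[t + k] for each lag k (in bins).
    corr = [np.corrcoef(a[max(0, -k):len(a) - max(0, k)],
                        b[max(0, k):len(b) - max(0, -k)])[0, 1] for k in lags]
    return lags * bin_size, np.array(corr)

# Example with surrogate data: train b follows train a by ~20 ms.
rng = np.random.default_rng(1)
a_times = np.sort(rng.uniform(0, 100, 2000))
b_times = a_times + 0.02 + rng.normal(0, 0.002, a_times.size)
lags, corr = cross_correlogram(a_times, b_times)
print(lags[np.argmax(corr)])   # peak near +0.02 s
```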


2020 · Vol 20 (11) · pp. 359
Author(s): Harun Karimpur, Johannes Kurz, Katja Fiehler

2017
Author(s): Tara Arbab, Cyriel MA Pennartz, Francesco P Battaglia

Fragile X syndrome (FXS) is an X-chromosome-linked intellectual disability and the most common genetic cause of autism spectrum disorder (ASD). Building upon demonstrated deficits in neuronal plasticity and spatial memory in FXS, we investigated how spatial information processing is affected in vivo in an FXS mouse model (Fmr1-KO). Healthy hippocampal neurons (so-called place cells) exhibit place-related activity during spatial exploration, and the stability of these spatial representations can be taken as an index of memory function. We find impaired stability and reduced specificity of Fmr1-KO spatial representations. This is a potential biomarker for the cognitive dysfunction observed in FXS, informative about the ability to integrate sensory information into an abstract representation and to successfully retain this conceptual memory. Our results provide key insight into the biological mechanisms underlying cognitive disabilities in FXS and ASD, paving the way for a targeted approach to remedying these conditions.
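As a rough illustration of how the stability of spatial representations can be quantified (a common convention, not necessarily the exact metric used here): firing-rate maps from two visits to the same environment are correlated bin by bin, and low correlations indicate unstable representations.

```python
import numpy as np

def rate_map(spike_pos, occupancy_pos, n_bins=20, arena=1.0):
    """2D firing-rate map: spike counts divided by occupancy, per spatial bin."""
    edges = np.linspace(0.0, arena, n_bins + 1)
    spikes, _, _ = np.histogram2d(spike_pos[:, 0], spike_pos[:, 1], bins=[edges, edges])
    occ, _, _ = np.histogram2d(occupancy_pos[:, 0], occupancy_pos[:, 1], bins=[edges, edges])
    return spikes / np.maximum(occ, 1)

def stability(map1, map2):
    """Pearson correlation between two rate maps, bin by bin."""
    return np.corrcoef(map1.ravel(), map2.ravel())[0, 1]

# Surrogate example: a stable place field produces highly correlated maps across sessions.
rng = np.random.default_rng(2)
occ = rng.uniform(0, 1, (20000, 2))
field = lambda: occ[np.linalg.norm(occ - [0.3, 0.7], axis=1) +
                    rng.normal(0, 0.05, len(occ)) < 0.15]
print(stability(rate_map(field(), occ), rate_map(field(), occ)))  # high (close to 1)
```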

