The effects of endogenous attention and stimulus onsets on encoding target location

2002 ◽  
Vol 55 (3) ◽  
pp. 987-1006 ◽  
Author(s):  
Shai Danziger

The effects of endogenously attended and non-attended stimulus onsets on spatial stimulus encoding of a target were explored in a Simon task. In each experiment, participants made speeded left or right key-press responses to the colour of a target that followed a cueing display consisting of several shapes. The target appeared within some shapes and not others. The target's spatial code, as measured by a Simon task, was its location relative to possible target positions and relative to the centre of the display. Target location was not coded relative to the positions of onset shapes that could not contain a target. These spatial coding effects were found at cue-target intervals of 50, 300, and 1000 ms. The data indicate that target location is defined relative to the distribution of endogenous attention and to reference frames aligned with the centre of the display, and that the spatial code assigned to a target is not affected when attention is shifted in the target's direction.

Author(s):  
Luisa Lugli ◽  
Stefania D’Ascenzo ◽  
Roberto Nicoletti ◽  
Carlo Umiltà

Abstract. The Simon effect rests on the automatic generation of a spatial stimulus code which, however, is not relevant for performing the task. Results typically show faster performance when stimulus and response locations correspond than when they do not. On the basis of reaction time distributions, two types of Simon effect have been distinguished, which are thought to depend on different mechanisms: visuomotor activation versus cognitive translation of spatial codes. The present study investigated whether the presence of a distractor, which affects the allocation of attentional resources and thus the time needed to generate the spatial code, changes the nature of the Simon effect. In four experiments, we manipulated the presence and the characteristics of the distractor. The findings extend previous evidence on the distinction between visuomotor activation and cognitive translation of spatial stimulus codes in a Simon task, and are discussed with reference to the attentional model of the Simon effect.
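The distributional analysis behind this distinction is usually a quantile-based (delta-plot) comparison of corresponding and non-corresponding trials. The sketch below is a minimal illustration of that kind of analysis; the data layout, function names, and simulated values are illustrative assumptions, not taken from the study.

```python
import numpy as np

def simon_delta_plot(rt_corresponding, rt_noncorresponding, n_bins=5):
    """Quantile-based (delta-plot) view of the Simon effect.

    For each quantile bin, the effect is the mean non-corresponding minus
    corresponding reaction time; a decreasing effect across bins is usually
    read as fast visuomotor activation, an increasing one as slower
    cognitive translation of spatial codes.
    """
    quantiles = np.linspace(0.0, 1.0, n_bins + 1)

    def bin_means(rts):
        rts = np.sort(np.asarray(rts, dtype=float))
        edges = np.quantile(rts, quantiles)
        # Mean RT within each quantile bin of this condition.
        return np.array([rts[(rts >= lo) & (rts <= hi)].mean()
                         for lo, hi in zip(edges[:-1], edges[1:])])

    corr = bin_means(rt_corresponding)
    noncorr = bin_means(rt_noncorresponding)
    mean_rt = (corr + noncorr) / 2      # x-axis of the delta plot
    simon_effect = noncorr - corr       # y-axis: effect size per bin
    return mean_rt, simon_effect

# Illustrative use with simulated reaction times (milliseconds).
rng = np.random.default_rng(0)
rt_c = rng.normal(450, 60, 200)
rt_nc = rng.normal(480, 60, 200)
print(simon_delta_plot(rt_c, rt_nc))
```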


1997 ◽  
Vol 352 (1360) ◽  
pp. 1515-1524 ◽  
Author(s):  
J. Bures ◽  
A. A. Fenton ◽  
Yu. Kaminsky ◽  
J. Rossier ◽  
B. Sacchetti ◽  
...  

Navigation by means of cognitive maps appears to require the hippocampus; hippocampal place cells (PCs) appear to store spatial memories because their discharge is confined to cell-specific places called firing fields (FFs). Experiments with rats manipulated idiothetic and landmark-related information to understand the relationship between PC activity and spatial cognition. Rotating a circular arena in the light caused a discrepancy between these cues. This discrepancy caused most FFs to disappear in both the arena and room reference frames. However, FFs persisted in the rotating arena frame when the discrepancy was reduced by darkness or by a card in the arena. The discrepancy was increased by 'field clamping' the rat in a room-defined FF location by rotations that countered its locomotion. Most FFs dissipated and reappeared an hour or more after the clamp. Place-avoidance experiments showed that navigation uses independent idiothetic and exteroceptive memories. Rats learned to avoid the unmarked footshock region within a circular arena. When acquired on the stable arena in the light, the location of the punishment was learned by using both room and idiothetic cues; extinction in the dark transferred to the following session in the light. If, however, extinction occurred during rotation, only the arena-frame avoidance was extinguished in darkness; the room-defined location was avoided when the lights were turned back on. Idiothetic memory of room-defined avoidance was not formed during rotation in light; regardless of rotation, there was no avoidance when the lights were turned off, but room-frame avoidance reappeared when the lights were turned back on. The place-preference task rewarded visits to an allocentric target location with a randomly dispersed pellet. The resulting behaviour alternated between random pellet searching and target-directed navigation, making it possible to examine PC correlates of these two classes of spatial behaviour. The independence of idiothetic and exteroceptive spatial memories and the disruption of PC firing during rotation suggest that PCs may not be necessary for spatial cognition; this idea can be tested by recordings during the place-avoidance and preference tasks.


2015 ◽  
Vol 114 (6) ◽  
pp. 3211-3219 ◽  
Author(s):  
J. J. Tramper ◽  
W. P. Medendorp

It is known that the brain uses multiple reference frames to code spatial information, including eye-centered and body-centered frames. When we move our body in space, these internal representations are no longer in register with external space unless they are actively updated. Whether the brain updates multiple spatial representations in parallel, or restricts its updating mechanisms to a single reference frame from which other representations are constructed, remains an open question. We developed an optimal integration model to simulate the updating of visual space across body motion in multiple or single reference frames. To test this model, we designed an experiment in which participants had to remember the location of a briefly presented target while being translated sideways. The behavioral responses were in agreement with a model that uses a combination of eye- and body-centered representations, weighted according to the reliability with which the target location is stored and updated in each reference frame. Our findings suggest that the brain simultaneously updates multiple spatial representations across body motion. Because both representations are kept in sync, they can be optimally combined to provide a more precise estimate of visual locations in space than single-frame updating mechanisms would allow.
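As a rough illustration of what reliability-weighted combination means here, the sketch below assumes standard inverse-variance weighting of an eye-centered and a body-centered location estimate; the numbers and variable names are illustrative assumptions, not the authors' model parameters.

```python
def integrate_estimates(x_eye, var_eye, x_body, var_body):
    """Combine two location estimates by inverse-variance weighting.

    Each frame's estimate is weighted by its reliability (1/variance), so
    the combined estimate is more precise than either single-frame one:
    var_combined = 1 / (1/var_eye + 1/var_body).
    """
    w_eye = 1.0 / var_eye
    w_body = 1.0 / var_body
    x_hat = (w_eye * x_eye + w_body * x_body) / (w_eye + w_body)
    var_hat = 1.0 / (w_eye + w_body)
    return x_hat, var_hat

# Illustrative numbers: after a sideways translation the eye-centered
# estimate has become noisier than the body-centered one.
x_hat, var_hat = integrate_estimates(x_eye=2.0, var_eye=4.0,
                                     x_body=3.0, var_body=1.0)
print(x_hat, var_hat)  # 2.8, 0.8 -- pulled toward the more reliable estimate
```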


2004 ◽  
Vol 69 (3) ◽  
pp. 179-190 ◽  
Author(s):  
Rob H. J. Van der Lubbe ◽  
Piotr Jaśkowski ◽  
Rolf Verleger

2021 ◽  
Vol 12 ◽  
Author(s):  
Lei Zheng ◽  
Jan-Gabriel Dobroschke ◽  
Stefan Pollmann

We investigated whether contextual cueing can be guided by egocentric and allocentric reference frames. Combinations of search configurations and external frame orientations were learned during a training phase. In Experiment 1, either the frame orientation or the configuration was rotated, thereby disrupting either the allocentric prediction or both the egocentric and allocentric predictions of the target location. Contextual cueing survived both of these manipulations, suggesting that it can overcome interference from both reference frames. In contrast, when changed orientations of the external frame became valid predictors of the target location in Experiment 2, we observed contextual cueing as long as one reference frame was predictive of the target location, but contextual cueing was eliminated when both reference frames were invalid. Thus, search guidance in repeated contexts can be supported by both egocentric and allocentric reference frames as long as they contain valid information about the search goal.


2020 ◽  
pp. 787-801
Author(s):  
S. Moraresku ◽  
K. Vlcek

The dissociation between egocentric and allocentric reference frames is well established. Spatial coding relative to oneself has been associated with a brain network distinct from that for spatial coding based on a cognitive map, independent of one's actual position. These differences were, however, revealed by a variety of tasks, spanning both static conditions, using series of images, and dynamic conditions, using movement through space. We aimed to clarify how these paradigms correspond to each other with respect to the neural correlates of egocentric and allocentric reference frame use. We review here studies of allocentric and egocentric judgments in static two- and three-dimensional tasks and compare their results with findings from spatial navigation studies. We argue that the neural correlates of allocentric coding in static conditions that use complex three-dimensional scenes and engage participants' spatial memory resemble those found in spatial navigation studies, whereas allocentric representations in two-dimensional tasks are connected with other perceptual and attentional processes. In contrast, the brain networks associated with the egocentric reference frame in static two-dimensional, static three-dimensional, and spatial navigation tasks are, with some limitations, more similar. Our review demonstrates the heterogeneity of experimental designs focused on spatial reference frames. At the same time, it indicates similarities in brain activation during reference frame use despite this heterogeneity.


2021 ◽  
Author(s):  
Xiaoyang Long ◽  
Bin Deng ◽  
Jing Cai ◽  
Zhe Sage Chen ◽  
Sheng-Jia Zhang

Abstract. Both egocentric and allocentric representations of space are essential to spatial navigation. Although some studies of egocentric coding have been conducted within and around the hippocampal formation, externally anchored egocentric spatial representations have not yet been fully explored. Here we record and identify two subtypes of border cell in the rat primary somatosensory cortex (S1) and secondary visual cortex (V2). Subpopulations of S1 and V2 border cells exhibit rotation-selective asymmetric firing fields in either a clockwise (CW) or counterclockwise (CCW) manner. CW- and CCW-border cells increase their firing rates when animals move unidirectionally along environmental border(s). We demonstrate that both CW- and CCW-border cells fire in an egocentric reference frame relative to environmental borders, maintain preferred directional tunings in rotated, stretched, dark, and novel arenas, and switch their directional firings in the presence of multi-layer concentric enclosures. These findings may provide rotation-selective egocentric reference frames within a larger spatial navigation system, and point to a common computational principle of spatial coding shared by multiple sensory cortical areas.

Highlights:
- Egocentric border cells are present in rat S1 and V2
- Subtypes of border cells display egocentric asymmetric coding
- Egocentric and allocentric streams coexist in sensory cortices
- Rotation-selective asymmetric firing is robust with environmental manipulations
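For intuition about what firing "in an egocentric reference frame relative to environmental borders" involves computationally, the sketch below shows the textbook conversion of an allocentric bearing to a wall into an egocentric bearing given the animal's heading; the geometry is a standard transform, not the authors' analysis pipeline.

```python
import numpy as np

def egocentric_border_bearing(pos, heading, wall_point):
    """Bearing of a wall point in the animal's egocentric frame.

    pos, wall_point : (x, y) allocentric coordinates
    heading         : allocentric head direction in radians
    Returns the wall point's angle relative to the heading, wrapped to
    [-pi, pi); e.g. about +pi/2 means the wall is on the animal's left.
    """
    dx, dy = wall_point[0] - pos[0], wall_point[1] - pos[1]
    allocentric_bearing = np.arctan2(dy, dx)
    ego = allocentric_bearing - heading
    return (ego + np.pi) % (2 * np.pi) - np.pi

# Animal at the centre, heading east (0 rad), wall point due north:
print(egocentric_border_bearing((0.0, 0.0), 0.0, (0.0, 1.0)))  # ~ +pi/2
```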


2010 ◽  
Vol 104 (3) ◽  
pp. 1239-1248 ◽  
Author(s):  
Stan Van Pelt ◽  
Ivan Toni ◽  
Jörn Diedrichsen ◽  
W. Pieter Medendorp

The path from perception to action involves the transfer of information across various reference frames. Here we applied a functional magnetic resonance imaging (fMRI) repetition suppression paradigm to determine the reference frame(s) in which cortical activity is coded at several phases of the sensorimotor transformation for a saccade, including sensory processing, saccade planning, and saccade execution. We distinguished between retinal (eye-centered) and nonretinal (e.g., head-centered) coding frames in three key regions: the intraparietal sulcus (IPS), frontal eye field (FEF), and supplementary eye field (SEF). Subjects (n = 18) made delayed saccades to one of five possible peripheral targets, separated at intervals of 9° visual angle. Target locations were chosen pseudorandomly, based on a 2 × 2 factorial design with factors retinal and nonretinal coordinates, each at levels novel and repeated. In all three regions, analysis of the blood oxygenation level dependent dynamics revealed an attenuation of the fMRI signal in trials repeating the location of the target in retinal coordinates. The amount of retinal suppression varied across the three phases of the trial, with the strongest suppression during saccade planning. The paradigm revealed only weak traces of nonretinal coding in these regions. Further analyses showed an orderly representation of the retinal target location, as expressed by a contralateral bias of activation, in the IPS and FEF, but not in the SEF. These results provide evidence that sensorimotor processing in these centers reflects saccade generation in eye-centered coordinates, irrespective of their topographic organization.
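A compact way to see how the retinal and nonretinal factors dissociate in such a design: the retinal target coordinate is the screen position minus the current fixation, so a target can repeat in retinal coordinates while its screen (nonretinal) position is novel, and vice versa. The sketch below uses illustrative positions in degrees of visual angle, not the study's actual stimulus layout.

```python
def retinal_location(target_screen_deg, fixation_screen_deg):
    """Retinal (eye-centered) coordinate = screen position minus fixation."""
    return target_screen_deg - fixation_screen_deg

# Trial n:   fixation at 0 deg, target at +9 deg   -> retinal +9
# Trial n+1: fixation at +9 deg, target at +18 deg -> retinal +9
# The retinal coordinate repeats (predicting fMRI suppression in an
# eye-centered area) while the screen-based coordinate is novel.
prev = retinal_location(9.0, 0.0)
curr = retinal_location(18.0, 9.0)
print(prev == curr)  # True: repeated retinally, novel nonretinally
```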

