The temporal resolution of neural codes: does response latency have a unique role?

2002 · Vol 357 (1424) · pp. 987-1001
Author(s): M. W. Oram, D. Xiao, B. Dritschel, K. R. Payne

This article reviews the nature of the neural code in non-human primate cortex and assesses the potential for neurons to carry two or more signals simultaneously. Neurophysiological recordings from visual and motor systems indicate that the evidence for a role for precisely timed spikes relative to other spike times (ca. 1–10 ms resolution) is inconclusive. This indicates that the visual system does not carry a signal that identifies whether the responses were elicited when the stimulus was attended or not. Simulations show that the absence of such a signal reduces, but does not eliminate, the increased discrimination between stimuli that are attended compared with when the stimuli are unattended. The increased accuracy asymptotes with increased gain control, indicating limited benefit from increasing attention. The absence of a signal identifying the attentional state under which stimuli were viewed can produce the greatest discrimination between attended and unattended stimuli. Furthermore, the greatest reduction in discrimination errors occurs for a limited range of gain control, again indicating that attention effects are limited. In contrast to precisely timed patterns of spikes where the timing is relative to other spikes, response latency provides a fine temporal resolution signal (ca. 10 ms resolution) that carries information unavailable from coarse temporal response measures. Changes in response latency and changes in response magnitude can give rise to different predictions for the patterns of reaction times. The predictions are verified, and it is shown that the standard method for distinguishing executive and slave processes is only valid if the representations of interest, as evidenced by the neural code, are known. Overall, the data indicate that the signalling evident in neural responses is restricted to the spike count and the precise times of spikes relative to stimulus onset (response latency). These coding issues have implications for our understanding of cognitive models of attention and the roles of executive and slave systems.
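Where the review describes gain-control simulations, a minimal sketch may make the logic concrete. The Poisson spike-count model, the rates, and the ideal-observer criterion below are illustrative assumptions, not the authors' actual simulation:

```python
import numpy as np

rng = np.random.default_rng(0)

def discrimination_accuracy(rate_a, rate_b, gain=1.0, n_trials=10_000):
    """Two-alternative stimulus discrimination from single-trial spike counts.

    Attention is modelled as a multiplicative gain on both stimulus-driven
    firing rates; an ideal observer compares each count with the
    maximum-likelihood criterion for the two (gain-scaled) Poisson rates.
    """
    a = rng.poisson(gain * rate_a, n_trials)
    b = rng.poisson(gain * rate_b, n_trials)
    crit = gain * (rate_b - rate_a) / np.log(rate_b / rate_a)
    correct = np.concatenate([a < crit, b >= crit])
    return correct.mean()

# Accuracy rises with gain but saturates, mirroring the asymptote with
# increased gain control described above.
for gain in (1.0, 1.5, 2.0, 4.0, 8.0):
    print(f"gain={gain:4.1f}  accuracy={discrimination_accuracy(10, 14, gain):.3f}")
```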

2021
Author(s): Bruce C. Hansen, Michelle R. Greene, David J. Field

Abstract A chief goal of systems neuroscience is to understand how the brain encodes information in our visual environments. Understanding that neural code is crucial to explaining how visual content is transformed via subsequent semantic representations to enable intelligent behavior. Although the visual code is not static, this reality is often obscured in voxel-wise encoding models of BOLD signals due to fMRI's poor temporal resolution. We leveraged the high temporal resolution of EEG to develop an encoding technique based on state-space theory. This approach maps neural signals to each pixel within a given image and reveals location-specific transformations of the visual code, providing a spatiotemporal signature for the image at each electrode. This technique offers a spatiotemporal visualization of the evolving neural code of visual information previously thought impossible to obtain from EEG, and promises to provide insight into how visual meaning is developed through dynamic feedforward and recurrent processes.
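A minimal sketch of a pixel-wise encoding analysis in the spirit of this technique: one regression per electrode and time sample, mapping pixel intensities to the evoked signal, so the fitted weights form a spatiotemporal signature per pixel. The data shapes, ridge regressor, and regularization strength are illustrative assumptions, not the authors' published pipeline:

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)

# Illustrative shapes: 200 images of 16x16 pixels; EEG at 8 electrodes x 20 samples
n_images, n_pix = 200, 16 * 16
n_elec, n_time = 8, 20
pixels = rng.random((n_images, n_pix))                 # stimulus pixel intensities
eeg = rng.standard_normal((n_images, n_elec, n_time))  # evoked responses (stand-in)

# One encoding model per electrode and time point: pixels -> neural signal.
weights = np.empty((n_elec, n_time, n_pix))
for e in range(n_elec):
    for t in range(n_time):
        weights[e, t] = Ridge(alpha=1.0).fit(pixels, eeg[:, e, t]).coef_

# weights[e, t].reshape(16, 16) shows, for electrode e at latency t, which
# image locations drive the signal -- a location-specific code over time.
print(weights.shape)  # (8, 20, 256)
```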


2003 · Vol 17 (3) · pp. 113-123
Author(s): Jukka M. Leppänen, Mirja Tenhunen, Jari K. Hietanen

Abstract Several studies have shown faster choice-reaction times to positive than to negative facial expressions. The present study examined whether this effect is exclusively due to faster cognitive processing of positive stimuli (i.e., processes leading up to, and including, response selection), or whether it also involves faster motor execution of the selected response. In two experiments, response selection (onset of the lateralized readiness potential, LRP) and response execution (LRP onset to response onset) times for positive (happy) and negative (disgusted/angry) faces were examined. Shorter response selection times for positive than for negative faces were found in both experiments, but there was no difference in response execution times. Together, these results suggest that the happy-face advantage occurs primarily at premotoric processing stages. Implications are discussed, including the possibility that the happy-face advantage reflects an interaction between emotional and cognitive factors.
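A minimal sketch of the LRP logic used to split these stages: the double-subtraction over motor electrodes C3/C4 and a simple threshold-based onset estimate. The electrode names follow convention, but the threshold and the synthetic deflection are illustrative assumptions:

```python
import numpy as np

def lrp(c3, c4, hand):
    """Lateralized readiness potential via the double-subtraction method.

    c3, c4 : trials x samples EEG at left/right motor sites
    hand   : per-trial response hand, 'left' or 'right'
    The LRP deflects negative before movement, contralateral to the hand.
    """
    left, right = hand == "left", hand == "right"
    return 0.5 * ((c4[left] - c3[left]).mean(0) + (c3[right] - c4[right]).mean(0))

def onset_sample(wave, threshold):
    """First sample at which the LRP crosses a (negative) threshold."""
    idx = np.flatnonzero(wave <= threshold)
    return idx[0] if idx.size else None

# Synthetic demo: a contralateral negativity starting at sample 150
rng = np.random.default_rng(2)
n_tr, n_samp = 100, 300
c3 = rng.standard_normal((n_tr, n_samp))
c4 = rng.standard_normal((n_tr, n_samp))
hand = rng.choice(["left", "right"], n_tr)
c3[hand == "right", 150:] -= 1.0
c4[hand == "left", 150:] -= 1.0

# Stimulus-locked onset ~ response selection; response onset minus LRP
# onset ~ motor execution, the two stages contrasted in the study above.
print(onset_sample(lrp(c3, c4, hand), threshold=-0.5))
```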


2001 · Vol 15 (4) · pp. 256-274
Author(s): Caterina Pesce, Rainer Bösel

Abstract In the present study we explored the focusing of visuospatial attention in subjects practicing and not practicing activities with high attentional demands. Similar to the studies of Castiello and Umiltà (e.g., 1990), our experimental procedure was a variation of Posner's (1980) basic paradigm for exploring covert orienting of visuospatial attention. In a simple RT task, a peripheral cue of varying size was presented unilaterally or bilaterally from a central fixation point and followed by a target at different stimulus-onset asynchronies (SOAs). The target could occur validly inside the cue or invalidly outside the cue, with varying spatial relation to its boundary. Event-related brain potentials (ERPs) and reaction times (RTs) were recorded to target stimuli under the different task conditions. RT and ERP findings showed converging aspects as well as dissociations. Electrophysiological results revealed an amplitude modulation of the ERPs in the early and late Nd time interval at both anterior and posterior scalp sites, which seems to be related to the effects of peripheral informative cues as well as to attentional expertise. The results were: (1) shorter-latency effects confirm the positive-going amplitude enhancement elicited by unilateral peripheral cues and strengthen the criticism against the neutrality of spatially nonpredictive peripheral cueing of all possible target locations, which is often presumed in behavioral studies; (2) longer-latency effects show that subjects with attentional expertise modulate the distribution of attentional resources in visual space differently than nonexperienced subjects. Skilled practice may minimize attentional costs by automatizing a span of attention adapted to the most frequent task demands, while endogenously increasing the allocation of resources to cope with less usual attending conditions.


2010 · Vol 24 (3) · pp. 198-209
Author(s): Yan Wang, Jianhui Wu, Shimin Fu, Yuejia Luo

In the present study, we used event-related potentials (ERPs) and behavioral measurements in a peripherally cued line-orientation discrimination task to investigate the underlying mechanisms of orienting and focusing in voluntary and involuntary attention conditions. An informative peripheral cue (75% valid) with a long stimulus onset asynchrony (SOA) was used in the voluntary attention condition; an uninformative peripheral cue (50% valid) with a short SOA was used in the involuntary attention condition. Both orienting and focusing were affected by attention type. Results for attention orienting in the voluntary attention condition confirmed the "sensory gain control theory," as attention enhanced the amplitude of the early ERP components, P1 and N1, without latency changes. In the involuntary attention condition, compared with invalid trials, targets in the valid trials elicited larger and later contralateral P1 components, and smaller and later contralateral N1 components. Furthermore, but only in the voluntary attention condition, targets in the valid trials elicited larger N2 and P3 components than in the invalid trials. Attention focusing in the involuntary attention condition resulted in larger P1 components elicited by targets in small-cue trials compared to large-cue trials, whereas in the voluntary attention condition, larger P1 components were elicited by targets in large-cue trials than in small-cue trials. There was no interaction between orienting and focusing. These results suggest that orienting and focusing of visual-spatial attention are deployed independently regardless of attention type. In addition, the present results provide evidence of dissociation between voluntary and involuntary attention during the same task.
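A minimal sketch of how component effects like these are typically quantified: peak amplitude and latency within a search window on the average waveform. The windows and polarities below are conventional but illustrative assumptions, not the authors' exact parameters:

```python
import numpy as np

def peak_measure(erp, times, window, polarity):
    """Peak amplitude and latency of an ERP component in a search window.

    erp      : 1-D average waveform (microvolts)
    times    : matching time axis (ms)
    window   : (start, end) in ms, e.g. (80, 130) for P1, (130, 200) for N1
    polarity : +1 for positive components (P1/P3), -1 for negative (N1/N2)
    """
    mask = (times >= window[0]) & (times <= window[1])
    seg, seg_t = erp[mask], times[mask]
    i = np.argmax(polarity * seg)
    return seg[i], seg_t[i]  # amplitude, latency

# Stand-in waveform; in practice, compare valid vs. invalid (or small- vs.
# large-cue) averages for amplitude and latency differences.
times = np.arange(-100, 500)  # ms, 1 kHz sampling
erp_valid = np.random.default_rng(3).standard_normal(times.size)
amp, lat = peak_measure(erp_valid, times, (80, 130), polarity=+1)
print(f"P1 peak: {amp:.2f} uV at {lat} ms")
```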


Perception · 10.1068/p7085 · 2012 · Vol 41 (2) · pp. 131-147
Author(s): Nicola J Gregory, Timothy L Hodgson

Pointing with the eyes or the finger occurs frequently in social interaction to indicate direction of attention and one's intentions. Research with a voluntary saccade task (where saccade direction is instructed by the colour of a fixation point) suggested that gaze cues automatically activate the oculomotor system, but non-biological cues, like arrows, do not. However, other work has failed to support the claim that gaze cues are special. In the current research we introduced biological and non-biological cues into the anti-saccade task, using a range of stimulus onset asynchronies (SOAs). The anti-saccade task recruits both top–down and bottom–up attentional mechanisms, as occurs in naturalistic saccadic behaviour. In experiment 1 gaze, but not arrows, facilitated saccadic reaction times (SRTs) in the opposite direction to the cues over all SOAs, whereas in experiment 2 directional word cues had no effect on saccades. In experiment 3 finger pointing cues caused reduced SRTs in the opposite direction to the cues at short SOAs. These findings suggest that biological cues automatically recruit the oculomotor system whereas non-biological cues do not. Furthermore, the anti-saccade task set appears to facilitate saccadic responses in the opposite direction to the cues.


2017 · Vol 114 (39) · pp. 10473-10478
Author(s): Peter Kok, Pim Mostert, Floris P. de Lange

Perception can be described as a process of inference, integrating bottom-up sensory inputs and top-down expectations. However, it is unclear how this process is neurally implemented. It has been proposed that expectations lead to prestimulus baseline increases in sensory neurons tuned to the expected stimulus, which in turn, affect the processing of subsequent stimuli. Recent fMRI studies have revealed stimulus-specific patterns of activation in sensory cortex as a result of expectation, but this method lacks the temporal resolution necessary to distinguish pre- from poststimulus processes. Here, we combined human magnetoencephalography (MEG) with multivariate decoding techniques to probe the representational content of neural signals in a time-resolved manner. We observed a representation of expected stimuli in the neural signal shortly before they were presented, showing that expectations indeed induce a preactivation of stimulus templates. The strength of these prestimulus expectation templates correlated with participants’ behavioral improvement when the expected feature was task-relevant. These results suggest a mechanism for how predictive perception can be neurally implemented.
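A minimal sketch of time-resolved multivariate decoding of this kind: one cross-validated classifier per time sample, so above-chance accuracy before stimulus onset would indicate a pre-activated stimulus template. The logistic-regression decoder and data shapes are illustrative assumptions, not the authors' exact pipeline:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)

# Illustrative shapes: 200 trials, 30 MEG sensors, 60 time samples
n_trials, n_sens, n_time = 200, 30, 60
data = rng.standard_normal((n_trials, n_sens, n_time))
labels = rng.integers(0, 2, n_trials)  # two stimulus classes

# One classifier per time point, 5-fold cross-validated
accuracy = np.empty(n_time)
for t in range(n_time):
    clf = LogisticRegression(max_iter=1000)
    accuracy[t] = cross_val_score(clf, data[:, :, t], labels, cv=5).mean()

# With real data, samples preceding stimulus onset that decode above
# chance reveal the prestimulus expectation templates described above.
print(accuracy.round(2))
```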


1992 · Vol 43 · pp. 27-38
Author(s): Ton Dijkstra

Two divided attention experiments investigated whether graphemes and phonemes can mutually activate each other during bimodal sublexical processing. Dutch subjects reacted to target letters and/or speech sounds in single-channel and bimodal stimuli. In some bimodal conditions, the visual and auditory targets were congruent (e.g., visual A, auditory /a:/); in others they were not (e.g., visual U, auditory /a:/). Temporal aspects of cross-modal activation were examined by varying the stimulus onset asynchrony (SOA) of the visual and auditory stimulus components. Processing differences among stimuli (e.g., the letters A and U) were accounted for by correcting the obtained bimodal reaction times by means of the predictions of an independent race model. Comparing the results of the adapted congruent and incongruent conditions for each SOA, it can be concluded that (a) cross-modal activation takes place in this task situation; (b) it is bidirectional, i.e., it spreads from grapheme to phoneme and vice versa; and (c) it occurs very rapidly.
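The independent race model used for this correction has a standard closed form: with independent channels, P(min(V, A) <= t) = F_V(t) + F_A(t) - F_V(t)F_A(t). A minimal sketch with illustrative RT distributions (an SOA would be handled by shifting one channel's RTs by the asynchrony before applying the formula):

```python
import numpy as np

rng = np.random.default_rng(5)

def race_model_cdf(rt_visual, rt_auditory, t_grid):
    """Bimodal RT distribution predicted by an independent race between
    channels: P(min(V, A) <= t) = F_V(t) + F_A(t) - F_V(t) * F_A(t)."""
    fv = np.searchsorted(np.sort(rt_visual), t_grid, side="right") / rt_visual.size
    fa = np.searchsorted(np.sort(rt_auditory), t_grid, side="right") / rt_auditory.size
    return fv + fa - fv * fa

# Illustrative single-channel RTs (ms)
rt_v = rng.normal(420, 50, 2000)
rt_a = rng.normal(400, 60, 2000)
t = np.arange(250, 650)

predicted = race_model_cdf(rt_v, rt_a, t)
# Observed bimodal RTs reliably faster than this prediction exceed what
# independent channels can produce, implying cross-modal activation.
print(predicted[::100].round(3))
```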


2009 · Vol 101 (1) · pp. 198-206
Author(s): Aarlenne Z. Khan, Gunnar Blohm, Robert M. McPeek, Philippe Lefèvre

A salient peripheral cue can capture attention, influencing subsequent responses to a target. Attentional cueing effects have been studied for head-restrained saccades; however, under natural conditions, the head contributes to gaze shifts. We asked whether attention influences head movements in combined eye–head gaze shifts and, if so, whether this influence is different for the eye and head components. Subjects made combined eye–head gaze shifts to horizontal visual targets. Prior to target onset, a behaviorally irrelevant cue was flashed at the same (congruent) or opposite (incongruent) location at various stimulus-onset asynchrony (SOA) times. We measured eye and head movements and neck muscle electromyographic signals. Reaction times for the eye and head were highly correlated; both showed significantly shorter latencies (attentional facilitation) for congruent compared with incongruent cues at the two shortest SOAs and the opposite pattern (inhibition of return) at the longer SOAs, consistent with attentional modulation of a common eye–head gaze drive. Interestingly, we also found that the head latency relative to saccade onset was significantly shorter for congruent than that for incongruent cues. This suggests an effect of attention on the head separate from that on the eyes.
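A minimal sketch of the facilitation/IOR analysis implied here: the cueing effect per SOA and effector is the incongruent-minus-congruent mean latency, positive for facilitation and negative for inhibition of return. The trial table and column names are hypothetical stand-ins for the recorded data:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(6)
n = 600
trials = pd.DataFrame({
    "cue": rng.choice(["congruent", "incongruent"], n),
    "soa": rng.choice([50, 100, 200, 400], n),
    "eye_rt": rng.normal(220, 30, n),   # saccade latency (ms)
    "head_rt": rng.normal(250, 35, n),  # head-movement latency (ms)
})

# Cueing effect = incongruent - congruent mean latency, per SOA and effector
means = trials.groupby(["soa", "cue"])[["eye_rt", "head_rt"]].mean()
effect = means.xs("incongruent", level="cue") - means.xs("congruent", level="cue")
print(effect)  # correlated eye/head effects would suggest a common gaze drive
```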


1976 · Vol 42 (3) · pp. 767-770
Author(s): Matti J. Saari, Bruce A. Pappas

The EKG was recorded while subjects differentially responded to auditory or visual stimuli in a reaction time task. The EKG record was analyzed by dividing each R-R interval encompassing a stimulus presentation into 9 equal phases. Reaction times were determined as a function of the phase encompassing stimulus onset, while movement times were determined for the phase in which the response was initiated. Only reaction time varied significantly with the cardiac cycle, with reactions during the second phase being slower than during later phases.
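A minimal sketch of the phase-binning analysis described: each stimulus onset is assigned to one of nine equal divisions of the R-R interval that encompasses it, and RT is averaged per phase. The synthetic timings are illustrative stand-ins:

```python
import numpy as np

def cardiac_phase(stim_times, r_peaks, n_phases=9):
    """Assign each stimulus onset to one of n equal phases of the
    encompassing R-R interval (phase 1 = just after the R wave)."""
    phases = np.empty(stim_times.size, dtype=int)
    for i, s in enumerate(stim_times):
        k = np.searchsorted(r_peaks, s) - 1   # preceding R peak
        rr = r_peaks[k + 1] - r_peaks[k]      # encompassing R-R interval
        phases[i] = int((s - r_peaks[k]) / rr * n_phases) + 1
    return phases

rng = np.random.default_rng(7)
r_peaks = np.cumsum(rng.normal(800, 60, 200))      # R-wave times (ms), ~75 bpm
stim = rng.uniform(r_peaks[1], r_peaks[-2], 150)   # stimulus onsets
rt = rng.normal(300, 40, 150)                      # reaction times (ms)

# Mean RT per cardiac phase, as in the analysis above
ph = cardiac_phase(stim, r_peaks)
for p in range(1, 10):
    if (ph == p).any():
        print(p, round(rt[ph == p].mean(), 1))
```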


1978 · Vol 21 (4) · pp. 638-651
Author(s): Krzysztof Izdebski, Thomas Shipp

The maximum speed at which voluntary vocal and digital responses can be initiated was investigated in 15 male and 15 female neurologically normal adults using simple reaction time (RT) methodology. All subjects were pretrained to respond as quickly as possible to stimulus onset following a computer-controlled preparatory interval. Voluntary minimal RTs for phonation initiation were studied as a function of (1) stimulus type (auditory and somesthetic), (2) prephonatory vocal-fold position (abducted and adducted), and (3) subject’s lung volume (75%, 50%, and 25% VC). The average minimal vocal RT across subjects was 195 msec, and the fastest recorded vocal RT was 120 msec. Although vocal responses to an auditory stimulus were somewhat shorter than to a somesthetic stimulus, neither these differences nor the RTs between sexes were statistically significant except that females had shorter vocal RTs from an abducted prephonatory vocal-fold position. Shorter vocal RTs were obtained when phonation was initiated at midlung volume than at the lung volume extremes, and for both sexes the average digital RTs were significantly shorter than vocal RTs.

