The N300: An Index for Predictive Coding of Complex Visual Objects and Scenes

Author(s):  
Manoj Kumar ◽  
Kara D Federmeier ◽  
Diane M Beck

Abstract Predictive coding models can simulate known perceptual or neuronal phenomena, but there have been fewer attempts to identify a reliable neural signature of predictive coding for complex stimuli. In a pair of studies, we test whether the N300 component of the event-related potential, occurring 250–350 ms post-stimulus-onset, has the response properties expected for such a signature of perceptual hypothesis testing at the level of whole objects and scenes. We show that N300 amplitudes are smaller to representative (“good exemplars”) compared to less representative (“bad exemplars”) items from natural scene categories. Integrating these results with patterns observed for objects, we establish that, across a variety of visual stimuli, the N300 is responsive to statistical regularity, or the degree to which the input is “expected” (either explicitly or implicitly) based on prior knowledge, with statistically regular images evoking a reduced response. Moreover, we show that the measure exhibits context-dependency; that is, we find N300 sensitivity to category representativeness when stimuli are congruent with, but not when they are incongruent with, a category pre-cue. Thus, we argue that the N300 is the best candidate to date for an index of perceptual hypothesis testing for complex visual objects and scenes.
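
The effect described above amounts to comparing mean ERP amplitude in the 250–350 ms window between two exemplar conditions. Below is a minimal sketch of that kind of comparison, not the authors' pipeline: the epoch arrays are simulated, and the sampling rate, channel picks, and sample size are assumptions.

```python
# Minimal sketch: quantify an N300-like effect as the mean ERP amplitude in a
# 250-350 ms window and compare "good" vs. "bad" scene exemplars across subjects.
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(0)
sfreq = 250                      # Hz (assumed)
times = np.arange(-0.2, 0.8, 1 / sfreq)
n_subjects, n_channels = 24, 32  # assumed

# Simulated subject-average ERPs: (subjects, channels, times); a real analysis
# would load epoched EEG instead.
good = rng.normal(0, 1, (n_subjects, n_channels, times.size))
bad = rng.normal(0, 1, (n_subjects, n_channels, times.size))

# Mean amplitude in the N300 window (250-350 ms) over assumed frontal channels.
win = (times >= 0.25) & (times <= 0.35)
frontal = [0, 1, 2, 3]           # placeholder channel indices
good_n300 = good[:, frontal][:, :, win].mean(axis=(1, 2))
bad_n300 = bad[:, frontal][:, :, win].mean(axis=(1, 2))

# Paired comparison across subjects: a reduced (less negative) N300 for good
# exemplars would appear as good_n300 > bad_n300.
t, p = ttest_rel(good_n300, bad_n300)
print(f"N300 good vs. bad exemplars: t({n_subjects - 1}) = {t:.2f}, p = {p:.3f}")
```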

2020 ◽  
Author(s):  
Manoj Kumar ◽  
Kara D. Federmeier ◽  
Diane M. Beck

Abstract The bulk of support for predictive coding models has come from the models’ ability to simulate known perceptual or neuronal phenomena, but there have been fewer attempts to identify a reliable neural signature of predictive coding. Here we propose that the N300 component of the event-related potential (ERP), occurring 250–350 ms post-stimulus-onset, may be such a signature of perceptual hypothesis testing operating at the scale of whole objects and scenes. We show that N300 amplitudes are smaller to representative (“good exemplars”) compared to less representative (“bad exemplars”) items from natural scene categories. Integrating these results with patterns observed for objects, we establish that, across a variety of visual stimuli, the N300 is responsive to statistical regularity, or the degree to which the input is “expected” (either explicitly or implicitly) by the system based on prior knowledge, with statistically regular images, which entail reduced prediction error, evoking a reduced response. Moreover, we show that the measure exhibits context-dependency; that is, we find N300 sensitivity to category representativeness only when stimuli are congruent with, and not when they are incongruent with, a category pre-cue, suggesting that the component may reflect the ease with which an image matches the current hypothesis generated by the visual system. Thus, we argue that the N300 ERP component is the best candidate to date for an index of perceptual hypothesis testing, whereby incoming sensory information for complex visual objects and scenes is assessed against contextual predictions generated in mid-level visual areas.

Significance Statement Predictive coding models postulate that our perception of visual sensory input is guided by prior knowledge and the situational context, such that it is facilitated when the input matches expectation and hence produces less prediction error. Here, we show that an electrophysiological measure, the N300, matches the features hypothesized for a measure of predictive coding: complex scenes (like objects) elicit less N300 activity when they are statistically regular (e.g., more representative of their categories), in a manner that itself is context dependent. We thus show that the N300 provides a window into the interaction of context, prediction, and visual perception.


2021 ◽  
Vol 11 (12) ◽  
pp. 1581
Author(s):  
Alexis E. Whitton ◽  
Kathryn E. Lewandowski ◽  
Mei-Hua Hall

Motivational and perceptual disturbances co-occur in psychosis and have been linked to aberrations in reward learning and sensory gating, respectively. Although traditionally studied independently, when viewed through a predictive coding framework, these processes can both be linked to dysfunction in striatal dopaminergic prediction error signaling. This study examined whether reward learning and sensory gating are correlated in individuals with psychotic disorders, and whether nicotine—a psychostimulant that amplifies phasic striatal dopamine firing—is a common modulator of these two processes. We recruited 183 patients with psychotic disorders (79 schizophrenia, 104 psychotic bipolar disorder) and 129 controls and assessed reward learning (behavioral probabilistic reward task), sensory gating (P50 event-related potential), and smoking history. Reward learning and sensory gating were correlated across the sample. Smoking influenced reward learning and sensory gating in both patient groups; however, the effects were in opposite directions. Specifically, smoking was associated with improved performance in individuals with schizophrenia but impaired performance in individuals with psychotic bipolar disorder. These findings suggest that reward learning and sensory gating are linked and modulated by smoking. However, disorder-specific associations with smoking suggest that nicotine may expose pathophysiological differences in the architecture and function of prediction error circuitry in these overlapping yet distinct psychotic disorders.
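
The two measures correlated here are a behavioral reward-learning index and an electrophysiological gating ratio. The sketch below illustrates how such measures are commonly operationalized and correlated; it is not the authors' code, and all counts, amplitudes, and the sample size are simulated placeholders.

```python
# Minimal sketch under assumed inputs: a response-bias index from a probabilistic
# reward task (log b, Pizzagalli-style) and a P50 sensory-gating ratio (S2/S1),
# then their correlation across participants.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
n = 100  # assumed sample size

# Probabilistic reward task hit/miss counts per participant (rich vs. lean stimulus).
rich_correct = rng.integers(50, 90, n).astype(float)
rich_incorrect = 100 - rich_correct
lean_correct = rng.integers(40, 80, n).astype(float)
lean_incorrect = 100 - lean_correct

# Response bias: log b = 0.5 * log10[(RichCorr * LeanIncorr) / (RichIncorr * LeanCorr)],
# with 0.5 added to each cell to avoid division by zero (a common correction).
log_b = 0.5 * np.log10(
    ((rich_correct + 0.5) * (lean_incorrect + 0.5))
    / ((rich_incorrect + 0.5) * (lean_correct + 0.5))
)

# P50 gating: ratio of the P50 amplitude to the second click (S2) over the first (S1);
# smaller ratios indicate stronger gating.
s1_amp = rng.uniform(1.0, 4.0, n)
s2_amp = s1_amp * rng.uniform(0.2, 1.0, n)
gating_ratio = s2_amp / s1_amp

r, p = pearsonr(log_b, gating_ratio)
print(f"Reward learning vs. sensory gating: r = {r:.2f}, p = {p:.3f}")
```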


2017 ◽  
Author(s):  
Nicolas Burra ◽  
Dirk Kerzel ◽  
David Munoz ◽  
Didier Grandjean ◽  
Leonardo Ceravolo

Salient vocalizations, especially aggressive voices, are believed to attract attention due to an automatic threat detection system. However, studies assessing the temporal dynamics of auditory spatial attention to aggressive voices are missing. Using event-related potential markers of auditory spatial attention (N2ac and LPCpc), we show that attentional processing of threatening vocal signals is enhanced at two different stages of auditory processing. As early as 200 ms post stimulus onset, attentional orienting/engagement is enhanced for threatening as compared to happy vocal signals. Subsequently, as early as 400 ms post stimulus onset, the reorienting of auditory attention to the center of the screen (or disengagement from the target) is enhanced. This latter effect is consistent with the need to optimize perception by balancing the intake of stimulation from left and right auditory space. Our results extend the scope of theories from the visual to the auditory modality by showing that threatening stimuli also bias early spatial attention in the auditory modality. Although not the focus of the present work, we observed that the attentional enhancement was more pronounced in female than male participants.
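
Lateralized attention components such as the N2ac are conventionally computed as contralateral-minus-ipsilateral difference waves relative to the target side. The following sketch shows that construction on simulated single-trial data; the electrode pair, sampling rate, and trial counts are assumptions, not details from the study.

```python
# Minimal sketch: contralateral-minus-ipsilateral difference wave for a lateralized
# auditory attention component (e.g., an N2ac-like effect) on simulated trials.
import numpy as np

rng = np.random.default_rng(2)
sfreq = 500
times = np.arange(-0.2, 0.8, 1 / sfreq)
n_trials = 200

# Simulated single-trial voltages at one left/right anterior electrode pair
# (placeholder channels) and the hemifield of the target voice on each trial.
left_chan = rng.normal(0, 5, (n_trials, times.size))
right_chan = rng.normal(0, 5, (n_trials, times.size))
target_side = rng.choice(["left", "right"], n_trials)

# Contralateral = electrode opposite the target side; ipsilateral = same side.
contra = np.where(target_side[:, None] == "left", right_chan, left_chan)
ipsi = np.where(target_side[:, None] == "left", left_chan, right_chan)
diff_wave = (contra - ipsi).mean(axis=0)   # trial-averaged difference wave

# Mean difference around 200 ms post-onset, roughly where an N2ac would emerge.
win = (times >= 0.18) & (times <= 0.28)
print(f"Mean contra-ipsi amplitude 180-280 ms: {diff_wave[win].mean():.2f} (a.u.)")
```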


2003 ◽  
Vol 15 (7) ◽  
pp. 1039-1051 ◽  
Author(s):  
Ute Leonards ◽  
Julie Palix ◽  
Christoph Michel ◽  
Vicente Ibanez

Functional magnetic resonance imaging studies have indicated that efficient feature search (FS) and inefficient conjunction search (CS) activate partially distinct frontoparietal cortical networks. However, it remains a matter of debate whether the differences in these networks reflect differences in the early processing during FS and CS. In addition, the relationship between the differences in the networks and spatial shifts of attention also remains unknown. We examined these issues by applying a spatio-temporal analysis method to high-resolution visual event-related potentials (ERPs) and investigated how spatio-temporal activation patterns differ for FS and CS tasks. Within the first 450 msec after stimulus onset, scalp potential distributions (ERP maps) revealed 7 different electric field configurations for each search task. Configuration changes occurred simultaneously in the two tasks, suggesting that contributing processes were not significantly delayed in one task compared to the other. Despite this high spatial and temporal correlation, two ERP maps (120–190 and 250–300 msec) differed between the FS and CS. Lateralized distributions were observed only in the ERP map at 250–300 msec for the FS. This distribution corresponds to that previously described as the N2pc component (a negativity in the time range of the N2 complex over posterior electrodes of the hemisphere contralateral to the target hemifield), which has been associated with the focusing of attention onto potential target items in the search display. Thus, our results indicate that the cortical networks involved in feature and conjunction searching partially differ as early as 120 msec after stimulus onset and that the differences between the networks employed during the early stages of FS and CS are not necessarily caused by spatial attention shifts.
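
Spatio-temporal ERP analyses of this kind compare scalp field configurations over time, typically via global field power and global map dissimilarity. The sketch below illustrates those two standard quantities on simulated grand averages; it is only a schematic of the approach, with an assumed montage and epoch length.

```python
# Minimal sketch: global field power (GFP) and global map dissimilarity (DISS)
# between feature-search and conjunction-search scalp maps at each time point.
import numpy as np

rng = np.random.default_rng(3)
n_channels, n_times = 64, 225            # assumed montage and epoch length

fs_erp = rng.normal(0, 1, (n_channels, n_times))   # feature search grand average
cs_erp = rng.normal(0, 1, (n_channels, n_times))   # conjunction search grand average

def gfp(v):
    """Global field power: spatial standard deviation across electrodes."""
    return v.std(axis=0)

def dissimilarity(u, v):
    """Global map dissimilarity between two sets of scalp maps (0 = identical
    topography up to strength, 2 = inverted topography)."""
    u_norm = (u - u.mean(axis=0)) / gfp(u)
    v_norm = (v - v.mean(axis=0)) / gfp(v)
    return np.sqrt(((u_norm - v_norm) ** 2).mean(axis=0))

diss = dissimilarity(fs_erp, cs_erp)
print("Time points with largest FS/CS topographic difference:", np.argsort(diss)[-5:])
```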


2004 ◽  
Vol 16 (3) ◽  
pp. 503-522 ◽  
Author(s):  
Matthias M. Müller ◽  
Andreas Keil

In the present study, subjects selectively attended to the color of checkerboards in a feature-based attention paradigm. Induced gamma band responses (GBRs), the induced alpha band, and the event-related potential (ERP) were analyzed to uncover neuronal dynamics during selective feature processing. Replicating previous ERP findings, the selection negativity (SN) with a latency of about 160 msec was extracted. Furthermore, and similarly to previous EEG studies, a gamma band peak in a time window between 290 and 380 msec was found. This peak had its major energy in the 55- to 70-Hz range and was significantly larger for the attended color. Contrary to previous human induced gamma band studies, a much earlier 40- to 50-Hz peak in a time window between 160 and 220 msec after stimulus onset and, thus, concurrent with the SN was prominent, with significantly more energy for attended as opposed to unattended color. The induced alpha band (9.8–11.7 Hz), on the other hand, exhibited a marked suppression for attended color in a time window between 450 and 600 msec after stimulus onset. A comparison of the time course of the 40- to 50-Hz and 55- to 70-Hz induced GBRs, the induced alpha band, and the ERP revealed temporal coincidences for changes in the morphology of these brain responses. Despite these similarities in the time domain, the cortical source configuration was found to discriminate between induced GBRs and the SN. Our results suggest that large-scale synchronous high-frequency brain activity, as measured in the human GBR, plays a specific role in attentive processing of stimulus features.
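
Induced (as opposed to evoked) oscillatory power is obtained by removing the phase-locked ERP from each trial before computing single-trial time-frequency power and averaging. The sketch below shows that logic with a hand-rolled complex Morlet wavelet on simulated single-channel data; the sampling rate, trial count, and wavelet parameters are assumptions, not the authors' settings.

```python
# Minimal sketch: induced power via single-trial Morlet-wavelet power after
# subtracting the trial-average ERP, for the frequency ranges reported above
# (40-50 Hz, 55-70 Hz, and ~10 Hz alpha).
import numpy as np

rng = np.random.default_rng(4)
sfreq = 500
times = np.arange(-0.2, 0.8, 1 / sfreq)
n_trials = 120
epochs = rng.normal(0, 1, (n_trials, times.size))   # one channel, simulated

def morlet_power(signal, freq, sfreq, n_cycles=7):
    """Power over time from convolution with a complex Morlet wavelet."""
    sigma_t = n_cycles / (2 * np.pi * freq)
    t = np.arange(-3.5 * sigma_t, 3.5 * sigma_t, 1 / sfreq)
    wavelet = np.exp(2j * np.pi * freq * t) * np.exp(-t**2 / (2 * sigma_t**2))
    wavelet /= np.sqrt(np.sum(np.abs(wavelet) ** 2))
    analytic = np.convolve(signal, wavelet, mode="same")
    return np.abs(analytic) ** 2

# Induced power: remove the phase-locked (evoked) part, then average single-trial power.
erp = epochs.mean(axis=0)
induced_trials = epochs - erp
gamma_low = np.mean([morlet_power(tr, 45, sfreq) for tr in induced_trials], axis=0)
gamma_high = np.mean([morlet_power(tr, 62, sfreq) for tr in induced_trials], axis=0)
alpha = np.mean([morlet_power(tr, 10.5, sfreq) for tr in induced_trials], axis=0)

win = (times >= 0.16) & (times <= 0.22)   # early GBR window reported above
print(f"Mean 40-50 Hz induced power 160-220 ms: {gamma_low[win].mean():.3f} (a.u.)")
```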


2006 ◽  
Vol 18 (9) ◽  
pp. 1488-1497 ◽  
Author(s):  
James W. Tanaka ◽  
Tim Curran ◽  
Albert L. Porterfield ◽  
Daniel Collins

Electrophysiological studies using event-related potentials have demonstrated that face stimuli elicit a greater negative brain potential in right posterior recording sites 170 msec after stimulus onset (N170) relative to nonface stimuli. Results from repetition priming paradigms have shown that repeated exposures of familiar faces elicit a larger negative brainwave (N250r) at inferior temporal sites compared to repetitions of unfamiliar faces. However, less is known about the time course and learning conditions under which the N250 face representation is acquired. In the familiarization phase of the Joe/no Joe task, subjects studied a target “Joe” face (“Jane” for female subjects) and, during the course of the experiment, identified a series of sequentially presented faces as either Joe or not Joe. The critical stimulus conditions included the subject's own face, a same-sex Joe (Jane) face, and a same-sex “other” face. The main finding was that the subject's own face produced a focal negative deflection (N250) in posterior channels relative to nontarget faces. The task-relevant Joe target face was not differentiated from other nontarget faces in the first half of the experiment. However, in the second half, the Joe face produced an N250 response that was similar in magnitude to that of the own face. These findings suggest that the N250 indexes two types of face memories: a preexperimentally familiar face representation (i.e., the “own face”) and a newly acquired face representation (i.e., the Joe/Jane face) that was formed during the course of the experiment.


2014 ◽  
Author(s):  
Jaime Martin del Campo ◽  
John Maltby ◽  
Giorgio Fuggetta

The present study tested the Dysexecutive Luck hypothesis by examining whether deficits in the early stage of top-down attentional control led to increased neural activity in later stages of response-related selection processes among those who thought themselves to be unlucky. Individuals with these beliefs were compared to a control group using an event-related potential (ERP) measure assessing the underlying neural activity of semantic inhibition while completing a Stroop test. Results showed stronger main interference effects in the former group, reflected in longer reaction times and a more negative late ERP component distributed across the scalp during incongruent trials in the 450–780 ms window post-stimulus onset. Further, less efficient maintenance of task set among the former group was associated with greater late response-related ERP activation to compensate for the lack of top-down attentional control. These findings provide electrophysiological evidence to support the Dysexecutive Luck hypothesis.


2019 ◽  
Author(s):  
Amanda K. Robinson ◽  
Tijl Grootswagers ◽  
Thomas A. Carlson

Abstract Rapid image presentations combined with time-resolved multivariate analysis methods of EEG or MEG (rapid-MVPA) offer unique potential in assessing the temporal limitations of the human visual system. Recent work has shown that multiple visual objects presented sequentially can be simultaneously decoded from M/EEG recordings. Interestingly, object representations reached higher stages of processing for slower image presentation rates compared to fast rates. This fast rate attenuation is probably caused by forward and backward masking from the other images in the stream. Two factors that are likely to influence masking during rapid streams are stimulus duration and stimulus onset asynchrony (SOA). Here, we disentangle these effects by studying the emerging neural representation of visual objects using rapid-MVPA while independently manipulating stimulus duration and SOA. Our results show that longer SOAs enhance the decodability of neural representations, regardless of stimulus presentation duration, suggesting that subsequent images act as effective backward masks. In contrast, image duration does not appear to have a graded influence on object representations. Interestingly, however, decodability was improved when there was a gap between subsequent images, indicating that an abrupt onset or offset of an image enhances its representation. Our study yields insight into the dynamics of object processing in rapid streams, paving the way for future work using this promising approach.
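
Time-resolved decoding of this kind trains and cross-validates a classifier at every time point to trace when stimulus information becomes decodable. The sketch below shows the core loop on simulated data; the data shapes, labels, classifier, and cross-validation scheme are assumptions rather than the study's actual pipeline.

```python
# Minimal sketch of time-resolved decoding ("rapid-MVPA") on simulated data.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)
n_trials, n_channels, n_times = 200, 64, 120   # assumed
X = rng.normal(0, 1, (n_trials, n_channels, n_times))
y = rng.integers(0, 2, n_trials)               # two object classes, placeholder labels

# Decoding accuracy at each time point (5-fold cross-validation).
scores = np.array([
    cross_val_score(LinearDiscriminantAnalysis(), X[:, :, t], y, cv=5).mean()
    for t in range(n_times)
])
print(f"Peak decoding accuracy: {scores.max():.2f} at sample {scores.argmax()}")
```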


2021 ◽  
Author(s):  
Josephine Zerna ◽  
Alexander Strobel ◽  
Christoph Scheffel

In electroencephalography (EEG), microstates are distributions of activity across the scalp that persist for several tens of milliseconds before changing into a different topographical pattern. Microstate analysis is a promising way of utilizing EEG as both a temporal and a spatial imaging tool, but it has mostly been applied to resting-state data. This study aimed to conceptually replicate microstate findings of valence and arousal processing and to investigate the effects of emotion regulation on microstates, using existing data from an EEG paradigm in which 107 healthy adults actively viewed emotional pictures, cognitively detached from them, or suppressed facial reactions. EEG data were clustered into microstates based on topographical similarity and compared on the global and electrode level between conditions of interest. Within the first 600 ms after stimulus onset, only the comparison of viewing positive and negative pictures yielded significant global results, caused by different electrodes depending on the microstate. Since the microstates associated with more and less arousing pictures did not differ from each other, sequential processing of valence and arousal information could not be replicated. When extending the analysis to 2,000 ms after stimulus onset, global microstate differences were exclusive to the comparison of viewing and detaching from negative pictures. Intriguingly, we observed the novel phenomenon of a significant global difference that could not be attributed to single electrodes on the local level. This suggests that microstate analysis can detect differences beyond those detected by event-related potential analysis, simply by not confining the analysis to a few electrodes.
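
Microstate analysis clusters scalp topographies (usually sampled at global field power peaks) into a small set of template maps and then assigns every time point to its best-matching template. The sketch below illustrates that idea on simulated data; dedicated microstate toolboxes use a polarity-invariant modified k-means, whereas this plain KMeans version is only a schematic, and the data dimensions and number of clusters are assumptions.

```python
# Minimal sketch: cluster topographies at GFP peaks into microstate maps and
# back-fit the resulting templates to every sample.
import numpy as np
from scipy.signal import find_peaks
from sklearn.cluster import KMeans

rng = np.random.default_rng(6)
n_channels, n_times = 64, 5000
eeg = rng.normal(0, 1, (n_channels, n_times))   # average-referenced EEG, simulated

# GFP = spatial standard deviation; topographies are most stable at GFP peaks.
gfp = eeg.std(axis=0)
peaks, _ = find_peaks(gfp)
maps = eeg[:, peaks].T                               # (n_peaks, n_channels)
maps /= np.linalg.norm(maps, axis=1, keepdims=True)  # unit-norm each topography

# Cluster GFP-peak maps into k microstate classes and assign every sample to the
# template with the highest absolute spatial correlation (ignoring polarity).
k = 4
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(maps)
templates = km.cluster_centers_ / np.linalg.norm(km.cluster_centers_, axis=1, keepdims=True)
labels = np.abs(templates @ eeg).argmax(axis=0)

print("Samples assigned to each microstate:", np.bincount(labels, minlength=k))
```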

