EEG-based single-trial detection of errors from multiple error-related brain activity

Author(s): Guofa Shou, Lei Ding
2013, Vol 26 (5), pp. 483-502

Author(s): Antonia Thelen, Micah M. Murray

This review article summarizes evidence that multisensory experiences at one point in time have long-lasting effects on subsequent unisensory visual and auditory object recognition. The efficacy of single-trial exposure to task-irrelevant multisensory events lies in its ability to modulate memory performance and brain activity elicited by the unisensory components of those events when they are presented later in time. Object recognition (whether visual or auditory) is enhanced if the initial multisensory experience was semantically congruent, and can be impaired if the multisensory pairing was either semantically incongruent or entailed meaningless information in the task-irrelevant modality, compared to objects encountered exclusively in a unisensory context. Processes active during encoding cannot straightforwardly explain these effects; performance on all initial presentations was indistinguishable despite leading to opposing effects with stimulus repetitions. Brain responses to unisensory stimulus repetitions differ during early processing stages (∼100 ms post-stimulus onset) according to whether or not the stimuli had initially been paired in a multisensory context. Moreover, the network exhibiting differential responses varies according to whether memory performance is enhanced or impaired. The collective findings we review indicate that multisensory associations formed via single-trial learning influence later unisensory processing, promoting distinct object representations that manifest as differentiable brain networks whose activity is correlated with memory performance. These influences occur incidentally, despite many intervening stimuli, and are distinguishable from the encoding/learning processes active during the formation of the multisensory associations. The consequences of multisensory interactions thus persist over time to impact memory retrieval and object discrimination.


2019, Vol 9 (1)
Author(s): Leandro M. Alonso, Guillermo Solovey, Toru Yanagawa, Alex Proekt, Guillermo A. Cecchi, ...

2016, Vol 38 (3), pp. 1421-1437
Author(s): Michele Allegra, Shima Seyed-Allaei, Fabrizio Pizzagalli, Fahimeh Baftizadeh, Marta Maieron, ...

2020
Author(s): Kelsey Mankel, Philip I. Pavlik, Gavin M. Bidelman

Abstract
Percepts are naturally grouped into meaningful categories, allowing continuous stimulus variations in the environment to be processed efficiently. Theories of category acquisition have existed for decades, but how categories arise in the brain through learning is not well understood. Here, advanced computational modeling techniques borrowed from educational data mining and cognitive psychology were used to trace the development of auditory categories within a short-term training session. Nonmusicians were rapidly trained for 20 min on musical interval identification (i.e., minor and major 3rd interval dyads) while their brain activity was recorded via EEG. Categorization performance and neural responses were then assessed for the trained (3rds) and novel untrained (major/minor 6ths) continua. Computational modeling was used to predict behavioral identification responses and to test whether including single-trial features of the neural data could predict successful learning performance. Model results revealed meaningful brain-behavior relationships in auditory category learning detectable at the single-trial level; smaller P2 amplitudes were associated with a greater probability of correct interval categorization after learning. These findings highlight the nuanced dynamics of brain-behavior coupling that help explain the temporal emergence of auditory category learning in the brain.
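The single-trial brain-behavior link described in this abstract can be illustrated with a toy model. The sketch below fits a one-feature logistic regression relating a simulated P2 amplitude to trial-level categorization accuracy; all values, the simulated effect size, and the plain gradient-descent fit are hypothetical stand-ins for the study's actual features and modeling pipeline, shown only to make the "smaller P2 → higher probability of correct categorization" relationship concrete.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical synthetic data: single-trial P2 amplitudes (in µV) where
# smaller P2 is associated with a higher probability of a correct response.
n_trials = 500
p2 = rng.normal(loc=3.0, scale=1.0, size=n_trials)
p_correct = 1.0 / (1.0 + np.exp(1.5 * (p2 - 3.0)))   # smaller P2 -> more correct
correct = (rng.random(n_trials) < p_correct).astype(float)

# Fit logistic regression (intercept + P2 slope) by batch gradient descent.
x = np.column_stack([np.ones(n_trials), p2])
w = np.zeros(2)
for _ in range(5000):
    pred = 1.0 / (1.0 + np.exp(-x @ w))
    w -= 0.05 * (x.T @ (pred - correct)) / n_trials

# A negative slope on P2 mirrors the reported relationship.
print("P2 slope:", w[1])
```

With the simulated negative effect, the fitted slope on the P2 feature comes out negative, i.e., larger P2 amplitudes predict a lower probability of correct categorization in this toy setup.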


2018
Author(s): Timothy F. Brady, George A. Alvarez, Viola S. Störmer

Abstract
How people process images is known to affect memory for those images, but these effects have typically been studied using explicit task instructions to vary encoding. Here, we investigate the effects of intrinsic variation in processing on subsequent memory, testing whether recognizing an ambiguous stimulus as meaningful (as a face vs. as shape blobs) predicts subsequent visual memory, even when matching the perceptual features and the encoding strategy between subsequently remembered and subsequently forgotten items. We show that single-trial EEG activity can predict whether participants will subsequently remember an ambiguous Mooney face image (i.e., an image that is sometimes seen as a face and sometimes not). In addition, we show that a classifier trained only to discriminate whether participants perceive a face vs. a non-face can generalize to predict whether an ambiguous image is subsequently remembered. Furthermore, when we examine the N170, an ERP index of face processing, we find that images eliciting larger N170s are more likely to be remembered than those eliciting smaller N170s, even when the exact same image elicited larger or smaller N170s across participants. Thus, images processed as meaningful (in this case, as a face) during encoding are better remembered than identical images that are not processed as a face. This provides strong evidence that understanding the meaning of a stimulus during encoding plays a critical role in visual memory.

Significance Statement
Is visual memory inherently visual, or do meaning and other conceptual information necessarily play a role even in memory for detailed visual information? Here we show that it is easier to remember an image when it is processed in a meaningful way, as indexed by the amount of category-specific brain activity it elicits. In particular, we use single-trial EEG activity to predict whether an image will be subsequently remembered, and show that the main driver of this predictive ability is whether an image is seen as meaningful or non-meaningful. This shows that the extent to which an image is processed as meaningful can predict subsequent memory even when controlling for perceptual factors and encoding strategies that typically differ across images.
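The cross-decoding logic this abstract describes (train a classifier on perception labels, test it on memory labels) can be sketched in a few lines. Everything below is synthetic and hypothetical: random feature vectors stand in for single-trial EEG, and a simple nearest-centroid rule stands in for the study's actual classifier. The sketch only illustrates how a face/non-face decoder could transfer to remembered/forgotten labels when the two conditions share a neural signature.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical single-trial EEG features (e.g., amplitudes at a few
# channels/time points); "face" trials are shifted along one feature.
n, d = 400, 8
face_shift = np.zeros(d)
face_shift[0] = 1.5

# Training set, labeled by perception: face vs. non-face reports.
seen_face = rng.random(n) < 0.5
train_eeg = rng.normal(size=(n, d)) + np.outer(seen_face, face_shift)

# Nearest-centroid classifier trained only on the perception labels.
c_face = train_eeg[seen_face].mean(axis=0)
c_nonface = train_eeg[~seen_face].mean(axis=0)

def face_score(trials):
    # Positive score = trial is closer to the "face" centroid.
    return (((trials - c_nonface) ** 2).sum(axis=1)
            - ((trials - c_face) ** 2).sum(axis=1))

# Cross-decoding: apply the same classifier to new trials labeled by
# subsequent memory; here remembered trials carry the same "face" signature.
remembered = rng.random(n) < 0.5
test_eeg = rng.normal(size=(n, d)) + np.outer(remembered, face_shift)
acc = np.mean((face_score(test_eeg) > 0) == remembered)
print("cross-decoding accuracy:", acc)
```

Because the simulated "remembered" trials share the face-processing signature, the perception-trained classifier predicts memory well above chance; with independent signatures it would fall to about 50%, which is the contrast the generalization result turns on.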

