Temporal codes provide additional category-related information in object category decoding: a systematic comparison of informative EEG features

Author(s):  
Hamid Karimi-Rouzbahani ◽  
Mozhgan Shahmohammadi ◽  
Ehsan Vahab ◽  
Saeed Setayeshi ◽  
Thomas Carlson

Abstract Humans are remarkably efficient at recognizing objects. Understanding how the brain performs object recognition has been challenging. Our understanding has advanced substantially in recent years with the development of multivariate decoding methods. Most state-of-the-art decoding procedures make use of the ‘mean’ neural activation to extract object category information, which overlooks temporal variability in the signals. Here, we studied category-related information in 30 mathematically distinct features from electroencephalography (EEG) across three independent and highly varied datasets using multivariate decoding. While the event-related potential (ERP) components N1 and P2a were among the most informative features, the informative original signal samples and wavelet coefficients, selected through principal component analysis, outperformed them. These four informative features showed more pronounced decoding in the theta frequency band, which has been suggested to support feed-forward processing of visual information in the brain. Correlational analyses showed that the features that were most informative about object categories predicted participants’ behavioral performance (reaction time) more accurately than the less informative features. These results suggest a new approach for studying how the human brain encodes object category information and how we can read it out more optimally to investigate the temporal dynamics of the neural code. The analysis code is available online at https://osf.io/wbvpn/.
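The contrast between mean-based decoding and decoding from within-trial samples can be sketched in a few lines. This is a toy simulation, not the authors' pipeline: the data, dimensions, and injected effect are invented, and scikit-learn's PCA and LDA stand in for the paper's feature-selection and classification steps. The category signal is a biphasic temporal pattern whose time-average is zero, so the conventional 'mean' feature misses it while PCA-compressed raw samples recover it.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)

# Simulated EEG: 200 trials x 32 channels x 100 time samples, two categories.
n_trials, n_channels, n_times = 200, 32, 100
X = rng.standard_normal((n_trials, n_channels, n_times))
y = np.repeat([0, 1], n_trials // 2)

# Category signal as a biphasic temporal pattern whose time-average is zero,
# so it is invisible to the mean-activation feature.
X[y == 1, :, 40:60] += 0.5
X[y == 1, :, 60:80] -= 0.5

# Conventional feature: mean activation over time, one value per channel.
X_mean = X.mean(axis=2)

# Variability-sensitive feature: the raw within-trial samples, compressed
# with unsupervised PCA (a stand-in for the paper's PCA-selected samples).
X_samples = X.reshape(n_trials, -1)

acc_mean = cross_val_score(LinearDiscriminantAnalysis(), X_mean, y, cv=5).mean()
acc_pca = cross_val_score(
    make_pipeline(PCA(n_components=20), LinearDiscriminantAnalysis()),
    X_samples, y, cv=5).mean()
print(f"mean-activation decoding: {acc_mean:.2f}")
print(f"PCA-sample decoding:      {acc_pca:.2f}")
```

By construction the mean feature stays near chance here while the sample-based feature decodes well; the paper's point is the weaker, real-data version of this gap.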

1997 ◽  
Vol 15 (1) ◽  
pp. 69-98 ◽  
Author(s):  
Edwin C. Hantz ◽  
Kelley G. Kreilick ◽  
William Kananen ◽  
Kenneth P. Swartz

The event-related evoked potential (ERP) responses to sentence endings that either confirm or violate syntactic/semantic constraints have been extensively studied. Very little is known, however, about the corresponding situation with respect to music. The current study investigates the brain-wave (ERP) responses to perceived phrase closure. ERPs are a potentially valid measure of how language-like or uniquely musical the perception of phrase closure is. In our study, highly trained musicians (N = 16) judged whether or not novel musical phrases were closed (melodically or harmonically). Three stimulus series consisted of seven-note tunes with four possible endings: closed (tonic note or tonic chord), open/diatonic (dominant chord or a member thereof), open/chromatic (a chromatic note or chord outside the key of the melody), or open/white noise (a nonmusical control). One series included melodies alone, a second series included melodies harmonized, and a third series included melodies in which the melodic contexts were disrupted rather than the endings. In the recorded ERPs, a statistically significant negative drift in the waveforms occurred over the course of the context series, indicating anticipation of closure. The drift-corrected poststimulus waveforms for all series were subjected to a principal components analysis/analysis of variance. Two subject variables were also considered: sex and absolute pitch. All four stimulus types elicited identifiable responses. The waveform peaks for the four stimulus types are clearly differentiated by their scores on two principal components: one with a maximum value at 273 ms and one with a maximum value at 471 ms. Taking the closed endings as the expected "standard," the waveforms for the two types of musical deviant endings were significantly below the standard at 273 ms and above the standard at 471 ms. The amount of negativity was proportional to the amount of deviance of the ending.
The positive peak in the closed condition and the reduced peak in the open/diatonic condition are contrary to the normal inverse relationship between peak size and stimulus probability; the former agrees with peaks found in response to syntactic closure in language. Significant, though isolated, interactions involving both sex and absolute pitch also emerged.
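The drift-correction and component-scoring steps described above can be illustrated with a small simulation. This is a hypothetical sketch, not the authors' analysis: the epoch length, sampling rate, and component shapes are invented, scipy's `detrend` stands in for the drift correction, and scikit-learn's PCA supplies the per-trial component scores that would feed the ANOVA.

```python
import numpy as np
from scipy.signal import detrend
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
n_trials, n_times = 120, 300          # e.g. 600 ms epochs at 500 Hz (assumed)
t = np.linspace(0.0, 0.6, n_times)

# Two simulated ERP components peaking near 273 ms and 471 ms,
# riding on a slow anticipatory negative drift.
comp1 = np.exp(-((t - 0.273) / 0.05) ** 2)
comp2 = np.exp(-((t - 0.471) / 0.07) ** 2)
drift = -2.0 * t
X = (drift
     + np.outer(rng.normal(1.0, 0.3, n_trials), comp1)
     + np.outer(rng.normal(1.0, 0.3, n_trials), comp2)
     + 0.2 * rng.standard_normal((n_trials, n_times)))

# Remove the linear drift from each trial, then summarise every trial
# by its scores on the first two principal components.
X_corrected = detrend(X, axis=1)
pca = PCA(n_components=2)
pc_scores = pca.fit_transform(X_corrected)
print(pc_scores.shape)                # one score pair per trial
```

The resulting score matrix (trials x components) is the kind of compact summary that a principal components analysis/ANOVA pipeline compares across stimulus types and subject variables.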


2020 ◽  
Author(s):  
Sanjeev Nara ◽  
Mikel Lizarazu ◽  
Craig G Richter ◽  
Diana C Dima ◽  
Mathieu Bourguignon ◽  
...  

Abstract Predictive processing has been proposed as a fundamental cognitive mechanism to account for how the brain interacts with the external environment via its sensory modalities. The brain processes external information about the content (i.e., “what”) and timing (i.e., “when”) of environmental stimuli to update an internal generative model of the world around it. However, the interaction between “what” and “when” has received very little attention in vision. In this magnetoencephalography (MEG) study we investigate how the processing of feature-specific information (i.e., “what”) is affected by temporal predictability (i.e., “when”). In line with previous findings, we observed a suppression of evoked neural responses in the visual cortex for predictable stimuli. Interestingly, we observed that temporal uncertainty enhances this expectation-suppression effect. This suggests that in temporally uncertain scenarios the neurocognitive system relies more on internal representations and invests fewer resources in integrating bottom-up information. Indeed, temporal decoding analysis indicated that visual features are encoded for a shorter period by the neural system when temporal uncertainty is higher. This is consistent with visual information being maintained for less time when a stimulus’s onset is unpredictable than when it is predictable. These findings highlight the visual system’s greater reliance on internal expectations when the temporal dynamics of the external environment are less predictable.


Author(s):  
Junkai Shao ◽  
Chengqi Xue

In this study, event-related potentials (ERPs) were used to examine whether the brain has an inhibition effect on the interference of audio-visual information in the Chinese interface. Concrete icons (flame and snowflake) or Chinese characters ([Formula: see text] and [Formula: see text]) with opposite semantics were used as target carriers, and colors (red and blue) and speech sounds ([Formula: see text] and [Formula: see text]) were used as audio-visual intervention stimuli. In the experiment, the target carrier and the audio-visual intervention were presented in random combinations, and subjects had to judge quickly whether the semantics of the two matched. By comparing the overall cognitive performance for the two carriers, we found that the brain showed a more significant inhibition effect for audio-visual intervention stimuli with different semantics (SBH/LBH and SRC/LRC) than for those with the same semantics (SRH/LRH). The semantic mismatch elicited a significant N400, indicating that semantic interference in interface information triggers the brain’s inhibition effect; accordingly, the more complex the semantic matching of the interface information, the higher the N400 amplitude. The results confirmed that the semantic relationship between the target carrier and the audio-visual intervention was the key factor affecting the cognitive inhibition effect. Moreover, under different intervention stimuli, the negative ERP activity elicited by Chinese characters in frontal and parietal-occipital regions was more evident than that elicited by concrete icons, indicating that concrete icons had a weaker inhibition effect than Chinese characters. We therefore consider that this inhibition effect is based on the semantic constraints of the target carrier itself, which may come from the learned knowledge and intuitive experience stored in the human brain.


2019 ◽  
Vol 9 (1) ◽  
Author(s):  
J. Brendan Ritchie ◽  
Hans Op de Beeck

Abstract A large number of neuroimaging studies have shown that information about object category can be decoded from regions of the ventral visual pathway. One question is how this information might be functionally exploited in the brain. In an attempt to help answer this question, some studies have adopted a neural distance-to-bound approach, and shown that distance to a classifier decision boundary through neural activation space can be used to predict reaction times (RT) on animacy categorization tasks. However, these experiments have not controlled for possible visual confounds, such as shape, in their stimulus design. In the present study we sought to determine whether, when animacy and shape properties are orthogonal, neural distance in low- and high-level visual cortex would predict categorization RTs, and whether a combination of animacy and shape distance might predict RTs when categories crisscrossed the two stimulus dimensions, and so were not linearly separable. In line with previous results, we found that RTs correlated with neural distance, but only for animate stimuli, with similar, though weaker, asymmetric effects for the shape and crisscrossing tasks. Taken together, these results suggest there is potential to expand the neural distance-to-bound approach to other divisions beyond animacy and object category.
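The distance-to-bound logic can be sketched in a few lines. This is a toy simulation under invented numbers, not the authors' analysis: a linear SVM supplies the decision boundary, and RTs are generated so that trials farther from the boundary are faster, which is precisely the relationship the approach tests for in real data.

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.svm import LinearSVC

rng = np.random.default_rng(2)

# Simulated activation patterns: 300 trials x 50 sensors, two categories.
n_trials, n_features = 300, 50
y = np.repeat([0, 1], n_trials // 2)
X = rng.standard_normal((n_trials, n_features))
X[y == 1, :10] += 1.0                 # category signal in a feature subset

# Geometric distance of each trial's pattern from the decision boundary.
clf = LinearSVC(C=1.0, max_iter=10000).fit(X, y)
dist = np.abs(clf.decision_function(X)) / np.linalg.norm(clf.coef_)

# Hypothetical RTs: far-from-boundary trials are categorised faster.
rt = 600.0 - 40.0 * dist + rng.normal(0.0, 30.0, n_trials)

r, p = pearsonr(dist, rt)
print(f"distance-RT correlation: r = {r:.2f}")
```

A significant negative distance-RT correlation is the signature the neural distance-to-bound approach looks for; the paper's contribution is testing whether it survives when animacy and shape are made orthogonal.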


2019 ◽  
Author(s):  
Anh-Thu Mai ◽  
Tijl Grootswagers ◽  
Thomas A. Carlson

Abstract The mere presence of information in the brain does not always mean that this information is available to consciousness (de Wit, Alexander, Ekroll, & Wagemans, 2016). Experiments using paradigms such as binocular rivalry, visual masking, and the attentional blink have shown that visual information can be processed and represented by the visual system without reaching consciousness. Using multivariate pattern analysis (MVPA) and magnetoencephalography (MEG), we investigated the temporal dynamics of information processing for unconscious and conscious stimuli. We decoded stimulus information from the brain recordings while manipulating visual consciousness by presenting stimuli at threshold contrast in a backward masking paradigm. Participants’ consciousness was measured using both a forced-choice categorisation task and self-report. We show that brain activity during both conscious and non-conscious trials contained stimulus information, and that this information was enhanced in conscious trials. Overall, our results indicate that visual consciousness is characterised by enhanced neural activity representing the visual stimulus, and that this effect arises as early as 180 ms post-stimulus onset.
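A time-resolved decoding onset of the kind reported here can be sketched as follows. Everything is simulated: the sensor counts, timing grid, and 180 ms injection point are illustrative, and scikit-learn's LDA stands in for whatever classifier an MVPA pipeline might use.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
fs = 200                                          # assumed sampling rate (Hz)
n_trials, n_sensors, n_times = 100, 30, 80
times_ms = np.arange(n_times) * 1000 / fs - 100   # epoch starts at -100 ms
y = np.repeat([0, 1], n_trials // 2)

# Simulated MEG with stimulus information appearing from 180 ms onwards.
X = rng.standard_normal((n_trials, n_sensors, n_times))
onset = np.searchsorted(times_ms, 180)
X[y == 1, :, onset:] += 0.8

# Decode category at every time point; the first clearly above-chance
# point estimates when stimulus information becomes decodable.
acc = np.array([
    cross_val_score(LinearDiscriminantAnalysis(), X[:, :, i], y, cv=5).mean()
    for i in range(n_times)
])
first_ms = times_ms[np.argmax(acc > 0.75)]
print(f"decoding onset: {first_ms:.0f} ms")
```

Real analyses replace the fixed 0.75 threshold with permutation statistics and cluster correction, but the per-timepoint decoding loop is the core of the method.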


2021 ◽  
pp. 1-46
Author(s):  
Hamid Karimi-Rouzbahani ◽  
Mozhgan Shahmohammadi ◽  
Ehsan Vahab ◽  
Saeed Setayeshi ◽  
Thomas Carlson

Abstract How does the human brain encode visual object categories? Our understanding of this has advanced substantially with the development of multivariate decoding analyses. However, conventional electroencephalography (EEG) decoding predominantly uses the mean neural activation within the analysis window to extract category information. Such temporal averaging overlooks the within-trial neural variability that is suggested to provide an additional channel for encoding information about the complexity and uncertainty of the sensory input. The richness of temporal variability, however, has not been systematically compared with the conventional mean activity. Here we compare the information content of 31 variability-sensitive features against the mean of activity, using three independent, highly varied data sets. In whole-trial decoding, the classical event-related potential (ERP) components P2a and P2b provided information comparable to that provided by the original magnitude data (OMD) and wavelet coefficients (WC), the two most informative variability-sensitive features. In time-resolved decoding, the OMD and WC outperformed all the other features (including the mean), which were sensitive to limited and specific aspects of temporal variability, such as phase or frequency. The information was more pronounced in the theta frequency band, previously suggested to support feedforward visual processing. We conclude that the brain might encode information simultaneously in multiple aspects of neural variability, such as phase, amplitude, and frequency, rather than in the mean per se. In our active categorization data set, we found that more effective decoding of the neural codes corresponds to better prediction of behavioral performance. Therefore, incorporating temporal variability in time-resolved decoding can provide additional category information and improved prediction of behavior.
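One way to see why a variability-sensitive feature can carry information the mean misses is a frequency-domain toy example. This is a sketch under invented numbers, not the authors' feature set: theta-band FFT power stands in for the wavelet coefficients, and the category signal is a random-phase 5 Hz oscillation whose time-average over the epoch is zero.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
fs = 250                                  # assumed sampling rate (Hz)
n_trials, n_times = 200, 250              # 1-second single-channel epochs
t = np.arange(n_times) / fs
y = np.repeat([0, 1], n_trials // 2)

# Category 1 carries extra random-phase theta (5 Hz) power; because the
# oscillation averages to zero over the epoch, the mean feature is blind to it.
X = rng.standard_normal((n_trials, n_times))
phase = rng.uniform(0.0, 2.0 * np.pi, (n_trials // 2, 1))
X[y == 1] += np.sin(2.0 * np.pi * 5.0 * t[None, :] + phase)

# Conventional feature: within-trial mean amplitude.
X_mean = X.mean(axis=1, keepdims=True)

# Variability-sensitive feature: theta-band (4-8 Hz) spectral power.
freqs = np.fft.rfftfreq(n_times, d=1.0 / fs)
power = np.abs(np.fft.rfft(X, axis=1)) ** 2
X_theta = power[:, (freqs >= 4) & (freqs <= 8)]

lda = LinearDiscriminantAnalysis()
acc_mean = cross_val_score(lda, X_mean, y, cv=5).mean()
acc_theta = cross_val_score(lda, X_theta, y, cv=5).mean()
print(f"mean feature:        {acc_mean:.2f}")
print(f"theta-power feature: {acc_theta:.2f}")
```

The mean feature decodes at chance while theta power separates the categories, mirroring (in exaggerated form) the paper's finding that spectral and phase features carry category information beyond the mean.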


2005 ◽  
Vol 17 (5) ◽  
pp. 768-776 ◽  
Author(s):  
María Ruz ◽  
Michael S. Worden ◽  
Pío Tudela ◽  
Bruce D. McCandliss

We investigated the dependence of visual word processes on attention by examining event-related potential (ERP) responses as subjects viewed words while their attention was engaged by a concurrent highly demanding task. We used a paradigm from a previous functional magnetic resonance imaging (fMRI) experiment [Rees, G., Russell, C., Frith, C. D., & Driver, J. Inattentional blindness vs. inattentional amnesia for fixated but ignored words. Science, 286, 2504–2506, 1999] in which participants attended either to drawings or to overlapping letters (words or nonwords) presented at a fast rate. Although previous fMRI results supported the notion that word processing was obliterated by attention withdrawal, the current electrophysiological results demonstrated that visual words are processed even under conditions in which attentional resources are engaged in a different task that does not involve reading. In two experiments, ERPs for attended words versus nonwords differed in the left frontal, left posterior, and medial scalp locations. However, in contrast to the previous fMRI results, ERPs responded differentially to ignored words and consonant strings in several regions. These results suggest that fMRI and ERPs may have differential sensitivity to some forms of neural activation. Moreover, they provide evidence to restore the notion that the brain analyzes words even when attention is tied to another dimension.


2018 ◽  
Author(s):  
Lina Teichmann ◽  
Tijl Grootswagers ◽  
Thomas Carlson ◽  
Anina N. Rich

Abstract Colour is a defining feature of many objects, playing a crucial role in our ability to rapidly recognise things in the world around us and make categorical distinctions. For example, colour is a useful cue when distinguishing lemons from limes or blackberries from raspberries. That means our representation of many objects includes key colour-related information. The question addressed here is whether the neural representation activated by knowing that something is red is the same as that activated when we actually see something red, particularly in regard to timing. We addressed this question using neural timeseries (magnetoencephalography, MEG) data to contrast real colour perception and implied object colour activation. We applied multivariate pattern analysis (MVPA) to analyse the brain activation patterns evoked by colour accessed via real colour perception and implied colour activation. Applying MVPA to MEG data allows us here to focus on the temporal dynamics of these processes. Male and female human participants (N = 18) viewed isoluminant red and green shapes and grey-scale, luminance-matched pictures of fruits and vegetables that are red (e.g., tomato) or green (e.g., kiwifruit) in nature. We show that the brain activation pattern evoked by real colour perception is similar to implied colour activation, but that this pattern is instantiated at a later time. These results suggest that a common colour representation can be triggered by activating object representations from memory and perceiving colours.
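The real-versus-implied comparison relies on cross-condition decoding: train a classifier on one condition and test it on the other. Below is a minimal sketch with all numbers invented; a shared 'colour code' sensor pattern is built in by construction, and the paper's delayed, memory-driven activation is mimicked only as a weaker expression of that pattern.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(5)
n_trials, n_sensors = 100, 40

# A shared sensor pattern codes red vs green in both conditions;
# the implied-colour condition expresses it more weakly (an assumption).
code = rng.standard_normal(n_sensors)
y = np.repeat([0, 1], n_trials // 2)
signs = (2 * y - 1)[:, None]
X_real = rng.standard_normal((n_trials, n_sensors)) + signs * code
X_implied = rng.standard_normal((n_trials, n_sensors)) + 0.6 * signs * code

# Cross-decoding: train on real-colour trials, test on implied-colour trials.
clf = LinearDiscriminantAnalysis().fit(X_real, y)
acc_cross = clf.score(X_implied, y)
print(f"cross-decoding accuracy: {acc_cross:.2f}")
```

Above-chance cross-decoding is the evidence for a shared representation; running the same transfer at every pair of training and testing time points (temporal generalisation) is what reveals the timing shift the abstract describes.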


2019 ◽  
Vol 121 (5) ◽  
pp. 1588-1590 ◽  
Author(s):  
Luca Casartelli

Neural, oscillatory, and computational counterparts of multisensory processing remain a crucial challenge for neuroscientists. Converging evidence underlines a certain efficiency in balancing stability and flexibility of sensory sampling, supporting the general idea that multiple parallel and hierarchically organized processing stages in the brain contribute to our understanding of the (sensory/perceptual) world. Intriguingly, how temporal dynamics impact and modulate multisensory processes in the brain can be investigated by drawing on studies of perceptual illusions.

