Concurrent contextual and time-distant mnemonic information co-exist as feedback in human visual cortex.

2021
Author(s):  
Javier Ortiz-Tudela ◽  
Johanna Bergmann ◽  
Matthew Bennett ◽  
Isabelle Ehrlich ◽  
Lars Muckli ◽  
...  

Efficient processing of the visual environment necessitates the integration of incoming sensory evidence with concurrent contextual inputs and mnemonic content from our past experiences. To delineate how this integration takes place in the brain, we studied modulations of feedback neural patterns in non-stimulated areas of the early visual cortex in humans (i.e., V1 and V2). Using functional magnetic resonance imaging and multivariate pattern analysis, we show that both concurrent contextual and time-distant mnemonic information coexist in V1/V2 as feedback signals. The extent to which mnemonic information is reinstated in V1/V2 depends on whether the information is retrieved episodically or semantically. These results demonstrate that our stream of visual experience contains not just information from our visual surroundings, but also memory-based predictions generated internally by the brain.
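To make the analysis approach concrete, below is a minimal sketch of a cross-validated MVPA decoding analysis of the kind described in this abstract, run on simulated voxel patterns. The trial counts, voxel counts, label structure, and the choice of a linear support vector classifier are illustrative assumptions, not the authors' actual pipeline.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

# Hypothetical dimensions: trials by voxels sampled from a non-stimulated ROI.
n_trials, n_voxels = 120, 200
labels = rng.integers(0, 2, n_trials)          # two hypothetical scene contexts

# Simulated voxel patterns: a weak condition-specific component plus noise,
# standing in for feedback-driven information in unstimulated V1/V2.
condition_pattern = rng.normal(0, 1, n_voxels)
patterns = 0.3 * np.outer(labels - 0.5, condition_pattern) \
           + rng.normal(0, 1, (n_trials, n_voxels))

# Cross-validated linear classification of the two contexts.
clf = LinearSVC(max_iter=10000)
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(clf, patterns, labels, cv=cv)
print(f"cross-validated decoding accuracy: {scores.mean():.2f}")
```

Above-chance accuracy in such an analysis is the kind of evidence used to infer that feedback signals carry stimulus-related information in non-stimulated cortex.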

2016
Author(s):  
Radoslaw Martin Cichy ◽  
Dimitrios Pantazis

Abstract: Multivariate pattern analysis of magnetoencephalography (MEG) and electroencephalography (EEG) data can reveal the rapid neural dynamics underlying cognition. However, MEG and EEG have systematic differences in how they sample neural activity. This raises the question of the degree to which such measurement differences consistently bias the results of multivariate analyses applied to MEG and EEG activation patterns. To investigate, we conducted a concurrent MEG/EEG study while participants viewed images of everyday objects. We applied multivariate classification analyses to the MEG and EEG data and compared the resulting time courses to each other, and to fMRI data for an independent evaluation in space. We found that both MEG and EEG revealed the millisecond spatio-temporal dynamics of visual processing with largely equivalent results. Beyond yielding convergent results, MEG and EEG also captured partly unique aspects of visual representations, and those unique components emerged earlier in time for MEG than for EEG. Identifying the sources of those unique components with fMRI, we located them for both MEG and EEG in high-level visual cortex, and additionally for MEG in early visual cortex. Together, our results show that multivariate analyses of MEG and EEG data offer a convergent and complementary view of neural processing, and they motivate the wider adoption of these methods in both MEG and EEG research.
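As an illustration of time-resolved multivariate classification of the sort applied to the MEG and EEG data here, the following sketch decodes a simulated two-class sensor dataset separately at each time point to obtain a decoding time course. The sensor counts, time points, effect size, and classifier are hypothetical assumptions rather than the study's exact settings.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC

rng = np.random.default_rng(1)

# Hypothetical dimensions: trials x sensors x time points.
n_trials, n_sensors, n_times = 100, 64, 50
labels = rng.integers(0, 2, n_trials)
data = rng.normal(0, 1, (n_trials, n_sensors, n_times))

# Inject a transient class difference in a window of time points,
# mimicking a stimulus-evoked representational difference.
data[labels == 1, :, 20:30] += 0.4

# Decode class labels separately at each time point to obtain
# a time-resolved decoding curve.
accuracy = np.array([
    cross_val_score(LinearSVC(max_iter=10000), data[:, :, t], labels, cv=5).mean()
    for t in range(n_times)
])
print("peak decoding accuracy:", accuracy.max().round(2))
print("peak time index:", int(accuracy.argmax()))
```

Comparing such decoding time courses across modalities (or correlating them with fMRI-based spatial information) is the general logic behind the MEG/EEG comparison described above.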


2017
Author(s):  
Fernando M. Ramírez

Abstract: The use of multivariate pattern analysis (MVPA) methods has enjoyed a rapid increase in popularity among neuroscientists over the past decade. More recently, similarity-based multivariate methods have flourished under the name of Representational Similarity Analysis (RSA); these aim not only to extract information about the class membership of stimuli from their associated brain patterns (say, to decode a face from a potato), but also to understand the form of the underlying representational structure associated with stimulus dimensions of interest (say, 2D grating or 3D face orientation). However, data-preprocessing steps implemented prior to RSA can significantly change the covariance (and correlation) structure of the data, possibly leading to representational confusion, i.e., a researcher inferring that brain area A encodes information according to representational scheme X, and not Y, when the opposite is true. Here, I demonstrate with simulations that time-series demeaning (including z-scoring) can plausibly lead to representational confusion. Further, I expose potential interactions between the effects of data demeaning and how the brain happens to encode information. Finally, I emphasize the importance, in the context of similarity analyses, of at least occasionally considering explicitly the direction of pattern vectors in multivariate space, rather than focusing exclusively on the relative locations of their endpoints. Overall, I expect this article will promote awareness of the impact of data demeaning on inferences regarding representational structure and neural coding.
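The following toy simulation illustrates one way demeaning can alter the similarity structure RSA operates on: removing the mean pattern across conditions (a common preprocessing choice) can force strongly anti-correlated patterns regardless of the original structure. The two-condition setup and all numbers are invented for illustration and do not reproduce the article's simulations.

```python
import numpy as np

rng = np.random.default_rng(2)
n_voxels = 50

# Two hypothetical condition patterns sharing a large common component.
shared = rng.normal(0, 1, n_voxels)
cond_a = shared + 0.3 * rng.normal(0, 1, n_voxels)
cond_b = shared + 0.3 * rng.normal(0, 1, n_voxels)

def corr(x, y):
    return np.corrcoef(x, y)[0, 1]

print("correlation before demeaning:", round(corr(cond_a, cond_b), 2))

# Remove the mean pattern across the two conditions ("cocktail blank" removal).
mean_pattern = (cond_a + cond_b) / 2
print("correlation after demeaning: ",
      round(corr(cond_a - mean_pattern, cond_b - mean_pattern), 2))

# With only two conditions, the demeaned patterns are exact mirror images,
# so their correlation is forced to -1 whatever the original structure was.
```

The same kind of dependence on preprocessing is what can drive the "representational confusion" the article warns about.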


2020
Author(s):  
Andrew E. Silva ◽  
Benjamin Thompson ◽  
Zili Liu

Abstract: This study explores how the human brain solves the challenge of flicker noise in motion processing. Despite providing no useful directional motion information, flicker is common in the visual environment and exhibits omnidirectional motion energy that is processed by low-level motion detectors. Models of motion processing propose a mechanism called motion opponency that reduces the processing of flicker noise. Motion opponency involves the pooling of local motion signals to calculate an overall motion direction. A neural correlate of motion opponency has been observed in human area MT+/V5 using fMRI, whereby stimuli with perfectly balanced motion energy, constructed from dots moving in counter-phase, elicit a weaker BOLD response than non-balanced (in-phase) motion stimuli. Building on this previous work, we used multivariate pattern analysis to examine whether the patterns of brain activation elicited by motion-opponent stimuli resemble the activation elicited by flicker noise across the human visual cortex. Robust multivariate signatures of opponency were observed in V5 and in V3A. Our results support the notion that V5 is centrally involved in motion opponency and in the reduction of flicker noise during visual processing. Furthermore, these results demonstrate the utility of powerful multivariate analysis methods in revealing the role of additional visual areas, such as V3A, in opponency and in motion processing more generally.
Highlights: Opponency is demonstrated in multivariate and univariate analyses of V5 data. Multivariate fMRI also implicates V3A in motion opponency. Multivariate analyses are useful for examining opponency throughout the visual cortex.
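Below is a minimal sketch of the kind of pattern-similarity comparison described here: does the multivoxel pattern evoked by a counter-phase (motion-opponent) stimulus resemble the flicker pattern more than the in-phase motion pattern does? The ROI size, the degree of pattern overlap, and the use of Pearson correlation as the similarity metric are illustrative assumptions on simulated data.

```python
import numpy as np

rng = np.random.default_rng(3)
n_voxels = 300  # hypothetical ROI size (e.g., a V5 mask)

# Simulated multivoxel patterns: the counter-phase (opponent) pattern is built
# to partially overlap with the flicker pattern; in-phase motion is distinct.
flicker = rng.normal(0, 1, n_voxels)
counter_phase = 0.7 * flicker + 0.3 * rng.normal(0, 1, n_voxels)
in_phase = rng.normal(0, 1, n_voxels)

def corr(x, y):
    return np.corrcoef(x, y)[0, 1]

print("counter-phase vs flicker similarity:", round(corr(counter_phase, flicker), 2))
print("in-phase      vs flicker similarity:", round(corr(in_phase, flicker), 2))

# A reliably larger similarity for the counter-phase stimulus is the kind of
# multivariate signature of opponency that such a comparison would reveal.
```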


2017
Author(s):  
J. Brendan Ritchie ◽  
David Michael Kaplan ◽  
Colin Klein

Abstract: Since its introduction, multivariate pattern analysis (MVPA), or "neural decoding", has transformed the field of cognitive neuroscience. Underlying its influence is a crucial inference, which we call the Decoder's Dictum: if information can be decoded from patterns of neural activity, then this provides strong evidence about what information those patterns represent. Although the Dictum is a widely held and well-motivated principle in decoding research, it has received scant philosophical attention. We critically evaluate the Dictum, arguing that it is false: decodability is a poor guide for revealing the content of neural representations. However, we also suggest how the Dictum can be improved upon in order to better justify inferences about neural representation using MVPA.


2019
Author(s):  
Zhiai Li ◽  
Hongbo Yu ◽  
Yongdi Zhou ◽  
Tobias Kalenscher ◽  
Xiaolin Zhou

Abstract: People feel guilty not only for transgressions of social norms and expectations for which they are causally responsible, but also for transgressions committed by those they identify as in-group members (i.e., collective or group-based guilt). However, the neurocognitive basis of group-based guilt and its relation to personal guilt are unknown. To address these questions, we combined functional MRI with an interaction-based minimal group paradigm in which participants either directly caused harm to victims (i.e., personal guilt) or observed in-group members cause harm to the victims (i.e., group-based guilt). In three experiments (N = 90), we demonstrated that perceived shared responsibility with in-group members for the transgression predicted behavioral and neural manifestations of group-based guilt. Multivariate pattern analysis of the functional MRI data showed that group-based guilt recruited a brain representation in the anterior middle cingulate cortex similar to that of personal guilt. These results broaden our understanding of how group membership is integrated into social emotions.
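One common way to test for a shared representation of this kind is cross-condition MVPA: train a classifier to separate one guilt condition from its control, then test whether it generalizes to the other condition. The sketch below illustrates that logic on simulated data; the ROI, trial counts, and signal structure are hypothetical, and this is not the authors' exact analysis.

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(4)
n_trials, n_voxels = 60, 150

# A hypothetical shared "guilt" pattern axis in an aMCC-like ROI.
shared_axis = rng.normal(0, 1, n_voxels)

def simulate_condition(strength):
    """Simulate trials for one condition pair (guilt vs. control)."""
    labels = np.repeat([0, 1], n_trials // 2)
    data = rng.normal(0, 1, (n_trials, n_voxels)) \
           + strength * np.outer(labels, shared_axis)
    return data, labels

X_personal, y_personal = simulate_condition(0.5)   # personal guilt vs. control
X_group, y_group = simulate_condition(0.5)         # group-based guilt vs. control

# Train on the personal-guilt contrast, test on the group-based-guilt contrast.
clf = LinearSVC(max_iter=10000).fit(X_personal, y_personal)
print("cross-condition decoding accuracy:", round(clf.score(X_group, y_group), 2))

# Above-chance generalization would indicate a shared multivariate representation.
```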


2019
Author(s):  
Andrew A. Chen ◽  
Joanne C. Beer ◽  
Nicholas J. Tustison ◽  
Philip A. Cook ◽  
Russell T. Shinohara ◽  
...  

Abstract: To acquire larger samples for answering complex questions in neuroscience, researchers have increasingly turned to multi-site neuroimaging studies. However, these studies are hindered by differences in the images acquired across multiple scanners. These effects have been shown to bias comparisons between scanners, mask biologically meaningful associations, and even introduce spurious associations. To address this, the field has focused on harmonizing data by removing scanner-related effects in the mean and variance of measurements. Contemporaneously with the increase in popularity of multi-center imaging, the use of multivariate pattern analysis (MVPA) has also become commonplace. These approaches have been shown to provide improved sensitivity, specificity, and power because they model the joint relationships across measurements in the brain. In this work, we demonstrate that methods for removing scanner effects in the mean and variance may not be sufficient for MVPA. This stems from the fact that such methods fail to address how correlations between measurements can vary across scanners. Data from the Alzheimer's Disease Neuroimaging Initiative are used to show that considerable differences in covariance exist across scanners and that popular harmonization techniques do not address this issue. We also propose a novel methodology that harmonizes the covariance of multivariate image measurements across scanners and demonstrate its improved performance in data harmonization.
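To illustrate the general idea of harmonizing covariance rather than only mean and variance, the sketch below whitens each simulated scanner's data with its own covariance and re-colors it with the pooled covariance. This is a generic whitening/re-coloring illustration on invented data, not the authors' proposed method; the site covariances, feature counts, and sample sizes are assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)
n_per_site, n_features = 200, 10

def simulate_site(scale):
    """Simulate one scanner's data with its own covariance structure."""
    A = rng.normal(0, 1, (n_features, n_features))
    cov = scale * (A @ A.T) / n_features
    return rng.multivariate_normal(np.zeros(n_features), cov, size=n_per_site)

sites = [simulate_site(1.0), simulate_site(3.0)]

# Target covariance: pooled across sites.
pooled_cov = np.cov(np.vstack(sites), rowvar=False)
L_pool = np.linalg.cholesky(pooled_cov)

for i, X in enumerate(sites):
    site_cov = np.cov(X, rowvar=False)
    print(f"site {i} distance to pooled covariance before:",
          round(np.linalg.norm(site_cov - pooled_cov), 2))

    # Whiten with the site covariance, then re-color with the pooled covariance.
    L_site = np.linalg.cholesky(site_cov)
    X_harm = (X - X.mean(axis=0)) @ np.linalg.inv(L_site).T @ L_pool.T

    print(f"site {i} distance to pooled covariance after: ",
          round(np.linalg.norm(np.cov(X_harm, rowvar=False) - pooled_cov), 2))
```

After the transform, each site's empirical covariance matches the pooled target, which is the sense in which covariance (not just mean and variance) is harmonized.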

