Decoding source memory processes using multivoxel pattern analysis

2009
Author(s): Kenneth A. Norman

Author(s): Benjamin M. Rosenberg, Vincent Taschereau-Dumouchel, Hakwan Lau, Katherine S. Young, Robin Nusslock, ...

Author(s): Giuseppe Di Cesare, Giancarlo Valente, Cinzia Di Dio, Emanuele Ruffaldi, Massimo Bergamasco, ...

2018 · Vol 13 (5) · pp. 1273-1280
Author(s): Jianing Zhang, Wanyi Cao, Mingyu Wang, Nizhuan Wang, Shuqiao Yao, ...

2014 · Vol 26 (5) · pp. 955-969
Author(s): Annelinde R. E. Vandenbroucke, Johannes J. Fahrenfort, Ilja G. Sligte, Victor A. F. Lamme

Every day, we experience a rich and complex visual world. Our brain constantly translates meaningless fragmented input into coherent objects and scenes. However, our attentional capabilities are limited, and we can only report the few items that we happen to attend to. So what happens to items that are not cognitively accessed? Do these remain fragmentary and meaningless? Or are they processed up to a level where perceptual inferences take place about image composition? To investigate this, we recorded brain activity using fMRI while participants viewed images containing a Kanizsa figure, an illusion in which an object is perceived by means of perceptual inference. Participants were presented with the Kanizsa figure and three matched nonillusory control figures while they were engaged in an attentionally demanding distractor task. After the task, one group of participants was unable to identify the Kanizsa figure in a forced-choice decision task; hence, they were “inattentionally blind.” A second group had no trouble identifying the Kanizsa figure. Interestingly, the neural signature that was unique to the processing of the Kanizsa figure was present in both groups. Moreover, within-subject multivoxel pattern analysis showed that the neural signature of unreported Kanizsa figures could be used to classify reported Kanizsa figures and that this cross-report classification worked better for the Kanizsa condition than for the control conditions. Together, these results suggest that stimuli that are not cognitively accessed are processed up to levels of perceptual interpretation.
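The cross-report analysis described above can be captured in a few lines: a classifier is trained on voxel patterns from unreported (inattentionally blind) trials and tested on patterns from reported trials. Below is a minimal sketch using scikit-learn with synthetic stand-in data; the array shapes, labels, and preprocessing are illustrative assumptions, not the study's actual pipeline.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
n_trials, n_voxels = 80, 500  # hypothetical trial and voxel counts

# Voxel patterns and labels for unreported (inattentionally blind) trials:
# 1 = Kanizsa figure present, 0 = matched control figure.
X_unreported = rng.standard_normal((n_trials, n_voxels))
y_unreported = rng.integers(0, 2, n_trials)

# Patterns and labels for trials in which the figure was reported.
X_reported = rng.standard_normal((n_trials, n_voxels))
y_reported = rng.integers(0, 2, n_trials)

# Train on unreported trials, test on reported trials: accuracy above chance
# would indicate a neural signature shared across report conditions.
clf = make_pipeline(StandardScaler(), LinearSVC())
clf.fit(X_unreported, y_unreported)
print("cross-report accuracy:", clf.score(X_reported, y_reported))
```

In the study's terms, above-chance cross-report accuracy that is stronger for the Kanizsa condition than for the control conditions is what supports the claim that unattended stimuli are processed up to the level of perceptual interpretation.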


2017
Author(s): Ashley Prichard, Peter F. Cook, Mark Spivak, Raveena Chhibber, Gregory S. Berns

Abstract: How do dogs understand human words? At a basic level, understanding would require the discrimination of words from non-words. To determine the mechanisms of such discrimination, we trained 12 dogs to retrieve two objects based on object names, then probed the neural basis for these auditory discriminations using awake fMRI. We compared the neural response to these trained words with the response to "oddball" pseudowords the dogs had not heard before. Consistent with novelty detection, we found greater activation for pseudowords relative to trained words bilaterally in the parietotemporal cortex. To probe the neural basis for representations of the trained words, we used searchlight multivoxel pattern analysis (MVPA), which revealed that a subset of dogs had clusters of informative voxels that discriminated between the two trained words. These clusters included the left temporal cortex and amygdala, the left caudate nucleus, and the thalamus. These results demonstrate that dogs' processing of human words utilizes basic processes like novelty detection and, for some dogs, may also include auditory and hedonic representations.
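A searchlight MVPA scans a small neighborhood around every voxel and asks whether the local activity pattern discriminates the two conditions. The sketch below implements a schematic version with numpy and scikit-learn, using a cube neighborhood as a stand-in for a true sphere and fully synthetic data; the volume size, radius, classifier, and cross-validation scheme are illustrative assumptions, not the paper's actual parameters.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
n_trials, grid = 40, (10, 10, 10)       # hypothetical trial count and volume
labels = rng.integers(0, 2, n_trials)   # word A vs. word B on each trial
volume = rng.standard_normal((n_trials,) + grid)  # trials x (x, y, z) voxels

radius = 1  # searchlight radius in voxels
accuracy = np.zeros(grid)
for x in range(grid[0]):
    for y in range(grid[1]):
        for z in range(grid[2]):
            # Extract the local neighborhood around voxel (x, y, z).
            sl = volume[:,
                        max(x - radius, 0):x + radius + 1,
                        max(y - radius, 0):y + radius + 1,
                        max(z - radius, 0):z + radius + 1]
            X = sl.reshape(n_trials, -1)
            # Cross-validated accuracy of the local pattern classifier.
            accuracy[x, y, z] = cross_val_score(
                LinearSVC(), X, labels, cv=4).mean()

print("peak searchlight accuracy:", accuracy.max())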


2021
Author(s): Trung Quang Pham, Takaaki Yoshimoto, Haruki Niwa, Haruka K. Takahashi, Ryutaro Uchiyama, ...

Abstract: Humans, and now computers, can derive subjective valuations from sensory events, although the underlying transformation process is essentially unknown. In this study, we investigated these neural mechanisms by comparing representations in convolutional neural networks (CNNs) with their counterparts in the human brain. Specifically, we optimized CNNs to predict aesthetic valuations of paintings and examined the relationship between the CNN representations and brain activity via multivoxel pattern analysis. Activity in the primary visual cortex resembled computations in shallow CNN layers, and activity in higher association cortex resembled computations in deeper layers. The vision-to-value transformation is thus shown to be a hierarchical process, consistent with the principal gradient that connects unimodal to transmodal brain regions (i.e., the default mode network). The activity of the frontal and parietal cortices was approximated by a goal-driven CNN. Consequently, representations in the hidden layers of CNNs can be understood and visualized through their correspondence with brain activity, facilitating parallels between artificial intelligence and neuroscience.
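One common way to relate CNN layers to regional brain activity is representational similarity analysis: build a representational dissimilarity matrix (RDM) over stimuli for each CNN layer and each brain region, then correlate the two. The sketch below assumes this RSA-style comparison (the authors' exact MVPA procedure is not specified here) and uses synthetic activations throughout; the layer and region names are placeholders.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_stimuli = 50  # hypothetical number of paintings

def rdm(features):
    """Condensed RDM: pairwise correlation distance between stimuli."""
    return pdist(features, metric="correlation")

# Stand-ins for CNN layer activations and regional voxel patterns.
layers = {"shallow": rng.standard_normal((n_stimuli, 256)),
          "deep":    rng.standard_normal((n_stimuli, 64))}
regions = {"V1":  rng.standard_normal((n_stimuli, 300)),
           "DMN": rng.standard_normal((n_stimuli, 300))}

# A shallow layer matching V1 and a deep layer matching transmodal cortex
# would mirror the hierarchical correspondence described in the abstract.
for lname, feats in layers.items():
    for rname, voxels in regions.items():
        rho, _ = spearmanr(rdm(feats), rdm(voxels))
        print(f"{lname} layer vs. {rname}: rho = {rho:.3f}")
```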

