A multivoxel pattern analysis of anhedonia during fear extinction – implications for safety learning

Author(s): Benjamin M. Rosenberg ◽ Vincent Taschereau-Dumouchel ◽ Hakwan Lau ◽ Katherine S. Young ◽ Robin Nusslock ◽ ...
2021 ◽ Vol 89 (9) ◽ pp. S165
Author(s): Benjamin Rosenberg ◽ Vincent Taschereau-Dumouchel ◽ Katherine Young ◽ Hakwan Lau ◽ Richard Zinbarg ◽ ...

Author(s): Giuseppe Di Cesare ◽ Giancarlo Valente ◽ Cinzia Di Dio ◽ Emanuele Ruffaldi ◽ Massimo Bergamasco ◽ ...

2018 ◽ Vol 13 (5) ◽ pp. 1273-1280
Author(s): Jianing Zhang ◽ Wanyi Cao ◽ Mingyu Wang ◽ Nizhuan Wang ◽ Shuqiao Yao ◽ ...

2014 ◽ Vol 26 (5) ◽ pp. 955-969
Author(s): Annelinde R. E. Vandenbroucke ◽ Johannes J. Fahrenfort ◽ Ilja G. Sligte ◽ Victor A. F. Lamme

Every day, we experience a rich and complex visual world. Our brain constantly translates meaningless fragmented input into coherent objects and scenes. However, our attentional capabilities are limited, and we can only report the few items that we happen to attend to. So what happens to items that are not cognitively accessed? Do these remain fragmentary and meaningless? Or are they processed up to a level where perceptual inferences take place about image composition? To investigate this, we recorded brain activity using fMRI while participants viewed images containing a Kanizsa figure, an illusion in which an object is perceived by means of perceptual inference. Participants were presented with the Kanizsa figure and three matched nonillusory control figures while they were engaged in an attentionally demanding distractor task. After the task, one group of participants was unable to identify the Kanizsa figure in a forced-choice decision task; hence, they were “inattentionally blind.” A second group had no trouble identifying the Kanizsa figure. Interestingly, the neural signature that was unique to the processing of the Kanizsa figure was present in both groups. Moreover, within-subject multivoxel pattern analysis showed that the neural signature of unreported Kanizsa figures could be used to classify reported Kanizsa figures and that this cross-report classification worked better for the Kanizsa condition than for the control conditions. Together, these results suggest that stimuli that are not cognitively accessed are processed up to levels of perceptual interpretation.
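The cross-report classification described above amounts to training a pattern classifier on trials from one condition (reported Kanizsa figures) and testing it on patterns from another (unreported ones). The abstract does not specify the classifier used, so the following is only a minimal numpy sketch of one common MVPA approach, a Haxby-style correlation-template classifier, run on synthetic data; every name and all the data below are illustrative, not the authors' pipeline:

```python
import numpy as np

def template_classify(train_patterns, train_labels, test_patterns):
    """Correlation-template MVPA classifier (Haxby-style sketch).

    Builds one mean multivoxel activation template per class from the
    training patterns, then assigns each test pattern to the class whose
    template it correlates with most strongly.
    """
    classes = np.unique(train_labels)
    templates = [train_patterns[train_labels == c].mean(axis=0) for c in classes]
    preds = []
    for p in test_patterns:
        r = [np.corrcoef(p, t)[0, 1] for t in templates]
        preds.append(classes[int(np.argmax(r))])
    return np.array(preds)

# Synthetic demo: two conditions with distinct 50-voxel patterns.
rng = np.random.default_rng(0)
base = rng.normal(size=(2, 50))                 # one "true" pattern per class
train = np.vstack([base[l] + rng.normal(scale=0.5, size=50)
                   for l in [0, 1] * 20])       # "reported" trials
train_labels = np.array([0, 1] * 20)
test = np.vstack([base[l] + rng.normal(scale=0.5, size=50)
                  for l in [0, 1] * 10])        # held-out "unreported" trials
test_labels = np.array([0, 1] * 10)
acc = (template_classify(train, train_labels, test) == test_labels).mean()
```

With clearly separated synthetic patterns, cross-set accuracy is far above the 50% chance level, which is the qualitative pattern the study reports for the Kanizsa condition.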


2017
Author(s): Ashley Prichard ◽ Peter F. Cook ◽ Mark Spivak ◽ Raveena Chhibber ◽ Gregory S. Berns

Abstract: How do dogs understand human words? At a basic level, understanding would require the discrimination of words from non-words. To determine the mechanisms of such a discrimination, we trained 12 dogs to retrieve two objects based on object names, then probed the neural basis for these auditory discriminations using awake fMRI. We compared the neural response to these trained words relative to “oddball” pseudowords the dogs had not heard before. Consistent with novelty detection, we found greater activation for pseudowords relative to trained words bilaterally in the parietotemporal cortex. To probe the neural basis for representations of trained words, we ran a searchlight multivoxel pattern analysis (MVPA), which revealed that a subset of dogs had clusters of informative voxels that discriminated between the two trained words. These clusters included the left temporal cortex and amygdala, left caudate nucleus, and thalamus. These results demonstrate that dogs’ processing of human words utilizes basic processes like novelty detection and, for some dogs, may also include auditory and hedonic representations.
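A searchlight MVPA slides a small sphere through the brain volume, fits a classifier on just the voxels inside each sphere, and writes the resulting accuracy back to the sphere's center, yielding a map of where local patterns carry information. The study's actual searchlight implementation is not specified here; this is a toy numpy-only sketch using a leave-one-trial-out nearest-class-mean classifier on synthetic volumes, with all names and data illustrative:

```python
import numpy as np

def sphere_offsets(radius):
    """Integer voxel offsets within a Euclidean `radius` of the origin."""
    r = int(radius)
    return np.array([(i, j, k)
                     for i in range(-r, r + 1)
                     for j in range(-r, r + 1)
                     for k in range(-r, r + 1)
                     if i * i + j * j + k * k <= radius * radius])

def searchlight_accuracy(data, labels, radius=1.5):
    """Toy searchlight MVPA.

    data: (n_trials, x, y, z) activation volumes.  For every voxel,
    gathers the voxels inside the sphere, runs leave-one-trial-out
    nearest-class-mean classification, and stores the accuracy there.
    """
    n, X, Y, Z = data.shape
    offs = sphere_offsets(radius)
    classes = np.unique(labels)
    acc = np.zeros((X, Y, Z))
    for x in range(X):
        for y in range(Y):
            for z in range(Z):
                coords = offs + np.array([x, y, z])
                keep = ((coords >= 0) & (coords < [X, Y, Z])).all(axis=1)
                c = coords[keep]
                patterns = data[:, c[:, 0], c[:, 1], c[:, 2]]  # (n, voxels)
                correct = 0
                for i in range(n):                 # leave one trial out
                    tr = np.delete(np.arange(n), i)
                    means = np.array([patterns[tr][labels[tr] == cl].mean(axis=0)
                                      for cl in classes])
                    d = np.linalg.norm(means - patterns[i], axis=1)
                    correct += classes[np.argmin(d)] == labels[i]
                acc[x, y, z] = correct / n
    return acc

# Synthetic demo: signal confined to one corner of a 6x6x6 volume.
rng = np.random.default_rng(1)
labels = np.array([0, 1] * 10)
data = rng.normal(size=(20, 6, 6, 6))
data[labels == 1, :2, :2, :2] += 2.0      # informative voxels in one corner
acc_map = searchlight_accuracy(data, labels)
```

In the demo, only spheres overlapping the informative corner classify well above chance, mirroring how a searchlight isolates "clusters of informative voxels" like those the study reports.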

