When a Thought Equals a Look: Refreshing Enhances Perceptual Memory

2008 · Vol. 20 (8) · pp. 1371–1380
Author(s): Do-Joon Yi, Nicholas B. Turk-Browne, Marvin M. Chun, Marcia K. Johnson

Cognition constantly involves retrieving and maintaining information that is not perceptually available in the current environment. Studies of visual imagery and working memory suggest that such high-level cognition might, in part, be mediated by the revival of perceptual representations in the inferior temporal cortex. Here, we provide new support for this hypothesis, showing that reflectively accessed information can have consequences for subsequent perception similar to those of actual perceptual input. Participants were presented with pairs of frames in which a scene could appear and were required to make a category judgment on the second frame. In the critical condition, a scene was presented in the first frame, but the second frame was blank; thus, it was necessary to refresh the scene from the first frame in order to make the category judgment. Scenes were then repeated in subsequent trials to measure the effect of refreshing on functional magnetic resonance imaging repetition attenuation, a neural index of memory, in a scene-selective region of the visual cortex. Surprisingly, refreshed scenes produced as much attenuation as scenes that had been presented twice during encoding, and more attenuation than scenes that had been presented once during encoding but not refreshed. Thus, the top-down revival of a percept had much the same effect on memory as actually seeing the stimulus again. These findings indicate that high-level cognition can activate stimulus-specific representations in the ventral visual cortex, and that such top-down activation, like that from sensory stimulation, produces memorial changes that affect perceptual processing during a later encounter with the stimulus.
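To make the repetition-attenuation measure concrete, here is a minimal sketch of how such an index is often computed, assuming hypothetical trial-wise response estimates (betas) from a scene-selective region of interest. The condition labels, effect sizes, and statistics below are illustrative only and do not reproduce the authors' data or pipeline.

```python
# Minimal sketch of a repetition-attenuation comparison (assumed, not the authors' pipeline).
# Inputs are hypothetical trial-wise beta estimates from a scene-selective ROI,
# one value per trial per condition.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
betas = {
    "novel":              rng.normal(1.00, 0.20, 40),  # first presentation
    "repeated_seen":      rng.normal(0.70, 0.20, 40),  # repeated after being seen twice
    "repeated_refreshed": rng.normal(0.72, 0.20, 40),  # repeated after being refreshed once
}

# Repetition attenuation = reduction in response relative to novel scenes.
novel_mean = betas["novel"].mean()
for cond in ("repeated_seen", "repeated_refreshed"):
    attenuation = novel_mean - betas[cond].mean()
    t, p = stats.ttest_ind(betas["novel"], betas[cond])
    print(f"{cond}: attenuation = {attenuation:.3f}, t = {t:.2f}, p = {p:.4f}")
```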

2003 · Vol. 15 (4) · pp. 574–583
Author(s): Paul J. Reber, Darren R. Gitelman, Todd B. Parrish, M. Marsel Mesulam

Neuroimaging of healthy volunteers identified separate neural systems supporting the expression of category knowledge depending on whether the learning mode was intentional or incidental. The same visual category was learned either intentionally or implicitly by two separate groups of participants. During a categorization test, functional magnetic resonance imaging (fMRI) was used to compare brain activity evoked by category members and nonmembers. After implicit learning, when participants had learned the category incidentally, decreased occipital activity was observed for novel categorical stimuli compared with noncategorical stimuli. In contrast, after intentional learning, novel categorical stimuli evoked increased activity in the hippocampus, right prefrontal cortex, left inferior temporal cortex, precuneus, and posterior cingulate. Even though the categorization test was identical in the two conditions, the differences in brain activity indicate differing representations of category knowledge depending on whether the category had been learned intentionally or implicitly.


2010 · Vol. 103 (3) · pp. 1501–1507
Author(s): P.-J. Hsieh, E. Vul, N. Kanwisher

Early retinotopic cortex has traditionally been viewed as containing a veridical representation of the low-level properties of the image, not imbued by high-level interpretation and meaning. Yet several recent results indicate that neural representations in early retinotopic cortex reflect not just the sensory properties of the image, but also the perceived size and brightness of image regions. Here we used functional magnetic resonance imaging pattern analyses to ask whether the representation of an object in early retinotopic cortex changes when the object is recognized compared with when the same stimulus is presented but not recognized. Our data confirmed this hypothesis: the pattern of response in early retinotopic visual cortex to a two-tone “Mooney” image of an object was more similar to the response to the full grayscale photo version of the same image when observers knew what the two-tone image represented than when they did not. Further, in a second experiment, high-level interpretations actually overrode bottom-up stimulus information, such that the pattern of response in early retinotopic cortex to an identified two-tone image was more similar to the response to the photographic version of that stimulus than it was to the response to the identical two-tone image when it was not identified. Our findings are consistent with prior results indicating that perceived size and brightness affect representations in early retinotopic visual cortex and, further, show that even higher-level information—knowledge of object identity—also affects the representation of an object in early retinotopic cortex.
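As a rough illustration of the pattern-analysis logic described above (not the authors' exact method), the sketch below computes correlation-based similarity between hypothetical voxel response patterns from an early retinotopic region of interest; the pattern values and the direction of the effect are simulated for illustration.

```python
# Minimal sketch of a correlation-based fMRI pattern-similarity analysis
# (assumed, not the authors' exact method). Voxel patterns are hypothetical
# response vectors from an early retinotopic ROI.
import numpy as np

def pattern_similarity(pattern_a: np.ndarray, pattern_b: np.ndarray) -> float:
    """Pearson correlation between two voxel response patterns."""
    return float(np.corrcoef(pattern_a, pattern_b)[0, 1])

rng = np.random.default_rng(1)
n_voxels = 200
photo = rng.normal(size=n_voxels)                              # response to grayscale photo
mooney_identified = photo + 0.5 * rng.normal(size=n_voxels)    # recognized two-tone image
mooney_unidentified = rng.normal(size=n_voxels)                # unrecognized two-tone image

print("identified vs photo:  ", pattern_similarity(mooney_identified, photo))
print("unidentified vs photo:", pattern_similarity(mooney_unidentified, photo))
```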


2009 · Vol. 20 (11) · pp. 1322–1331
Author(s): Kevin N. Ochsner, Rebecca R. Ray, Brent Hughes, Kateri McRae, Jeffrey C. Cooper, et al.

Emotions are generally thought to arise through the interaction of bottom-up and top-down processes. However, prior work has not delineated their relative contributions. In a sample of 20 females, we used functional magnetic resonance imaging to compare the neural correlates of negative emotions generated by the bottom-up perception of aversive images and by the top-down interpretation of neutral images as aversive. We found that (a) both types of responses activated the amygdala, although bottom-up responses did so more strongly; (b) bottom-up responses activated systems for attending to and encoding perceptual and affective stimulus properties, whereas top-down responses activated prefrontal regions that represent high-level cognitive interpretations; and (c) self-reported affect correlated with activity in the amygdala during bottom-up responding and with activity in the medial prefrontal cortex during top-down responding. These findings provide a neural foundation for emotion theories that posit multiple kinds of appraisal processes and help to clarify mechanisms underlying clinically relevant forms of emotion dysregulation.


2018
Author(s): Eleanor Palser, Cristiana Cavina-Pratesi

Previous work has demonstrated that the left lateral occipito-temporal cortex, a region of the extrastriate visual cortex, is activated both by images of human hands and by images of manual tools. The current paper examined the functional significance of this observation. A functional magnetic resonance imaging (fMRI) adaptation paradigm was used to investigate the hypothesis that this region is selective for the functional compatibility of hands and tools. The present results suggest that this region is indeed involved in matching compatible hand postures with tools for their skilled and effective use, with significant adaptation upon successive presentation of compatible hands and tools. It is proposed that this region of the extrastriate visual cortex represents a crucial node within a much wider cortical network that supports skilled tool use in humans. The present results are discussed in terms of their implications for our understanding of the organization of the visual brain.


Author(s): Benjamin O. Barnett, Jeffrey A. Brooks, Jonathan B. Freeman

Previous research has shown that social-conceptual associations, such as stereotypes, can influence the visual representation of faces and neural pattern responses in ventral temporal cortex (VTC) regions, such as the fusiform gyrus (FG). Current models suggest that this social-conceptual impact requires medial orbitofrontal cortex (mOFC) feedback signals during perception. Backward masking can disrupt such signals, as it is known to reduce functional connectivity between VTC regions and regions outside the VTC. During functional magnetic resonance imaging (fMRI), subjects passively viewed masked and unmasked faces, and following the scan, perceptual biases and stereotypical associations were assessed. Multi-voxel representations of faces across the VTC, and in the FG and mOFC, reflected stereotypically biased perceptions when faces were unmasked, but this effect was abolished when faces were masked. However, the VTC still retained the ability to process masked faces and remained sensitive to their categorical distinctions. Functional connectivity analyses confirmed that masking disrupted mOFC–FG connectivity, which predicted a reduced impact of stereotypical associations in the FG. Taken together, our findings suggest that the biasing of face representations in line with stereotypical associations does not arise from intrinsic processing within the VTC and FG alone, but instead depends in part on top-down feedback from the mOFC during perception.
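For readers unfamiliar with ROI-to-ROI functional connectivity, here is a minimal sketch, under the assumption that connectivity is summarized as a Fisher z-transformed correlation between two region-of-interest time courses. The mOFC and FG time courses below are simulated, and this is not the authors' actual analysis.

```python
# Minimal sketch of an ROI-to-ROI functional connectivity comparison
# (assumed, not the authors' exact analysis). Time courses for the mOFC
# and fusiform gyrus (FG) ROIs are simulated.
import numpy as np

def connectivity(ts_a: np.ndarray, ts_b: np.ndarray) -> float:
    """Fisher z-transformed Pearson correlation between two ROI time courses."""
    r = np.corrcoef(ts_a, ts_b)[0, 1]
    return float(np.arctanh(r))

rng = np.random.default_rng(2)
n_trs = 300
mofc = rng.normal(size=n_trs)
fg_unmasked = 0.6 * mofc + 0.8 * rng.normal(size=n_trs)  # coupled when faces are unmasked
fg_masked = rng.normal(size=n_trs)                        # coupling disrupted by masking

print("unmasked mOFC-FG connectivity:", connectivity(mofc, fg_unmasked))
print("masked   mOFC-FG connectivity:", connectivity(mofc, fg_masked))
```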


2017
Author(s): Raúl Hernández-Pérez, Luis Concha, Laura V. Cuaya

Dogs can interpret emotional human faces (especially those expressing happiness), yet the cerebral correlates of this process are unknown. Using functional magnetic resonance imaging (fMRI), we studied eight awake and unrestrained dogs. In Experiment 1, dogs observed happy and neutral human faces, and we found increased brain activity in the temporal cortex and caudate when they viewed happy human faces. In Experiment 2, the dogs were presented with human faces expressing happiness, anger, fear, or sadness. Using the resulting cluster from Experiment 1, we trained a linear support vector machine classifier to discriminate between pairs of emotions and found that it could only discriminate happiness from the other emotions. Finally, evaluating the whole-brain fMRI time courses with a similar classifier allowed us to predict the emotion being observed by the dogs. Our results show that human emotions are specifically represented in dogs' brains, highlighting their importance for inter-species communication.
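A minimal sketch of the kind of pairwise decoding analysis described here, assuming trial-wise voxel patterns and a cross-validated linear support vector machine (scikit-learn). The data, region size, and resulting accuracy are simulated for illustration and do not reproduce the authors' pipeline.

```python
# Minimal sketch of pairwise emotion decoding with a linear SVM
# (assumed, not the authors' exact pipeline). X holds hypothetical
# trial-wise voxel patterns from a region of interest.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(3)
n_trials_per_emotion, n_voxels = 40, 150
emotions = ["happiness", "anger"]                 # one pairwise comparison
X = rng.normal(size=(2 * n_trials_per_emotion, n_voxels))
y = np.repeat(emotions, n_trials_per_emotion)
X[y == "happiness"] += 0.3                        # hypothetical signal for happiness trials

clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
scores = cross_val_score(clf, X, y, cv=5)         # 5-fold cross-validated accuracy
print(f"{emotions[0]} vs {emotions[1]}: {scores.mean():.2f} ± {scores.std():.2f}")
```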

