Visual and semantic representations predict subsequent memory in perceptual and conceptual memory tests

2020
Author(s):
Simon W. Davis
Benjamin R. Geib
Erik A. Wing
Wei-Chun Wang
Mariam Hovhannisyan
...  

Abstract: It is generally assumed that the encoding of a single event generates multiple memory representations, which contribute differently to subsequent episodic memory. We used fMRI and representational similarity analysis (RSA) to examine how visual and semantic representations predicted subsequent memory for single-item encoding (e.g., seeing an orange). Three levels of visual representations corresponding to early, middle, and late visual processing stages were based on a deep neural network. Three levels of semantic representations were based on normative Observed (“is round”), Taxonomic (“is a fruit”), and Encyclopedic features (“is sweet”). We identified brain regions where each representation type predicted later Perceptual Memory, Conceptual Memory, or both (General Memory). Participants encoded objects during fMRI and then completed both a word-based conceptual and a picture-based perceptual memory test. Visual representations predicted subsequent Perceptual Memory in visual cortices, but also facilitated Conceptual and General Memory in more anterior regions. Semantic representations, in turn, predicted Perceptual Memory in visual cortex, Conceptual Memory in the perirhinal and inferior prefrontal cortex, and General Memory in the angular gyrus. These results suggest that the contribution of visual and semantic representations to subsequent memory effects depends on a complex interaction between representation, test type, and storage location.
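The core RSA logic described above (comparing model-derived feature spaces against brain activity patterns) can be sketched schematically as follows. This is a generic illustration with randomly generated stand-in data, not the authors' actual pipeline; array sizes and the single-layer/single-ROI setup are assumptions for the example.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

# Hypothetical data: 50 encoded objects x 100 voxels in one ROI/searchlight,
# and the same 50 objects x 300 features from one DNN layer (e.g., early).
brain_patterns = rng.standard_normal((50, 100))
model_features = rng.standard_normal((50, 300))

# First-order analysis: representational dissimilarity matrices (RDMs),
# here using correlation distance (1 - Pearson r) between item pairs.
brain_rdm = pdist(brain_patterns, metric="correlation")
model_rdm = pdist(model_features, metric="correlation")

# Second-order analysis: rank-correlate the brain RDM with the model RDM.
# In a subsequent-memory design, this model fit would then be related to
# each item's later remembered/forgotten status.
rho, p = spearmanr(brain_rdm, model_rdm)
print(f"brain-model RDM correlation: rho={rho:.3f}, p={p:.3f}")
```

With 50 items, each condensed RDM has 50 × 49 / 2 = 1225 pairwise entries; the Spearman correlation between them is the usual second-order RSA statistic.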


2006
Vol 35 (3)
pp. 259-272
Author(s):
Nicole Buck
Merel Kindt
Marcel van den Hout
Lou Steens
Cintha Linders

Ehlers and Clark (2000) hypothesize that persistent PTSD is explained by a predominance of data-driven processing and a lack of conceptually-driven processing of the trauma. Data-driven/conceptually-driven processing is thought to relate to perceptual memory representations and memory fragmentation. The present study measured the result of data-driven/conceptually-driven processing in three ways: at the utterance level, by assessing 1) the ratio between perceptual and conceptual memory representations and 2) utterance disorganization; and at the narrative level, by assessing 3) the incoherence of the trauma narrative. Twenty-nine patients discharged from the Intensive Care Unit (ICU) were assessed within two weeks after ICU discharge and at 4-month follow-up. The present study tested whether perceptual memory representations, narrative disorganization, and narrative incoherence immediately after ICU discharge are related to post-trauma symptomatology and, if so, whether these variables are specific to PTSD as compared to depression. Data-driven/conceptually-driven processing was related to PTSD and depression symptoms at the utterance level. Although narrative incoherence did not predict PTSD symptoms, it was predictive of depression symptoms. The present study showed the viability of the data-driven/conceptually-driven conceptualization in explaining post-trauma symptomatology.


2010
Vol 7 (1)
pp. 86-88
Author(s):
Seth D. Dobson
Chet C. Sherwood

Anthropoid primates are distinguished from other mammals by having relatively large primary visual cortices (V1) and complex facial expressions. We present a comparative test of the hypothesis that facial expression processing coevolved with the expansion of V1 in anthropoids. Previously published data were analysed using phylogenetic comparative methods. The results of our study suggest a pattern of correlated evolution linking social group size, facial motor control and cortical visual processing in catarrhines, but not platyrrhines. Catarrhines that live in relatively large social groups tended to have relatively large facial motor nuclei, and relatively large primary visual cortices. We conclude that catarrhine brains are adapted for producing and processing complex facial displays.


2021
Vol 15
Author(s):
Trung Quang Pham
Shota Nishiyama
Norihiro Sadato
Junichi Chikazoe

Multivoxel pattern analysis (MVPA) has become a standard tool for decoding mental states from brain activity patterns. Recent studies have demonstrated that MVPA can be applied to decode the activity patterns of a certain region from those of other regions. By applying a similar region-to-region decoding technique, we examined whether the information represented in each visual area can be explained by that represented in the other visual areas. We first predicted the brain activity patterns of an area on the visual pathway from the others, then subtracted the predicted patterns from the originals. Subsequently, the visual features were derived from these residuals. During the visual perception task, the elimination of the top-down signals enhanced the simple visual features represented in the early visual cortices. By contrast, the elimination of the bottom-up signals enhanced the complex visual features represented in the higher visual cortices. The directions of such modulation effects varied across visual perception/imagery tasks, indicating that the information flow across the visual cortices is dynamically altered, reflecting the contents of visual processing. These results demonstrate that the distillation approach is a useful tool for estimating the hidden content of information conveyed across brain regions.
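The "distillation" step above (predict one region's activity from another's, subtract the prediction, analyze the residuals) can be illustrated with synthetic data. Ridge regression here stands in for whatever region-to-region model is used; the array sizes and the in-sample fit are simplifying assumptions for the sketch.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)

# Hypothetical activity patterns: 200 trials, 80 voxels in a "source"
# visual area and 60 voxels in a "target" visual area, where the target
# partially inherits the source's signal.
source = rng.standard_normal((200, 80))
target = 0.5 * source[:, :60] + rng.standard_normal((200, 60))

# Predict the target area's patterns from the source area's patterns.
model = Ridge(alpha=1.0).fit(source, target)
predicted = model.predict(source)

# Subtract the prediction: the residuals approximate the information in the
# target area NOT conveyed from the source area; visual features would then
# be decoded from these residuals rather than from the raw patterns.
residuals = target - predicted
print("variance explained:", 1 - residuals.var() / target.var())
```

In practice the prediction would be cross-validated so that the subtracted component is not inflated by overfitting.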


2018
Vol 29 (5)
pp. 845-856
Author(s):
Ilona M. Bloem
Yurika L. Watanabe
Melissa M. Kibbe
Sam Ling

How distinct are visual memory representations from visual perception? Although evidence suggests that briefly remembered stimuli are represented within early visual cortices, the degree to which these memory traces resemble true visual representations remains something of a mystery. Here, we tested whether both visual memory and perception succumb to a seemingly ubiquitous neural computation: normalization. Observers were asked to remember the contrast of visual stimuli, which were pitted against each other to promote normalization either in perception or in visual memory. Our results revealed robust normalization between visual representations in perception, yet no signature of normalization occurring between working memory stores—neither between representations in memory nor between memory representations and visual inputs. These results provide unique insight into the nature of visual memory representations, illustrating that visual memory representations follow a different set of computational rules, bypassing normalization, a canonical visual computation.
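The normalization computation tested here is typically modeled as divisive normalization, in the spirit of the canonical form in which each unit's driven response is divided by a semi-saturation constant plus the pooled activity. A minimal numerical sketch (parameter values are arbitrary illustrations, not the study's model fits):

```python
import numpy as np

def divisive_normalization(drive, sigma=1.0, n=2.0):
    """Canonical divisive normalization: each input's exponentiated drive
    is divided by a semi-saturation constant plus the pooled drive."""
    drive = np.asarray(drive, dtype=float)
    num = drive ** n
    return num / (sigma ** n + num.sum())

# Two stimuli pitted against each other: adding a high-contrast competitor
# (drive 2.0) suppresses the normalized response to the same stimulus (1.0).
alone = divisive_normalization([1.0])
paired = divisive_normalization([1.0, 2.0])
print(alone[0], paired[0])  # response to the same stimulus is lower when paired
```

This mutual-suppression signature is what the study looked for between perceptual representations (where it appeared) and between working memory stores (where it did not).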


2019
Author(s):
Zachary Hawes
H Moriah Sokolowski
Chuka Bosah Ononye
Daniel Ansari

Where and under what conditions do spatial and numerical skills converge and diverge in the brain? To address this question, we conducted a meta-analysis of brain regions associated with basic symbolic number processing, arithmetic, and mental rotation. We used Activation Likelihood Estimation (ALE) to construct quantitative meta-analytic maps synthesizing results from 86 neuroimaging papers (~30 studies per cognitive process). All three cognitive processes were found to activate bilateral parietal regions in and around the intraparietal sulcus (IPS); a finding consistent with shared processing accounts. Numerical and arithmetic processing were associated with overlap in the left angular gyrus, whereas mental rotation and arithmetic both showed activity in the middle frontal gyri. These patterns suggest regions of cortex potentially more specialized for symbolic number representation and domain-general mental manipulation, respectively. Additionally, arithmetic was associated with unique activity throughout the fronto-parietal network and mental rotation was associated with unique activity in the right superior parietal lobe. Overall, these results provide new insights into the intersection of numerical and spatial thought in the human brain.
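The ALE procedure named above can be sketched in miniature: each study's reported activation foci are modeled as Gaussian probability blobs, blobs are combined within a study by probabilistic union, and the ALE value at each voxel is the union across studies. The 1D "brain", focus coordinates, and smoothing width below are toy assumptions, not values from the meta-analysis.

```python
import numpy as np

rng = np.random.default_rng(2)
positions = np.arange(100)  # a coarse 1D stand-in for brain voxels

def modeled_activation(foci, fwhm=10.0):
    """Per-study modeled activation map: Gaussian blob per focus,
    combined within the study by probabilistic union."""
    sigma = fwhm / 2.355  # convert FWHM to standard deviation
    ma = np.zeros_like(positions, dtype=float)
    for f in foci:
        blob = np.exp(-0.5 * ((positions - f) / sigma) ** 2)
        ma = 1 - (1 - ma) * (1 - blob)
    return ma

# Three simulated studies, each reporting four foci clustered near voxel 50.
study_maps = [modeled_activation(rng.normal(50, 3, size=4)) for _ in range(3)]

# ALE value per voxel: probabilistic union of the per-study maps.
ale = 1 - np.prod([1 - m for m in study_maps], axis=0)
print("peak ALE voxel:", int(np.argmax(ale)))
```

The real method additionally tests these ALE values against a null distribution of randomly relocated foci to find significant convergence.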


2016
Vol 28 (1)
pp. 111-124
Author(s):
Sabrina Walter
Christian Keitel
Matthias M. Müller

Visual attention can be focused concurrently on two stimuli at noncontiguous locations while intermediate stimuli remain ignored. Nevertheless, behavioral performance in multifocal attention tasks falters when attended stimuli fall within one visual hemifield as opposed to when they are distributed across left and right hemifields. This “different-hemifield advantage” has been ascribed to largely independent processing capacities of each cerebral hemisphere in early visual cortices. Here, we investigated how this advantage influences the sustained division of spatial attention. We presented six isoeccentric light-emitting diodes (LEDs) in the lower visual field, each flickering at a different frequency. Participants attended to two LEDs that were spatially separated by an intermediate LED and responded to synchronous events at to-be-attended LEDs. Task-relevant pairs of LEDs were either located in the same hemifield (“within-hemifield” conditions) or separated by the vertical meridian (“across-hemifield” conditions). Flicker-driven brain oscillations, steady-state visual evoked potentials (SSVEPs), indexed the allocation of attention to individual LEDs. Both behavioral performance and SSVEPs indicated enhanced processing of attended LED pairs during “across-hemifield” relative to “within-hemifield” conditions. Moreover, SSVEPs demonstrated effective filtering of intermediate stimuli in the “across-hemifield” conditions only. Thus, despite identical physical distances between LEDs of attended pairs, the spatial profiles of gain effects differed profoundly between “across-hemifield” and “within-hemifield” conditions. These findings corroborate that early cortical visual processing stages rely on hemisphere-specific processing capacities and highlight their limiting role in the concurrent allocation of visual attention to multiple locations.
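The frequency-tagging logic behind SSVEPs — each flickering LED drives an oscillation at its own frequency, whose spectral amplitude indexes attention to that LED — can be sketched with simulated data. The sampling rate, flicker frequencies, and amplitudes below are invented for illustration, not the study's parameters.

```python
import numpy as np

fs = 500.0                      # hypothetical EEG sampling rate (Hz)
t = np.arange(0, 4.0, 1 / fs)   # 4 s of steady-state stimulation

# Simulated EEG: two attended LEDs flickering at 12 Hz and 15 Hz drive
# oscillations of different strength, buried in broadband noise.
rng = np.random.default_rng(3)
eeg = (1.5 * np.sin(2 * np.pi * 12 * t)
       + 0.8 * np.sin(2 * np.pi * 15 * t)
       + rng.standard_normal(t.size))

# SSVEP amplitude: spectral amplitude at each tagging frequency.
spectrum = np.abs(np.fft.rfft(eeg)) / t.size * 2
freqs = np.fft.rfftfreq(t.size, 1 / fs)

for f in (12.0, 15.0):
    amp = spectrum[np.argmin(np.abs(freqs - f))]
    print(f"SSVEP amplitude at {f:g} Hz: {amp:.2f}")
```

Because each stimulus has a unique tag, one recording simultaneously yields a separate attention index per LED; comparing these amplitudes across conditions is what reveals the hemifield effects reported above.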


2006
Vol 34 (3)
pp. 319-331
Author(s):
Nicole Buck
Merel Kindt
Marcel van den Hout

Dissociation often occurs after a traumatic experience and has detrimental effects on memory. If these supposed detrimental effects are the result of disturbances in information processing, not only subjectively assessed but also objectively assessed memory disturbances should be observed. Most studies assessing dissociation and memory in the context of trauma have studied trauma victims. However, this study takes a new approach in that the impact of experimentally induced state dissociation on memory is investigated in people with spider phobia. Note that the aim of the present study was not to test the effect of trauma on memory disturbances. We indeed found significant relations between state dissociation and subjectively assessed memory disturbances: intrusions and self-rated memory fragmentation. Moreover, although no relation was found between state dissociation and experimenter-rated memory fragmentation, we observed a relation between state dissociation and experimenter-rated perceptual memory representations. These results show that state dissociation indeed has detrimental effects on the processing of aversive events.


2013
Vol 169 (5)
pp. 639-647
Author(s):
Elizabeth A Lawson
Laura M Holsen
Rebecca DeSanti
McKale Santin
Erinne Meenaghan
...  

Objective: Corticotrophin-releasing hormone (CRH)-mediated hypercortisolemia has been demonstrated in anorexia nervosa (AN), a psychiatric disorder characterized by food restriction despite low body weight. While CRH is anorexigenic, downstream cortisol stimulates hunger. Using a food-related functional magnetic resonance imaging (fMRI) paradigm, we have demonstrated hypoactivation of brain regions involved in food motivation in women with AN, even after weight recovery. The relationship between hypothalamic–pituitary–adrenal (HPA) axis dysregulation and appetite and the association with food-motivation neurocircuitry hypoactivation are unknown in AN. We investigated the relationship between HPA activity, appetite, and food-motivation neurocircuitry hypoactivation in AN. Design: Cross-sectional study of 36 women (13 AN, 10 weight-recovered AN (ANWR), and 13 healthy controls (HC)). Methods: Peripheral cortisol and ACTH levels were measured in a fasting state and 30, 60, and 120 min after a standardized mixed meal. A visual analog scale was used to assess homeostatic and hedonic appetite. fMRI was performed during visual processing of food and non-food stimuli to measure brain activation pre- and post-meal. Results: In each group, serum cortisol levels decreased following the meal. Mean fasting, 120 min post-meal, and nadir cortisol levels were high in AN vs HC. Mean postprandial ACTH levels were high in ANWR compared with HC and AN subjects. Cortisol levels were associated with lower fasting homeostatic and hedonic appetite, independent of BMI and depressive symptoms. Cortisol levels were also associated with between-group variance in activation in food-motivation brain regions (e.g. hypothalamus, amygdala, hippocampus, orbitofrontal cortex, and insula). Conclusions: HPA activation may contribute to the maintenance of AN by the suppression of appetitive drive.


2020
Author(s):
Munendo Fujimichi
Hiroki Yamamoto
Jun Saiki

Are visual representations in the human early visual cortex necessary for visual working memory (VWM)? Previous studies suggest that VWM is underpinned by distributed representations across several brain regions, including the early visual cortex. Notably, in these studies, participants had to memorize images under consistent visual conditions. However, in our daily lives, we must retain the essential visual properties of objects despite changes in illumination or viewpoint. The role of brain regions—particularly the early visual cortices—in these situations remains unclear. The present study investigated whether the early visual cortex was essential for achieving stable VWM. Focusing on VWM for object surface properties, we conducted fMRI experiments while male and female participants performed a delayed roughness discrimination task in which sample and probe spheres were presented under varying illumination. By applying multi-voxel pattern analysis to brain activity in regions of interest, we found that the ventral visual cortex and intraparietal sulcus were involved in roughness VWM under changing illumination conditions. In contrast, VWM was not supported as robustly by the early visual cortex. These findings show that visual representations in the early visual cortex alone are insufficient for the robust roughness VWM representation required during changes in illumination.
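The ROI-based MVPA used here amounts to cross-validated classification of remembered stimulus properties from multivoxel delay-period patterns. A minimal sketch with simulated data (trial counts, voxel counts, the linear-SVM classifier, and the injected signal are all assumptions for illustration):

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC

rng = np.random.default_rng(4)

# Hypothetical ROI data: 120 delay-period trials x 200 voxels, with a weak
# multivoxel signal distinguishing "rough" vs "smooth" sample spheres.
labels = np.repeat([0, 1], 60)            # 0 = smooth, 1 = rough
patterns = rng.standard_normal((120, 200))
patterns[labels == 1, :20] += 0.5         # roughness information in 20 voxels

# Cross-validated decoding of remembered roughness from ROI patterns;
# above-chance accuracy indicates the ROI carries VWM information.
acc = cross_val_score(LinearSVC(), patterns, labels, cv=5).mean()
print(f"decoding accuracy: {acc:.2f} (chance = 0.50)")
```

The study's key contrast is running such decoding separately per ROI (early visual cortex, ventral visual cortex, intraparietal sulcus) with illumination varying between sample and probe, so that only illumination-tolerant roughness information supports above-chance decoding.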

