Erratum: Neural Processing of Repeated Search Targets Depends Upon the Stimuli: Real World Stimuli Engage Semantic Processing and Recognition Memory

2018 ◽  
Vol 12

2020 ◽  
Vol 113 ◽  
pp. 104109 ◽  
Author(s):  
Michael S. Humphreys ◽  
Yanqi Ryan Li ◽  
Jennifer S. Burt ◽  
Shayne Loft

2010 ◽  
Vol 22 (3) ◽  
pp. 590-601 ◽  
Author(s):  
Patric Meyer ◽  
Axel Mecklinger ◽  
Angela D. Friederici

Recognition memory based on familiarity judgments is a form of declarative memory that has been repeatedly associated with the anterior medial temporal lobe. It has been argued that this region sustains familiarity-based recognition not only by retrieving item-specific information but also by coding for those semantic aspects of an event that support later familiarity-based recognition. Here, we used event-related fMRI to directly examine whether the contribution of anterior medial temporal lobe to declarative memory indeed results from its role in processing semantic aspects of an event. For this purpose, a sentence comprehension task was employed which varied the demands of semantic and syntactic processing of the sentence-final word. By presenting those sentence-final words together with new words in a subsequent incidental recognition memory test, we were able to determine the mnemonic consequences of presenting words in different sentential contexts. Results showed that enhanced semantic processing during comprehension activates regions in medial temporal lobe cortex and leads to response suppression in partly overlapping regions when the word is successfully retrieved. Data from a behavioral follow-up study support the view that enhanced semantic processing at study enhances familiarity-based remembering in a subsequent test phase.


2021 ◽  
Vol 32 (2) ◽  
pp. 267-279
Author(s):  
Rebecca Ovalle-Fresa ◽  
Arif Sinan Uslu ◽  
Nicolas Rothen

The levels of processing (LOP) account has inspired thousands of studies with verbal material. The few studies investigating levels of processing with nonverbal stimuli used images with nameable objects that, like meaningful words, lend themselves to semantic processing. Thus, nothing is known about the effects of different levels of processing on basic visual perceptual features, such as color. Across four experiments, we tested 187 participants to investigate whether the LOP framework also applies to basic perceptual features in visual associative memory. For Experiments 1 and 2, we developed a paradigm to investigate recognition memory for associations of basic visual features. Participants had to memorize object–color associations (Experiment 1) and fractal–color associations (Experiment 2, to suppress verbalization). In Experiments 3 and 4, we extended our account to cued recall. All experiments revealed reliable LOP effects for basic perceptual features in visual associative memory. Our findings demonstrate that the LOP account is more universal than the current literature suggests.


2017 ◽  
Author(s):  
Mariana Vega-Mendoza ◽  
Martin John Pickering ◽  
Mante S. Nieuwland

In two ERP experiments, we investigated whether readers prioritize animacy over real-world event-knowledge during sentence comprehension. We used the paradigm of Paczynski and Kuperberg (2012), who argued that animacy is prioritized based on the observations that the ‘related anomaly effect’ (reduced N400s for context-related anomalous words compared to unrelated words) does not occur for animacy violations, and that animacy violations but not relatedness violations elicit P600 effects. Participants read passive sentences with plausible agents (e.g., The prescription for the mental disorder was written by the psychiatrist) or implausible agents that varied in animacy and semantic relatedness (schizophrenic/guard/pill/fence). In Experiment 1 (with a plausibility judgment task), plausible sentences elicited smaller N400s relative to all types of implausible sentences. Moreover, animate words elicited smaller N400s than inanimate words, and related words elicited smaller N400s than unrelated words. Crucially, at the P600 time-window, we observed more positive ERPs for animate than inanimate words and for related than unrelated words at anterior regions. In Experiment 2 (with no judgment task), we observed an N400 effect with animacy violations, but no other effects. Taken together, the results of our experiments fail to support a prioritized role of animacy information over real-world event-knowledge, but they support an interactive, constraint-based view on incremental semantic processing.
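The N400 and P600 effects reported above are, in essence, condition differences in mean ERP amplitude within a time window. The sketch below illustrates that logic on simulated epochs; it is not the authors' pipeline, and all data, trial counts, and variable names are illustrative. The 300–500 ms window for the N400 follows common convention.

```python
# Illustrative sketch (not the authors' analysis): an N400 effect quantified
# as the condition difference in mean amplitude over a 300-500 ms window.
# Epochs are simulated single-channel data with Gaussian noise.
import numpy as np

rng = np.random.default_rng(1)
sfreq = 250                                  # sampling rate (Hz)
times = np.arange(-0.2, 1.0, 1 / sfreq)      # epoch from -200 to 1000 ms

def simulate_epochs(n_trials, n400_amp):
    """Noise plus a negative-going deflection peaking around 400 ms."""
    wave = n400_amp * np.exp(-((times - 0.4) ** 2) / (2 * 0.05 ** 2))
    return wave + rng.normal(scale=2.0, size=(n_trials, times.size))

plausible = simulate_epochs(40, n400_amp=-1.0)   # small N400
anomalous = simulate_epochs(40, n400_amp=-4.0)   # large N400

def mean_amplitude(epochs, t_min, t_max):
    """Average amplitude across trials and the chosen time window."""
    win = (times >= t_min) & (times <= t_max)
    return epochs[:, win].mean()

# N400 effect: anomalous minus plausible, 300-500 ms (negative = larger N400).
n400_effect = mean_amplitude(anomalous, 0.3, 0.5) - mean_amplitude(plausible, 0.3, 0.5)
print(f"N400 effect: {n400_effect:.2f} microvolts")
```

A P600 effect would be computed the same way over a later window (e.g. 600–800 ms), with a positive rather than negative difference expected.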


2021 ◽  
Author(s):  
Daniel Kaiser ◽  
Radoslaw M. Cichy

During natural vision, our brains are constantly exposed to complex, but regularly structured environments. Real-world scenes are defined by typical part-whole relationships, where the meaning of the whole scene emerges from configurations of localized information present in individual parts of the scene. Such typical part-whole relationships suggest that information from individual scene parts is not processed independently, but that there are mutual influences between the parts and the whole during scene analysis. Here, we review recent research that used a straightforward, but effective approach to study such mutual influences: by dissecting scenes into multiple arbitrary pieces, these studies provide new insights into how the processing of whole scenes is shaped by their consistent parts and, conversely, how the processing of individual parts is determined by their role within the whole scene. We highlight three facets of this research: First, we discuss studies demonstrating that the spatial configuration of multiple scene parts has a profound impact on the neural processing of the whole scene. Second, we review work showing that cortical responses to individual scene parts are shaped by the context in which these parts typically appear within the environment. Third, we discuss studies demonstrating that missing scene parts are interpolated from the surrounding scene context. Bridging these findings, we argue that efficient scene processing relies on an active use of the scene’s part-whole structure, where the visual brain matches scene inputs with internal models of what the world should look like.


2018 ◽  
Author(s):  
Anna Blumenthal ◽  
Bobby Stojanoski ◽  
Chris Martin ◽  
Rhodri Cusack ◽  
Stefan Köhler

Identifying what an object is, and whether an object has been encountered before, is a crucial aspect of human behavior. Despite this importance, we do not have a complete understanding of the neural basis of these abilities. Investigations into the neural organization of human object representations have revealed category-specific organization in the ventral visual stream in perceptual tasks. Interestingly, these categories fall within broader domains of organization, with distinctions between animate, inanimate large, and inanimate small objects. While there is some evidence for category-specific effects in the medial temporal lobe (MTL), it is currently unclear whether domain-level organization is also present across these structures. To this end, we used fMRI with a continuous recognition memory task. Stimuli were images of objects from several different categories, which were either animate or inanimate, or large or small within the inanimate domain. We employed representational similarity analysis (RSA) to test the hypothesis that object-evoked responses in MTL structures during recognition-memory judgments also show evidence for domain-level organization along both dimensions. Our data support this hypothesis. Specifically, object representations were shaped by either animacy, real-world size, or both, in perirhinal and parahippocampal cortex, as well as the hippocampus. While sensitivity to these dimensions differed across structures when probed individually, hinting at interesting links to functional differentiation, similarities in organization across MTL structures were more prominent overall. These results argue for continuity in the organization of object representations in the ventral visual stream and the MTL.
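The RSA logic used here can be sketched in a few lines: build a neural representational dissimilarity matrix (RDM) from pairwise pattern correlations, build a model RDM coding a candidate dimension such as animacy, and correlate their off-diagonal entries. Everything below is simulated and illustrative, not the study's data or code; for a binary model RDM the comparison reduces to a simple Pearson (point-biserial) correlation, though rank correlation is the more common choice in RSA.

```python
# Minimal simulated sketch of representational similarity analysis (RSA):
# does a neural RDM resemble a model RDM that codes animacy?
import numpy as np

rng = np.random.default_rng(0)

n_objects = 8
animacy = np.array([1, 1, 1, 1, 0, 0, 0, 0])   # 4 animate, 4 inanimate

# Simulated voxel patterns: one 50-voxel response vector per object,
# with a shared signal within each domain so animacy structures the data.
patterns = rng.normal(size=(n_objects, 50))
patterns[animacy == 1] += rng.normal(size=50)   # shared animate signal
patterns[animacy == 0] += rng.normal(size=50)   # shared inanimate signal

# Neural RDM: 1 - Pearson correlation between each pair of object patterns.
neural_rdm = 1.0 - np.corrcoef(patterns)

# Model RDM for animacy: dissimilar (1) across domains, similar (0) within.
model_rdm = (animacy[:, None] != animacy[None, :]).astype(float)

# Correlate the upper triangles (each pair of objects counted once).
iu = np.triu_indices(n_objects, k=1)
rho = np.corrcoef(neural_rdm[iu], model_rdm[iu])[0, 1]
print(f"animacy model fit: r = {rho:.2f}")
```

A real-world-size model RDM would be constructed and tested the same way over the inanimate objects, and fits across regions (perirhinal, parahippocampal, hippocampus) could then be compared.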


2019 ◽  
Vol 14 (4) ◽  
pp. 523-542 ◽  
Author(s):  
R. Nathan Spreng ◽  
Gary R. Turner

Cognitive aging is often described in the context of loss or decline. Emerging research suggests that the story is more complex, with older adults showing both losses and gains in cognitive ability. With increasing age, declines in controlled, or fluid, cognition occur in the context of gains in crystallized knowledge of oneself and the world. This inversion in cognitive capacities, from greater reliance on fluid abilities in young adulthood to increasingly crystallized or semanticized cognition in older adulthood, has profound implications for cognitive and real-world functioning in later life. The shift in cognitive architecture parallels changes in the functional network architecture of the brain. Observations of greater functional connectivity between lateral prefrontal brain regions, implicated in cognitive control, and the default network, implicated in memory and semantic processing, led us to propose the default-executive coupling hypothesis of aging. In this review we provide evidence that these changes in the functional architecture of the brain serve as a neural mechanism underlying the shifting cognitive architecture from younger to older adulthood. We incorporate findings spanning cognitive aging and cognitive neuroscience to present an integrative model of cognitive and brain aging, describing its antecedents, determinants, and implications for real-world functioning.


2019 ◽  
pp. 002383091988021 ◽  
Author(s):  
Heather Kember ◽  
Jiyoun Choi ◽  
Jenny Yu ◽  
Anne Cutler

Prominence, the expression of informational weight within utterances, can be signaled by prosodic highlighting (head-prominence, as in English) or by position (as in Korean edge-prominence). Prominence confers processing advantages, even if conveyed only by discourse manipulations. Here we compared processing of prominence in English and Korean, using a task that indexes processing success, namely recognition memory. In each language, participants’ memory was tested for target words heard in sentences in which they were prominent due to prosody, position, both, or neither. Prominence produced a recall advantage, but the relative effects differed across languages: for Korean listeners the positional advantage was greater, whereas for English listeners prosodic and syntactic prominence had equivalent and additive effects. In a further experiment, semantic and phonological foils tested depth of processing of the recall targets. Both foil types were correctly rejected, suggesting that semantic processing had not reached the level at which word form was no longer available. Together the results suggest that prominence processing is primarily driven by universal effects of information structure, but language-specific differences in frequency of experience prompt different relative advantages of prominence signal types. Processing efficiency increases in each case, however, creating more accurate and more rapidly contactable memory representations.

