Behavioral and Neural Effects of Familiarization on Object-Background Associations

2020, Vol 11
Author(s): Oliver Baumann, Jessica McFadyen, Michael S. Humphreys

Associative memory is the ability to link together components of stimuli. Previous evidence suggests that prior familiarization with study items affects the nature of the association between stimuli. More specifically, novel stimuli are learned in a more context-dependent fashion than stimuli that have been encountered previously without the current context. In the current study, we first acquired behavioral data from 62 human participants to conceptually replicate this effect. Participants were instructed to memorize multiple object-scene pairs (study phase) and were then tested on their recognition memory for the objects (test phase). Importantly, 1 day prior, participants had been familiarized with half of the object stimuli. During the test phase, the objects were either matched to the same scene as during study (intact pair) or swapped with a different object’s scene (rearranged pair). Our results conceptually replicated the context-dependency effect by showing that breaking up a studied object-context pairing is more detrimental to object recognition performance for non-familiarized objects than for familiarized objects. Second, we used functional magnetic resonance imaging (fMRI) to determine whether medial temporal lobe encoding-related activity patterns are reflective of this familiarity-related context effect. Data acquired from 25 human participants indicated a larger effect of familiarization on encoding-related hippocampal activity for objects presented within a scene context compared to objects presented alone. Our results showed that both retrieval-related accuracy patterns and hippocampal activation patterns were in line with a familiarization-mediated context-dependency effect.

Sensors, 2021, Vol 21 (9), pp. 3243
Author(s): Robert Jackermeier, Bernd Ludwig

In smartphone-based pedestrian navigation systems, detailed knowledge about user activity and device placement is key information. Landmarks such as staircases or elevators can help the system determine the user's position inside buildings, and navigation instructions can be adapted to the current context in order to provide more meaningful assistance. Typically, most human activity recognition (HAR) approaches distinguish between general activities such as walking, standing, or sitting. In this work, we investigate more specific activities that are tailored towards the use case of pedestrian navigation, including different kinds of stationary and locomotion behavior. We first collect a dataset of 28 combinations of device placements and activities, in total consisting of over 6 h of data from three sensors. We then use LSTM-based machine learning (ML) methods to successfully train hierarchical classifiers that can distinguish between these placements and activities. Test results show that the accuracy of device placement classification (97.2%) is on par with a state-of-the-art benchmark on this dataset while being less resource-intensive on mobile devices. Activity recognition performance depends strongly on the classification task and ranges from 62.6% to 98.7%, once again performing close to the benchmark. Finally, we demonstrate in a case study how to apply the hierarchical classifiers to experimental and naturalistic datasets in order to analyze activity patterns during the course of a typical navigation session and to investigate the correlation between user activity and device placement, thereby gaining insights into real-world navigation behavior.
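The hierarchical scheme described above can be sketched in a few lines. This is an illustrative toy, not the authors' pipeline: a simple nearest-centroid classifier stands in for their LSTMs, and the class names and feature layout are hypothetical. The structure is the point: a top-level model first predicts device placement, then a placement-specific model predicts the activity.

```python
# Hypothetical sketch of a hierarchical HAR classifier: first placement,
# then a per-placement activity model. Nearest-centroid replaces the LSTMs.
import numpy as np

class NearestCentroid:
    def fit(self, X, y):
        self.labels_ = sorted(set(y))
        self.centroids_ = {l: X[[i for i, t in enumerate(y) if t == l]].mean(axis=0)
                           for l in self.labels_}
        return self

    def predict(self, x):
        # Assign to the closest class centroid in feature space.
        return min(self.labels_, key=lambda l: np.linalg.norm(x - self.centroids_[l]))

class HierarchicalHAR:
    def __init__(self):
        self.placement_clf = NearestCentroid()
        self.activity_clfs = {}

    def fit(self, X, placements, activities):
        self.placement_clf.fit(X, placements)
        # Train one activity classifier per device placement.
        for p in set(placements):
            idx = [i for i, q in enumerate(placements) if q == p]
            self.activity_clfs[p] = NearestCentroid().fit(
                X[idx], [activities[i] for i in idx])
        return self

    def predict(self, x):
        p = self.placement_clf.predict(x)
        return p, self.activity_clfs[p].predict(x)
```

In the real system, `x` would be a window of accelerometer/gyroscope features rather than the toy 2-D vectors used here.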


2021, Vol 13 (13), pp. 7040
Author(s): Beat Meier, Michèle C. Muhmenthaler

Perceptual fluency, that is, the ease with which people perceive information, has diverse effects on cognition and learning. For example, when judging the truth of plausible but incorrect information, easy-to-read statements are incorrectly judged as true while difficult-to-read statements are not. Because we better remember information that is consistent with pre-existing schemata (i.e., schema congruency), statements judged as true should be remembered better, which would suggest that fluency boosts memory. Another line of research suggests that learning information from hard-to-read statements enhances subsequent memory compared to easy-to-read statements (i.e., desirable difficulties). In the present study, we tested these possibilities in two experiments with student participants. In the study phase, they read plausible statements that were either easy or difficult to read and judged their truth. To assess the sustainability of learning, the test phase, in which we tested recognition memory for these statements, was delayed by 24 h. In Experiment 1, we manipulated fluency by presenting the statements in colors that made them easy or difficult to read. In Experiment 2, we manipulated fluency by presenting the statements in font types that made them easy or difficult to read. Moreover, in Experiment 2, memory was tested either immediately or after a 24 h delay. In both experiments, the results showed a consistent effect of schema congruency, but perceptual fluency did not affect sustainable learning. However, in the immediate test of Experiment 2, perceptual fluency enhanced memory for schema-incongruent materials. Thus, perceptual fluency can boost initial memory for schema-incongruent material, most likely due to short-lived perceptual traces that are pruned during consolidation, but it does not boost sustainable learning.
We discuss these results in relation to research on the role of desirable difficulties for student learning, to effects of cognitive conflict on subsequent memory, and, more generally, to how to design learning methods and environments in a sustainable way.


2004, Vol 16 (10), pp. 1840-1853
Author(s): Mikael Johansson, Axel Mecklinger, Anne-Cécile Treese

This study examined emotional influences on the hypothesized event-related potential (ERP) correlates of familiarity and recollection (Experiment 1) and the states of awareness (Experiment 2) accompanying recognition memory for faces differing in facial affect. Participants made gender judgments to positive, negative, and neutral faces at study and were in the test phase instructed to discriminate between studied and nonstudied faces. Whereas old–new discrimination was unaffected by facial expression, negative faces were recollected to a greater extent than both positive and neutral faces as reflected in the parietal ERP old–new effect and in the proportion of remember judgments. Moreover, emotion-specific modulations were observed in frontally recorded ERPs elicited by correctly rejected new faces that concurred with a more liberal response criterion for emotional as compared to neutral faces. Taken together, the results are consistent with the view that processes promoting recollection are facilitated for negative events and that emotion may affect recognition performance by influencing criterion setting mediated by the prefrontal cortex.


2021
Author(s): Yingying Huang, Frank Pollick, Ming Liu, Delong Zhang

Abstract Visual mental imagery and visual perception have been shown to share a hierarchical topological visual structure of neural representation. Meanwhile, many studies have reported a dissociation between mental imagery and perception in both the function and structure of their neural substrates. However, we have limited knowledge about how the hierarchical visual cortex is involved in internally generated mental imagery as opposed to perception driven by visual input. Here we used a dataset from previous fMRI research (Horikawa & Kamitani, 2017), which included a visual perception and an imagery experiment with human participants. We trained two types of voxel-wise encoding models, based on Gabor features and on activity patterns of high-level visual areas, to predict activity in the early visual cortex (EVC, i.e., V1, V2, V3) during perception, and then evaluated the performance of these models during mental imagery. Our results showed that during both perception and imagery, activity in the EVC could be independently predicted by the Gabor features and by the activity of high-level visual areas via the encoding models, which suggests that perception and imagery might share neural representations in the EVC. We further found that there existed a Gabor-specific and a non-Gabor-specific neural response pattern to stimuli in the EVC, both of which were shared by perception and imagery. These findings provide insight into the mechanisms by which visual perception and imagery share representations in the EVC.
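The voxel-wise encoding approach used here has a standard skeleton, sketched below under assumptions (this is not the authors' code): a ridge regression maps stimulus features, such as Gabor filter outputs, to voxel responses; weights fit on perception trials are then used to predict responses on held-out trials (e.g., imagery), scored by per-voxel correlation.

```python
# Minimal sketch of a voxel-wise encoding model: ridge regression from
# stimulus features to voxel responses, evaluated by per-voxel correlation.
import numpy as np

def fit_encoding_model(F, Y, alpha=1.0):
    """F: (n_trials, n_features) features; Y: (n_trials, n_voxels) responses.
    Returns the (n_features, n_voxels) ridge weight matrix."""
    n_feat = F.shape[1]
    return np.linalg.solve(F.T @ F + alpha * np.eye(n_feat), F.T @ Y)

def predict_and_score(W, F_test, Y_test):
    """Per-voxel Pearson correlation between predicted and observed responses."""
    Y_hat = F_test @ W
    Yh = Y_hat - Y_hat.mean(axis=0)
    Yo = Y_test - Y_test.mean(axis=0)
    return (Yh * Yo).sum(axis=0) / (
        np.linalg.norm(Yh, axis=0) * np.linalg.norm(Yo, axis=0))
```

In the study's logic, `F` would hold Gabor features (or high-level-area activity), `Y` the EVC voxel responses; training on perception data and scoring on imagery data tests whether the two share representations.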


2020, Vol 32 (9), pp. 1780-1795
Author(s): Nicholas A. Ruiz, Michael R. Meager, Sachin Agarwal, Mariam Aly

The medial temporal lobe (MTL) is traditionally considered to be a system that is specialized for long-term memory. Recent work has challenged this notion by demonstrating that this region can contribute to many domains of cognition beyond long-term memory, including perception and attention. One potential reason why the MTL (and hippocampus specifically) contributes broadly to cognition is that it contains relational representations—representations of multidimensional features of experience and their unique relationship to one another—that are useful in many different cognitive domains. Here, we explore the hypothesis that the hippocampus/MTL plays a critical role in attention and perception via relational representations. We compared human participants with MTL damage to healthy age- and education-matched individuals on attention tasks that varied in relational processing demands. On each trial, participants viewed two images (rooms with paintings). On “similar room” trials, they judged whether the rooms had the same spatial layout from a different perspective. On “similar art” trials, they judged whether the paintings could have been painted by the same artist. On “identical” trials, participants simply had to detect identical paintings or rooms. MTL lesion patients were significantly and selectively impaired on the similar room task. This work provides further evidence that the hippocampus/MTL plays a ubiquitous role in cognition by virtue of its relational and spatial representations and highlights its important contributions to rapid perceptual processes that benefit from attention.


2019, Vol 18 (2), pp. 283-293
Author(s): Mark L.C.M. Bruurmijn, Wouter Schellekens, Mathijs A.H. Raemaekers, Nick F. Ramsey

Abstract For some experimental approaches in brain imaging, the existing normalization techniques are not always sufficient. This may be the case if the anatomical shape of the region of interest varies substantially across subjects, or if one needs to compare the left and right hemisphere in the same subject. Here we propose a new standard representation, building upon existing normalization methods: Cgrid (Cartesian geometric representation with isometric dimensions). Cgrid is based on imposing a Cartesian grid over a cortical region of interest that is bounded by anatomical (atlas-based) landmarks. We applied this new representation to the sensorimotor cortex and we evaluated its performance by studying the similarity of activation patterns for hand, foot and tongue movements between subjects, and the similarity between hemispheres within subjects. The Cgrid similarities were benchmarked against the similarities of activation patterns when transformed into standard MNI space using SPM, and against similarities from FreeSurfer's surface-based normalization. For both between-subject and between-hemisphere comparisons, similarity scores in Cgrid were high, similar to those from FreeSurfer normalization and higher than similarity scores from SPM's MNI normalization. This indicates that Cgrid allows for a straightforward way of representing and comparing sensorimotor activity patterns across subjects and between hemispheres of the same subjects.
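The core Cgrid idea, as we read it, can be illustrated with a small sketch (our simplified reading, not the authors' implementation): vertex coordinates within a landmark-bounded ROI are rescaled to a unit square along two anatomical axes, and activity is averaged into a fixed m-by-n grid, so every subject and hemisphere ends up in the same Cartesian representation, directly comparable cell by cell.

```python
# Illustrative sketch of a Cgrid-style representation: bin ROI activity into
# a fixed Cartesian grid after normalizing coordinates to the unit square.
import numpy as np

def to_cgrid(coords, activity, shape=(8, 8)):
    """coords: (n_vertices, 2) positions along two anatomical axes;
    activity: (n_vertices,) values. Returns a shape[0] x shape[1] grid of
    mean activity per cell (NaN where a cell received no vertices)."""
    mn, mx = coords.min(axis=0), coords.max(axis=0)
    u = (coords - mn) / (mx - mn)                       # normalize to [0, 1]
    ij = np.minimum((u * shape).astype(int), np.array(shape) - 1)
    grid = np.zeros(shape)
    counts = np.zeros(shape)
    for (i, j), a in zip(ij, activity):
        grid[i, j] += a
        counts[i, j] += 1
    return np.divide(grid, counts, out=np.full(shape, np.nan), where=counts > 0)
```

Because two grids from different subjects (or hemispheres) share the same shape, their similarity can then be computed as a straightforward correlation of the flattened cells.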


2020, Vol 30 (11), pp. 5915-5929
Author(s): Tanya Wen, Daniel J Mitchell, John Duncan

Abstract The default mode network (DMN) is engaged in a variety of cognitive settings, including social, semantic, temporal, spatial, and self-related tasks. Andrews-Hanna et al. (2010; Andrews-Hanna 2012) proposed that the DMN consists of three distinct functional–anatomical subsystems—a dorsal medial prefrontal cortex (dMPFC) subsystem that supports social cognition; a medial temporal lobe (MTL) subsystem that contributes to memory-based scene construction; and a set of midline core hubs that are especially involved in processing self-referential information. We examined activity in the DMN subsystems during six different tasks: 1) theory of mind, 2) moral dilemmas, 3) autobiographical memory, 4) spatial navigation, 5) self/other adjective judgment, and 6) a rest condition. At a broad level, we observed similar whole-brain activity maps for the six contrasts, and some response to every contrast in each of the three subsystems. In more detail, both univariate analysis and multivariate activity patterns showed partial functional separation, especially between dMPFC and MTL subsystems, though with less support for common activity across the midline core. Integrating social, spatial, self-related, and other aspects of a cognitive situation or episode, multiple components of the DMN may work closely together to provide the broad context for current mental activity.


2016, Vol 113 (4), pp. E420-E429
Author(s): Mariam Aly, Nicholas B. Turk-Browne

Attention influences what is later remembered, but little is known about how this occurs in the brain. We hypothesized that behavioral goals modulate the attentional state of the hippocampus to prioritize goal-relevant aspects of experience for encoding. Participants viewed rooms with paintings, attending to room layouts or painting styles on different trials during high-resolution functional MRI. We identified template activity patterns in each hippocampal subfield that corresponded to the attentional state induced by each task. Participants then incidentally encoded new rooms with art while attending to the layout or painting style, and memory was subsequently tested. We found that when task-relevant information was better remembered, the hippocampus was more likely to have been in the correct attentional state during encoding. This effect was specific to the hippocampus, and not found in medial temporal lobe cortex, category-selective areas of the visual system, or elsewhere in the brain. These findings provide mechanistic insight into how attention transforms percepts into memories.
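The template-matching logic described above can be sketched as follows. This is an assumed analysis outline, not the published code: each attentional state gets a template pattern (the mean activity across that task's trials), and an encoding trial is assigned to whichever template its activity pattern correlates with most strongly.

```python
# Hedged sketch of attentional-state decoding by template matching:
# build one mean-pattern template per state, then classify a trial by
# its highest Pearson correlation with the templates.
import numpy as np

def make_templates(patterns_by_state):
    """patterns_by_state: dict state -> (n_trials, n_voxels) array."""
    return {s: p.mean(axis=0) for s, p in patterns_by_state.items()}

def infer_state(trial_pattern, templates):
    def r(a, b):
        a = a - a.mean()
        b = b - b.mean()
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    return max(templates, key=lambda s: r(trial_pattern, templates[s]))
```

In the study's terms, being "in the correct attentional state" during encoding corresponds to a trial's pattern matching the template of the task-relevant state.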


2012, Vol 24 (7), pp. 1806-1821
Author(s): Bernard M. C. Stienen, Konrad Schindler, Beatrice de Gelder

Given the presence of massive feedback loops in brain networks, it is difficult to disentangle the contribution of feedforward and feedback processing to the recognition of visual stimuli, in this case, of emotional body expressions. The aim of the work presented in this letter is to shed light on how well feedforward processing explains rapid categorization of this important class of stimuli. By means of parametric masking, it may be possible to control the contribution of feedback activity in human participants. A close comparison is presented between human recognition performance and the performance of a computational neural model that exclusively modeled feedforward processing and was engineered to fulfill the computational requirements of recognition. Results show that the longer the stimulus onset asynchrony (SOA), the closer the performance of the human participants was to the values predicted by the model, with an optimum at an SOA of 100 ms. At short SOA latencies, human performance deteriorated, but the categorization of the emotional expressions was still above baseline. The data suggest that, although theoretically, feedback arising from inferotemporal cortex is likely to be blocked when the SOA is 100 ms, human participants still seem to rely on more local visual feedback processing to equal the model's performance.

