Mother–infant convergence of event‐related potentials elicited by face and object processing

2021 ◽  
Vol 63 (8) ◽  
Author(s):  
Kaylin E. Hill ◽  
Wei Siong Neo ◽  
Erika Deming ◽  
Lisa R. Hamrick ◽  
Bridgette L. Kelleher ◽  
...  

2002 ◽  
Vol 13 (3) ◽  
pp. 250-257 ◽  
Author(s):  
B. Rossion ◽  
I. Gauthier ◽  
V. Goffaux ◽  
M.J. Tarr ◽  
M. Crommelinck

Scalp event-related potentials (ERPs) in humans indicate that face and object processing differ approximately 170 ms following stimulus presentation, at the point of the N170 occipitotemporal component. The N170 is delayed and enhanced to inverted faces but not to inverted objects. We tested whether this inversion effect reflects early mechanisms exclusive to faces or whether it generalizes to other stimuli as a function of visual expertise. ERPs to upright and inverted faces and novel objects (Greebles) were recorded in 10 participants before and after 2 weeks of expertise training with Greebles. The N170 component was observed for both faces and Greebles. The results are consistent with previous reports in that the N170 was delayed and enhanced for inverted faces at recording sites in both hemispheres. For Greebles, the same effect of inversion was observed only for experts, primarily in the left hemisphere. These results suggest that the mechanisms underlying the electrophysiological face-inversion effect extend to visually homogeneous nonface object categories, at least in the left hemisphere, but only when such mechanisms are recruited by expertise.
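The N170 measurements described above boil down to two standard operations: averaging single-trial epochs into an ERP waveform and locating the most negative deflection in a latency window around 170 ms. The following is a minimal numpy sketch of that logic on synthetic data, not the authors' pipeline; the sampling rate, time window, and simulated waveform are illustrative assumptions.

```python
import numpy as np

def erp_average(epochs):
    """Average single-trial epochs (trials x samples) into one ERP waveform."""
    return epochs.mean(axis=0)

def n170_peak(erp, times, window=(0.13, 0.20)):
    """Return (amplitude, latency) of the most negative deflection in the window.
    The N170 is a negative-going component, hence argmin rather than argmax."""
    mask = (times >= window[0]) & (times <= window[1])
    idx = np.argmin(erp[mask])
    return erp[mask][idx], times[mask][idx]

# Synthetic example: 40 noisy trials with an N170-like dip at 170 ms (assumed values)
fs = 500                                    # sampling rate in Hz (assumption)
times = np.arange(-0.1, 0.4, 1 / fs)
rng = np.random.default_rng(0)
epochs = rng.normal(0, 0.5, (40, times.size))
epochs += -5.0 * np.exp(-((times - 0.17) ** 2) / (2 * 0.01 ** 2))

erp = erp_average(epochs)
amp, lat = n170_peak(erp, times)
```

A delayed and enhanced N170 to inverted faces would show up here as a larger (more negative) `amp` and a later `lat` for the inverted condition.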


2021 ◽  
Author(s):  
Tiziana Vercillo ◽  
Edward G. Freedman ◽  
Joshua B. Ewen ◽  
Sophie Molholm ◽  
John J. Foxe

Multisensory objects that are frequently encountered in the natural environment lead to strong associations across a distributed sensory cortical network, resulting in the experience of a unitary percept. Remarkably little is known, however, about the cortical processes subserving multisensory object formation and recognition. To advance our understanding in this important domain, the present study investigated the brain processes involved in the learning and identification of novel visual-auditory objects. Specifically, we introduce and test a rudimentary three-stage model of multisensory object formation and processing. Thirty adults were remotely trained for a week to recognize a novel class of multisensory objects (3D shapes paired with complex sounds), and high-density event-related potentials (ERPs) were recorded to the corresponding unisensory (shapes or sounds only) and multisensory (shapes and sounds) stimuli, before and after intensive training. We identified three major stages of multisensory processing: 1) an early, multisensory, automatic effect (&lt;100 ms) in occipital areas, related to the detection of simultaneous audiovisual signals and not related to multisensory learning; 2) an intermediate object-processing stage (100-200 ms) in occipital and parietal areas, sensitive to the learned multisensory associations; and 3) a late multisensory processing stage (&gt;250 ms) that appears to be involved in both object recognition and possibly memory consolidation. Results from this study provide support for multiple stages of multisensory object learning and recognition that are subserved by an extended network of cortical areas.
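One common way to test for the kind of early audiovisual interaction described in stage 1, though the abstract does not state that the authors used it, is the additive model: compare the multisensory ERP (AV) against the sum of the unisensory ERPs (A + V), where a nonzero residual suggests integration beyond a simple superposition. A minimal numpy sketch on toy waveforms, with all shapes and timings assumed:

```python
import numpy as np

def additive_model_residual(av, a, v):
    """Multisensory interaction as AV - (A + V); nonzero values suggest
    integration beyond the sum of the unisensory responses."""
    return av - (a + v)

fs = 500                                   # sampling rate in Hz (assumption)
times = np.arange(-0.1, 0.4, 1 / fs)

# Toy unisensory ERPs, plus a superadditive early (<100 ms) effect in the AV response
a = np.sin(2 * np.pi * 10 * times) * (times > 0)
v = np.cos(2 * np.pi * 8 * times) * (times > 0)
av = a + v - 2.0 * np.exp(-((times - 0.08) ** 2) / (2 * 0.01 ** 2))

residual = additive_model_residual(av, a, v)
early = residual[(times >= 0.05) & (times <= 0.10)]   # stage-1 latency window
```

The same window logic extends to the intermediate (100-200 ms) and late (&gt;250 ms) stages by changing the mask bounds.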


2006 ◽  
Vol 18 (9) ◽  
pp. 1453-1465 ◽  
Author(s):  
Lisa S. Scott ◽  
James W. Tanaka ◽  
David L. Sheinberg ◽  
Tim Curran

Subordinate-level object processing is regarded as a hallmark of perceptual expertise. However, the relative contribution of subordinate- and basic-level category experience in the acquisition of perceptual expertise has not been clearly delineated. In this study, participants learned to classify wading birds and owls at either the basic (e.g., wading bird, owl) or the subordinate (e.g., egret, snowy owl) level. After 6 days of training, behavioral results showed that subordinate-level but not basic-level training improved subordinate discrimination of trained exemplars, novel exemplars, and exemplars from novel species. Event-related potentials indicated that both basic- and subordinate-level training enhanced the early N170 component, but only subordinate-level training amplified the later N250 component. These results are consistent with models positing separate basic and subordinate learning mechanisms, and, contrary to perspectives attempting to explain visual expertise solely in terms of subordinate-level processing, suggest that expertise enhances neural responses of both basic and subordinate processing.


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Tim Lauer ◽  
Filipp Schmidt ◽  
Melissa L.-H. Võ

While scene context is known to facilitate object recognition, little is known about which contextual “ingredients” are at the heart of this phenomenon. Here, we address the question of whether the materials that frequently occur in scenes (e.g., tiles in a bathroom) associated with specific objects (e.g., a perfume) are relevant for the processing of that object. To this end, we presented photographs of consistent and inconsistent objects (e.g., perfume vs. pinecone) superimposed on scenes (e.g., a bathroom) and close-ups of materials (e.g., tiles). In Experiment 1, consistent objects on scenes were named more accurately than inconsistent ones, while there was only a marginal consistency effect for objects on materials. Also, we did not find any consistency effect for scrambled materials that served as a color control condition. In Experiment 2, we recorded event-related potentials and found N300/N400 responses, markers of semantic violations, for objects on inconsistent relative to consistent scenes. Critically, objects on materials triggered N300/N400 responses of similar magnitudes. Our findings show that contextual materials indeed affect object processing, even in the absence of spatial scene structure and object content, suggesting that material is one of the contextual “ingredients” driving scene context effects.
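N300/N400 consistency effects like these are typically quantified as the mean amplitude of a difference wave (inconsistent minus consistent) in a fixed latency window. The following is a minimal numpy sketch of that computation, not the authors' analysis; the sampling rate, window bounds, and simulated waveforms are illustrative assumptions.

```python
import numpy as np

def mean_amplitude(erp, times, window):
    """Mean voltage in a time window — a common way to quantify N300/N400 effects."""
    mask = (times >= window[0]) & (times <= window[1])
    return erp[mask].mean()

fs = 250                                   # sampling rate in Hz (assumption)
times = np.arange(-0.2, 0.8, 1 / fs)
rng = np.random.default_rng(1)

# Simulated condition averages: the inconsistent ERP carries an extra
# negative-going deflection in an assumed N400 window (300-500 ms).
# Noise is shared here so the subtraction cancels it exactly; with real
# data one would subtract per-condition averages over many trials.
base = rng.normal(0, 0.3, times.size)
consistent = base.copy()
inconsistent = base - 4.0 * np.exp(-((times - 0.4) ** 2) / (2 * 0.05 ** 2))

# Difference wave isolates the semantic-violation effect
diff = inconsistent - consistent
n400_effect = mean_amplitude(diff, times, (0.3, 0.5))
```

A more negative `n400_effect` indicates a stronger consistency (semantic-violation) effect; running the same measure on the scene and material conditions separately would mirror the comparison of effect magnitudes reported above.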

