object processing
Recently Published Documents

TOTAL DOCUMENTS: 335 (FIVE YEARS: 80)
H-INDEX: 47 (FIVE YEARS: 4)

Author(s): Hoi Ming Ken Yip, Leo Y. T. Cheung, Vince S. H. Ngan, Yetta Kwailing Wong, Alan C. N. Wong
Keyword(s):

2022, Vol 9 (1)
Author(s): Tijl Grootswagers, Ivy Zhou, Amanda K. Robinson, Martin N. Hebart, Thomas A. Carlson

The neural basis of object recognition and semantic knowledge has been extensively studied, but the high dimensionality of object space makes it challenging to develop overarching theories of how the brain organises object knowledge. To help understand how the brain allows us to recognise, categorise, and represent objects and object categories, there is growing interest in using large-scale image databases for neuroimaging experiments. In the current paper, we present THINGS-EEG, a dataset containing human electroencephalography responses from 50 subjects to the 1,854 object concepts and 22,248 images in the THINGS stimulus set, a manually curated, high-quality image database specifically designed for studying human vision. The THINGS-EEG dataset provides neuroimaging recordings for a systematic collection of objects and concepts and can therefore support a wide array of research into visual object processing in the human brain.
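As a rough illustration of how EEG responses of this kind might be epoched and averaged, the Python sketch below uses MNE-Python; the file name, recording format, and event scheme are assumptions for illustration and are not taken from the dataset's documentation.

import mne

# Load one subject's continuous recording (file name and format are hypothetical)
raw = mne.io.read_raw_brainvision("sub-01_task-rsvp_eeg.vhdr", preload=True)
raw.filter(l_freq=0.1, h_freq=100.0)  # broadband filter before epoching

# Build events from stimulus annotations and cut epochs around image onset
events, event_id = mne.events_from_annotations(raw)
epochs = mne.Epochs(raw, events, event_id=event_id,
                    tmin=-0.1, tmax=0.6, baseline=(None, 0), preload=True)

# Average trials within each event type to obtain evoked responses (ERPs)
evokeds = {name: epochs[name].average() for name in event_id}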


Animals, 2022, Vol 12 (1), pp. 108
Author(s): Kirsten D. Gillette, Erin M. Phillips, Daniel D. Dilks, Gregory S. Berns

Previous research to localize face areas in dogs’ brains has generally relied on static images or videos. However, most dogs do not naturally engage with two-dimensional images, raising the question of whether dogs perceive such images as representations of real faces and objects. To measure the equivalency of live and two-dimensional stimuli in the dog’s brain, during functional magnetic resonance imaging (fMRI) we presented dogs and humans with live-action stimuli (actors and objects) as well as videos of the same actors and objects. The dogs (n = 7) and humans (n = 5) were presented with 20 s blocks of faces and objects in random order. In dogs, we found significant areas of increased activation in the putative dog face area, and in humans, we found significant areas of increased activation in the fusiform face area to both live and video stimuli. In both dogs and humans, we found areas of significant activation in the posterior superior temporal sulcus (ectosylvian fissure in dogs) and the lateral occipital complex (entolateral gyrus in dogs) to both live and video stimuli. Of these regions of interest, only the area along the ectosylvian fissure in dogs showed significantly more activation to live faces than to video faces, whereas, in humans, both the fusiform face area and posterior superior temporal sulcus responded significantly more to live conditions than video conditions. However, using the video conditions alone, we were able to localize all regions of interest in both dogs and humans. Therefore, videos can be used to localize these regions of interest, though live conditions may be more salient.
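For readers unfamiliar with block-design fMRI analyses like the one described above, a generic sketch using nilearn is shown below; the timings, TR, file name, and condition labels are hypothetical and do not reproduce the authors' actual pipeline.

import pandas as pd
from nilearn.glm.first_level import FirstLevelModel

# Hypothetical block design: 20 s blocks of live/video faces and objects
events = pd.DataFrame({
    "onset":      [0, 20, 40, 60],
    "duration":   [20, 20, 20, 20],
    "trial_type": ["face_live", "object_live", "face_video", "object_video"],
})

# Fit a first-level GLM to one run (file name and TR are assumptions)
model = FirstLevelModel(t_r=2.0, hrf_model="spm")
model = model.fit("sub-01_bold.nii.gz", events=events)

# Contrast live faces against video faces, as in the region-of-interest comparison
z_map = model.compute_contrast("face_live - face_video", output_type="z_score")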


2021
Author(s): Sumaya Lamb

A prominent debate in visual perception centers on the nature of mechanisms underlying face processing. One side of this debate argues that faces are processed by specialised mechanisms that are not involved in any form of object processing. By contrast, the other side argues that faces are processed by generic mechanisms common to all objects for which we are experts. To distinguish between these two hypotheses, I investigated whether participants with impaired face processing (developmental prosopagnosia) can acquire expertise with novel objects called greebles. To do so, I recruited 10 developmental prosopagnosics and 10 neurotypical control participants. All participants completed a standard training program for developing expertise with greebles, as well as two similar training programs with upright faces and inverted faces. Prosopagnosics were able to acquire expertise with greebles to the same extent as controls but were impaired when learning upright faces. These results demonstrate that deficits for face processing in individuals with prosopagnosia are dissociated from their ability to gain expertise with objects. Overall, the results support the hypothesis that face processing relies on specialised mechanisms, rather than generic expertise mechanisms. Despite their deficits, though, prosopagnosics still showed some evidence of learning with upright faces and showed better learning with upright faces than inverted faces. These findings suggest that prosopagnosics have face-specific mechanisms that are somewhat functional, and that training could be a useful rehabilitation tool in developmental prosopagnosia. Finally, I found substantial heterogeneity among the patterns of performance of the prosopagnosics, suggesting that further investigations into the subtypes of prosopagnosia are warranted.




2021, Vol 63 (8)
Author(s): Kaylin E. Hill, Wei Siong Neo, Erika Deming, Lisa R. Hamrick, Bridgette L. Kelleher, ...

2021
Author(s): Tiziana Vercillo, Edward G. Freedman, Joshua B. Ewen, Sophie Molholm, John J. Foxe

Multisensory objects that are frequently encountered in the natural environment lead to strong associations across a distributed sensory cortical network, with the end result being the experience of a unitary percept. Remarkably little is known, however, about the cortical processes subserving multisensory object formation and recognition. To advance our understanding in this important domain, the present study investigated the brain processes involved in learning and identification of novel visual-auditory objects. Specifically, we introduce and test a rudimentary three-stage model of multisensory object formation and processing. Thirty adults were remotely trained for a week to recognize a novel class of multisensory objects (3D shapes paired to complex sounds), and high-density event-related potentials (ERPs) were recorded to the corresponding unisensory (shapes or sounds only) and multisensory (shapes and sounds) stimuli, before and after intensive training. We identified three major stages of multisensory processing: 1) an early, automatic multisensory effect (<100 ms) in occipital areas, related to the detection of simultaneous audiovisual signals and not related to multisensory learning; 2) an intermediate object-processing stage (100-200 ms) in occipital and parietal areas, sensitive to the learned multisensory associations; and 3) a late multisensory processing stage (>250 ms) that appears to be involved in both object recognition and possibly memory consolidation. Results from this study provide support for multiple stages of multisensory object learning and recognition that are subserved by an extended network of cortical areas.
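To make the three stages concrete, the sketch below extracts the mean ERP amplitude in each of the three time windows from an averaged response using MNE-Python; the file name and exact window bounds are illustrative assumptions.

import mne

# Load a previously computed evoked (averaged) response; file name is hypothetical
evoked = mne.read_evokeds("multisensory-ave.fif", condition=0)

# Time windows approximating the three stages described above (in seconds)
windows = {"early (<100 ms)":     (0.00, 0.10),
           "object (100-200 ms)": (0.10, 0.20),
           "late (>250 ms)":      (0.25, 0.50)}

for name, (tmin, tmax) in windows.items():
    segment = evoked.copy().crop(tmin=tmin, tmax=tmax)
    mean_uv = segment.data.mean() * 1e6  # channels x times, volts -> microvolts
    print(f"{name}: mean amplitude {mean_uv:.2f} uV")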


2021, Vol 11 (1)
Author(s): Tim Lauer, Filipp Schmidt, Melissa L.-H. Võ

While scene context is known to facilitate object recognition, little is known about which contextual “ingredients” are at the heart of this phenomenon. Here, we address the question of whether the materials that frequently occur in scenes (e.g., tiles in a bathroom) and that are associated with a specific object (e.g., a perfume) are relevant for the processing of that object. To this end, we presented photographs of consistent and inconsistent objects (e.g., perfume vs. pinecone) superimposed on scenes (e.g., a bathroom) and on close-ups of materials (e.g., tiles). In Experiment 1, consistent objects on scenes were named more accurately than inconsistent ones, while there was only a marginal consistency effect for objects on materials. Also, we did not find any consistency effect for scrambled materials that served as a color control condition. In Experiment 2, we recorded event-related potentials and found N300/N400 responses (markers of semantic violations) for objects on inconsistent relative to consistent scenes. Critically, objects on materials triggered N300/N400 responses of similar magnitudes. Our findings show that contextual materials indeed affect object processing, even in the absence of spatial scene structure and object content, suggesting that material is one of the contextual “ingredients” driving scene context effects.
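One common way to visualise N300/N400 consistency effects like those reported above is a difference wave (inconsistent minus consistent); a minimal MNE-Python sketch follows, with hypothetical file and condition names.

import mne

# Load evoked responses for the two conditions (file/condition names are hypothetical)
consistent = mne.read_evokeds("scene-ave.fif", condition="consistent")
inconsistent = mne.read_evokeds("scene-ave.fif", condition="inconsistent")

# Inconsistent minus consistent: semantic-violation effects appear as more
# negative deflections in the N300/N400 latency range (~250-500 ms)
difference = mne.combine_evoked([inconsistent, consistent], weights=[1, -1])
difference.plot()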

