A mixture of sparse coding models explaining properties of face neurons related to holistic and parts-based processing

2016 ◽  
Author(s):  
Haruo Hosoya ◽  
Aapo Hyvärinen

Abstract: Experimental studies have revealed evidence of both parts-based and holistic representations of objects and faces in the primate visual system. However, it is still a mystery how such seemingly contradictory types of processing can coexist within a single system. Here, we propose a novel theory called mixture of sparse coding models, inspired by the formation of category-specific subregions in the inferotemporal (IT) cortex. We developed a hierarchical network that constructed a mixture of two sparse coding submodels on top of a simple Gabor analysis. The submodels were each trained with face or non-face object images, which resulted in separate representations of facial parts and object parts. Importantly, evoked neural activities were modeled by Bayesian inference, which had a top-down explaining-away effect that enabled recognition of an individual part to depend strongly on the category of the whole input. We show that this explaining-away effect was indeed crucial for the units in the face submodel to exhibit significant selectivity to face images over object images in a similar way to actual face-selective neurons in the macaque IT cortex. Furthermore, the model explained, qualitatively and quantitatively, several tuning properties to facial features found in the middle patch of face processing in IT as documented by Freiwald, Tsao, and Livingstone (2009). These included, in particular, tuning to only a small number of facial features that were often related to geometrically large parts like face outline and hair, preference and anti-preference of extreme facial features (e.g., very large/small inter-eye distance), and reduction of the gain of feature tuning for partial face stimuli compared to whole face stimuli. Thus, we hypothesize that the coding principle of facial features in the middle patch of face processing in the macaque IT cortex may be closely related to a mixture of sparse coding models.
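
To make the proposed architecture concrete, the following is a minimal Python sketch of a mixture of two sparse coding submodels with posterior-weighted responses, which is one simple way to realize the top-down explaining-away effect described above. The dictionary sizes, the ISTA inference loop, and the softmax over reconstruction energies are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch: a mixture of two sparse coding submodels (face vs. object)
# on top of precomputed Gabor-like features.  All sizes and the inference
# scheme are illustrative assumptions.
import numpy as np

def ista(x, W, lam, n_iter=100):
    """Sparse code s minimizing 0.5 * ||x - W s||^2 + lam * ||s||_1 (ISTA)."""
    L = np.linalg.norm(W, 2) ** 2              # Lipschitz constant of the gradient
    s = np.zeros(W.shape[1])
    for _ in range(n_iter):
        z = s - W.T @ (W @ s - x) / L          # gradient step
        s = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft threshold
    return s

def mixture_responses(x, dictionaries, lam=0.1):
    """Sparse codes under each submodel plus a posterior over submodels.

    The posterior acts as the top-down 'explaining-away' signal: a part unit in
    the face submodel stays strongly active only if the face submodel as a whole
    explains the input better than the competing object submodel.
    """
    codes, energies = [], []
    for W in dictionaries:                     # one dictionary per category submodel
        s = ista(x, W, lam)
        codes.append(s)
        energies.append(0.5 * np.sum((x - W @ s) ** 2) + lam * np.sum(np.abs(s)))
    post = np.exp(-np.array(energies))
    post /= post.sum()
    return [p * s for p, s in zip(post, codes)], post   # posterior-gated unit activities

# Toy usage with random stand-ins for Gabor coefficients and learned dictionaries
rng = np.random.default_rng(0)
x = rng.normal(size=200)
W_face, W_object = rng.normal(size=(200, 100)), rng.normal(size=(200, 100))
(face_units, object_units), posterior = mixture_responses(x, [W_face, W_object])
print("posterior over submodels:", posterior)
```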

2007 ◽  
Vol 97 (2) ◽  
pp. 1671-1683 ◽  
Author(s):  
K. M. Gothard ◽  
F. P. Battaglia ◽  
C. A. Erickson ◽  
K. M. Spitler ◽  
D. G. Amaral

The amygdala is purported to play an important role in face processing, yet the specificity of its activation to face stimuli and the relative contribution of identity and expression to its activation are unknown. In the current study, neural activity in the amygdala was recorded as monkeys passively viewed images of monkey faces, human faces, and objects on a computer monitor. Comparable proportions of neurons responded selectively to images from each category. Neural responses to monkey faces were further examined to determine whether face identity or facial expression drove the face-selective responses. The majority of these neurons (64%) responded both to identity and facial expression, suggesting that these parameters are processed jointly in the amygdala. Large fractions of neurons, however, showed pure identity-selective or expression-selective responses. Neurons were selective for a particular facial expression by either increasing or decreasing their firing rate compared with the firing rates elicited by the other expressions. Responses to appeasing faces were often marked by significant decreases of firing rates, whereas responses to threatening faces were strongly associated with increased firing rate. Thus global activation in the amygdala might be larger to threatening faces than to neutral or appeasing faces.
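
As an editorial illustration of how the identity/expression split could be quantified, here is a short Python sketch that classifies a neuron from a two-way ANOVA on its per-trial firing rates; the column names, the ANOVA approach, and the significance threshold are assumptions, not the analysis reported in the study.

```python
# Hedged sketch: label a neuron as identity-selective, expression-selective,
# both, or non-selective from a two-way ANOVA on per-trial firing rates.
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

def classify_neuron(trials: pd.DataFrame, alpha: float = 0.05) -> str:
    """trials needs columns: rate (spikes/s), identity, expression (both categorical)."""
    model = ols("rate ~ C(identity) * C(expression)", data=trials).fit()
    table = sm.stats.anova_lm(model, typ=2)
    id_sig = table.loc["C(identity)", "PR(>F)"] < alpha
    ex_sig = table.loc["C(expression)", "PR(>F)"] < alpha
    if id_sig and ex_sig:
        return "identity + expression"   # the joint coding reported for the majority of cells
    if id_sig:
        return "identity only"
    if ex_sig:
        return "expression only"
    return "non-selective"
```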


Perception ◽  
10.1068/p6291 ◽  
2009 ◽  
Vol 38 (5) ◽  
pp. 702-707
Author(s):  
Robert A Johnston ◽  
Eleanor Tomlinson ◽  
Chris Jones ◽  
Alan Weaden

The face-processing skills of people with schizophrenia were compared with those of a group of unimpaired individuals. Participants were asked to make speeded face-classification decisions to faces previously rated as being typical or distinctive. The schizophrenic group responded more slowly than the unimpaired group; however, both groups demonstrated the customary sensitivity to the distinctiveness of the face stimuli. Face-classification latencies made to typical faces were shorter than those made to distinctive faces. The implication of this finding with the schizophrenic group is discussed with reference to accounts of face-processing deficits attributed to these individuals.


1996 ◽  
Vol 2 (3) ◽  
pp. 240-248 ◽  
Author(s):  
Michael R. Polster ◽  
Steven Z. Rapcsak

Abstract: We report the performance of a prosopagnosic patient on face learning tasks under different encoding instructions (i.e., levels of processing manipulations). R.J. performs at chance when given no encoding instructions or when given “shallow” encoding instructions to focus on facial features. By contrast, he performs relatively well with “deep” encoding instructions to rate faces in terms of personality traits or when provided with semantic and name information during the study phase. We propose that the improvement associated with deep encoding instructions may be related to the establishment of distinct visually derived and identity-specific semantic codes. The benefit associated with deep encoding in R.J., however, was found to be restricted to the specific view of the face presented at study and did not generalize to other views of the same face. These observations suggest that deep encoding instructions may enhance memory for concrete or pictorial representations of faces in patients with prosopagnosia, but that these patients cannot compensate for the inability to construct abstract structural codes that normally allow faces to be recognized from different orientations. We postulate further that R.J.'s poor performance on face learning tasks may be attributable to excessive reliance on a feature-based left hemisphere face processing system that operates primarily on view-specific representations. (JINS, 1996, 2, 240–248.)


1998 ◽  
Vol 06 (03) ◽  
pp. 281-298 ◽  
Author(s):  
Terry Huntsberger ◽  
John Rose ◽  
Shashidhar Ramaka

The human face is one of the most important patterns our visual system receives. It establishes a person's identity and also plays a significant role in everyday communication. Humans can recognize familiar faces under varying lighting conditions, different scales, and even after the face has changed due to aging, hair style, glasses, or facial hair. Our ease at recognizing faces is a strong motivation for the investigation of computational models of face processing. This paper presents a newly developed face processing system called Fuzzy-Face that combines wavelet pre-processing of input with a fuzzy self-organizing feature map algorithm. The wavelet-derived face space is partitioned into fuzzy sets which are characterized by face exemplars and membership values to those exemplars. This system learns faces using relatively few training epochs, has total recall for faces it has been shown, generalizes to face images that are acquired under different lighting conditions, and has rudimentary gender discrimination capabilities. We also include the results of some experimental studies.
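
The two-stage idea (wavelet front end, fuzzy exemplar memberships) can be sketched compactly in Python; the 'haar' wavelet, the level-2 decomposition, and the fuzzy-c-means style membership formula are illustrative assumptions and not the published Fuzzy-Face algorithm.

```python
# Hedged sketch of a wavelet front end followed by fuzzy memberships to face
# exemplars, loosely in the spirit of Fuzzy-Face.
import numpy as np
import pywt

def wavelet_features(face_img: np.ndarray, level: int = 2) -> np.ndarray:
    """Normalized low-frequency wavelet approximation of a grayscale face image."""
    approx = pywt.wavedec2(face_img, "haar", level=level)[0]
    return approx.ravel() / (np.linalg.norm(approx) + 1e-9)

def fuzzy_memberships(x: np.ndarray, exemplars: np.ndarray, m: float = 2.0) -> np.ndarray:
    """Fuzzy-c-means style membership of feature vector x to each stored exemplar."""
    d = np.linalg.norm(exemplars - x, axis=1) + 1e-9
    inv = d ** (-2.0 / (m - 1.0))
    return inv / inv.sum()                 # memberships sum to 1 across exemplars

# Usage: memberships of a probe face to a small bank of exemplar feature vectors
rng = np.random.default_rng(1)
exemplar_bank = rng.normal(size=(5, 16 * 16))   # 5 exemplars; level-2 Haar size for 64x64 input
probe = wavelet_features(rng.normal(size=(64, 64)))
print(fuzzy_memberships(probe, exemplar_bank))
```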


2018 ◽  
Vol 5 (6) ◽  
pp. 171616
Author(s):  
Chang Hong Liu ◽  
Wenfeng Chen

Facial attractiveness is often studied on the basis of the internal facial features alone. This study investigated how this exclusion of the external features affects the perception of attractiveness. We studied the effects of the two most commonly used methods of exclusion, in which the shape of the occluding mask was defined by either the facial outline or an oval. Participants rated the attractiveness of the same faces under these conditions. Results showed that faces were consistently rated more attractive when they were masked by an oval shape rather than by their outline (Experiment 1). Attractive faces were more strongly affected by this effect than were less attractive faces when participants were able to control the viewing time. However, unattractive faces benefited more from this effect when the same face stimuli were presented briefly for only 20 ms (Experiment 2). Further manipulation confirmed that the effect was mainly due to the occlusion of a larger area of the external features rather than to the regular and symmetrical shape of the oval mask (Experiment 3) or to the lack of contextual cues about the face boundary (Experiment 4). The effect was observed only relative to masked faces, with no advantage over unmasked faces (Experiment 5), and is likely a result of the interaction between the shape of a mask and the internal features of the face. This holistic effect in the appraisal of facial attractiveness is striking, because the oval shape of the mask is not a part of the face but is the edge of an occluding object.
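
For readers who want to reproduce the stimulus manipulation, a minimal Python sketch of the oval-mask condition is given below; the ellipse geometry and the black occluding surface are assumptions, and the outline-mask condition would instead trace the face boundary.

```python
# Hedged sketch: occlude everything outside an ellipse so only the internal
# facial features remain visible.
from PIL import Image, ImageDraw

def oval_masked(face_path: str, margin: float = 0.1) -> Image.Image:
    face = Image.open(face_path).convert("RGB")
    w, h = face.size
    mask = Image.new("L", (w, h), 0)                 # 0 = occluded, 255 = visible
    ImageDraw.Draw(mask).ellipse(
        [w * margin, h * margin, w * (1 - margin), h * (1 - margin)], fill=255
    )
    occluder = Image.new("RGB", (w, h), "black")     # the occluding surface
    return Image.composite(face, occluder, mask)
```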


2020 ◽  
Author(s):  
Gillian M Clark ◽  
Claire McNeel ◽  
Felicity J Bigelow ◽  
Peter Gregory Enticott

The investigation of emotional face processing has largely used faces devoid of context, and does not account for within-perceiver differences in empathy. The importance of context in face perception has become apparent in recent years. This study examined the interaction of the contextual factors of facial expression, knowledge of a person’s character, and within-perceiver empathy levels on face processing event-related potentials (ERPs). Forty-two adult participants learned background information about six individuals’ character. Three types of character were described, in which the character was depicted as deliberately causing harm to others, accidentally causing harm to others, or undertaking neutral actions. Subsequently, EEG was recorded while participants viewed the characters’ faces displaying neutral or emotional expressions. Participants’ empathy was assessed using the Empathy Quotient survey. Results showed a significant interaction of character type and empathy on the early posterior negativity (EPN) ERP component. These results suggested that for those with either low or high empathy, more attention was paid to the face stimuli, with more distinction between the different characters. In contrast, those in the middle range of empathy tended to produce smaller EPN with less distinction between character types. Findings highlight the importance of trait empathy in accounting for how faces in context are perceived.
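
As an illustration of the analysis logic, the sketch below extracts a mean EPN amplitude from posterior channels and tests the character-by-empathy interaction with an ordinary regression; the 200-300 ms window, the channel selection, and the model form are assumptions rather than the authors' pipeline.

```python
# Hedged sketch: EPN amplitude extraction and a character x empathy interaction test.
import numpy as np
import pandas as pd
from statsmodels.formula.api import ols

def epn_amplitude(epochs: np.ndarray, times: np.ndarray, posterior_idx, win=(0.2, 0.3)):
    """Mean amplitude over posterior channels in the EPN window.

    epochs: (n_trials, n_channels, n_times) array in microvolts.
    """
    t_mask = (times >= win[0]) & (times <= win[1])
    return epochs[:, posterior_idx, :][:, :, t_mask].mean(axis=(1, 2))

def interaction_test(df: pd.DataFrame):
    """df: one row per participant x character type, with columns epn, character, empathy."""
    # Adding I(empathy**2) would capture the low/high vs. mid-range pattern described above.
    return ols("epn ~ C(character) * empathy", data=df).fit().summary()
```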


Author(s):  
Galit Yovel

As social primates, we perform one of our most important cognitive tasks dozens of times a day: looking at a face and extracting the person's identity. During the last decade, the neural basis of face processing has been extensively investigated in humans with event-related potentials (ERP) and functional MRI (fMRI). These two methods provide complementary information about the temporal and spatial aspects of the neural response, with ERPs allowing high temporal resolution of milliseconds but low spatial resolution of the neural generator, and fMRI displaying a slow hemodynamic response but better spatial localization of the activated regions. Despite the extensive fMRI and ERP research on faces, only a few studies have assessed the relationship between the two methods, and no study to date has collected simultaneous ERP and fMRI responses to face stimuli. In the current paper we will assess the spatial and temporal aspects of the neural response to faces by simultaneously collecting fMRI and ERP responses to face stimuli. Our goals are twofold. 1) Both ERP and fMRI show a robust selective response to faces; in particular, two well-established face-specific phenomena, the right-hemisphere (RH) superiority and the inversion effect, are robustly found with both methods. Despite the extensive research on these effects with ERP and fMRI, it is still unknown to what extent their spatial (fMRI) and temporal (ERP) aspects are associated. In Study 1 we will employ an individual-differences approach to assess the relationship between these ERP and fMRI face-specific responses. 2) Face processing involves several stages, starting from structural encoding of the face image through identity processing to storage for later retrieval. This representation undergoes several manipulations that take place at different time points and in different brain regions before the final percept is generated. By simultaneously recording ERP and fMRI, we hope to gain a more comprehensive understanding of the time course over which different brain areas participate in generating the face representation.


2021 ◽  
Vol 4 (1) ◽  
Author(s):  
Runnan Cao ◽  
Xin Li ◽  
Nicholas J. Brandmeir ◽  
Shuo Wang

Abstract: Faces are salient social stimuli that attract a stereotypical pattern of eye movement. The human amygdala and hippocampus are involved in various aspects of face processing; however, it remains unclear how they encode the content of fixations when viewing faces. To answer this question, we employed single-neuron recordings with simultaneous eye tracking when participants viewed natural face stimuli. We found a class of neurons in the human amygdala and hippocampus that encoded salient facial features such as the eyes and mouth. With a control experiment using non-face stimuli, we further showed that feature selectivity was specific to faces. We also found another population of neurons that differentiated saccades to the eyes vs. the mouth. Population decoding confirmed our results and further revealed the temporal dynamics of face feature coding. Interestingly, we found that the amygdala and hippocampus played different roles in encoding facial features. Lastly, we revealed two functional roles of feature-selective neurons: 1) they encoded the salient region for face recognition, and 2) they were related to perceived social trait judgments. Together, our results link eye movement with neural face processing and provide important mechanistic insights for human face perception.
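
A compact Python sketch of the population-decoding step is given below: predict whether a fixation landed on the eyes or the mouth from the simultaneous population firing-rate vector. The classifier and cross-validation scheme are assumptions, not the authors' exact analysis.

```python
# Hedged sketch: decode fixation target (eyes vs. mouth) from population firing rates.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def decode_fixation_target(rates: np.ndarray, labels: np.ndarray) -> float:
    """rates: (n_fixations, n_neurons) firing rates; labels: 0 = eyes, 1 = mouth."""
    clf = LogisticRegression(max_iter=1000)
    return cross_val_score(clf, rates, labels, cv=5).mean()   # chance level is ~0.5

# Sliding this decoder over time bins around fixation onset would trace the
# temporal dynamics of feature coding mentioned in the abstract.
```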


2010 ◽  
Vol 69 (3) ◽  
pp. 161-167 ◽  
Author(s):  
Jisien Yang ◽  
Adrian Schwaninger

Configural processing has been considered the major contributor to the face inversion effect (FIE) in face recognition. However, most researchers have obtained the FIE with only one specific ratio of configural alteration. It remains unclear whether the ratio of configural alteration itself can mediate the occurrence of the FIE. We aimed to clarify this issue by manipulating the configural information parametrically using six different ratios, ranging from 4% to 24%. Participants were asked to judge whether a pair of faces was entirely identical or different. The paired faces were presented either simultaneously (Experiment 1) or sequentially (Experiment 2). Both experiments revealed that the FIE was observed only when the ratio of configural alteration was in the intermediate range. These results indicate that even though the FIE has been frequently adopted as an index to examine the underlying mechanism of face processing, its emergence is not robust across all configural alterations but depends on the ratio of configural alteration.
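
To make "ratio of configural alteration" concrete, here is a small Python sketch that scales the inter-eye distance of a landmark set by a given percentage; the landmark format and the choice of manipulating inter-eye distance are illustrative assumptions.

```python
# Hedged sketch: parametric configural alteration of facial landmarks.
import numpy as np

def alter_inter_eye_distance(landmarks: dict, ratio: float) -> dict:
    """landmarks maps names ('left_eye', 'right_eye', ...) to (x, y) coordinates."""
    out = {k: np.asarray(v, dtype=float).copy() for k, v in landmarks.items()}
    center = (out["left_eye"] + out["right_eye"]) / 2.0
    for eye in ("left_eye", "right_eye"):
        out[eye] = center + (out[eye] - center) * (1.0 + ratio)   # push the eyes apart by `ratio`
    return out

# e.g. the study's range of ratios: 0.04, 0.08, ..., 0.24
altered = alter_inter_eye_distance(
    {"left_eye": (80, 120), "right_eye": (140, 120), "nose": (110, 160)}, ratio=0.08
)
```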


2011 ◽  
Author(s):  
Lieke Curfs ◽  
Rob Holland ◽  
Jose Kerstholt ◽  
Daniel Wigboldus