Structural Encoding of Human and Schematic Faces: Holistic and Part-Based Processes

2001 ◽  
Vol 13 (7) ◽  
pp. 937-951 ◽  
Author(s):  
Noam Sagiv ◽  
Shlomo Bentin

The range of specificity and the response properties of the extrastriate face area were investigated by comparing the N170 event-related potential (ERP) component elicited by photographs of natural faces, realistically painted portraits, sketches of faces, schematic faces, and by nonface meaningful and meaningless visual stimuli. Results showed that the N170 distinguished between faces and nonface stimuli when the concept of a face was clearly rendered by the visual stimulus, but it did not distinguish among different face types: Even a schematic face made from simple line fragments triggered the N170. However, in a second experiment, inversion seemed to have a different effect on natural faces in which face components were available and on the pure gestalt-based schematic faces: The N170 amplitude was enhanced when natural faces were presented upside down but reduced when schematic faces were inverted. Inversion delayed the N170 peak latency for both natural and schematic faces. Together, these results suggest that early face processing in the human brain is subserved by a multiple-component neural system in which both whole-face configurations and face parts are processed. The relative involvement of the two perceptual processes is probably determined by whether the physiognomic value of the stimuli depends upon holistic configuration, or whether the individual components can be associated with faces even when presented outside the face context.

2007 ◽  
Vol 19 (11) ◽  
pp. 1815-1826 ◽  
Author(s):  
Roxane J. Itier ◽  
Claude Alain ◽  
Katherine Sedore ◽  
Anthony R. McIntosh

Unlike most other objects that are processed analytically, faces are processed configurally. This configural processing is reflected early in visual processing following face inversion and contrast reversal, as an increase in the N170 amplitude, a scalp-recorded event-related potential. Here, we show that these face-specific effects are mediated by the eye region. That is, they occurred only when the eyes were present, but not when eyes were removed from the face. The N170 recorded to inverted and negative faces likely reflects the processing of the eyes. We propose a neural model of face processing in which face- and eye-selective neurons situated in the superior temporal sulcus region of the human brain respond differently to the face configuration and to the eyes depending on the face context. This dynamic response modulation accounts for the N170 variations reported in the literature. The eyes may be central to what makes faces so special.


Author(s):  
Kamal Naina Soni

Abstract: Human expressions play an important role in the extraction of an individual's emotional state. They help in determining the current state and mood of an individual, extracting and understanding the emotion an individual has based on various features of the face such as the eyes, cheeks, forehead, or even the curve of the smile. A survey confirmed that people use music as a form of expression. They often relate to a particular piece of music according to their emotions. Considering how music impacts parts of the human brain and body, our project deals with extracting the user's facial expressions and features to determine the user's current mood. Once the emotion is detected, a playlist of songs suitable to the user's mood is presented to the user. This can help alleviate the mood or simply calm the individual, and it also surfaces suitable songs more quickly, saving the time spent looking up different songs, while providing software that can be used anywhere to play music according to the detected emotion. Keywords: Music, Emotion recognition, Categorization, Recommendations, Computer vision, Camera
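The pipeline this abstract describes, detecting an emotion from the face and then selecting a matching playlist, can be sketched in a few lines. The emotion labels, the playlist names, and the `detect_emotion` stub below are illustrative assumptions, not the authors' implementation (a real system would replace the stub with a trained classifier over camera frames):

```python
# Hypothetical sketch of the mood-to-playlist mapping described above.
# detect_emotion is a stand-in for a real facial-expression classifier.

PLAYLISTS = {
    "happy": ["Upbeat Mix"],
    "sad": ["Comfort Songs"],
    "angry": ["Calm Down"],
    "neutral": ["Daily Rotation"],
}

def detect_emotion(emotion_scores):
    # Stub: take the label with the highest classifier score.
    return max(emotion_scores, key=emotion_scores.get)

def recommend(emotion_scores):
    emotion = detect_emotion(emotion_scores)
    # Fall back to a neutral playlist for unrecognized emotions.
    return PLAYLISTS.get(emotion, PLAYLISTS["neutral"])

# Example: scores a classifier might output for one camera frame.
scores = {"happy": 0.1, "sad": 0.7, "angry": 0.2}
```

The dictionary lookup with a neutral fallback mirrors the abstract's goal of always having something to play, even when the detected emotion is ambiguous.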


2019 ◽  
Vol 31 (10) ◽  
pp. 1573-1588 ◽  
Author(s):  
Eelke de Vries ◽  
Daniel Baldauf

We recorded magnetoencephalography using a neural entrainment paradigm with compound face stimuli that allowed for entraining the processing of various parts of a face (eyes, mouth) as well as changes in facial identity. Our magnetic resonance image-guided magnetoencephalography analyses revealed that different subnodes of the human face processing network were entrained differentially according to their functional specialization. Whereas the occipital face area was most responsive to the rate at which face parts (e.g., the mouth) changed, and face patches in the STS were mostly entrained by rhythmic changes in the eye region, the fusiform face area was the only subregion that was strongly entrained by the rhythmic changes in facial identity. Furthermore, top–down attention to the mouth, eyes, or identity of the face selectively modulated the neural processing in the respective area (i.e., occipital face area, STS, or fusiform face area), resembling behavioral cue validity effects observed in the participants' RT and detection rate data. Our results show the attentional weighting of the visual processing of different aspects and dimensions of a single face object, at various stages of the involved visual processing hierarchy.


2019 ◽  
Vol 30 (5) ◽  
pp. 2986-2996 ◽  
Author(s):  
Xue Tian ◽  
Ruosi Wang ◽  
Yuanfang Zhao ◽  
Zonglei Zhen ◽  
Yiying Song ◽  
...  

Abstract Previous studies have shown that individuals with developmental prosopagnosia (DP) show specific deficits in face processing. However, the mechanism underlying the deficits remains largely unknown. One hypothesis suggests that DP shares the same mechanisms as the normal population, though its face processing is disproportionately impaired. An alternative hypothesis posits a qualitatively different mechanism by which DP processes faces. To test these hypotheses, we instructed DP and normal individuals to perceive faces and objects. Instead of calculating accuracy averaged across stimulus items, we used the discrimination accuracy for each item to construct a multi-item discriminability pattern. We found that DP's discriminability pattern was less similar to that of normal individuals when perceiving faces than when perceiving objects, suggesting that DP has a qualitatively different mechanism for representing faces. A functional magnetic resonance imaging study was conducted to reveal the neural basis and found that multi-voxel activation patterns for faces in the right fusiform face area and occipital face area of DP deviated from the mean activation pattern of normal individuals. Further, the face representation was more heterogeneous in DP, suggesting that the deficits of DP may come from multiple sources. In short, our study provides the first direct evidence that DP processes faces qualitatively differently from the normal population.
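The core of the multi-item discriminability analysis described above is to build a per-item accuracy vector for each group and then measure how similar the two patterns are. A minimal sketch, assuming Pearson correlation as the similarity measure and toy per-item accuracies (the paper's actual stimuli and statistics are not reproduced here):

```python
import math

def pattern_similarity(acc_a, acc_b):
    """Pearson correlation between two multi-item discriminability
    patterns (per-item accuracy vectors for two groups)."""
    n = len(acc_a)
    mean_a = sum(acc_a) / n
    mean_b = sum(acc_b) / n
    cov = sum((a - mean_a) * (b - mean_b) for a, b in zip(acc_a, acc_b))
    sd_a = math.sqrt(sum((a - mean_a) ** 2 for a in acc_a))
    sd_b = math.sqrt(sum((b - mean_b) ** 2 for b in acc_b))
    return cov / (sd_a * sd_b)

# Toy per-item accuracies (hypothetical numbers for illustration):
controls_faces = [0.90, 0.80, 0.70, 0.95, 0.60]
dp_faces       = [0.60, 0.90, 0.80, 0.55, 0.90]   # dissimilar pattern
dp_objects     = [0.88, 0.79, 0.72, 0.93, 0.63]   # similar pattern
```

Under the qualitative-difference hypothesis, the DP-versus-control similarity should be lower for faces than for objects, which is exactly the comparison the toy vectors above are set up to show.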


2021 ◽  
Vol 14 ◽  
Author(s):  
Dongya Wu ◽  
Xin Li ◽  
Jun Feng

Brain connectivity plays an important role in determining a brain region's function. Previous researchers proposed that a brain region's function is characterized by that region's input and output connectivity profiles. Following this proposal, numerous studies have investigated the relationship between connectivity and function. However, this proposal only utilizes direct connectivity profiles and is thus deficient in explaining individual differences in a brain region's function. To overcome this problem, we proposed that a brain region's function is characterized by that region's multi-hop connectivity profile. To test this proposal, we used multi-hop functional connectivity to predict the individual face activation of the right fusiform face area (rFFA) via a multi-layer graph neural network and showed that the prediction performance is substantially improved. Results also indicated that the two-layer graph neural network best characterizes rFFA's face activation and revealed a hierarchical network for the face processing of rFFA.
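The intuition behind a multi-hop connectivity profile can be shown with a bare matrix-power sketch: a node's k-hop profile is its row in the k-th power of the connectivity matrix, which is the quantity a k-layer graph neural network effectively aggregates over. The matrix and hop counts below are toy assumptions, not the paper's data or model:

```python
# Toy sketch: k-hop connectivity profiles via repeated propagation
# over a connectivity matrix. Pure-Python matmul keeps it dependency-free.

def matmul(a, b):
    n, m, p = len(a), len(b), len(b[0])
    return [[sum(a[i][k] * b[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

def k_hop_profile(conn, node, hops):
    """Row `node` of conn**hops: how strongly `node` reaches every
    region via paths of exactly `hops` steps."""
    power = conn
    for _ in range(hops - 1):
        power = matmul(power, conn)
    return power[node]

# 3-region toy network: region 0 reaches region 2 only via region 1.
conn = [[0.0, 1.0, 0.0],
        [0.0, 0.0, 1.0],
        [0.0, 0.0, 0.0]]
```

Note that region 0's direct (1-hop) profile misses region 2 entirely, while its 2-hop profile captures it, which is the extra information a multi-hop characterization carries over direct connectivity alone.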


2019 ◽  
Vol 8 (4S2) ◽  
pp. 1031-1036

Machine analysis of face detection is an interesting topic for study in Human-Computer Interaction. Existing studies show that discovering the position and scale of the face region is difficult due to significant illumination variation, noise, and appearance variation in unconstrained scenarios. This paper suggests a method to detect the location of the face area using the recently developed YouTube video face database. In this work, each frame is normalized and separated into overlapping blocks. A Gabor filter is tuned to extract Gabor features from the individual blocks. The averaged Gabor features are then processed, and local binary pattern histogram features are extracted. The extracted patterns are passed to the classifier with training images for face region identification. Our experimental results on the YouTube video face database exhibit promising results and demonstrate a significant performance improvement when compared to existing techniques. Furthermore, our proposed method is insensitive to head pose and robust to variations in illumination, appearance, and noise.
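The local binary pattern step of the pipeline above can be illustrated in isolation. The 3x3 LBP code below is the standard textbook formulation, not the paper's exact configuration (which combines it with block-wise Gabor filtering):

```python
def lbp_code(patch):
    """8-bit local binary pattern for a 3x3 grayscale patch:
    each neighbor is thresholded against the center pixel."""
    center = patch[1][1]
    # Neighbors in clockwise order starting at the top-left.
    coords = [(0, 0), (0, 1), (0, 2), (1, 2),
              (2, 2), (2, 1), (2, 0), (1, 0)]
    code = 0
    for bit, (r, c) in enumerate(coords):
        if patch[r][c] >= center:
            code |= 1 << bit
    return code

def lbp_histogram(image):
    """256-bin histogram of LBP codes over all interior pixels --
    the per-block feature vector handed to the classifier."""
    hist = [0] * 256
    for r in range(1, len(image) - 1):
        for c in range(1, len(image[0]) - 1):
            patch = [row[c - 1:c + 2] for row in image[r - 1:r + 2]]
            hist[lbp_code(patch)] += 1
    return hist
```

Because the code depends only on whether each neighbor is brighter or darker than the center, the resulting histogram is invariant to monotonic illumination changes, which is why LBP features suit the unconstrained-lighting setting the paper targets.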


1975 ◽  
Vol 38 (1) ◽  
pp. 146-157 ◽  
Author(s):  
E. S. Luschei ◽  
G. M. Goodwin

Monkeys were trained to produce a low, steady biting force for 0.5-2.5 s, and then a rapid forceful bite in response to a visual stimulus. After large bilateral lesions of the precentral face area, monkeys emitted repetitive forceful bites on the apparatus, but could not perform the force-holding task. They eventually relearned the task, but the force exerted was never as steady as it was prelesion, and often oscillated at about 2 and/or 5-6 Hz. After retraining, two animals with large bilateral lesions of the face area produced median RT responses equal to or only slightly longer than their prelesion performance, indicating that neural pathways not involving the precentral cortex can mediate quick visual RT responses. The variability of RTs was permanently increased, probably as a result of the persistent unsteadiness of the force-holding response. Incomplete bilateral lesions of the precentral face area, a complete unilateral lesion of that area, and bilateral lesions of adjacent regions of cortex produced either mild, transient difficulties with the biting tasks, or no problems at all. The results indicate that the precentral cortex has a role in the control of voluntary jaw movements. Lesions caused difficulty in controlling, but not producing, closing jaw movements, thereby suggesting that this role is predominantly to inhibit jaw-closing motoneurons or the systems that excite them. Electrical stimulation studies of the face area of the precentral cortex of the unanesthetized monkey point to the same conclusion.


2018 ◽  
Vol 30 (7) ◽  
pp. 963-972 ◽  
Author(s):  
Andrew D. Engell ◽  
Na Yeon Kim ◽  
Gregory McCarthy

Perception of faces has been shown to engage a domain-specific set of brain regions, including the occipital face area (OFA) and the fusiform face area (FFA). It is commonly held that the OFA is responsible for the detection of faces in the environment, whereas the FFA is responsible for processing the identity of the face. However, an alternative model posits that the FFA is responsible for face detection and subsequently recruits the OFA to analyze the face parts in the service of identification. An essential prediction of the former model is that the OFA is not sensitive to the arrangement of internal face parts. In the current fMRI study, we test the sensitivity of the OFA and FFA to the configuration of face parts. Participants were shown faces in which the internal parts were presented in a typical configuration (two eyes above a nose above a mouth) or in an atypical configuration (the locations of individual parts were shuffled within the face outline). Perception of the atypical faces evoked a significantly larger response than typical faces in the OFA and in a wide swath of the surrounding posterior occipitotemporal cortices. Surprisingly, typical faces did not evoke a significantly larger response than atypical faces anywhere in the brain, including the FFA (although some subthreshold differences were observed). We propose that face processing in the FFA results in inhibitory sculpting of activation in the OFA, which accounts for this region's weaker response to typical than to atypical configurations.


2021 ◽  
Vol 12 ◽  
Author(s):  
Pei Liang ◽  
Jiayu Jiang ◽  
Jie Chen ◽  
Liuqing Wei

Facial emotion recognition is something we use often in our daily lives. How does the brain process the face search? Can taste modify such a process? This study employed two tastes (sweet and acidic) to investigate the cross-modal interaction between taste and emotional face recognition. Behavioral responses (reaction times and correct response ratios) and event-related potentials (ERPs) were used to analyze the interaction between taste and face processing. Behavioral data showed that when detecting a negative target face with a positive face as a distractor, participants performed the task faster with an acidic taste than with a sweet one. No interaction effect was observed in the correct response ratio analysis. In the ERP results, the early (P1, N170) and mid-stage [early posterior negativity (EPN)] components showed that sweet and acidic tastes modulated the affective face search process. No interaction effect was observed in the late-stage (LPP) component. Our data extend the understanding of the cross-modal mechanism and provide electrophysiological evidence that affective facial processing can be influenced by sweet and acidic tastes.

