target face
Recently Published Documents

TOTAL DOCUMENTS: 51 (FIVE YEARS: 21)
H-INDEX: 6 (FIVE YEARS: 1)

Author(s):  
Hong Zhang ◽  
Yaoru Sun

Neural activation of the motor cortex has been consistently reported during the processing of emotional facial expressions, but it is poorly understood whether and how the motor system influences the activity of limbic areas while participants perceive emotional expressions. In this study, we proposed that motor activations evoked by emotional processing influence activations in limbic areas such as the amygdala during the perception of facial expressions. To examine this issue, a masked priming paradigm was adopted in our fMRI experiment to modulate activation within the motor cortex while healthy participants perceived sad or happy facial expressions. We found that the first presented stimulus (masked prime) in each trial reduced activation in the premotor cortex and inferior frontal gyrus when the facial-muscle movement implied by the arrows on the prime stimulus was consistent with that implied by the target face expression (compatible condition), but increased activation in these two areas when the implied movements were inconsistent (incompatible condition). The superior temporal gyrus, middle cingulate gyrus and amygdala showed a response pattern similar to that of the motor cortex. Moreover, psychophysiological interaction (PPI) analysis showed that both the right middle cingulate gyrus and the bilateral superior temporal gyrus were more closely coupled with the premotor cortex and inferior frontal gyrus during incompatible trials than during compatible trials. Together with the significant activation correlations between the motor cortex and the limbic areas, this result reveals a modulatory effect of the motor cortex on brain regions related to emotion perception, suggesting that motor representations of facial movements can affect emotional experience. Our results provide new evidence for the functional role of the motor system in the perception of facial emotions and may contribute to understanding the deficits in social interaction seen in patients with autism or schizophrenia.
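As an aside, the logic of a PPI analysis can be illustrated with a toy regression: a target region's timecourse is modelled by a task regressor, a seed-region timecourse, and their product, and the coefficient on the product term indexes context-dependent coupling. The sketch below uses synthetic data and hypothetical names (seed_ts for the premotor seed, target_ts for a cingulate/temporal target); it is not the authors' fMRI pipeline.

```python
import numpy as np

# Toy PPI regression on synthetic data; not the authors' actual pipeline.
rng = np.random.default_rng(0)
n_scans = 200

task = np.repeat([1.0, -1.0], n_scans // 2)      # incompatible (+1) vs. compatible (-1) trials
seed_ts = rng.standard_normal(n_scans)           # hypothetical premotor seed timecourse
target_ts = 0.4 * task * seed_ts + rng.standard_normal(n_scans)  # hypothetical target region

# GLM with task, seed, and their product; the product (interaction) beta is the PPI effect.
X = np.column_stack([np.ones(n_scans), task, seed_ts, task * seed_ts])
beta, *_ = np.linalg.lstsq(X, target_ts, rcond=None)
print(f"PPI (task x seed) beta: {beta[3]:.3f}")   # positive -> stronger coupling on incompatible trials
```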


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
M. Berk Mirza ◽  
Maell Cullen ◽  
Thomas Parr ◽  
Sukhi Shergill ◽  
Rosalyn J. Moran

Human social interactions depend on the ability to resolve uncertainty about the mental states of others. The context in which social interactions take place is crucial for mental state attribution as sensory inputs may be perceived differently depending on the context. In this paper, we introduce a mental state attribution task where a target-face with either an ambiguous or an unambiguous emotion is embedded in different social contexts. The social context is determined by the emotions conveyed by other faces in the scene. This task involves mental state attribution to a target-face (either happy or sad) depending on the social context. Using active inference models, we provide a proof of concept that an agent’s perception of sensory stimuli may be altered by social context. We show with simulations that context congruency and facial expression coherency improve behavioural performance in terms of decision times. Furthermore, we show through simulations that the abnormal viewing strategies employed by patients with schizophrenia may be due to (i) an imbalance between the precisions of local and global features in the scene and (ii) a failure to modulate the sensory precision to contextualise emotions.
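As a rough illustration of how sensory precision can contextualise an ambiguous expression, the toy Bayesian sketch below weights the likelihood carried by the target-face with a precision parameter gamma and combines it with a prior supplied by the scene context. This is a minimal stand-in using invented numbers, not the authors' active inference (Markov decision process) model.

```python
import numpy as np

# Toy precision-weighted inference; not the authors' active inference model.
def posterior(prior, loglik, gamma):
    logp = np.log(prior) + gamma * loglik        # gamma scales how much the sensory evidence counts
    p = np.exp(logp - logp.max())                # normalise in log space for numerical stability
    return p / p.sum()

prior_context = np.array([0.7, 0.3])             # context (other faces) favours "happy" over "sad"
loglik_ambiguous = np.log([0.5, 0.5])            # ambiguous target expression
loglik_sad = np.log([0.2, 0.8])                  # unambiguous sad expression

for gamma in (0.2, 1.0, 3.0):                    # low, normal, and high sensory precision
    print(gamma,
          posterior(prior_context, loglik_ambiguous, gamma),
          posterior(prior_context, loglik_sad, gamma))
```

With low precision, the context-driven prior dominates even for the unambiguous face, which is the kind of imbalance the abstract links to abnormal viewing strategies.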


2021 ◽  
pp. 095679762199666
Author(s):  
Oryah C. Lancry-Dayan ◽  
Matthias Gamer ◽  
Yoni Pertzov

Can you efficiently look for something even without knowing what it looks like? According to theories of visual search, the answer is no: A template of the search target must be maintained in an active state to guide search for potential locations of the target. Here, we tested the need for an active template by assessing a case in which this template is improbable: the search for a familiar face among unfamiliar ones when the identity of the target face is unknown. Because people are familiar with hundreds of faces, an active guiding template seems unlikely in this case. Nevertheless, participants (35 Israelis and 33 Germans) were able to guide their search as long as extrafoveal processing of the target features was possible. These results challenge current theories of visual search by showing that guidance can rely on long-term memory and extrafoveal processing rather than on an active search template.


2021 ◽  
Author(s):  
Karin S Pilz

Our judgement of certain facial characteristics, such as emotion, attractiveness or age, is affected by context. Faces that are flanked by younger faces, for example, are perceived as being younger, whereas faces flanked by older faces are perceived as being older. Here, we investigated whether contextual effects in age perception are mediated by an own-age bias. On each trial, a target face was presented on the screen, flanked by two faces. The flanker faces were either identical to the target face, 10 years younger, or 10 years older than the target face. We asked forty older adults (64-69 years) and forty-three younger adults (24-29 years) to estimate the age of the target face. Our results replicate previous studies and show that context affects age estimates of target faces flanked by faces of different ages. These context effects were more pronounced for younger than for older flankers but were present in both tested age groups. An own-age bias was observed for unflanked faces, such that older adults made larger estimation errors for younger faces than for older faces and than younger adults did. Flanker effects, however, were not mediated by an own-age bias. The increased effect of younger flankers is likely due to mechanisms related to perceptual averaging.


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Adam Eggleston ◽  
Elena Geangu ◽  
Steven P. Tipper ◽  
Richard Cook ◽  
Harriet Over

Previous research has demonstrated that the tendency to form first impressions from facial appearance emerges early in development. We examined whether social referencing is one route through which these consistent first impressions are acquired. In Study 1, we show that 5- to 7-year-old children are more likely to judge a target face previously associated with positive non-verbal signals as more trustworthy than a face previously associated with negative non-verbal signals. In Study 2, we show that children generalise this learning to novel faces that resemble those who have previously been the recipients of positive non-verbal behaviour. Taken together, these data show one means through which individuals within a community could acquire consistent, and potentially inaccurate, first impressions of others' faces. In doing so, they highlight a route through which cultural transmission of first impressions can occur.


Complexity ◽  
2021 ◽  
Vol 2021 ◽  
pp. 1-11
Author(s):  
Siwei Wu ◽  
Shan Xiao ◽  
Yihua Di ◽  
Cheng Di

In this paper, recent virtual reconstruction technology is used to study image acquisition and feature processing for 3D movie animation. We first propose a time-division multiplexing method based on subpixel multiplexing to improve the resolution of images reconstructed by integral imaging. By analysing the degradation introduced during reconstruction in a 3D integral imaging system, we propose increasing the display resolution by packing additional pixel information into fixed display array units. Building on subpixel multiplexing, an algorithm reuses the pixel information of the 3D scene's elemental images to generate an elemental image array carrying new information; this array is then output rapidly on a high-frame-rate light-emitting-diode (LED) screen so that, through visual persistence, the successive elemental image arrays are fused on a single display plane. In this way, the information capacity of the finite display array is increased and the display resolution of the reconstructed image is improved. For face reconstruction, a classification algorithm first determines the gender and expression attributes of the face in the input image and selects the corresponding subset of 3D face data from the database; sparse representation is then used to select, from this subset, prototype faces similar to the target face, and the selected prototype faces are used to construct a sparse deformation model. Finally, the target 3D face is reconstructed by matching the model to the feature points of the target face. Experimental results show that the algorithm reconstructs faces, including expressive faces, with high realism and accuracy.
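The sparse-representation step for prototype selection can be sketched roughly as follows: the target face's feature points are approximated as a sparse combination of prototype faces from the filtered subset, and the prototypes with non-zero coefficients feed the deformation model. The sketch below uses random stand-in data and scikit-learn's orthogonal matching pursuit as one possible sparse solver; the paper's actual formulation may differ.

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

# Rough sketch of sparse prototype selection; data are random stand-ins.
rng = np.random.default_rng(42)
n_coords, n_prototypes = 68 * 2, 200                     # hypothetical: 68 two-dimensional feature points
D = rng.standard_normal((n_coords, n_prototypes))        # columns = prototype face feature vectors
target = D[:, [3, 17, 90]] @ np.array([0.5, 0.3, 0.2])   # synthetic target-face feature points

omp = OrthogonalMatchingPursuit(n_nonzero_coefs=5)       # keep at most 5 prototypes
omp.fit(D, target)
selected = np.flatnonzero(omp.coef_)                     # prototypes entering the deformation model
print("selected prototypes:", selected)
```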


Author(s):  
David Kurbel ◽  
Bozana Meinhardt-Injac ◽  
Malte Persike ◽  
Günter Meinhardt

The composite face effect, the failure of selective attention toward a target face half, is frequently used to study mechanisms of feature integration in faces. Here we studied how this effect depends on the perceptual fit between the attended and unattended halves. We used composite faces that trained observers rated as either a seamless fit (i.e., close to a natural, homogeneous face) or a deliberately poor fit (i.e., unnatural, strongly segregated face halves). In addition, composites created by combining face halves at random were tested. The composite face effect was measured as the alignment × congruency interaction (Gauthier & Bukach, 2007, Cognition, 103, 322–330), but also with alternative data-analysis procedures (Rossion & Boremanse, 2008, Journal of Vision, 8, 1–13). We found strong composite effects of equal size in all fit conditions: a seamless fit did not increase the composite face effect, nor was the effect attenuated by poor or random fit quality. The implications for a Gestalt account of holistic face processing are discussed.
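For concreteness, the alignment × congruency interaction can be computed per participant as the congruency effect for aligned composites minus the congruency effect for misaligned composites; a positive value indicates a composite face effect. The accuracies below are invented for illustration only.

```python
import numpy as np

# Invented accuracies; columns are aligned-congruent, aligned-incongruent,
# misaligned-congruent, misaligned-incongruent (one row per hypothetical participant).
acc = np.array([
    [0.92, 0.78, 0.90, 0.88],
    [0.95, 0.80, 0.93, 0.91],
    [0.90, 0.75, 0.89, 0.86],
])

congruency_aligned = acc[:, 0] - acc[:, 1]
congruency_misaligned = acc[:, 2] - acc[:, 3]
composite_effect = congruency_aligned - congruency_misaligned   # alignment x congruency interaction
print("per-participant composite effect:", composite_effect)
print("mean composite effect:", composite_effect.mean())
```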


2021 ◽  
Vol 12 ◽  
Author(s):  
Claude Messner ◽  
Mattia Carnelli ◽  
Patrick Stefan Hähener

The cheerleader effect describes the phenomenon whereby faces are perceived as more attractive when flanked by other faces than when they are perceived in isolation. At least four theories predict the cheerleader effect. Two visual memory processes could cause it. First, visual information is sometimes averaged in visual memory: averaging across faces could increase the perceived attractiveness of every face that is flanked by other faces. Second, information is often combined into a higher-order concept; this hierarchical encoding suggests that faces appear more attractive when flanked by highly attractive faces. Two further explanations posit that comparison processes cause the cheerleader effect. Contrast accounts predict that a difference in attractiveness between the target face and the flanking faces produces the cheerleader effect through comparison processes. Alternatively, a change in evaluation mode, which alters the standard of comparison between joint and separate evaluation of faces, could be sufficient to produce a cheerleader effect. This leads to the prediction that a cheerleader effect can occur even when there is no contrast between the attractiveness of the target face and the flanking faces. The results of one experiment support this prediction. The findings have practical implications, for example for individuals who post selfies on social media: an individual’s face will appear more attractive in a selfie taken with people of low attractiveness than in a selfie without other people, even when all the faces are equally unattractive.


Author(s):  
Xuena Wang ◽  
Shihui Han

People understand others’ emotions quickly from their facial expressions. However, facial expressions of ingroup and outgroup members may signal different social information and thus be mediated by distinct neural activities. We investigated whether there are distinct neuronal responses to fearful and happy expressions of same-race (SR) and other-race (OR) faces. We recorded electroencephalograms from Chinese adults while they viewed an adaptor face (with fearful/neutral expressions in Experiment 1 and happy/neutral expressions in Experiment 2) and a target face (with fearful expressions in Experiment 1 and happy expressions in Experiment 2) presented in rapid succession. We found that both fearful and happy (vs neutral) adaptor faces increased the amplitude of a frontocentral positivity (P2). However, a fearful but not happy (vs neutral) adaptor face decreased the P2 amplitude to target faces, and this repetition suppression (RS) effect occurred when the adaptor and target faces were of the same race but not when they were of different races. RS was also observed in two late parietal/central positive activities evoked by fearful/happy target faces, but this effect occurred regardless of whether the adaptor and target faces were of the same or different race. Our findings suggest that early affective processing of fearful expressions may engage distinct neural activities for SR and OR faces.
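To make the repetition-suppression measure concrete, the toy sketch below computes an RS index as the target-face P2 amplitude following a neutral adaptor minus that following a fearful adaptor, separately for same-race and other-race pairs. All amplitudes are invented; this is not the study's data or analysis code.

```python
import numpy as np

# Invented P2 amplitudes (microvolts), one value per hypothetical participant.
p2 = {
    ("same_race", "neutral_adaptor"): np.array([4.1, 3.8, 4.5, 4.0]),
    ("same_race", "fearful_adaptor"): np.array([3.2, 3.0, 3.9, 3.4]),
    ("other_race", "neutral_adaptor"): np.array([4.0, 3.9, 4.4, 4.1]),
    ("other_race", "fearful_adaptor"): np.array([4.1, 3.7, 4.3, 4.0]),
}

for race in ("same_race", "other_race"):
    rs = p2[(race, "neutral_adaptor")] - p2[(race, "fearful_adaptor")]
    print(race, "mean RS (uV):", rs.mean())   # positive -> suppression after a fearful adaptor
```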


2021 ◽  
Author(s):  
Joyce Tam ◽  
Michael Mugno ◽  
Ryan Edward O'Donnell ◽  
Brad Wyble

Attribute amnesia (AA) describes a phenomenon in which participants are unable to report an attribute that they had just attended to in order to select a target. Most studies investigating this effect used simple stimuli such as letters and digits. The few studies using meaningful stimuli, however, found AA only when the target stimuli were used repeatedly. We tested the robustness of this boundary condition with a set of artificially generated faces. Participants were instructed to find the young face among three older faces and performed this task for the first 27 trials. On the 28th trial, they were unexpectedly asked to report the identity of the young target face they had just seen. The following four trials repeated this identity question, and no target face was repeated throughout the experiment. Contrary to the previous findings, we observed AA with the current set of meaningful and unique targets: performance on the pre-surprise location task was at ceiling, but accuracy on the surprise identity question was significantly lower than on the following trial. The current finding expands the boundary conditions of AA and suggests that AA extends to visual experiences that resemble our day-to-day lives.

