Expression Perception
Recently Published Documents


TOTAL DOCUMENTS: 50 (last five years: 13)
H-INDEX: 11 (last five years: 1)

2021
Author(s): K. Suzuki, Y. Takeuchi, J. Heo

In this study, we investigated whether manipulating the lighting environment in videoconferencing changes the readability of facial expressions. In the experiment, participants evaluated their impressions of a video that simulated a videoconferencing situation. A total of 12 lighting conditions were used, combining three colour temperatures and four lighting directions. Factor analysis identified four factors: "Clarity," "Dynamism," "Naturalness," and "Healthiness." ANOVA showed that placing the lighting in front of the subject was effective for all four factors. In addition, while a lower colour temperature decreased clarity, it improved naturalness and healthiness, and it was particularly effective when the lighting was placed in front of the subject.
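A minimal sketch of the 3 x 4 analysis described above, assuming a two-way ANOVA of impression ratings with colour temperature (3 levels) and lighting direction (4 levels) as factors. The column names (rating, temp, direction), the specific factor levels, and the randomly generated ratings are hypothetical placeholders, not the authors' data or code.

```python
# Hypothetical 3 (colour temperature) x 4 (lighting direction) two-way ANOVA,
# mirroring the design in the abstract. All data below are placeholders.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(0)
temps = ["3000K", "4200K", "6500K"]               # assumed temperature levels
directions = ["front", "left", "right", "above"]  # assumed direction levels

# One impression rating per participant per condition (placeholder values).
rows = [
    {"temp": t, "direction": d, "rating": rng.normal(loc=4.0, scale=1.0)}
    for t in temps for d in directions for _ in range(20)
]
df = pd.DataFrame(rows)

# Fit a linear model with both main effects and their interaction,
# then report the Type II ANOVA table.
model = smf.ols("rating ~ C(temp) * C(direction)", data=df).fit()
print(anova_lm(model, typ=2))
```

In practice, one such analysis would be run per factor score ("Clarity," "Dynamism," "Naturalness," "Healthiness") obtained from the factor analysis.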


2021
Author(s): Liqin Zhou, Ming Meng, Ke Zhou

Face identity and expression play critical roles in social communication. Recent research has found that deep convolutional neural networks (DCNNs) trained to recognize facial identity spontaneously learn features that support facial expression recognition, and vice versa, suggesting an integrated representation of facial identity and expression. In the present study, we found that expression-selective units spontaneously emerged in a VGG-Face network trained for facial identity recognition and were tuned to distinct basic expressions. Importantly, these units exhibited typical hallmarks of human expression perception, namely the facial expression confusion effect and the categorical perception effect. We then investigated whether the emergence of expression-selective units is attributable to face-specific experience or to domain-general processing by carrying out the same analysis on a VGG-16 trained for object classification and on an untrained VGG-Face without any visual experience, both with the same architecture as the pretrained VGG-Face. Although similar expression-selective units were found in both DCNNs, they did not exhibit reliable human-like characteristics of facial expression perception. Taken together, our computational findings reveal that domain-specific visual experience with facial identity is necessary for the development of facial expression perception, highlighting the contribution of nurture to the formation of human-like perception of facial expressions. Beyond the weak equivalence between humans and DCNNs at the level of input-output behavior, shared algorithms between models and humans may emerge through domain-specific experience.
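A minimal sketch of how expression-selective units might be identified in a network of this kind, not the authors' method: collect per-unit activations for images of each basic expression category, then test each unit for between-category selectivity. VGG-Face is not bundled with torchvision, so torchvision's VGG-16 (randomly initialized, as in the paper's untrained-network comparison) stands in here, the probed layer is an arbitrary choice, and the random tensors are placeholders for real face images.

```python
# Hypothetical probe for expression-selective units in a VGG-style network.
# Stand-ins: torchvision vgg16 for VGG-Face, random tensors for face images.
import torch
import torchvision.models as models
from scipy.stats import f_oneway

model = models.vgg16(weights=None).eval()  # random init; load VGG-Face weights in practice

activations = {}
def hook(module, inp, out):
    # Store one activation vector per image for the probed layer.
    activations["fc6"] = out.detach().flatten(1)

# Probe the first fully connected layer ("fc6" in VGG terminology).
model.classifier[0].register_forward_hook(hook)

expressions = ["anger", "disgust", "fear", "happiness", "sadness", "surprise"]
per_category = []
with torch.no_grad():
    for _ in expressions:
        batch = torch.rand(16, 3, 224, 224)  # placeholder for real face images
        model(batch)
        per_category.append(activations["fc6"])

# One-way ANOVA per unit across the six expression categories:
# units whose responses differ reliably by category count as selective.
n_units = per_category[0].shape[1]
selective = [
    u for u in range(n_units)
    if f_oneway(*[acts[:, u].numpy() for acts in per_category]).pvalue < 0.01
]
print(f"{len(selective)} / {n_units} units show expression selectivity")
```

The tuning profile of each selective unit (its mean response per category) could then be compared against human confusion and categorical-perception data, as the abstract describes.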

