facial behavior
Recently Published Documents


TOTAL DOCUMENTS: 97 (FIVE YEARS: 17)
H-INDEX: 20 (FIVE YEARS: 1)

Author(s):  
Bridget M. Waller ◽  
Eithne Kavanagh ◽  
Jerome Micheletta ◽  
Peter R. Clark ◽  
Jamie Whitehouse

A wealth of experimental and observational evidence suggests that faces have become increasingly important in the communication systems of primates over evolutionary time, and that both the static and moveable aspects of faces convey considerable information. Therefore, whenever there is a visual component to any multicomponent signal, the face is potentially relevant. However, the role of the face is not always considered in primate multicomponent communication research. We review the literature and make a case for greater focus on the face going forward. We propose that the face can be overlooked for two main reasons. First, methodological difficulty: examining multicomponent signals in primates is hard, so scientists tend to examine a limited number of signals in combination, and detailed examination of the subtle, dynamic components of facial signals is particularly difficult to achieve in studies of primates. Second, the common assumption that the face contains "emotional" content: a priori categorisation of facial behavior as "emotional" ignores the potentially communicative and predictive information present in the face that might contribute to signals. In short, we argue that the face is central to multicomponent signals (and also to many multimodal signals) and suggest future directions for investigating this phenomenon.


2021 ◽  
Author(s):  
Manisha Verma ◽  
Yuta Nakashima ◽  
Hirokazu Kobori ◽  
Ryota Takaoka ◽  
Noriko Takemura ◽  
...  

Author(s):  
Bin Xia ◽  
Shangfei Wang

Facial micro-expression recognition has attracted much attention because micro-expressions objectively reveal a person's true emotions. However, the limited size of micro-expression datasets poses a great challenge to training a high-performance micro-expression classifier. Since micro-expressions and macro-expressions share similarities in both spatial and temporal facial behavior patterns, we propose a macro-to-micro transformation framework for micro-expression recognition. Specifically, we first pretrain two two-stream baseline models, named MiNet and MaNet, on micro-expression and macro-expression data, respectively. Then, we introduce two auxiliary tasks to align the spatial and temporal features learned from the two domains. In the spatial domain, we introduce a domain discriminator to align the features of MiNet and MaNet. In the temporal domain, we introduce a relation classifier to predict the correct relation between temporal features from MaNet and MiNet. Finally, we propose a contrastive loss that encourages MiNet to produce closely aligned features for all instances of the same class. Experiments on three benchmark databases demonstrate the superiority of the proposed method.
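The two alignment ideas in this abstract lend themselves to a compact illustration. Below is a minimal PyTorch sketch, not the authors' code: the module names, feature dimensions, and exact loss formulations are all assumptions, showing a domain discriminator for feature alignment and a supervised contrastive loss over same-class features.

```python
# Minimal sketch (not the authors' code) of two ingredients described in the
# abstract: a domain discriminator that pushes micro/macro features together,
# and a supervised contrastive loss over micro-expression features.
# Module names and dimensions are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DomainDiscriminator(nn.Module):
    """Predicts whether a feature came from micro (0) or macro (1) data."""
    def __init__(self, dim=256):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, 128), nn.ReLU(), nn.Linear(128, 1))

    def forward(self, feat):
        return self.net(feat).squeeze(-1)

def adversarial_alignment_loss(disc, micro_feat, macro_feat):
    # The discriminator learns to tell the domains apart; reversing/negating
    # this loss for the encoders encourages domain-invariant features.
    logits = torch.cat([disc(micro_feat), disc(macro_feat)])
    labels = torch.cat([torch.zeros(len(micro_feat)), torch.ones(len(macro_feat))])
    return F.binary_cross_entropy_with_logits(logits, labels)

def supervised_contrastive_loss(feat, labels, temperature=0.1):
    """Pull together features sharing a class label, push apart the rest."""
    feat = F.normalize(feat, dim=1)
    sim = feat @ feat.T / temperature
    n = feat.size(0)
    eye = torch.eye(n, dtype=torch.bool)
    sim.masked_fill_(eye, float("-inf"))          # ignore self-similarity
    pos = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~eye
    log_prob = sim - sim.logsumexp(dim=1, keepdim=True)
    pos_counts = pos.sum(1).clamp(min=1)
    loss = -(log_prob.masked_fill(~pos, 0).sum(1) / pos_counts)
    return loss[pos.sum(1) > 0].mean()            # anchors with >=1 positive

# Usage with random stand-in features:
micro, macro = torch.randn(8, 256), torch.randn(8, 256)
y = torch.randint(0, 3, (8,))
disc = DomainDiscriminator()
print(adversarial_alignment_loss(disc, micro, macro).item())
print(supervised_contrastive_loss(micro, y).item())
```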


2021 ◽  
Author(s):  
Alan S. Cowen ◽  
Gautam Prasad ◽  
Misato Tanaka ◽  
Yukiyasu Kamitani ◽  
Vladimir Kirilyuk ◽  
...  

Core to understanding emotion are subjective experiences and their embodiment in facial behavior. Past studies have focused on six emotions and prototypical facial poses, reflecting limitations in scale and narrow assumptions about emotion. We examine 45,231 reactions to 2,185 evocative videos, largely in North America, Europe, and Japan, collecting participants’ self-reported experiences in English or Japanese and manual/automated annotations of facial movement. We uncover 21 dimensions of emotion underlying experiences reported across languages. Facial expressions predict at least 12 dimensions of experience, despite individual variability. We also identify culture-specific display tendencies—many facial movements differ in intensity in Japan compared to the U.S./Canada and Europe, but represent similar experiences. These results reveal how people actually experience and express emotion: in high-dimensional, categorical, and complex fashion.
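As a rough illustration of the kind of analysis this abstract describes, the following sketch fits cross-validated regressions from facial-movement annotations to each reported emotion dimension and reports held-out correlations. It uses synthetic stand-in data; the feature and dimension counts, the ridge model, and the evaluation are assumptions, not the study's actual methods.

```python
# Sketch (synthetic stand-in data) of testing how well facial-movement
# annotations predict self-reported emotion dimensions. A dimension counts
# as "predicted" when its held-out correlation is reliably above zero.
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
n_videos, n_facial_feats, n_dims = 500, 40, 21   # e.g., 21 emotion dimensions

facial = rng.normal(size=(n_videos, n_facial_feats))   # AU-style annotations
sparse_map = rng.normal(size=(n_facial_feats, n_dims)) * \
    (rng.random((n_facial_feats, n_dims)) < 0.2)       # sparse ground-truth map
experience = facial @ sparse_map + rng.normal(scale=2.0, size=(n_videos, n_dims))

for d in range(n_dims):
    pred = cross_val_predict(RidgeCV(alphas=np.logspace(-2, 3, 10)),
                             facial, experience[:, d], cv=5)
    r = np.corrcoef(pred, experience[:, d])[0, 1]
    print(f"dimension {d:2d}: held-out r = {r:.2f}")
```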


2021 ◽  
Vol 10 (8) ◽  
pp. 1776
Author(s):  
Gianpaolo Alvari ◽  
Cesare Furlanello ◽  
Paola Venuti

Time is a key factor in Autism Spectrum Disorder: detecting the condition as early as possible is crucial for treatment success. Despite advances in the literature, it remains difficult to identify early markers that effectively forecast the manifestation of symptoms. Artificial intelligence (AI) provides effective alternatives for behavioral screening. To this end, we investigated facial expressions in 18 autistic and 15 typically developing infants during their first ecological interactions, between 6 and 12 months of age. We employed OpenFace, an AI-based tool that systematically analyzes facial micro-movements in images, to extract the subtle dynamics of Social Smiles from unconstrained home videos. We found reduced frequency and activation intensity of Social Smiles in the children with autism. Machine learning models enabled us to map facial behavior consistently, exposing early differences that are hard to detect with the non-expert naked eye. This outcome enhances the potential of AI as a supportive tool for the clinical framework.
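For readers curious how such smile metrics might be derived in practice, here is a minimal sketch, not the authors' pipeline, that computes smile frequency and intensity from an OpenFace FeatureExtraction CSV using the AU06_r/AU12_r action-unit intensity columns. The intensity threshold, the column-name handling, and the Duchenne-style AU6+AU12 smile proxy are assumptions.

```python
# Sketch of deriving social-smile frequency/intensity from OpenFace output.
# Assumes a FeatureExtraction CSV with AU06_r/AU12_r intensity columns
# (column names may carry a leading space depending on OpenFace version).
import pandas as pd

def smile_metrics(csv_path, fps=30.0, intensity_thresh=1.0):
    df = pd.read_csv(csv_path)
    df.columns = df.columns.str.strip()      # normalize possible ' AU12_r'
    # Duchenne-style proxy: cheek raiser (AU6) + lip-corner puller (AU12).
    smiling = (df["AU06_r"] > intensity_thresh) & (df["AU12_r"] > intensity_thresh)
    # Count smile episodes as runs of consecutive smiling frames.
    episodes = (smiling & ~smiling.shift(fill_value=False)).sum()
    minutes = len(df) / fps / 60.0
    return {
        "smiles_per_minute": episodes / minutes if minutes else 0.0,
        "mean_AU12_while_smiling": df.loc[smiling, "AU12_r"].mean(),
    }

# print(smile_metrics("infant_video.csv"))   # hypothetical file name
```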


Computers ◽  
2021 ◽  
Vol 10 (4) ◽  
pp. 51
Author(s):  
Ilaria Bartolini ◽  
Andrea Di Luzio

Narcolepsy with cataplexy is a severe lifelong disorder characterized, among other symptoms, by a sudden loss of bilateral facial muscle tone triggered by emotions (cataplexy). A recent approach to diagnosing the disease is based on completely manual analysis of video recordings of patients undergoing emotional stimulation, made on-site by medical specialists looking for specific facial motor phenomena. We present here the CAT-CAD tool for automatic detection of cataplexy symptoms, with the double aim of (1) supporting neurologists in diagnosing/monitoring the disease and (2) easing the experience of patients by allowing them to make video recordings at home. CAT-CAD includes a front-end medical interface (for the playback/inspection of patient recordings and the retrieval of videos relevant to the one currently played) and a back-end AI-based video analyzer (able to automatically detect the presence of disease symptoms in a patient recording). Analysis of patients' videos for discovering disease symptoms is based on the detection of facial landmarks, and an alternative implementation of the video analyzer, exploiting deep-learning techniques, is also introduced. The performance of both approaches is experimentally evaluated on a benchmark of real patients' recordings, demonstrating the effectiveness of the proposed solutions.
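The landmark-based analyzer invites a toy illustration. In the sketch below, everything is an assumption: the dlib-style 68-point landmark layout, the eyelid-aperture proxy for muscle tone, and the thresholds are illustrative choices, not CAT-CAD's actual features. It flags frames where the tone signal drops abruptly relative to its recent baseline.

```python
# Toy sketch: flag frames where a face "tone" signal drops abruptly, as a
# crude proxy for the loss of muscle tone in cataplexy. `landmarks` is a
# (n_frames, 68, 2) array of per-frame 2D facial landmarks from any detector
# (indices follow dlib's 68-point convention; an illustrative choice).
import numpy as np

def tone_drop_events(landmarks, fps=30.0, window_s=0.5, drop_ratio=0.6):
    """Return frame indices where eyelid aperture falls well below baseline."""
    # Right-eye upper lid (points 37-38) vs. lower lid (points 40-41);
    # image y grows downward, so lower - upper is a positive aperture.
    upper = landmarks[:, 37:39, 1].mean(axis=1)
    lower = landmarks[:, 40:42, 1].mean(axis=1)
    aperture = lower - upper                      # larger = eye more open
    win = max(int(window_s * fps), 1)
    baseline = np.convolve(aperture, np.ones(win) / win, mode="same")
    prev = np.concatenate([baseline[:win], baseline[:-win]])  # win frames ago
    # An event: aperture falls below drop_ratio of its recent baseline.
    return np.flatnonzero(aperture < drop_ratio * prev)

# Usage with random stand-in data:
lm = np.abs(np.random.default_rng(1).normal(size=(300, 68, 2)))
print(tone_drop_events(lm)[:10])
```

A deep-learning variant, as the abstract notes, would replace this hand-crafted signal with features learned directly from the video frames.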


Author(s):  
Lezi Wang ◽  
Chongyang Bai ◽  
Maksim Bolonkin ◽  
Judee K. Burgoon ◽  
Norah E. Dunbar ◽  
...  
