Extending the MaqFACS to measure facial movement in Japanese macaques (Macaca fuscata) reveals a wide repertoire potential

PLoS ONE ◽  
2021 ◽  
Vol 16 (1) ◽  
pp. e0245117
Author(s):  
Catia Correia-Caeiro ◽  
Kathryn Holmes ◽  
Takako Miyabe-Nishiwaki

Facial expressions are complex and subtle signals, central for communication and emotion in social mammals. Traditionally, facial expressions have been classified as a whole, disregarding small but relevant differences in displays. Even with the same morphological configuration, different information can be conveyed depending on the species. Because face processing is hardwired in the human brain, humans are quick to attribute emotion but have difficulty registering individual facial movement units. The well-known human FACS (Facial Action Coding System) is the gold standard for objectively measuring facial expressions, and can be adapted through anatomical investigation and functional homologies for cross-species systematic comparisons. Here we aimed to develop a FACS for Japanese macaques, following established FACS methodology: first, we considered the species’ muscular facial plan; second, we ascertained functional homologies with other primate species; and finally, we categorised each independent facial movement into Action Units (AUs). Owing to similarities between the facial musculature of rhesus and Japanese macaques, the MaqFACS (previously developed for rhesus macaques) was used as a basis for extending the FACS tool to Japanese macaques, while highlighting the differences in morphology and appearance changes between the two species. We documented 19 AUs, 15 Action Descriptors (ADs) and 3 Ear Action Units (EAUs) in Japanese macaques, with all movements of the MaqFACS found in Japanese macaques. New movements were also observed, indicating a slightly larger repertoire than in rhesus or Barbary macaques. The MaqFACS extension for Japanese macaques reported here, used together with the MaqFACS, constitutes a valuable objective tool for the systematic and standardised analysis of facial expressions in Japanese macaques. It will now allow the investigation of the evolution of communication and emotion in primates, as well as contribute to improving the welfare of individuals, particularly in captivity and laboratory settings.

2014 ◽  
Vol 20 (3) ◽  
pp. 302-312 ◽  
Author(s):  
Aleksey I. Dumer ◽  
Harriet Oster ◽  
David McCabe ◽  
Laura A. Rabin ◽  
Jennifer L. Spielman ◽  
...  

Given associations between facial movement and voice, the potential of the Lee Silverman Voice Treatment (LSVT) to alleviate decreased facial expressivity, termed hypomimia, in Parkinson's disease (PD) was examined. Fifty-six participants (16 PD participants who underwent LSVT, 12 PD participants who underwent articulation treatment (ARTIC), 17 untreated PD participants, and 11 controls without PD) produced monologues about happy emotional experiences at pre- and post-treatment timepoints (“T1” and “T2,” respectively), 1 month apart. The groups of LSVT, ARTIC, and untreated PD participants were matched on demographic and health status variables. The frequency and variability of facial expressions (Frequency and Variability) observable on 1-min monologue video recordings were measured using the Facial Action Coding System (FACS). At T1, the Frequency and Variability of participants with PD were significantly lower than those of controls. Frequency and Variability increases of LSVT participants from T1 to T2 were significantly greater than those of ARTIC or untreated participants. Whereas the Frequency and Variability of ARTIC participants at T2 were significantly lower than those of controls, LSVT participants did not significantly differ from controls on these variables at T2. These findings suggest that LSVT reduces parkinsonian hypomimia; their implications for PD-related psychosocial problems are considered. (JINS, 2014, 20, 1–11)
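
The paper does not publish its scoring pipeline, but the two FACS-derived measures are simple to operationalize. A minimal sketch in Python, assuming a coded video is represented as a list of (AU, onset, offset) events; this data format is a hypothetical illustration, not the authors' own:

```python
from typing import List, Tuple

Event = Tuple[str, float, float]  # (AU label, onset seconds, offset seconds)

def frequency(events: List[Event]) -> int:
    """Frequency: total number of AU events in the recording."""
    return len(events)

def variability(events: List[Event]) -> int:
    """Variability: number of distinct AUs used at least once."""
    return len({au for au, _, _ in events})

# Hypothetical coding of one 1-min happy monologue.
coded = [("AU6", 2.1, 4.0), ("AU12", 2.3, 5.5),
         ("AU12", 30.0, 32.2), ("AU25", 31.0, 31.8)]
print(frequency(coded), variability(coded))  # 4 events, 3 distinct AUs
```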


2018 ◽  
Vol 7 (3.20) ◽  
pp. 284
Author(s):  
Hamimah Ujir ◽  
Irwandi Hipiny ◽  
D N.F. Awang Iskandar

Most works quantifying facial deformation are based on the Action Units (AUs) provided by the Facial Action Coding System (FACS), which describes facial expressions in terms of forty-six component movements. Each AU corresponds to the movement of individual facial muscles. This paper presents a rule-based approach to classifying AUs based on certain facial features. This work covers only the deformation of facial features in posed Happy and Sad expressions obtained from the BU-4DFE database. Different studies refer to different combinations of AUs that form the Happy and Sad expressions. According to the FACS rules outlined in this work, an AU has more than one facial property that needs to be observed. An intensity comparison and analysis of the AUs involved in the Sad and Happy expressions are presented. Additionally, a dynamic analysis of the AUs is performed to determine the temporal segments of expressions, i.e., the durations of the onset, apex and offset phases. Our findings show that AU15 (for the Sad expression) and AU12 (for the Happy expression) exhibit consistent facial feature deformation across all properties during the expression period. For AU1 and AU4, however, the intensity of their properties varies during the expression period.
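
The dynamic analysis described above can be made concrete. A minimal sketch, assuming an AU intensity time series per activation; the baseline and apex thresholds below are illustrative assumptions, as the paper's actual values are not given:

```python
import numpy as np

def temporal_segments(intensity: np.ndarray, fps: float = 25.0,
                      apex_frac: float = 0.9):
    """Split one AU activation into onset, apex and offset durations (s).

    onset:  first frame above baseline up to the first near-peak frame
    apex:   frames at or above apex_frac * peak intensity
    offset: after the last near-peak frame until return to baseline
    """
    peak = intensity.max()
    active = np.flatnonzero(intensity > 0.05 * peak)      # above-baseline frames
    apex = np.flatnonzero(intensity >= apex_frac * peak)  # near-peak frames
    onset = (apex[0] - active[0]) / fps
    apex_dur = (apex[-1] - apex[0] + 1) / fps
    offset = (active[-1] - apex[-1]) / fps
    return onset, apex_dur, offset

# Synthetic AU12 intensity: rise to a peak at 1 s, fall by 2 s (25 fps).
t = np.linspace(0.0, 2.0, 50)
series = np.sin(np.pi * t / 2) ** 2
print(temporal_segments(series))
```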


2018 ◽  
Vol 4 (10) ◽  
pp. 119 ◽  
Author(s):  
Adrian Davison ◽  
Walied Merghani ◽  
Moi Hoon Yap

Micro-expressions are brief spontaneous facial expressions that appear on a face when a person conceals an emotion, making them different from normal facial expressions in subtlety and duration. Currently, emotion classes within the CASME II dataset (Chinese Academy of Sciences Micro-expression II) are based on Action Units and self-reports, creating conflicts during machine learning training. We show that classifying expressions using Action Units, instead of predicted emotion, removes the potential bias of human reporting. The proposed classes are tested using LBP-TOP (Local Binary Patterns from Three Orthogonal Planes), HOOF (Histograms of Oriented Optical Flow) and HOG 3D (3D Histogram of Oriented Gradient) feature descriptors. The experiments are evaluated on two benchmark FACS (Facial Action Coding System) coded datasets: CASME II and SAMM (A Spontaneous Micro-Facial Movement). The best result achieves 86.35% accuracy when classifying the proposed 5 classes on CASME II using HOG 3D, outperforming the state-of-the-art 5-class emotion-based classification on CASME II. Results indicate that classification based on Action Units provides an objective method to improve micro-expression recognition.
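
The evaluation pipeline can be sketched as follows. Feature extraction (LBP-TOP, HOOF, HOG 3D) is omitted; the placeholder arrays, linear SVM and leave-one-subject-out protocol are assumptions for illustration, not the authors' exact setup:

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 177))           # precomputed descriptors, one row per clip (placeholder)
y = rng.integers(0, 5, size=100)          # 5 AU-based classes (placeholder labels)
subjects = rng.integers(0, 20, size=100)  # subject IDs for leave-one-subject-out folds

preds = np.empty_like(y)
for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups=subjects):
    clf = LinearSVC(max_iter=10000).fit(X[train_idx], y[train_idx])
    preds[test_idx] = clf.predict(X[test_idx])

print(f"LOSO accuracy: {accuracy_score(y, preds):.2%}")
```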


2001 ◽  
Vol 25 (3) ◽  
pp. 268-278 ◽  
Author(s):  
Dario Galati ◽  
Renato Miceli ◽  
Barbara Sini

We investigate the facial expression of emotions in very young congenitally blind children to find out whether these are objectively and subjectively recognisable. We also try to see whether the adequacy of the facial expression of emotions changes as the children get older. We video recorded the facial expressions of 10 congenitally blind children and 10 sighted children (as a control group) in seven everyday situations considered as emotion elicitors. The recorded sequences were analysed according to the Maximally Discriminative Facial Movement Coding System (Max; Izard, 1979) and then judged by 280 decoders who used four scales (two dimensional and two categorical) for their answers. The results showed that all the subjects (both the blind and the sighted) were able to express their emotions facially, though not always according to the theoretically expected pattern. Recognition of the various expressions was fairly accurate, but some emotions were systematically confused with others. The decoders’ answers to the dimensional and categorical scales were similar for both blind and sighted subjects. Our findings on objective and subjective judgements show that there was no decrease in the facial expressiveness of the blind children in the period of development considered.


2015 ◽  
Vol 738-739 ◽  
pp. 666-669
Author(s):  
Yao Feng Xue ◽  
Hua Li Sun ◽  
Ye Duan

The Candide face model and the Facial Action Coding System (FACS) are introduced in this paper. The relations between the positions of the feature points of the Candide-3 model and the action units of FACS are studied. An application system for computing the facial expressions of students during experimental teaching is developed, and its feasibility is demonstrated.
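
A minimal sketch of the core idea, relating displacements of Candide-3 feature points to FACS action units; the landmark indices and threshold rules below are illustrative assumptions, not the paper's calibrated mapping:

```python
import numpy as np

# Hypothetical indices of four vertices in the 113-point Candide-3 mesh.
MOUTH_L, MOUTH_R, BROW_L, BROW_R = 31, 64, 21, 54

def active_aus(neutral: np.ndarray, current: np.ndarray, thr: float = 0.02):
    """Infer AUs from vertex displacements; points are (x, y) in normalized
    image coordinates, so upward movement is a negative y displacement."""
    d = current - neutral
    aus = []
    if d[MOUTH_L, 1] < -thr and d[MOUTH_R, 1] < -thr:
        aus.append("AU12")  # lip corner puller: both mouth corners move up
    if d[BROW_L, 1] < -thr and d[BROW_R, 1] < -thr:
        aus.append("AU1")   # inner brow raiser: inner brow points move up
    return aus

neutral = np.zeros((113, 2))
smile = neutral.copy()
smile[[MOUTH_L, MOUTH_R], 1] = -0.05  # raise both mouth corners
print(active_aus(neutral, smile))     # ['AU12']
```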


2021 ◽  
Vol 1 (1) ◽  
pp. 104-111
Author(s):  
Shaimaa H. Abd ◽  
Ivan A. Hashim ◽  
Ali S. Jalal

Deception detection is becoming an interesting field in different areas related to security, criminal investigation, law enforcement and terrorism detection. Recently, non-verbal features have become essential to the deception detection process. One of the most important kinds of these features is facial expression. The importance of these expressions comes from the idea that the human face displays different expressions, each of which is directly related to a certain state. In this research paper, facial expression data were collected as video clips from 102 participants (25 women and 77 men). There are 504 clips of lie responses and 384 of truth responses (888 video clips in total). Facial expressions in the form of Action Units (AUs) were extracted for each frame of each video clip. Eighteen AUs, encoded according to the Facial Action Coding System (FACS), were considered: AU 1, 2, 4, 5, 6, 7, 9, 10, 12, 14, 15, 17, 20, 23, 25, 26, 28 and 45. Based on the collected data, only six AUs proved most effective, having a direct impact on discriminating between liars and truth tellers: AU 6, 7, 10, 12, 14 and 28.
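
One reasonable way to arrive at such a subset is to score each AU's discriminative power over per-clip occurrence vectors. A minimal sketch, assuming binary AU-occurrence features and a chi-squared ranking; the selection method is an assumption, and the data below are placeholders, not the paper's recordings:

```python
import numpy as np
from sklearn.feature_selection import chi2

AUS = [1, 2, 4, 5, 6, 7, 9, 10, 12, 14, 15, 17, 20, 23, 25, 26, 28, 45]

rng = np.random.default_rng(1)
X = rng.integers(0, 2, size=(888, 18))  # binary AU occurrence per clip (placeholder)
y = np.array([1] * 504 + [0] * 384)     # 1 = lie response, 0 = truth response

# Rank AUs by chi-squared association with the lie/truth label.
scores, _ = chi2(X, y)
ranking = sorted(zip(AUS, scores), key=lambda p: p[1], reverse=True)
print("Most discriminative AUs:", [au for au, _ in ranking[:6]])
```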


Primates ◽  
2021 ◽  
Author(s):  
Madeleine Geiger

Human impact influences morphological variation in animals, as documented in many captive and domestic animal populations. However, there are different levels of human impact, and their influence on the pattern and rate of morphological variation remains unclear. This study contributes to the ongoing debate via the examination of cranial and mandibular shape and size variation and pace of change in Japanese macaques (Macaca fuscata). This species is ideal for tackling such questions because different wild, wild-provisioned, and captive populations have been monitored and collected over seven decades. Linear measurements were taken on 70 skulls from five populations, grouped into three ‘human impact groups’ (wild, wild-provisioned, and captive). This made it possible to investigate the pattern and pace of skull form changes among the human impact groups as well as over time within the populations. It was found that the overall skull shape tends to differ among the human impact groups, with captive macaques having relatively longer rostra than wild ones. Whether these differences are a result of geographic variation or variable human impact, related to nutritional supply and mechanical properties of the diet, is unclear. However, this pattern of directed changes did not seem to hold when the single captive populations were examined in detail. Although environmental conditions have probably been similar for the two examined captive populations (same captive locality), skull shape changes over the first generations in captivity were mostly different. This varying pattern, together with a consistent decrease in body size in the captive populations over generations, points to genetic drift playing a role in shaping skull shape and body size in captivity. In the captive groups investigated here, the rates of change were found to be high compared to literature records from settings featuring different degrees of human impact in different species, although they still lie in the range of field studies in a natural context. This adds to the view that human impact might not necessarily lead to particularly fast rates of change.


2010 ◽  
Vol 35 (1) ◽  
pp. 1-16 ◽  
Author(s):  
Etienne B. Roesch ◽  
Lucas Tamarit ◽  
Lionel Reveret ◽  
Didier Grandjean ◽  
David Sander ◽  
...  

PLoS ONE ◽  
2021 ◽  
Vol 16 (8) ◽  
pp. e0255570
Author(s):  
Motonori Kurosumi ◽  
Koji Mizukoshi ◽  
Maya Hongo ◽  
Miyuki G. Kamachi

We form impressions of others by observing their constant and dynamically shifting facial expressions during conversation and other daily life activities. However, conventional aging research has mainly considered the changing characteristics of the skin, such as wrinkles and age spots, in very limited states of static faces. In order to elucidate the range of aging impressions that we form in daily life, it is necessary to consider the effects of facial movement. This study investigated the effects of facial movement on age impressions. An age perception test using Japanese women as face models was employed to verify the effects of the models’ age-dependent facial movements on age impressions in 112 participants (all women, aged 20–49 years) as observers. Further, the observers’ gaze was analyzed to identify the facial areas of interest during age perception. The results showed that cheek movement affects age impressions, and that this effect increases with the model’s age. These findings will facilitate the development of new means of conveying a more youthful impression by approaching anti-aging from the different viewpoint of facial movement.

