facial motion
Recently Published Documents


TOTAL DOCUMENTS: 207 (35 in the last five years)
H-INDEX: 21 (3 in the last five years)

2021 ◽  
Author(s):  
Rémi Rigal ◽  
Jacques Chodorowski ◽  
Benoît Zerr

2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Gunwoo Park ◽  
Kyoung Min Lee ◽  
Seungbum Koo

Gait, the style of human walking, has been studied as a behavioral characteristic of the individual. Several studies have used gait to identify individuals with the aid of machine learning and computer vision techniques, but few have examined the nature of gait itself, such as its identification power or uniqueness. This study aims to quantify the uniqueness of gait within a cohort. Three-dimensional full-body joint kinematics were recorded during normal walking trials from 488 subjects using a motion capture system. The joint angles over the gait cycle were converted into gait vectors; four gait vectors were obtained from each subject, and all gait vectors were pooled together. Two gait vectors were then drawn at random from the pool and tested to see whether they could be accurately classified as coming from the same person or not. Gait pairs from the cohort were classified with an accuracy of 99.71% using a support vector machine with a radial basis function kernel. A person's gait is thus about as unique as their facial motion and finger impedance, though not as unique as their fingerprints.
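The abstract describes a pairwise same/different verification setup rather than per-subject classification. A minimal sketch of that setup is below, using scikit-learn's RBF-kernel SVM on synthetic stand-in data; the pair representation (the absolute difference of the two gait vectors) is our assumption for illustration, since the abstract does not specify how pairs are encoded.

```python
# Illustrative sketch of same/different gait-pair classification with an
# RBF-kernel SVM. Data are synthetic stand-ins, not the study's mocap data.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# 488 subjects x 4 gait vectors each (e.g., joint angles resampled over one
# gait cycle and concatenated); a subject-specific offset makes gaits distinct.
n_subjects, vectors_per_subject, dim = 488, 4, 100
gait = rng.normal(size=(n_subjects, vectors_per_subject, dim))
gait += rng.normal(size=(n_subjects, 1, dim)) * 3.0

pairs, labels = [], []
for _ in range(5000):
    if rng.random() < 0.5:                       # same-subject pair
        s = rng.integers(n_subjects)
        i, j = rng.choice(vectors_per_subject, size=2, replace=False)
        pairs.append(np.abs(gait[s, i] - gait[s, j])); labels.append(1)
    else:                                        # different-subject pair
        s, t = rng.choice(n_subjects, size=2, replace=False)
        pairs.append(np.abs(gait[s, rng.integers(4)] - gait[t, rng.integers(4)]))
        labels.append(0)

X_train, X_test, y_train, y_test = train_test_split(
    np.array(pairs), np.array(labels), test_size=0.2, random_state=0)
clf = SVC(kernel="rbf").fit(X_train, y_train)
print("same/different accuracy:", clf.score(X_test, y_test))
```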


2021 ◽  
Author(s):  
Connor Tom Keating ◽  
Sophie Sowden ◽  
Jennifer Cook

Recent developments suggest that autistic individuals may require static and dynamic angry expressions to be of higher emotional intensity in order to identify them successfully. In the case of dynamic stimuli, autistic individuals require angry facial motion to be faster. It is therefore plausible that autistic individuals do not have a 'deficit' in angry expression recognition, but rather that their internal representation of these expressions is characterized by very high-speed movement. In this pre-registered study, 25 autistic and 25 non-autistic adults matched on age, gender, non-verbal reasoning and alexithymia completed a novel emotion-based task employing dynamic displays of happy, angry and sad point light facial (PLF) expressions. On each trial, participants moved a slider to manipulate the speed of a PLF stimulus until it moved at a speed that, in their 'mind's eye', was typical of happy, angry or sad expressions. Participants attributed the highest speeds to angry, then happy, then sad facial motion: they increased the speed of angry and happy expressions by 41% and 27% respectively, and decreased the speed of sad expressions by 18%. This suggests that participants hold 'caricatured' internal representations of emotion, in which emotion-related kinematic cues are over-emphasized. There were no differences between autistic and non-autistic individuals in the speeds attributed to angry, happy or sad facial motion, whether for full faces or partial faces showing only the eyes or mouth. Consequently, we find no evidence that autistic adults possess atypically fast internal representations of angry expressions.
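For intuition, a slider-controlled speed manipulation of a point-light stimulus amounts to rescaling the time axis of each marker trajectory. The sketch below is a hedged illustration under that assumption; the study's actual stimulus code is not described in the abstract, and all shapes and rates here are placeholders.

```python
# Hypothetical sketch: playing a point-light facial (PLF) motion sequence
# faster or slower by uniform temporal resampling of marker trajectories.
import numpy as np

def rescale_speed(trajectory: np.ndarray, fps: float, speed: float) -> np.ndarray:
    """Resample an (n_frames, n_markers, 2) trajectory so that, at the same
    playback fps, it runs `speed` times faster (speed > 1) or slower."""
    n_frames = trajectory.shape[0]
    t_old = np.arange(n_frames) / fps
    n_new = max(2, int(round(n_frames / speed)))
    t_new = np.linspace(0.0, t_old[-1], n_new)
    flat = trajectory.reshape(n_frames, -1)
    resampled = np.stack([np.interp(t_new, t_old, flat[:, k])
                          for k in range(flat.shape[1])], axis=1)
    return resampled.reshape(n_new, *trajectory.shape[1:])

motion = np.random.rand(120, 30, 2)                    # 2 s, 30 markers, 60 fps
angry_like = rescale_speed(motion, fps=60, speed=1.41)  # +41%, as for anger
sad_like = rescale_speed(motion, fps=60, speed=0.82)    # -18%, as for sadness
```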


2021 ◽  
Author(s):  
Guangcheng Wang ◽  
Zhongyuan Wang ◽  
Kui Jiang ◽  
Baojin Huang ◽  
Zheng He ◽  
...  

Author(s):  
Connor T. Keating ◽  
Dagmar S. Fraser ◽  
Sophie Sowden ◽  
Jennifer L. Cook

To date, studies have not established whether autistic and non-autistic individuals differ in emotion recognition from facial motion cues when matched in terms of alexithymia. Here, autistic and non-autistic adults (N = 60) matched on age, gender, non-verbal reasoning ability and alexithymia completed an emotion recognition task that employed dynamic point light displays of emotional facial expressions manipulated in terms of speed and spatial exaggeration. Autistic participants exhibited significantly lower accuracy for angry, but not happy or sad, facial motion with unmanipulated speed and spatial exaggeration. Autistic, and not alexithymic, traits were predictive of accuracy for angry facial motion with unmanipulated speed and spatial exaggeration. Alexithymic traits, in contrast, were predictive of the magnitude of both correct and incorrect emotion ratings.
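The two stimulus manipulations named here, speed and spatial exaggeration, can both be expressed as simple transforms of marker trajectories. Speed rescaling is shown in the sketch above; the sketch below illustrates spatial exaggeration, which we take to mean scaling each frame's displacement from a neutral reference frame. That interpretation is our assumption, not a detail taken from the paper.

```python
# Hypothetical sketch: spatial exaggeration of point-light facial motion as
# gain on displacement from a neutral reference frame.
import numpy as np

def exaggerate(motion: np.ndarray, neutral: np.ndarray, gain: float) -> np.ndarray:
    """Scale marker displacements from the neutral face by `gain`
    (gain > 1 exaggerates the expression, gain < 1 attenuates it)."""
    return neutral + gain * (motion - neutral)

motion = np.random.rand(120, 30, 2)   # frames x markers x (x, y), placeholder
neutral = motion[0]                   # first frame taken as neutral reference
exaggerated = exaggerate(motion, neutral, gain=1.5)
attenuated = exaggerate(motion, neutral, gain=0.5)
```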


PLoS ONE ◽  
2021 ◽  
Vol 16 (4) ◽  
pp. e0249961
Author(s):  
Donghoon Lee ◽  
Chihiro Tanikawa ◽  
Takashi Yamashiro

Patients with repaired unilateral cleft lip and palate (UCLP) often show dysmorphology and distorted facial motion clinically, which can cause psychological issues, yet no report has clarified the details of this distorted facial motion or its possible causative factors. In this study, we hypothesized that the physical properties of the scar and the surrounding facial soft tissue affect facial displacement while smiling in patients with UCLP (Cleft group). We therefore examined three-dimensional (3D) facial displacement while smiling in the Cleft and Control groups, determined whether the physical properties of facial soft tissues differ between the two groups, and examined the relationship between these physical properties and 3D facial displacement while smiling. Three-dimensional images at rest and while smiling, as well as the physical properties of the facial tissue (e.g. viscoelasticity), were recorded for both groups, and between-group differences in physical properties and facial displacement while smiling were examined. To relate facial surface displacement while smiling to the physical properties, a canonical correlation analysis (CCA) was conducted. Three typical abnormal features of smiling were noted in the Cleft group compared with the Control group: less upward and backward displacement over the scar area; downward movement of the lower lip; and greater asymmetric displacement, including greater lateral displacement of the subalar on the cleft side while smiling and greater backward alar displacement on the non-cleft side. The Cleft group also showed a greater elastic modulus at the upper lip on the cleft side, suggesting hardened soft tissue at the scar. The CCA showed that this hard scar significantly affected facial displacement, inducing less upward and backward displacement over the scar area and downward movement of the lower lip in patients with UCLP (correlation coefficient = 0.82, p = 0.04); however, there was no significant relationship between the greater lateral movement of the nasal ala and the physical properties of the skin at the scar. Based on these results, personalizing treatment for dysfunction in facial expression generation may require quantifying both the 3D facial morphology and the physical properties of the facial soft tissues.
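A hedged sketch of the CCA step follows: finding the linear combinations of soft-tissue physical properties and of 3D facial displacements that are maximally correlated. The variable layout (patients by properties, patients by displacement features) is assumed for illustration, and scikit-learn's CCA stands in for whatever software the authors used.

```python
# Illustrative CCA relating physical properties to smile displacement.
# Data are random stand-ins; the paper reports a first canonical
# correlation of 0.82 (p = 0.04) on real measurements.
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(1)
n_patients = 30
properties = rng.normal(size=(n_patients, 5))     # e.g., elastic modulus, viscosity
displacement = rng.normal(size=(n_patients, 12))  # e.g., landmark displacements

cca = CCA(n_components=1)
u, v = cca.fit_transform(properties, displacement)        # canonical scores
r = np.corrcoef(u[:, 0], v[:, 0])[0, 1]
print(f"first canonical correlation: {r:.2f}")
```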


Author(s):  
Manasi Kshirsagar ◽  
Bhagyashree B Hoite ◽  
Prashika Sonawane ◽  
Pooja Malpure

Speech-driven facial animation can be regarded as speech-to-face translation, and speech-driven facial motion synthesis involves speech analysis and face modeling. The method uses a still image of a person together with a speech signal to produce an animation of a talking character. Our method uses a GAN classifier to obtain better lip synchronization with the audio; the GAN methodology also helps produce realistic facial expressions, making the talking character more convincing. The system takes into account factors such as lip-syncing accuracy, sharpness, and the ability to generate high-quality faces and natural blinks. GANs are widely used for image generation because the adversarial loss yields sharper, more detailed images, and they extend naturally from images to video.
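To make the adversarial-loss idea concrete, here is a minimal PyTorch sketch of a frame discriminator and the two loss terms. The architecture and shapes are placeholders of our own; the actual talking-face pipeline (audio encoder, identity encoder, frame decoder, lip-sync discriminator) is not specified in the summary above.

```python
# Hypothetical sketch of an adversarial loss for generated video frames.
import torch
import torch.nn as nn

disc = nn.Sequential(                 # toy discriminator over 64x64 RGB frames
    nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Flatten(), nn.Linear(64 * 16 * 16, 1))

bce = nn.BCEWithLogitsLoss()
real_frames = torch.rand(8, 3, 64, 64)   # ground-truth frames (placeholder)
fake_frames = torch.rand(8, 3, 64, 64)   # generator output (placeholder)

# Discriminator learns real -> 1, fake -> 0.
d_loss = bce(disc(real_frames), torch.ones(8, 1)) + \
         bce(disc(fake_frames.detach()), torch.zeros(8, 1))

# Generator learns to fool the discriminator (fake -> 1); this pressure
# toward "real-looking" frames is the sharpening effect cited above.
g_loss = bce(disc(fake_frames), torch.ones(8, 1))
```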


Author(s):  
Xinyu Li ◽  
Guangshun Wei ◽  
Jie Wang ◽  
Yuanfeng Zhou

Micro-expression recognition is a substantive cross-disciplinary study of psychology and computer science with a wide range of applications (e.g., psychological and clinical diagnosis, emotion analysis, and criminal investigation). However, the subtle and diverse changes in facial muscles make it difficult for existing methods to extract effective features, which limits improvements in micro-expression recognition accuracy. We therefore propose a multi-scale joint feature network based on optical flow images for micro-expression recognition. First, we generate an optical flow image that reflects subtle facial motion information. The optical flow image is then fed into the multi-scale joint network for feature extraction and classification. The proposed joint feature module (JFM) integrates features from different layers, which helps capture micro-expression features of different amplitudes. To improve the recognition ability of the model, we also adopt a strategy that fuses the feature prediction results of the three JFMs with those of the backbone network. Experimental results show that our method outperforms state-of-the-art methods on three benchmark datasets (SMIC, CASME II, and SAMM) and a combined dataset (3DB).
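The first stage, computing a dense optical-flow image between two face frames, can be sketched with OpenCV as below. Farnebäck flow and the onset/apex frame pairing are our illustrative assumptions; the abstract does not state which optical-flow algorithm or frame selection the authors use.

```python
# Illustrative sketch: dense optical flow between two micro-expression frames,
# packed into an HSV-coded flow image suitable as CNN input.
import cv2
import numpy as np

onset = np.random.randint(0, 256, (128, 128), dtype=np.uint8)  # placeholder
apex = np.random.randint(0, 256, (128, 128), dtype=np.uint8)   # placeholder

flow = cv2.calcOpticalFlowFarneback(
    onset, apex, None,
    pyr_scale=0.5, levels=3, winsize=15,
    iterations=3, poly_n=5, poly_sigma=1.2, flags=0)

# Encode direction as hue and magnitude as value, a common visualization.
mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])
hsv = np.zeros((128, 128, 3), dtype=np.uint8)
hsv[..., 0] = (ang * 180 / np.pi / 2).astype(np.uint8)
hsv[..., 1] = 255
hsv[..., 2] = cv2.normalize(mag, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
flow_image = cv2.cvtColor(hsv, cv2.COLOR_HSV2BGR)   # network input
```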


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Alan Johnston ◽  
Ben B. Brown ◽  
Ryan Elson

We asked how dynamic facial features are perceptually grouped. To address this question, we varied the timing of mouth movements relative to eyebrow movements while measuring the detectability of a small temporal misalignment between a pair of oscillating eyebrows (an 'eyebrow wave'). Eyebrow wave detection was worse when the eyebrows and mouth moved synchronously, and this effect proved specific to stimuli presented to the right visual field, implicating left-lateralised visual speech areas. Adaptation has long been used in low-level vision to establish the presence of separable visual channels. Adaptation to moving eyebrows and mouths with various relative timings reduced eyebrow wave detection, but only when the adapting mouth and eyebrows moved asynchronously. Inverting the face led to a greater reduction in detection after adaptation, particularly for asynchronous facial motion at test. We conclude that synchronous motion binds dynamic facial features, whereas asynchronous motion releases them, allowing adaptation to impair eyebrow wave detection.
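For readers unfamiliar with the stimulus, the sketch below illustrates one plausible construction: sinusoidally oscillating brows with a small phase lag between them (the eyebrow wave to be detected) and a mouth oscillation whose delay relative to the brows is varied. All frequencies and offsets are hypothetical; the authors' actual stimulus parameters are not given in the abstract.

```python
# Hypothetical sketch of the eyebrow-wave stimulus as phase-offset sinusoids.
import numpy as np

fps, duration, f = 60, 2.0, 1.5          # 1.5 Hz oscillation (assumed)
t = np.arange(0, duration, 1 / fps)

def brow_pair(misalignment_s: float):
    """Vertical positions of left/right brows; the right brow lags by
    `misalignment_s` seconds, the misalignment observers must detect."""
    left = np.sin(2 * np.pi * f * t)
    right = np.sin(2 * np.pi * f * (t - misalignment_s))
    return left, right

def mouth(relative_delay_s: float):
    """Mouth opening, delayed relative to the brows; 0 = synchronous."""
    return np.sin(2 * np.pi * f * (t - relative_delay_s))

left, right = brow_pair(misalignment_s=0.05)   # 50 ms eyebrow wave
mouth_sync = mouth(0.0)                        # condition that hurt detection
mouth_async = mouth(0.25)                      # asynchronous condition
```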

