Development of a New Documentation System for Facial Movements as a Basis for the International Registry for Neuromuscular Reconstruction in the Face

1994 ◽  
Vol 93 (7) ◽  
pp. 1334 ◽  
Author(s):  
Manfred Frey ◽  
Andreas Jenny ◽  
Pietro Giovanoli ◽  
Edgar Stüssi
Author(s):  
David L Freytag ◽  
Michael G Alfertshofer ◽  
Konstantin Frank ◽  
Dmitry V Melnikov ◽  
Nicholas Moellhoff ◽  
...  

Abstract
Background: Our understanding of the functional anatomy of the face is constantly improving. To date, it is unclear whether the anatomic location of the line of ligaments has any functional importance during normal facial movements such as smiling. Objectives: To identify differences in facial movements between the medial and lateral midface by means of skin vector displacement analyses derived from 3D imaging, and to ascertain whether the line of ligaments has both structural and functional significance in these movements. Methods: The study sample consisted of 21 healthy volunteers (9 female, 12 male) of Caucasian ethnic background with a mean age of 30.6 (8.3) years and a mean BMI of 22.57 (2.5) kg/m². 3D images of the volunteers' faces in repose and during smiling (Duchenne type) were taken, and 3D imaging-based skin vector displacement analyses were conducted. Results: The mean horizontal skin displacement was 0.08 (2.0) mm in the medial midface (lateral movement) and -0.08 (1.96) mm in the lateral midface (medial movement) (p = 0.711). The mean vertical skin displacement (cranial movement of skin toward the forehead/temple) was 6.68 (2.4) mm in the medial midface, whereas it was 5.20 (2.07) mm in the lateral midface (p = 0.003). Conclusions: The results of this study provide objective evidence for an antagonistic skin movement between the medial and the lateral midface. The functional boundary identified by 3D imaging corresponds to the anatomic location of the line of ligaments.
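For readers who want to reproduce the core computation, the displacement analysis reduces to subtracting registered 3D point positions in repose from those during smiling and averaging the horizontal and vertical components per region. The following is a minimal Python sketch under the assumption that point-to-point correspondence between the two scans has already been established by the imaging software; the axis conventions and example coordinates are hypothetical, not taken from the study.

```python
import numpy as np

def skin_displacement(repose_pts, smile_pts):
    """Mean horizontal and vertical skin displacement between two
    registered 3D point sets (N x 3 arrays, in mm).

    Assumes point-to-point correspondence has already been established
    by the 3D imaging software, with x = medial-to-lateral and
    y = caudal-to-cranial axes (an assumed convention).
    """
    d = smile_pts - repose_pts          # per-point displacement vectors
    horizontal = d[:, 0]                # lateral (+) / medial (-) shift
    vertical = d[:, 1]                  # cranial (+) / caudal (-) shift
    return horizontal.mean(), vertical.mean()

# Example: analyze one midface region, mirroring the study's
# region-wise comparison (coordinates are synthetic).
medial_repose = np.random.rand(100, 3) * 10
medial_smile = medial_repose + np.array([0.08, 6.68, 0.0])
print(skin_displacement(medial_repose, medial_smile))
```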


2021 ◽  
Vol 11 (4) ◽  
pp. 1428
Author(s):  
Haopeng Wu ◽  
Zhiying Lu ◽  
Jianfeng Zhang ◽  
Xin Li ◽  
Mingyue Zhao ◽  
...  

This paper addresses the problem of Facial Expression Recognition (FER), focusing on subtle, less obvious facial movements. Traditional methods often overfit or lose information because of insufficient data and manual feature selection. In contrast, our proposed network, called the Multi-features Cooperative Deep Convolutional Network (MC-DCN), attends both to the overall features of the face and to the trends of its key parts. The first stage is the processing of the video data. An ensemble of regression trees (ERT) is used to obtain the overall contour of the face, and an attention model is then used to pick out the parts of the face that are most responsive to expressions. The combination of these two methods yields what can be called a local feature map. The video data are then fed to the MC-DCN, which contains parallel sub-networks: while the overall spatiotemporal characteristics of facial expressions are captured from the image sequence, the selection of key parts allows better learning of the changes brought about by subtle facial movements. By combining local and global features, the proposed method acquires more information, leading to better performance. Experimental results show that MC-DCN achieves recognition rates of 95%, 78.6%, and 78.3% on the SAVEE, MMI, and edited GEMEP datasets, respectively.
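The ERT step in the first stage corresponds to the ensemble-of-regression-trees face alignment of Kazemi and Sullivan, which is what dlib's shape predictor implements. The sketch below shows how a face contour could be extracted with that off-the-shelf model; it illustrates the landmark step only, not the authors' pipeline, and the standard pre-trained 68-point model file is assumed to be available locally.

```python
import cv2
import dlib

# dlib's shape predictor implements ensemble-of-regression-trees (ERT)
# face alignment; the standard pre-trained 68-point model is assumed
# to have been downloaded separately.
detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def face_contour(frame):
    """Return the 68 facial landmarks for the first detected face,
    or None if no face is found in the BGR video frame."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = detector(gray)
    if not faces:
        return None
    shape = predictor(gray, faces[0])
    return [(p.x, p.y) for p in shape.parts()]
```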


Author(s):  
Alexander Mielke ◽  
Bridget M. Waller ◽  
Claire Pérez ◽  
Alan V. Rincon ◽  
Julie Duboscq ◽  
...  

Abstract
Understanding facial signals in humans and other species is crucial for understanding the evolution, complexity, and function of the face as a communication tool. The Facial Action Coding System (FACS) enables researchers to measure facial movements accurately, but we currently lack tools to reliably analyse data and efficiently communicate results. Network analysis can provide a way to use the information encoded in FACS datasets: by treating individual AUs (the smallest units of facial movements) as nodes in a network and their co-occurrence as connections, we can analyse and visualise differences in the use of combinations of AUs in different conditions. Here, we present ‘NetFACS’, a statistical package that uses occurrence probabilities and resampling methods to answer questions about the use of AUs, AU combinations, and the facial communication system as a whole in humans and non-human animals. Using highly stereotyped facial signals as an example, we illustrate some of the current functionalities of NetFACS. We show that very few AUs are specific to certain stereotypical contexts; that AUs are not used independently from each other; that graph-level properties of stereotypical signals differ; and that clusters of AUs allow us to reconstruct facial signals, even when blind to the underlying conditions. The flexibility and widespread use of network analysis allows us to move away from studying facial signals as stereotyped expressions, and towards a dynamic and differentiated approach to facial communication.
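NetFACS itself is distributed as an R package; the network construction it builds on can nevertheless be illustrated independently. The Python sketch below treats AUs as nodes and their co-occurrence probability across observations as edge weights, using networkx. The observation data are invented for illustration, and the code does not reproduce NetFACS's actual API or its resampling statistics.

```python
from itertools import combinations
import networkx as nx

# Each observation is the set of AUs active in one facial event
# (hypothetical example data).
observations = [
    {"AU6", "AU12"},           # e.g. Duchenne-type smile
    {"AU6", "AU12", "AU25"},
    {"AU1", "AU2", "AU5"},     # e.g. surprise-like display
    {"AU12", "AU25"},
]

G = nx.Graph()
n = len(observations)
for obs in observations:
    for au in obs:
        G.add_node(au)
    # Count how often each AU pair occurs together.
    for a, b in combinations(sorted(obs), 2):
        prev = G.get_edge_data(a, b, {"weight": 0})["weight"]
        G.add_edge(a, b, weight=prev + 1)

# Convert raw co-occurrence counts to probabilities across observations.
for a, b, data in G.edges(data=True):
    data["weight"] /= n

print(sorted(G.edges(data=True)))
```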


Author(s):  
S. Monini ◽  
S. Ripoli ◽  
C. Filippi ◽  
I. Fatuzzo ◽  
G. Salerno ◽  
...  

Abstract
Purpose: To propose a new objective video-recording method for the classification of unilateral peripheral facial palsy (UPFP) that relies on mathematical algorithms allowing the software to recognize numerical points on the two sides of the face surface indicative of facial nerve impairment, without placing markers on the face. Methods: Patients with UPFP of House–Brackmann (HB) degrees II to V were video-recorded during two selected facial movements (forehead frowning and smiling) and evaluated with software trained to recognize the face points as numbers. Numerical parameters in millimeters were obtained for the shift of the face points, for the shift differences between the two face sides, and for the shift ratio between the affected side (numerator) and the healthy side (denominator), i.e., the asymmetry index for the two movements. Results: For each HB grade, specific asymmetry index ranges were identified, with a positive correlation for shift differences and a negative correlation for asymmetry indexes. Conclusions: The present objective system enabled the identification of numerical ranges of asymmetry between the healthy and the affected side that were consistent with the outcomes of the subjective methods currently in use.
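The asymmetry index described above is a simple ratio, which the following sketch makes explicit. The function names and example shift values are hypothetical; the abstract's convention of placing the affected side in the numerator and the healthy side in the denominator is assumed.

```python
def asymmetry_index(affected_shift_mm, healthy_shift_mm):
    """Ratio of point shift on the affected side (numerator) to the
    healthy side (denominator), as described in the abstract.
    A value near 1.0 indicates symmetric movement; lower values
    indicate greater impairment of the affected side."""
    return affected_shift_mm / healthy_shift_mm

def shift_difference(healthy_shift_mm, affected_shift_mm):
    """Absolute difference in shift between the two face sides (mm)."""
    return abs(healthy_shift_mm - affected_shift_mm)

# Hypothetical smile measurement: the healthy oral commissure moves
# 8.0 mm, the affected side only 3.2 mm.
print(asymmetry_index(3.2, 8.0))   # 0.4
print(shift_difference(8.0, 3.2))  # 4.8
```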


ACTA IMEKO ◽  
2014 ◽  
Vol 2 (2) ◽  
pp. 78 ◽  
Author(s):  
Ville Rantanen ◽  
Pekka Kumpulainen ◽  
Hanna Venesvirta ◽  
Jarmo Verho ◽  
Oleg Spakov ◽  
...  

A wide range of applications can benefit from the measurement of facial activity. The current study presents a method to detect and classify the movements of different parts of the face, and the expressions those movements form, based on capacitive measurement of facial movements. Principal component analysis of the measured data is used to identify active areas of the face in offline analysis, and hierarchical clustering serves as the basis for classifying the movements both offline and in real time. Experiments involving a set of voluntary facial movements were carried out with 10 participants. The results show that principal component analysis of the measured data could be applied, with almost perfect performance, to offline mapping of the vertical location of facial activity for movements such as raising and lowering the eyebrows, opening the mouth, and raising and lowering the mouth corners. The presented classification method also performed very well in classifying the same movements with both the offline and the real-time implementations.
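As a rough illustration of the analysis chain (PCA to localize active facial areas, then hierarchical clustering to group movements), the sketch below applies scikit-learn's PCA and SciPy's Ward-linkage clustering to synthetic multichannel data. The channel layout and cluster count are assumptions; this is not the authors' implementation.

```python
import numpy as np
from sklearn.decomposition import PCA
from scipy.cluster.hierarchy import linkage, fcluster

# Synthetic stand-in for the capacitive data: rows are movement
# repetitions, columns are sensor channels over the face.
rng = np.random.default_rng(0)
data = rng.normal(size=(60, 20))
data[:30, :5] += 3.0    # e.g. eyebrow-area channels active
data[30:, 15:] += 3.0   # e.g. mouth-area channels active

# PCA: the channel loadings of the leading components indicate which
# facial areas were active during the recording.
pca = PCA(n_components=3)
scores = pca.fit_transform(data)
print("channel loadings of PC1:", np.round(pca.components_[0], 2))

# Hierarchical clustering of the PCA scores groups repetitions of the
# same movement together, which is the basis for classification.
labels = fcluster(linkage(scores, method="ward"), t=2, criterion="maxclust")
print(labels)
```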


Author(s):  
Kayley Birch-Hurst ◽  
Magdalena Rychlowska ◽  
Michael B. Lewis ◽  
Ross E. Vanderwert

Abstract
People tend to automatically imitate others’ facial expressions of emotion. That reaction, termed “facial mimicry”, has been linked to sensorimotor simulation: a process in which the observer’s brain recreates and mirrors the emotional experience of the other person, potentially enabling empathy and deep, motivated processing of social signals. However, the neural mechanisms that underlie sensorimotor simulation remain unclear. This study tests how interfering with facial mimicry, by asking participants to hold a pen in their mouth, influences the activity of the human mirror neuron system, indexed by desynchronization of the EEG mu rhythm. This response arises from sensorimotor brain areas during observed and executed movements and has been linked with empathy. We recorded EEG during passive viewing of dynamic facial expressions of anger, fear, and happiness, as well as of nonbiological moving objects. We examined mu desynchronization under conditions of free versus altered facial mimicry and show that desynchronization is present when adult participants can move freely but not when their facial movements are inhibited. Our findings highlight the importance of motor activity and facial expression in emotion communication. They also have important implications for behaviors that involve occupying or hiding the lower part of the face.
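Mu desynchronization is conventionally quantified as the relative drop in 8–13 Hz power from a baseline period to the observation period. The sketch below computes that index with SciPy's Welch spectral estimate; the band limits, sampling rate, and electrode choice are standard assumptions rather than the study's exact parameters.

```python
import numpy as np
from scipy.signal import welch

def mu_desynchronization(baseline, observation, fs=500, band=(8, 13)):
    """Event-related desynchronization (%) of the mu rhythm: relative
    change in 8-13 Hz power from baseline to observation. Negative
    values indicate desynchronization (sensorimotor engagement).
    `baseline` and `observation` are 1-D EEG segments from a central
    electrode (e.g. C3/C4); fs is the sampling rate in Hz."""
    def band_power(x):
        f, pxx = welch(x, fs=fs, nperseg=fs)
        mask = (f >= band[0]) & (f <= band[1])
        return pxx[mask].mean()
    p_base, p_obs = band_power(baseline), band_power(observation)
    return 100.0 * (p_obs - p_base) / p_base
```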


2003 ◽  
Vol 33 (8) ◽  
pp. 1453-1462 ◽  
Author(s):  
R. MERGL ◽  
M. VOGEL ◽  
P. MAVROGIORGOU ◽  
C. GÖBEL ◽  
M. ZAUDIG ◽  
...  

Background. Motor function is deficient in many patients with obsessive–compulsive disorder (OCD), especially in the face. To investigate subtle motor dysfunction, kinematical analysis of emotional facial expressions can be used. Our aim was to investigate facial movements in response to humorous film stimuli in OCD patients. Method. Kinematical analysis of facial movements was performed. Ultrasound markers at defined points of the face provided exact measurement of facial movements while subjects watched a humorous movie (‘Mr Bean’). Thirty-four OCD patients (19 male, 15 female; mean (S.D.) age: 35.8 (11.5) years; mean (S.D.) total Y-BOCS score: 25.5 (5.9)) were studied in an unmedicated state and after a 10-week treatment with the SSRI sertraline. Thirty-four healthy controls (19 male, 15 female; mean (S.D.) age: 37.5 (13.1) years) were also investigated. Results. At baseline, OCD patients showed significantly slower velocity at the beginning of laughing than healthy controls and a reduced laughing frequency. There was a significant negative correlation between laughing frequency and severity of OCD symptoms. Ten weeks later, a significant increase in laughing frequency and initial velocity during laughing was found. Conclusions. Execution of adequate facial reactions to humour is abnormally slow in OCD patients. The susceptibility of OCD patients to emotional stimuli is less pronounced than that of healthy subjects. This phenomenon is closely correlated with OCD symptoms and is state-dependent.
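The key kinematic measure, velocity at the beginning of laughing, can be approximated from a marker trajectory by numerical differentiation. The following minimal sketch illustrates the idea; the sampling rate and onset criterion are assumptions for illustration, not the study's actual parameters.

```python
import numpy as np

def initial_velocity(marker_y_mm, fs=100, onset_threshold_mm=0.5):
    """Velocity at the beginning of a facial movement (mm/s) from one
    marker's vertical trajectory, sampled at `fs` Hz. Movement onset
    is taken as the first sample whose displacement from rest exceeds
    `onset_threshold_mm` (an assumed criterion)."""
    y = np.asarray(marker_y_mm, dtype=float)
    v = np.gradient(y) * fs                 # mm per second
    displaced = np.abs(y - y[0]) > onset_threshold_mm
    if not displaced.any():
        return 0.0
    onset = int(np.argmax(displaced))       # first sample past threshold
    return float(v[onset])
```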


2021 ◽  
Vol 9 ◽  
Author(s):  
Edin Šabić ◽  
Michael C. Hout ◽  
Justin A. MacDonald ◽  
Daniel Henning ◽  
Hunter Myüz ◽  
...  

Understanding people when they are speaking seems to be an activity that we do only with our ears. Why, then, do we usually look at the face of the person we are listening to? Could it be that our eyes are also involved in understanding speech? We designed an experiment in which we asked people to try to comprehend speech in different listening conditions, such as someone speaking amid loud background noise. It turns out that we can use our eyes to help understand speech, especially when that speech is difficult to hear clearly. Looking at a person when they speak is helpful because their mouth and facial movements provide useful clues about what is being said. In this article, we explore how visual information influences how we understand speech and show that understanding speech can be the work of both the ears and the eyes!


1998 ◽  
Vol 35 (1) ◽  
pp. 16-25 ◽  
Author(s):  
Carroll-Ann Trotman ◽  
Christian S. Stohler ◽  
Lysle E. Johnston

Objective: The assessment of facial mobility is a key element in the treatment of patients with facial motor deficits. In this study, we explored the utility of a three-dimensional tracking system in the measurement of facial movements. Methods and Results: First, the three-dimensional movement of potentially stable facial soft-tissue, headcap, and dental landmarks was measured with respect to a fixed space frame. On the assumption that the dental landmarks are stable, their motion during a series of standardized facial animations was subtracted from that of the facial and headcap landmarks to estimate their movement within the face. This residual movement was used to determine which points are relatively stable (≤1.5 mm of movement) and which are not (>1.5 mm of movement). Headcap landmarks were found to be suitable as references during smile, cheek-puff, and lip-purse animations, and during talking. In contrast, skin-based landmarks were unsuitable as references because of their considerable and highly variable movement during facial animation. Second, the facial movements of patients with obvious facial deformities were compared with those of matched controls to characterize the face validity of three-dimensional tracking. In all instances, pictures emerged that appear characteristic of the various functional deficits. Conclusions: Our results argue that tracking instrumentation is a potentially useful tool in the measurement of facial mobility.
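The residual-movement logic is essentially a rigid-body subtraction: estimate the head motion from the dental landmarks, apply it to every landmark, and measure how far each landmark deviates from that rigid prediction. A minimal sketch follows, assuming the standard Kabsch/SVD estimate of the rigid transform; it illustrates the idea rather than the authors' instrumentation.

```python
import numpy as np

def rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src to dst
    (both N x 3 arrays), via the Kabsch/SVD method."""
    cs, cd = src.mean(0), dst.mean(0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflection
    R = Vt.T @ np.diag([1, 1, d]) @ U.T
    return R, cd - R @ cs

def residual_movement(dental_rest, dental_move, pts_rest, pts_move):
    """Landmark movement within the face: head motion estimated from
    the (assumed stable) dental landmarks is removed before comparing
    rest and animation positions. Returns per-landmark residuals (mm)."""
    R, t = rigid_transform(dental_rest, dental_move)
    predicted = (R @ pts_rest.T).T + t   # positions if movement were rigid
    return np.linalg.norm(pts_move - predicted, axis=1)

# A landmark is treated as a stable reference if its residual <= 1.5 mm.
```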


2011 ◽  
Vol 16 (1) ◽  
pp. 10-12 ◽  
Author(s):  
Martin Schiavenato ◽  
Meggan Butler-O’Hara ◽  
Paul Scovanner

BACKGROUND: Facial expression is widely used to judge pain in neonates. However, little is known about the relationship between the intensity of the painful stimulus and the nature of the expression in term neonates. OBJECTIVES: To describe differences in the movement of key facial areas between two groups of term neonates experiencing painful stimuli of different intensities. METHODS: Video recordings from two previous studies were used to select study subjects. Four term neonates undergoing circumcision without analgesia were compared with four similar male term neonates undergoing a routine heel stick. Facial movements were measured with a computer using a previously developed ‘point-pair’ system that focuses on movement in areas implicated in neonatal pain expression. Measurements were expressed in pixels, standardized to a percentage of individual infant face width. RESULTS: Point pairs measuring eyebrow and eye movement were similar, as was the sum of change across the face (41.15 in the circumcision group versus 40.33 in the heel stick group). Point pair 4 (horizontal change of the mouth) was higher for the heel stick group, at 9.09 versus 3.93 for the circumcision group, while point pair 5 (vertical change of the mouth) was higher for the circumcision group (23.32) than for the heel stick group (15.53). CONCLUSION: Little difference was noted in eye and eyebrow movement between pain intensities. The mouth opened wider (vertically) in neonates experiencing the higher-pain stimulus. Qualitative differences in neonatal facial expression by pain intensity may exist, and the mouth may be an area in which to detect them. Further study of the generalizability of these findings is needed.
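The point-pair metric itself is straightforward: the change in Euclidean distance between two tracked facial points, converted from pixels to a percentage of the infant's face width. The sketch below illustrates that normalization; the pair coordinates and face width are hypothetical.

```python
import numpy as np

def point_pair_change(p_rest, q_rest, p_pain, q_pain, face_width_px):
    """Change in the distance between one facial point pair from rest
    to the pain expression, expressed as a percentage of the infant's
    face width (the normalization described in the abstract)."""
    d_rest = np.linalg.norm(np.subtract(p_rest, q_rest))
    d_pain = np.linalg.norm(np.subtract(p_pain, q_pain))
    return 100.0 * abs(d_pain - d_rest) / face_width_px

# Hypothetical mouth pair: vertical mouth opening grows from 20 px to
# 55 px on a 150 px wide face -> about a 23% change.
print(point_pair_change((60, 100), (60, 120), (60, 95), (60, 150), 150))
```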

