Exploring the Association Between Pain Intensity and Facial Display in Term Newborns

2011 ◽  
Vol 16 (1) ◽  
pp. 10-12 ◽  
Author(s):  
Martin Schiavenato ◽  
Meggan Butler-O’Hara ◽  
Paul Scovanner

BACKGROUND: Facial expression is widely used to judge pain in neonates. However, little is known about the relationship between the intensity of the painful stimulus and the nature of the expression in term neonates. OBJECTIVES: To describe differences in the movement of key facial areas between two groups of term neonates experiencing painful stimuli of different intensities. METHODS: Video recordings from two previous studies were used to select study subjects. Four term neonates undergoing circumcision without analgesia were compared with four similar male term neonates undergoing a routine heel stick. Facial movements were measured with a computer using a previously developed ‘point-pair’ system that focuses on movement in areas implicated in neonatal pain expression. Measurements were expressed in pixels and standardized to a percentage of each infant’s face width. RESULTS: Point pairs measuring eyebrow and eye movement were similar, as was the sum of change across the face (41.15 in the circumcision group versus 40.33 in the heel stick group). Point pair 4 (horizontal change of the mouth) was higher for the heel stick group (9.09 versus 3.93 for the circumcision group), while point pair 5 (vertical change of the mouth) was higher for the circumcision group (23.32) than for the heel stick group (15.53). CONCLUSION: Little difference was noted in eye and eyebrow movement between pain intensities. The mouth opened wider (vertically) in neonates experiencing the higher-intensity pain stimulus. Qualitative differences in neonatal facial expression to pain intensity may exist, and the mouth may be an area in which to detect them. Further study of the generalizability of these findings is needed.
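
The point-pair measure reduces to simple landmark geometry: the change in distance between a pair of facial landmarks, expressed as a percentage of the infant's face width. Below is a minimal sketch of that arithmetic with hypothetical landmark coordinates; the actual point-pair definitions come from the cited system and are not reproduced here.

```python
import numpy as np

def point_pair_change(p_rest, p_stim, face_width_px):
    """Change in distance between a pair of facial landmarks,
    expressed as a percentage of the infant's face width."""
    d_rest = np.linalg.norm(np.asarray(p_rest[0]) - np.asarray(p_rest[1]))
    d_stim = np.linalg.norm(np.asarray(p_stim[0]) - np.asarray(p_stim[1]))
    return abs(d_stim - d_rest) / face_width_px * 100.0

# Hypothetical example: vertical mouth landmarks at rest and during the stimulus.
rest = [(120, 200), (120, 215)]   # upper lip, lower lip (x, y) in pixels
stim = [(119, 196), (121, 243)]   # mouth opens vertically
print(point_pair_change(rest, stim, face_width_px=180))  # % of face width
```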

2021 ◽  
Vol 11 (4) ◽  
pp. 1428
Author(s):  
Haopeng Wu ◽  
Zhiying Lu ◽  
Jianfeng Zhang ◽  
Xin Li ◽  
Mingyue Zhao ◽  
...  

This paper addresses the problem of Facial Expression Recognition (FER), focusing on subtle facial movements. Traditional methods often overfit or capture incomplete information because of insufficient data and manual feature selection. Instead, our proposed network, called the Multi-features Cooperative Deep Convolutional Network (MC-DCN), maintains focus on both the overall features of the face and the trends of its key parts. The first stage is the processing of the video data. The ensemble of regression trees (ERT) method is used to obtain the overall contour of the face. Then, an attention model is used to pick out the parts of the face that are most responsive to expressions. Under the combined effect of these two methods, an image that can be regarded as a local feature map is obtained. After that, the video data are sent to MC-DCN, which contains parallel sub-networks. While the overall spatiotemporal characteristics of facial expressions are obtained from the image sequence, the selection of key parts better captures the changes in facial expression brought about by subtle facial movements. By combining local and global features, the proposed method acquires more information, leading to better performance. The experimental results show that MC-DCN achieves recognition rates of 95%, 78.6%, and 78.3% on the SAVEE, MMI, and edited GEMEP datasets, respectively.
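
The two-branch idea, one path looking at the whole face and another at the attended local feature map, with the two fused before classification, can be sketched in PyTorch as follows. This is an illustration of that general design, not the authors' MC-DCN code; the layer sizes and input shapes are assumptions.

```python
import torch
import torch.nn as nn

class TwoBranchFER(nn.Module):
    """Sketch of a two-branch design: one branch sees the whole face, the other
    an attended 'local feature map'; features are concatenated for classification."""
    def __init__(self, n_classes=7):
        super().__init__()
        def conv_branch():
            return nn.Sequential(
                nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.global_branch = conv_branch()
        self.local_branch = conv_branch()
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, face, local_map):
        g = self.global_branch(face)       # overall facial appearance
        l = self.local_branch(local_map)   # attended key-part map
        return self.classifier(torch.cat([g, l], dim=1))

model = TwoBranchFER()
logits = model(torch.randn(2, 3, 96, 96), torch.randn(2, 3, 96, 96))
print(logits.shape)  # torch.Size([2, 7])
```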


2018 ◽  
Vol 9 (2) ◽  
pp. 31-38
Author(s):  
Fransisca Adis ◽  
Yohanes Merci Widiastomo

Facial expression is one of the aspects that can deliver a story and a character's emotion in 3D animation. To achieve that, the character's facial design needs to be planned from the very beginning of production. At an early stage, the character designer needs to think about expressions after completing the character design. The rigger needs to create a flexible rig to achieve that design, and the animator can then get a clear picture of how to animate the face. The Facial Action Coding System (FACS), originally developed by Carl-Herman Hjortsjö and later adopted by Paul Ekman and Wallace V. Friesen, can generally be used to identify emotion in a person. This paper explains how the writers use FACS to help design the facial expressions of 3D characters. FACS is used to determine the basic characteristics of the face's basic shapes when showing emotions, compared against actual facial references. Keywords: animation, facial expression, non-dialog
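
For concreteness, a designer might encode this workflow as a lookup from emotion to Action Units and then to rig controls. The AU prototypes below are commonly cited approximations, and the blendshape names are hypothetical and rig-specific; this is a sketch of the mapping idea, not material from the paper.

```python
# Minimal sketch: map emotions to FACS Action Units, then AUs to rig blendshapes.
# AU prototypes are commonly cited approximations; blendshape names are hypothetical.
EMOTION_TO_AUS = {
    "happiness": [6, 12],        # cheek raiser, lip corner puller
    "sadness":   [1, 4, 15],     # inner brow raiser, brow lowerer, lip corner depressor
    "surprise":  [1, 2, 5, 26],  # brow raisers, upper lid raiser, jaw drop
    "anger":     [4, 5, 7, 23],  # brow lowerer, lid raiser/tightener, lip tightener
}

AU_TO_BLENDSHAPE = {             # hypothetical rig controls
    1: "browInnerUp", 2: "browOuterUp", 4: "browDown", 5: "eyeWide",
    6: "cheekSquint", 7: "eyeSquint", 12: "mouthSmile", 15: "mouthFrown",
    23: "lipTighten", 26: "jawOpen",
}

def blendshapes_for(emotion):
    return [AU_TO_BLENDSHAPE[au] for au in EMOTION_TO_AUS[emotion]]

print(blendshapes_for("happiness"))  # ['cheekSquint', 'mouthSmile']
```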


Sensors ◽  
2021 ◽  
Vol 21 (9) ◽  
pp. 3046
Author(s):  
Shervin Minaee ◽  
Mehdi Minaei ◽  
Amirali Abdolrashidi

Facial expression recognition has been an active area of research over the past few decades, and it is still challenging due to high intra-class variation. Traditional approaches to this problem rely on hand-crafted features such as SIFT, HOG, and LBP, followed by a classifier trained on a database of images or videos. Most of these works perform reasonably well on datasets of images captured under controlled conditions but fail to perform as well on more challenging datasets with greater image variation and partial faces. In recent years, several works have proposed end-to-end frameworks for facial expression recognition using deep learning models. Despite the better performance of these works, there is still much room for improvement. In this work, we propose a deep learning approach based on an attentional convolutional network that is able to focus on important parts of the face and achieves significant improvement over previous models on multiple datasets, including FER-2013, CK+, FERG, and JAFFE. We also use a visualization technique that is able to find important facial regions for detecting different emotions based on the classifier's output. Through experimental results, we show that different emotions are sensitive to different parts of the face.
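
The core mechanism, a convolutional feature extractor whose output is re-weighted by a learned spatial attention mask before classification, can be illustrated with a small PyTorch sketch. This is a generic illustration of an attentional convolutional network, not the authors' published architecture; the layer counts and sizes are assumptions.

```python
import torch
import torch.nn as nn

class SpatialAttentionCNN(nn.Module):
    """Sketch of a CNN with a learned spatial attention mask that re-weights
    feature maps so salient facial regions dominate the prediction."""
    def __init__(self, n_classes=7):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        self.attention = nn.Sequential(nn.Conv2d(64, 1, 1), nn.Sigmoid())
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                  nn.Linear(64, n_classes))

    def forward(self, x):
        f = self.features(x)
        a = self.attention(f)       # (B, 1, H, W) mask in [0, 1]
        return self.head(f * a), a  # logits and attention map for visualization

model = SpatialAttentionCNN()
logits, attn = model(torch.randn(4, 1, 48, 48))  # grayscale crops, FER-2013-sized
print(logits.shape, attn.shape)
```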


Author(s):  
David L Freytag ◽  
Michael G Alfertshofer ◽  
Konstantin Frank ◽  
Dmitry V Melnikov ◽  
Nicholas Moellhoff ◽  
...  

Abstract Background: Our understanding of the functional anatomy of the face is constantly improving. To date, it is unclear whether the anatomic location of the line of ligaments has any functional importance during normal facial movements such as smiling. Objectives: The objective of the present study was to identify differences in facial movements between the medial and lateral midface by means of skin vector displacement analyses derived from 3D imaging, and to further ascertain whether the line of ligaments has both structural and functional significance in these movements. Methods: The study sample consisted of 21 healthy volunteers (9 females and 12 males) of Caucasian ethnic background with a mean age of 30.6 (8.3) years and a mean BMI of 22.57 (2.5) kg/m². 3D images of the volunteers' faces in repose and during smiling (Duchenne type) were taken, and 3D imaging-based skin vector displacement analyses were conducted. Results: The mean horizontal skin displacement was 0.08 (2.0) mm in the medial midface (lateral movement) and -0.08 (1.96) mm in the lateral midface (medial movement) (p = 0.711). The mean vertical skin displacement (cranial movement of skin toward the forehead/temple) was 6.68 (2.4) mm in the medial midface, whereas it was 5.20 (2.07) mm in the lateral midface (p = 0.003). Conclusions: The results of this study provide objective evidence of antagonistic skin movement between the medial and the lateral midface. The functional boundary identified by 3D imaging corresponds to the anatomic location of the line of ligaments.
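
The displacement analysis itself amounts to subtracting corresponding surface coordinates captured in repose from those captured during the smile and averaging the horizontal and vertical components. A simplified sketch with hypothetical points follows; real 3D imaging pipelines also register the meshes before comparing them.

```python
import numpy as np

def mean_displacement(points_repose, points_smile):
    """Mean horizontal (x) and vertical (y) skin displacement in mm between
    corresponding surface points captured at rest and during a smile."""
    d = np.asarray(points_smile, float) - np.asarray(points_repose, float)
    return d[:, 0].mean(), d[:, 1].mean()  # (horizontal, vertical)

# Hypothetical corresponding points (x, y, z) in mm for one midface region.
repose = [(10.0, 40.0, 55.0), (12.5, 42.0, 54.0), (15.0, 44.5, 53.5)]
smile  = [(10.1, 46.5, 54.0), (12.4, 48.9, 53.2), (15.2, 51.0, 52.8)]
print(mean_displacement(repose, smile))
```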


2021 ◽  
pp. 174702182199299
Author(s):  
Mohamad El Haj ◽  
Emin Altintas ◽  
Ahmed A Moustafa ◽  
Abdel Halim Boudoukha

Future thinking, which is the ability to project oneself forward in time to pre-experience an event, is intimately associated with emotions. We investigated whether emotional future thinking can activate emotional facial expressions. We invited 43 participants to imagine future scenarios cued by the words "happy," "sad," and "city." Future thinking was video recorded and analysed with facial analysis software to classify whether participants' facial expressions (i.e., happy, sad, angry, surprised, scared, disgusted, and neutral) were neutral or emotional. The analysis demonstrated higher levels of happy facial expressions during future thinking cued by the word "happy" than by "sad" or "city." In contrast, higher levels of sad facial expressions were observed during future thinking cued by the word "sad" than by "happy" or "city." Higher levels of neutral facial expressions were observed during future thinking cued by the word "city" than by "happy" or "sad." In all three conditions, levels of neutral facial expressions were high compared with happy and sad facial expressions. Together, emotional future thinking, at least for future scenarios cued by "happy" and "sad," seems to trigger the corresponding facial expression. Our study provides an original physiological window into the subjective emotional experience during future thinking.
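
Analyses of this kind typically reduce to tallying, per cue condition, the proportion of frames the software labels with each expression. A minimal sketch with hypothetical frame-level output follows; it does not reproduce the specific software or data used in the study.

```python
import pandas as pd

# Hypothetical frame-level output of a facial analysis tool: one row per video
# frame with the cue condition and the dominant expression label for that frame.
frames = pd.DataFrame({
    "cue":        ["happy"] * 4 + ["sad"] * 4 + ["city"] * 4,
    "expression": ["happy", "happy", "neutral", "neutral",
                   "sad", "neutral", "sad", "neutral",
                   "neutral", "neutral", "neutral", "happy"],
})

# Proportion of frames showing each expression within each cue condition,
# mirroring the comparison of happy/sad/neutral levels across conditions.
proportions = pd.crosstab(frames["cue"], frames["expression"], normalize="index")
print(proportions.round(2))
```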


Sensors ◽  
2021 ◽  
Vol 21 (6) ◽  
pp. 2003 ◽  
Author(s):  
Xiaoliang Zhu ◽  
Shihao Ye ◽  
Liang Zhao ◽  
Zhicheng Dai

As a sub-challenge of EmotiW (the Emotion Recognition in the Wild challenge), improving performance on the AFEW (Acted Facial Expressions in the Wild) dataset is a popular benchmark for emotion recognition tasks under various constraints, including uneven illumination, head deflection, and facial posture. In this paper, we propose a convenient facial expression recognition cascade network comprising spatial feature extraction, hybrid attention, and temporal feature extraction. First, in a video sequence, faces in each frame are detected, and the corresponding face ROI (region of interest) is extracted to obtain the face images. The face images in each frame are then aligned based on the positions of the facial feature points in the images. Second, the aligned face images are input to a residual neural network to extract the spatial features of the facial expressions. The spatial features are input to the hybrid attention module to obtain fused facial expression features. Finally, the fused features are input to the gated recurrent unit to extract the temporal features of the facial expressions, and the temporal features are input to the fully connected layer to classify and recognize the facial expressions. Experiments on the CK+ (Extended Cohn-Kanade), Oulu-CASIA (Institute of Automation, Chinese Academy of Sciences), and AFEW datasets yielded recognition accuracy rates of 98.46%, 87.31%, and 53.44%, respectively. This demonstrates that the proposed method not only achieves performance competitive with state-of-the-art methods but also delivers a greater than 2% improvement on the AFEW dataset, indicating strong facial expression recognition performance in natural environments.
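
The cascade of per-frame spatial features, attention-based fusion, and a recurrent temporal model can be sketched as below. This is an illustrative stand-in: a small CNN encoder instead of the residual network and a simple frame-attention weighting instead of the hybrid attention module. It is not the published network, and all dimensions are assumptions.

```python
import torch
import torch.nn as nn

class SpatioTemporalFER(nn.Module):
    """Sketch of a spatial-then-temporal cascade: a per-frame CNN encoder,
    attention weighting over frames, a GRU over the sequence, and a classifier."""
    def __init__(self, n_classes=7, feat=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, feat, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.frame_attn = nn.Linear(feat, 1)
        self.gru = nn.GRU(feat, feat, batch_first=True)
        self.fc = nn.Linear(feat, n_classes)

    def forward(self, clip):                                  # clip: (B, T, 3, H, W)
        b, t = clip.shape[:2]
        f = self.encoder(clip.flatten(0, 1)).view(b, t, -1)   # per-frame features
        w = torch.softmax(self.frame_attn(f), dim=1)          # frame weights
        out, _ = self.gru(f * w)                              # temporal features
        return self.fc(out[:, -1])                            # classify final state

model = SpatioTemporalFER()
print(model(torch.randn(2, 8, 3, 64, 64)).shape)  # torch.Size([2, 7])
```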


2020 ◽  
Author(s):  
Andrew Langbehn ◽  
Dasha Yermol ◽  
Fangyun Zhao ◽  
Christopher Thorstenson ◽  
Paula Niedenthal

Abstract According to the familiar axiom, the eyes are the window to the soul. However, wearing masks to prevent the spread of COVID-19 involves occluding a large portion of the face. Do the eyes carry all of the information we need to perceive each other's emotions? We addressed this question in two studies. In the first, 162 Amazon Mechanical Turk (MTurk) workers saw videos of human faces displaying expressions of happiness, disgust, anger, and surprise that were fully visible or covered by N95, surgical, or cloth masks, and rated the extent to which the expressions conveyed each of the four emotions. Across mask conditions, participants perceived significantly lower levels of the expressed (target) emotion, and this was particularly true for expressions involving greater facial action in the lower part of the face. Furthermore, higher levels of other (non-target) emotions were perceived in masked compared with visible faces. In the second study, 60 MTurk workers rated the extent to which three types of smiles (reward, affiliation, and dominance smiles), either visible or masked, conveyed positive feelings, reassurance, and superiority. They reported that masked smiles communicated less of the target signal than visible smiles, but not more of other possible signals. Political attitudes were not systematically associated with disruptions in the processing of facial expression caused by masking the face.


2017 ◽  
Vol 20 (3) ◽  
Author(s):  
Elaine Cristina Sousa Dos Santos ◽  
Diego Jesus Bradariz Pimentel ◽  
Laís Lopes Machado De Matos ◽  
Laís Valencise Magri ◽  
Ana Maria Bettoni Rodrigues Da Silva ◽  
...  

Objective: To compare proportion and linear measurement indexes between Brazilian and Peruvian populations through 3D stereophotogrammetry and to establish the facial profile of these two Latin American populations. Material and Methods: 40 volunteers (Brazilian n=21: 10 males and 11 females; Peruvian n=19: 8 males and 11 females) aged between 18 and 40 years (mean 28.7±9.1) had landmarks marked on the face. Then, 3D images were obtained (VECTRA M3) and the indexes of proportion and linear measurement (face, nose, and lips) were calculated. The data were statistically analyzed by one-way ANOVA (p<0.05). Results: The proportion indexes did not reveal marked differences either between the studied populations or between genders (p>0.05). The following linear measurements showed statistically significant intergroup differences: face width and height, nose width and height, upper facial height, mouth width, and protrusion of the nose tip (p<0.05). The Brazilian females showed the smallest significant differences. Conclusions: Despite their different ethnic compositions, the Brazilian and Peruvian populations did not differ regarding the proportions of the face, nose, and lips. The differences observed in Brazilian females may be related to gender and/or to the Caucasian heritage of the Brazilian sample. Keywords: Photogrammetry; Face; Tridimensional Image.
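
The statistical comparison is a standard one-way ANOVA over proportion indexes computed per subject. A minimal sketch with hypothetical index values follows, using scipy's f_oneway; the real analysis was run on the measured VECTRA data.

```python
from scipy import stats

# Hypothetical proportion-index values (e.g., nose width / nose height * 100)
# measured from 3D images in two groups; a one-way ANOVA compares the group
# means, with p < 0.05 as the significance threshold used above.
brazilian = [68.2, 71.5, 69.8, 73.1, 70.4, 72.0]
peruvian  = [70.9, 74.2, 72.6, 71.8, 73.5, 75.0]

f_stat, p_value = stats.f_oneway(brazilian, peruvian)
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")
```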


F1000Research ◽  
2019 ◽  
Vol 8 ◽  
pp. 702 ◽  
Author(s):  
Jin Hyun Cheong ◽  
Sawyer Brooks ◽  
Luke J. Chang

Advances in computer vision and machine learning algorithms have enabled researchers to extract facial expression data from face video recordings with greater ease and speed than standard manual coding methods, which has led to a dramatic increase in the pace of facial expression research. However, there are many limitations in recording facial expressions in laboratory settings.  Conventional video recording setups using webcams, tripod-mounted cameras, or pan-tilt-zoom cameras require making compromises between cost, reliability, and flexibility. As an alternative, we propose the use of a mobile head-mounted camera that can be easily constructed from our open-source instructions and blueprints at a fraction of the cost of conventional setups. The head-mounted camera framework is supported by the open source Python toolbox FaceSync, which provides an automated method for synchronizing videos. We provide four proof-of-concept studies demonstrating the benefits of this recording system in reliably measuring and analyzing facial expressions in diverse experimental setups, including group interaction experiments.
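
One common way to synchronize multiple cameras recording the same session is to cross-correlate their audio tracks and shift each recording by the lag at the correlation peak. The sketch below illustrates that generic idea with synthetic signals; it is not the FaceSync API itself, whose documentation should be consulted for the actual interface.

```python
import numpy as np
from scipy.signal import correlate

def audio_offset_seconds(ref_audio, target_audio, sample_rate):
    """Estimate the time offset between two recordings of the same event by
    cross-correlating their mono audio tracks (generic sketch of the idea)."""
    corr = correlate(target_audio, ref_audio, mode="full")
    lag = np.argmax(corr) - (len(ref_audio) - 1)
    return lag / sample_rate

# Hypothetical example: the same tone appears 0.5 s later in the target track.
sr = 8000
t = np.arange(sr * 2) / sr
tone = np.sin(2 * np.pi * 440 * t[:sr // 4])
ref = np.zeros(2 * sr); ref[sr:sr + len(tone)] = tone
tgt = np.zeros(2 * sr); tgt[sr + sr // 2:sr + sr // 2 + len(tone)] = tone
print(audio_offset_seconds(ref, tgt, sr))  # 0.5
```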


Author(s):  
Alexander Mielke ◽  
Bridget M. Waller ◽  
Claire Pérez ◽  
Alan V. Rincon ◽  
Julie Duboscq ◽  
...  

Abstract Understanding facial signals in humans and other species is crucial for understanding the evolution, complexity, and function of the face as a communication tool. The Facial Action Coding System (FACS) enables researchers to measure facial movements accurately, but we currently lack tools to reliably analyse data and efficiently communicate results. Network analysis can provide a way to use the information encoded in FACS datasets: by treating individual AUs (the smallest units of facial movements) as nodes in a network and their co-occurrence as connections, we can analyse and visualise differences in the use of combinations of AUs in different conditions. Here, we present ‘NetFACS’, a statistical package that uses occurrence probabilities and resampling methods to answer questions about the use of AUs, AU combinations, and the facial communication system as a whole in humans and non-human animals. Using highly stereotyped facial signals as an example, we illustrate some of the current functionalities of NetFACS. We show that very few AUs are specific to certain stereotypical contexts; that AUs are not used independently from each other; that graph-level properties of stereotypical signals differ; and that clusters of AUs allow us to reconstruct facial signals, even when blind to the underlying conditions. The flexibility and widespread use of network analysis allows us to move away from studying facial signals as stereotyped expressions, and towards a dynamic and differentiated approach to facial communication.
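
The network construction step, treating AUs as nodes and their co-occurrence counts as edge weights, can be sketched with networkx on hypothetical codings. This illustrates the general idea only; it is not the NetFACS package, which additionally provides occurrence probabilities and resampling tests.

```python
import itertools
import networkx as nx

# Hypothetical FACS codings: each observation is the set of Action Units active
# in one facial event. Co-occurring AUs become weighted edges.
observations = [
    {1, 2, 5, 26},   # surprise-like event
    {6, 12},         # smile
    {6, 12, 25},
    {1, 4, 15},      # sad-like event
    {6, 12},
]

G = nx.Graph()
for aus in observations:
    for a, b in itertools.combinations(sorted(aus), 2):
        w = G.get_edge_data(a, b, default={"weight": 0})["weight"]
        G.add_edge(a, b, weight=w + 1)

# Edges with their co-occurrence counts, e.g. (6, 12) appears three times.
print(sorted(G.edges(data="weight")))
```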

