Reply: Using Artificial Intelligence to Measure Facial Expression following Facial Reanimation Surgery

2021 ◽  
Vol Publish Ahead of Print ◽  
Author(s):  
Thanapoom Boonipat ◽  
Malke Asaad ◽  
Jason Lin ◽  
Graeme E. Glass ◽  
Samir Mardini ◽  
...  
2020 ◽  
Vol 146 (5) ◽  
pp. 1147-1150
Author(s):  
Thanapoom Boonipat ◽  
Malke Asaad ◽  
Jason Lin ◽  
Graeme E. Glass ◽  
Samir Mardini ◽  
...  

FACE ◽  
2021 ◽  
pp. 273250162110228
Author(s):  
David T. Mitchell ◽  
David Z. Allen ◽  
Matthew R. Greives ◽  
Phuong D. Nguyen

Machine learning is a rapidly growing subset of artificial intelligence (AI) which involves computer algorithms that automatically build mathematical models based on sample data. Systems can be taught to learn from patterns in existing data in order to make similar conclusions from new data. The use of AI in facial emotion recognition (FER) has become an area of increasing interest for providers who wish to quantify facial emotion before and after interventions such as facial reanimation surgery. While FER deep learning algorithms are less subjective when compared to layperson assessments, the databases used to train them can greatly alter their outputs. There are currently many well-established modalities for assessing facial paralysis, but there is also increasing interest in a more objective and universal measurement system to allow for consistent assessments between practitioners. The purpose of this article is to review the development of AI, examine its existing uses in facial paralysis assessment, and discuss the future directions of its implications.
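The core idea described above — building a model from sample data so that similar conclusions can be drawn from new data — can be sketched with a minimal nearest-centroid classifier. This is an illustration only; the feature values and labels below are invented and are not from the article:

```python
# Minimal illustration of "learning from patterns in existing data":
# a nearest-centroid classifier fit on labeled feature vectors.
import math

def fit_centroids(samples):
    """samples: list of (features, label); returns label -> centroid."""
    sums, counts = {}, {}
    for feats, label in samples:
        acc = sums.setdefault(label, [0.0] * len(feats))
        for i, v in enumerate(feats):
            acc[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {lbl: [v / counts[lbl] for v in acc] for lbl, acc in sums.items()}

def predict(centroids, feats):
    """Return the label whose centroid is closest to the new sample."""
    return min(centroids, key=lambda lbl: math.dist(feats, centroids[lbl]))

# Toy "training data": 2-D features for two hypothetical expression classes.
train = [([0.9, 0.1], "smile"), ([1.0, 0.2], "smile"),
         ([0.1, 0.8], "neutral"), ([0.2, 1.0], "neutral")]
model = fit_centroids(train)
print(predict(model, [0.95, 0.15]))  # a new, unseen sample -> "smile"
```

Real FER systems use deep networks rather than centroids, but the train-then-generalize pattern is the same.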


Author(s):  
Ralph Reilly ◽  
Andrew Nyaboga ◽  
Carl Guynes

Facial information science is becoming a discipline in its own right, attracting not only computer scientists but also graphic animators and psychologists, all of whom require knowledge of how people make and interpret facial expressions (Zeng, 2009). Computer advancements enhance researchers' ability to study facial expression: digitized, computer-displayed faces can now be used in studies. Current advancements facilitate not only the accurate display of information but also the automatic recording of subjects' reactions. With increasing interest in artificial intelligence and man-machine communication, what role does the gender of the user play in the design of today's multi-million-dollar applications? Does research suggest that men and women respond differently to the "gender" of computer-displayed images? Can this knowledge be used effectively to design applications specifically for use by men or women? This research attempts to answer these questions while studying whether automatic, or pre-attentive, processing plays a part in the identification of facial expressions.


2020 ◽  
pp. 57-63
Author(s):  
admin admin

Human facial emotion recognition has attracted interest in the field of artificial intelligence. The emotions on a human face depict what is going on inside the mind. Facial expression recognition is the part of facial recognition that is gaining importance, and the need for it is increasing tremendously. Although there are methods to identify expressions using machine learning and artificial intelligence techniques, this work uses convolutional neural networks to recognize expressions and classify them into six emotion categories. The datasets investigated and explored for training the expression recognition models are described in this paper; the models used are VGG-19 and ResNet-18. We also included gender identification alongside facial emotion recognition. In this project we used the FER2013 and CK+ datasets and ultimately achieved accuracies of around 73% and 94%, respectively.
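The convolutional networks named above (VGG-19, ResNet-18) are built from one repeated operation: sliding a small filter over the image and summing elementwise products, interleaved with pooling. A minimal pure-Python sketch of those two building blocks, on an invented single-channel toy patch (not the paper's actual pipeline):

```python
# Core CNN building blocks: 2-D convolution (valid padding) and 2x2 max pooling.
def conv2d(image, kernel):
    kh, kw = len(kernel), len(kernel[0])
    out_h, out_w = len(image) - kh + 1, len(image[0]) - kw + 1
    return [[sum(image[i + di][j + dj] * kernel[di][dj]
                 for di in range(kh) for dj in range(kw))
             for j in range(out_w)] for i in range(out_h)]

def maxpool2x2(fmap):
    return [[max(fmap[i][j], fmap[i][j + 1], fmap[i + 1][j], fmap[i + 1][j + 1])
             for j in range(0, len(fmap[0]) - 1, 2)]
            for i in range(0, len(fmap) - 1, 2)]

# A vertical-edge filter responding to the intensity jump in a toy 4x4 patch.
img = [[0, 0, 9, 9],
       [0, 0, 9, 9],
       [0, 0, 9, 9],
       [0, 0, 9, 9]]
edge = [[-1, 1], [-1, 1]]
fmap = conv2d(img, edge)   # strong response only where intensity jumps
print(maxpool2x2(fmap))
```

Stacking many such filtered-and-pooled layers, with learned kernels, is what lets VGG- and ResNet-style models turn raw pixels into expression features.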


2021 ◽  
Vol 2083 (3) ◽  
pp. 032030
Author(s):  
Cui Dong ◽  
Rongfu Wang ◽  
Yuanqin Hang

Abstract With the development of artificial intelligence, facial expression recognition based on deep learning has become a current research hotspot. This article analyzes and improves the VGG16 network. First, the three fully connected layers of the original network are changed to two convolutional layers and one fully connected layer, which reduces the complexity of the network. Then, the maximum pooling in the network is changed to local-based adaptive pooling, which helps the network select feature information more conducive to facial expression recognition. On the facial expression datasets RAF-DB and SFEW, the recognition rate increased by 4.7% and 7%, respectively.
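The complexity reduction from replacing fully connected layers with convolutional ones comes down to parameter counts. The original VGG16 classifier sizes (a 7x7x512 feature map feeding two 4096-unit layers and a 1000-unit output) are the standard architecture; the replacement layer sizes below are hypothetical, since the abstract does not specify them:

```python
# Parameter counts (weights + biases) for the two layer types.
def fc_params(n_in, n_out):
    return n_in * n_out + n_out

def conv_params(k, c_in, c_out):
    return k * k * c_in * c_out + c_out

# Original VGG16 head: three fully connected layers on a 7x7x512 feature map.
orig = fc_params(7 * 7 * 512, 4096) + fc_params(4096, 4096) + fc_params(4096, 1000)
print(f"original FC head: {orig:,}")  # 123,642,856 parameters

# Hypothetical replacement: two 3x3 conv layers + one small FC output layer
# (7 classes, matching common expression datasets).
new = conv_params(3, 512, 256) + conv_params(3, 256, 128) + fc_params(128, 7)
print(f"conv-based head:  {new:,}")   # 1,475,847 parameters
```

Because a convolution shares its small kernel across all spatial positions while a fully connected layer has one weight per input-output pair, the swap cuts the classifier head by roughly two orders of magnitude under these assumed sizes.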


2021 ◽  
Vol 10 (6) ◽  
pp. 3802-3805
Author(s):  
Akshata Raut

Precise face detection analysis is a crucial element of social interaction review. Producing facial features that correspond to thoughts and feelings can arouse, or heighten sensitivity to, emotion in the viewer. This study uses virtual reality (VR) to evaluate facial expression with the Azure Kinect in adults with a Class I molar relationship. The study will be conducted in the Human Research Lab on participants with a Class I molar relationship using the Azure Kinect. A total of 196 participants above 18 years of age will be selected per the eligibility criteria. This research will demonstrate the different tools and applications available by testing their precision and relevance in determining facial expressions.


10.2196/30439 ◽  
2021 ◽  
Vol 8 (12) ◽  
pp. e30439
Author(s):  
Urška Smrke ◽  
Izidor Mlakar ◽  
Simon Lin ◽  
Bojan Musil ◽  
Nejc Plohl

Background Cancer survivors often experience disorders from the depressive spectrum that remain largely unrecognized and overlooked. Even though screening for depression is recognized as essential, several barriers prevent its successful implementation. It is possible that better screening options can be developed. New possibilities have been opening up with advances in artificial intelligence and increasing knowledge on the connection of observable cues and psychological states. Objective The aim of this scoping meta-review was to identify observable features of depression that can be intercepted using artificial intelligence in order to provide a stepping stone toward better recognition of depression among cancer survivors. Methods We followed a methodological framework for scoping reviews. We searched SCOPUS and Web of Science for relevant papers on the topic, and data were extracted from the papers that met inclusion criteria. We used thematic analysis within 3 predefined categories of depression cues (ie, language, speech, and facial expression cues) to analyze the papers. Results The search yielded 1023 papers, of which 9 met the inclusion criteria. Analysis of their findings resulted in several well-supported cues of depression in language, speech, and facial expression domains, which provides a comprehensive list of observable features that are potentially suited to be intercepted by artificial intelligence for early detection of depression. Conclusions This review provides a synthesis of behavioral features of depression while translating this knowledge into the context of artificial intelligence–supported screening for depression in cancer survivors.


Author(s):  
Vishal P. Tank ◽  
S. K. Hadia

In the last couple of years, emotion recognition has proven its significance in the areas of artificial intelligence and man-machine communication. Emotion recognition can be done using speech or images (facial expressions); this paper deals with speech emotion recognition (SER) only. An emotional speech database is essential for emotion recognition. In this paper we propose an emotional database developed in Gujarati, one of the official languages of India. The proposed speech corpus covers six emotional states: sadness, surprise, anger, disgust, fear, and happiness. To observe the effect of different emotions, analysis of the proposed Gujarati speech database is carried out using established speech parameters such as pitch, energy, and MFCC, using MATLAB software.
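Of the speech parameters named above, short-time energy and autocorrelation-based pitch are simple enough to sketch. The paper's analysis is done in MATLAB; this Python sketch on a synthetic 200 Hz tone is only illustrative (the sample rate and frame length are assumptions, and MFCC extraction is omitted):

```python
import math

SR = 16000  # sample rate in Hz (assumed for this toy example)

# Synthetic "voiced frame": a pure 200 Hz tone, 2048 samples long.
frame = [math.sin(2 * math.pi * 200 * n / SR) for n in range(2048)]

# Short-time energy: mean squared amplitude of the frame.
energy = sum(s * s for s in frame) / len(frame)

# Pitch via autocorrelation: find the lag at which the frame best
# matches a shifted copy of itself, within a 50-400 Hz speech range.
def autocorr(x, lag):
    return sum(x[n] * x[n + lag] for n in range(len(x) - lag))

best_lag = max(range(SR // 400, SR // 50), key=lambda lag: autocorr(frame, lag))
pitch = SR / best_lag
print(f"energy={energy:.3f}  pitch={pitch:.1f} Hz")  # pitch recovers ~200 Hz
```

The same two measurements, framed over a real utterance, are what lets emotional states separate: anger and happiness tend toward higher energy and pitch variation than sadness.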

