facial landmark localization
Recently Published Documents

TOTAL DOCUMENTS: 93 (FIVE YEARS: 30)
H-INDEX: 12 (FIVE YEARS: 2)

2021
Author(s): Diego L. Guarin, Andrea Bandini, Aidan Dempster, Henry Wang, Siavash Rezaei, ...

Background: Automatic facial landmark localization is an essential component of many computer vision applications, including video-based detection of neurological diseases. Machine learning models for facial landmark localization are typically trained on faces of healthy individuals, and we found that their performance is inferior when applied to faces of people with neurological diseases. Fine-tuning pre-trained models with representative images significantly improves performance on clinical populations. However, questions remain about the characteristics of the database used for fine-tuning and about the clinical impact of the improved model. Methods: We employed the Toronto NeuroFace dataset, which consists of videos of Healthy Controls (HC), individuals post-stroke, and individuals with Amyotrophic Lateral Sclerosis performing speech and non-speech tasks, with thousands of manually annotated frames, to fine-tune a well-known deep learning-based facial landmark localization model. The pre-trained and fine-tuned models were used to extract landmark-based facial features from videos, and these features were used to discriminate the clinical groups from HC. Results: Fine-tuning a facial landmark localization model with a diverse database that includes HC and individuals with neurological disorders significantly improved performance for all groups. Our results also showed that fine-tuning the model with representative data greatly improved the ability of the subsequent classifier to distinguish clinical groups from HC in videos. Conclusions: Using a diverse database for model fine-tuning might result in better model performance for both HC and clinical groups. We demonstrated that fine-tuning a landmark localization model with representative data improves the detection of neurological diseases.
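As a rough illustration of the fine-tuning step described above, the sketch below adapts a generic pre-trained backbone to landmark regression in PyTorch. It is a minimal sketch, not the authors' pipeline: the ResNet-18 backbone, the 68-point landmark scheme, the hyperparameters, and the random tensors standing in for annotated Toronto NeuroFace frames are all assumptions made for illustration.

```python
# Minimal fine-tuning sketch (NOT the authors' pipeline): a torchvision
# ResNet-18 stands in for "a well-known deep learning-based facial landmark
# localization model", and random tensors stand in for annotated video frames.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset
from torchvision.models import resnet18

NUM_LANDMARKS = 68  # assumption: a 68-point annotation scheme

# Pre-trained backbone with a fresh landmark-regression head (x, y per point).
model = resnet18(weights="IMAGENET1K_V1")
model.fc = nn.Linear(model.fc.in_features, NUM_LANDMARKS * 2)

# Stand-in for a diverse fine-tuning set (healthy controls plus clinical groups).
images = torch.randn(64, 3, 224, 224)
targets = torch.rand(64, NUM_LANDMARKS * 2)  # normalized (x, y) coordinates
loader = DataLoader(TensorDataset(images, targets), batch_size=8, shuffle=True)

# Small learning rate so the pre-trained weights are only gently adjusted.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.MSELoss()

model.train()
for epoch in range(2):  # illustrative number of epochs
    for x, y in loader:
        optimizer.zero_grad()
        loss = criterion(model(x), y)
        loss.backward()
        optimizer.step()
```

In practice the fine-tuning set would mix healthy controls and clinical groups, which is the diversity the study identifies as the key ingredient.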


2021
Vol 2021, pp. 1-11
Author(s): Chentao Zhang, Habte Tadesse Likassa, Peidong Liang, Jielong Guo

In this paper, we develop a new robust part-based model for facial landmark localization and detection via affine transformation. In contrast to existing works, the new algorithm combines affine transformations with robust regression to tackle the effects of outliers, heavy sparse noise, occlusions, and illumination variations. As such, distorted or misaligned objects can be rectified by the affine transformations, and the patterns of occlusions and outliers can be explicitly separated from the true underlying objects in big data. Moreover, the search for the optimal parameters and affine transformations is cast as a constrained optimization problem. To reduce the computational cost, a new set of equations is derived to update the parameters and the affine transformations iteratively in a round-robin manner. Our parameter updates compare favorably with the state of the art because we employ a fast alternating direction method of multipliers (ADMM) algorithm that solves for the parameters separately. Simulations show that the proposed method outperforms state-of-the-art works on facial landmark localization and detection on the COFW, HELEN, and LFPW datasets.
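For readers unfamiliar with the optimization machinery, the sketch below shows a simplified ADMM-style update for a robust "low-rank plus sparse" decomposition, the kind of separation of clean structure from occlusions and outliers that this family of models builds on. It is a hedged sketch only: the affine transformation updates and the part-based structure of the actual model are omitted, and the function names and default parameters are generic choices, not the paper's.

```python
# Simplified ADMM-style sketch of a robust low-rank + sparse decomposition.
# The paper's affine-transformation update is omitted; parameters are generic.
import numpy as np

def soft_threshold(X, tau):
    """Elementwise shrinkage used for the sparse (outlier/occlusion) term."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def svd_threshold(X, tau):
    """Singular-value shrinkage used for the low-rank (clean data) term."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def robust_decompose(D, lam=None, mu=None, n_iter=200, tol=1e-7):
    """Split D into low-rank L and sparse S by alternating ADMM updates."""
    m, n = D.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
    mu = mu if mu is not None else 1.25 / np.linalg.norm(D, 2)
    S = np.zeros_like(D)
    Y = np.zeros_like(D)  # dual variable
    for _ in range(n_iter):
        L = svd_threshold(D - S + Y / mu, 1.0 / mu)   # low-rank update
        S = soft_threshold(D - L + Y / mu, lam / mu)  # sparse update
        Y = Y + mu * (D - L - S)                      # dual ascent
        if np.linalg.norm(D - L - S) <= tol * np.linalg.norm(D):
            break
    return L, S
```

Here D could be, for example, a matrix whose columns are vectorized face patches; L then collects the clean underlying appearance and S the sparse occlusions and outliers.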


2021
pp. 108325
Author(s): Congcong Zhu, Xiaoqiang Li, Jide Li, Songmin Dai, Weiqin Tong

2021
Vol 115, pp. 107893
Author(s): Boyu Chen, Wenlong Guan, Peixia Li, Naoki Ikeda, Kosuke Hirasawa, ...

Author(s): Dinesh Kumar P, Dr. B. Rosiline Jeetha

Facial expression, as one of the most significant means for human beings to convey their emotions and intentions during communication, plays an important role in human interfaces. In recent years, facial expression recognition has been under especially intensive investigation, due to its vital applications in various fields including virtual reality, intelligent tutoring systems, health care, and data-driven animation. The main target of facial expression recognition is to identify the human emotional state (e.g., anger, contempt, disgust, fear, happiness, sadness, and surprise) from a given facial image. This paper deals with facial expression detection and recognition through the Viola-Jones algorithm and an HCNN with LSTM. It improves the recognition performance and at the same time greatly reduces the computational cost. For feature matching, the authors propose a hybrid Scale-Invariant Feature Transform (SIFT) with double δ-LBP (Dδ-LBP), which uses a fixed facial landmark localization approach and SIFT's orientation assignment to obtain features that are illumination- and pose-independent. For face detection, the Viola-Jones algorithm is used, which also recognizes occluded faces. After compression, feature selection is performed with the whale optimization algorithm, which further reduces the feature vector fed into the hybrid Convolutional Neural Network (HCNN) and Long Short-Term Memory (LSTM) model that identifies the facial expression efficiently. Experimental results confirm that the HCNN-LSTM model outperforms traditional deep-learning and machine-learning techniques with respect to precision, recall, F-measure, and accuracy on the CK+ database.
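As an illustration of the detection-plus-classification pipeline, the sketch below wires OpenCV's Haar cascade (a Viola-Jones detector) to a small CNN+LSTM classifier in PyTorch. It is a minimal sketch under stated assumptions: the SIFT/Dδ-LBP features and the whale optimization step are omitted, and the layer sizes, 48x48 crops, and seven expression classes are illustrative choices rather than the paper's configuration.

```python
# Hedged pipeline sketch: Viola-Jones face detection (OpenCV Haar cascade)
# feeding a small CNN+LSTM expression classifier. Sizes are illustrative.
import cv2
import torch
import torch.nn as nn

# Viola-Jones face detector shipped with OpenCV (expects a grayscale frame).
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_faces(gray_frame):
    """Return bounding boxes (x, y, w, h) of detected faces."""
    return cascade.detectMultiScale(gray_frame, scaleFactor=1.1, minNeighbors=5)

class CnnLstmClassifier(nn.Module):
    """Per-frame CNN features pooled over time by an LSTM, then classified."""
    def __init__(self, n_classes=7):  # assumption: 7 basic expression classes
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(4),
            nn.Flatten())                      # -> 32 * 4 * 4 = 512 features
        self.lstm = nn.LSTM(512, 128, batch_first=True)
        self.head = nn.Linear(128, n_classes)

    def forward(self, clips):                  # clips: (batch, time, 1, H, W)
        b, t = clips.shape[:2]
        feats = self.cnn(clips.flatten(0, 1)).view(b, t, -1)
        _, (h, _) = self.lstm(feats)
        return self.head(h[-1])                # logits over expression classes

# Smoke test with two random 8-frame clips of 48x48 face crops.
logits = CnnLstmClassifier()(torch.randn(2, 8, 1, 48, 48))
```

A real pipeline would crop and normalize the detected face regions from consecutive video frames and feed those crops as the time dimension of the classifier.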


Author(s): Romain Belmonte, Benjamin Allaert, Pierre Tirilly, Ioan Marius Bilasco, Chaabane Djeraba, ...
