facial feature extraction
Recently Published Documents


TOTAL DOCUMENTS: 221 (FIVE YEARS: 17)
H-INDEX: 19 (FIVE YEARS: 1)

Author(s):  
Saksham Gosain

Abstract: This paper presents a study of concealed weapon detection using image processing and machine learning. In an attempt to replace the traditional X-ray method of detecting hidden weapons with an automated and potentially less error-prone procedure, alternative techniques such as neural networks and image fusion have been studied and explored to identify the best possible solution. We propose a method that fuses a thermal/IR image with a conventional RGB or HSV image in order to reduce image noise while retaining all the critical features of the image, achieving both weapon detection and facial feature extraction. Keywords: image fusion; concealed weapon; feature extraction; neural network; thermal imaging
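As a rough illustration of the fusion step described above, the sketch below blends a pre-registered thermal/IR image into the V channel of the visible image's HSV representation and applies light denoising. The file names, blend weight, and denoising parameters are illustrative assumptions, not the paper's settings.

```python
# Minimal sketch of thermal/visible fusion, assuming the two images are
# already registered (pixel-aligned); file names are placeholders.
import cv2
import numpy as np

def fuse_thermal_rgb(rgb_path: str, thermal_path: str, alpha: float = 0.6):
    """Blend a visible image with a single-channel thermal image.

    alpha weights the visible brightness; (1 - alpha) weights the thermal map.
    """
    rgb = cv2.imread(rgb_path)                                  # BGR, 3 channels
    thermal = cv2.imread(thermal_path, cv2.IMREAD_GRAYSCALE)
    thermal = cv2.resize(thermal, (rgb.shape[1], rgb.shape[0]))

    # Fuse in HSV so only the brightness channel changes, preserving colour cues
    hsv = cv2.cvtColor(rgb, cv2.COLOR_BGR2HSV)
    h, s, v = cv2.split(hsv)
    v_fused = cv2.addWeighted(v, alpha, thermal, 1.0 - alpha, 0)
    fused = cv2.cvtColor(cv2.merge((h, s, v_fused)), cv2.COLOR_HSV2BGR)

    # Light denoising so downstream detectors see fewer sensor artefacts
    return cv2.fastNlMeansDenoisingColored(fused, None, 5, 5, 7, 21)

if __name__ == "__main__":
    out = fuse_thermal_rgb("subject_rgb.png", "subject_thermal.png")
    cv2.imwrite("subject_fused.png", out)
```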


2021
Vol 39 (1B)
pp. 117-128
Author(s):
Shaymaa M. Hamandi
Abdul Monem S. Rahma
Rehab F. Hassan

Robust facial feature extraction is an effective and important process for face recognition and identification systems. The facial features should be invariant to scaling, translation, illumination, and rotation, and several feature extraction techniques may be combined to increase recognition accuracy. This paper examines three moment-invariant techniques and determines how each is influenced by variations that may occur in the shape of the face, both globally (the whole face) and locally (the shape of face parts: right eye, left eye, mouth, and nose). The proposed technique is tested on images from the CARL database. The proposed method collects the robust features of each moment technique and trains them with a feed-forward neural network. The result improves on the individual techniques and achieves an accuracy of 99.29%.
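The sketch below illustrates one way moment-invariant features could be gathered globally and locally, using OpenCV's Hu moments, which are invariant to translation, scale, and rotation. The part boxes, log scaling, and concatenation are illustrative assumptions rather than the paper's exact pipeline; the resulting vector would be fed to a feed-forward classifier.

```python
# Illustrative sketch (not the paper's exact method): Hu moment invariants
# computed for the whole face and for cropped part regions, concatenated
# into a single feature vector.
import cv2
import numpy as np

def hu_features(gray_region: np.ndarray) -> np.ndarray:
    """Return the 7 log-scaled Hu moment invariants of a grayscale region."""
    m = cv2.moments(gray_region)
    hu = cv2.HuMoments(m).flatten()
    # Log scaling keeps the invariants in a comparable numeric range
    return -np.sign(hu) * np.log10(np.abs(hu) + 1e-12)

def face_feature_vector(gray_face: np.ndarray, part_boxes: dict) -> np.ndarray:
    """Concatenate global (whole-face) and local (per-part) invariants.

    part_boxes maps a part name (e.g. 'left_eye') to an (x, y, w, h) box;
    the boxes would come from a separate landmark or part detector.
    """
    feats = [hu_features(gray_face)]
    for (x, y, w, h) in part_boxes.values():
        feats.append(hu_features(gray_face[y:y + h, x:x + w]))
    return np.concatenate(feats)   # fed to a feed-forward neural network
```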


2020
Vol 1664
pp. 012050
Author(s):
Sahar Adnan
Fatima Ali
Ashwan A. Abdulmunem

2020
Vol 79 (41-42)
pp. 31027-31047
Author(s):
Raj Silwal
Abeer Alsadoon
P. W. C. Prasad
Omar Hisham Alsadoon
Ammar Al-Qaraghuli

Methods for detecting facial characteristics have developed greatly in recent times; however, they still struggle under poor lighting conditions, extreme pose, or occlusions. A well-established family of strategies for facial feature extraction is the Constrained Local Model (CLM). Recently, cascaded regression-based methods have pushed CLMs out of favor, largely because local CLM detectors fail to model the highly complex facial appearance, which is affected by expression, illumination, facial hair, and make-up. This paper focuses on the performance of the patch model that the CLM relies on to detect facial features. Here the patch model is built using Support Vector Regression (SVR) and the Constrained Local Neural Field (CLNF). We show that the CLNF model exceeds SVR by a large margin on the LFPW database for identifying facial landmarks.
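A minimal sketch of an SVR patch expert of the kind a CLM patch model uses is shown below: the regressor learns an alignment score that peaks when an image patch is centred on its landmark, and sliding it over a search region yields the response map the shape model then constrains. The patch size, soft targets, and training-data layout are assumptions for illustration, not the paper's configuration.

```python
# Hedged sketch of one SVR "patch expert" inside a CLM-style fitter.
import numpy as np
from sklearn.svm import SVR

PATCH = 11  # patch side length in pixels (assumption)

def train_patch_expert(patches: np.ndarray, distances: np.ndarray) -> SVR:
    """patches: (N, PATCH*PATCH) normalised grayscale patches around one landmark.
    distances: (N,) distance of each patch centre from the true landmark."""
    targets = np.exp(-0.5 * (distances / 2.0) ** 2)   # soft "correctness" label
    expert = SVR(kernel="linear", C=1.0, epsilon=0.05)
    expert.fit(patches, targets)
    return expert

def response_map(expert: SVR, search_region: np.ndarray) -> np.ndarray:
    """Slide the expert over a search region to build the CLM response map."""
    h, w = search_region.shape
    resp = np.zeros((h - PATCH + 1, w - PATCH + 1))
    for y in range(resp.shape[0]):
        for x in range(resp.shape[1]):
            patch = search_region[y:y + PATCH, x:x + PATCH].ravel()[None, :]
            resp[y, x] = expert.predict(patch)[0]
    return resp  # the shape model constrains each landmark to peaks in this map
```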


2020
Vol 10 (1)
pp. 259-269
Author(s):
Akansha Singh
Surbhi Dewan

Abstract: Assistive technology has proven to be one of the most significant inventions to aid people with autism in improving the quality of their lives. In this study, a real-time emotion recognition system for autistic children has been developed. Emotion recognition is implemented in three stages: face identification, facial feature extraction, and feature classification. The objective is to build a system that includes all three stages of emotion recognition and executes expeditiously in real time; thus, the Affectiva SDK is used in the application. The proposed system detects up to 7 facial emotions: anger, disgust, fear, joy, sadness, contempt, and surprise. The purpose of this study is to teach emotions to individuals with autism, as they lack the ability to respond appropriately to others' emotions. The proposed application was tested with a group of typical children aged 6-14 years, and positive outcomes were achieved.
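As a language-agnostic illustration of the three stages (the study itself relies on the Affectiva SDK, which is not shown here), the sketch below runs face identification with an OpenCV Haar cascade on webcam frames and hands a normalised face crop to a placeholder seven-class emotion classifier. The classifier, crop size, and cascade parameters are assumptions, not the study's implementation.

```python
# Illustrative three-stage pipeline: face identification, feature extraction,
# feature classification; `classify_emotion` is a placeholder (assumption).
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def classify_emotion(face_gray):
    """Placeholder for a trained 7-class emotion model (assumption)."""
    return "neutral"

cap = cv2.VideoCapture(0)               # webcam for real-time operation
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Stage 1: face identification
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
        # Stage 2: feature extraction (here simply a normalised face crop)
        face = cv2.resize(gray[y:y + h, x:x + w], (48, 48))
        # Stage 3: feature classification
        label = classify_emotion(face)
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.putText(frame, label, (x, y - 8),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 255, 0), 2)
    cv2.imshow("emotion", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```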


2020
Vol 1500
pp. 012011
Author(s):
Ahmad Zarkasi
Siti Nurmaini
Deris Setiawan
Ahmad Kuswandi
Sri Desy Siswanti

Author(s):
Garima Sharma
Latika Singh
Sumanlata Gautam

Abstract: The study presents a fuzzy-based approach to extract facial features for human emotion classification. The proposed system determines the dimensional attributes (l-attribute and w-attribute) of the mouth region, extracted from the facial image using the Viola-Jones algorithm. The feature set was generated by applying the proposed approach to a total of 136 images from the JAFFE, NimStim, and MUG datasets for the happy and neutral emotion classes. Classification models for these two classes were constructed using linear discriminant analysis, random forest, support vector machine, deep learning, and naïve Bayes classification techniques. The results show a good accuracy of 70% for the grayscale JAFFE and NimStim databases and 95% for the coloured MUG database. The study can be further extended by considering multiple emotion classes from multiple datasets under different illumination conditions.
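A hedged sketch of the mouth-region extraction step appears below: a Viola-Jones cascade locates the face, the lower half of the face is searched for the mouth, and the mouth box's height and width are reported as the l- and w-attributes. Since OpenCV ships no dedicated mouth cascade, the smile cascade stands in here as an assumption, as do the cascade parameters.

```python
# Hedged sketch of extracting the mouth's dimensional attributes with
# Viola-Jones cascades; the smile cascade approximates a mouth detector.
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
mouth_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_smile.xml")

def mouth_attributes(image_path: str):
    """Return (l_attribute, w_attribute) = mouth height and width in pixels."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    for (fx, fy, fw, fh) in face_cascade.detectMultiScale(gray, 1.3, 5):
        # Search only the lower half of the face for the mouth region
        lower = gray[fy + fh // 2: fy + fh, fx: fx + fw]
        mouths = mouth_cascade.detectMultiScale(lower, 1.7, 11)
        if len(mouths):
            x, y, w, h = max(mouths, key=lambda m: m[2] * m[3])
            return h, w    # l-attribute (vertical), w-attribute (horizontal)
    return None            # no face or mouth found
```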

