A Robust Geometric Skin Colour Face Detection Method under Unconstrained Environment of Smartphone Database

2019, Vol. 892, pp. 31-37
Author(s):  
Noor Amjed ◽  
Fatimah Khalid ◽  
Rahmita Wirza O.K. Rahmat ◽  
Hizmawati Bint Madzin

Face detection is the primary task in building a vision-based human-computer interaction system and in special applications such as face recognition, face tracking, face identification, expression recognition and content-based image retrieval. A robust face detection system must be able to detect faces irrespective of illumination, shadows, cluttered backgrounds, orientation and facial expression. Many approaches to face detection have been proposed in the literature; however, face detection in outdoor images with uncontrolled illumination and in images with complex backgrounds remains a serious problem. Hence, in this paper we propose a Geometric Skin Colour (GSC) method for detecting faces accurately in real-world images, captured both indoors and outdoors, under a variety of illuminations and against cluttered backgrounds. The method was evaluated on two face video smartphone databases, and the results show that it outperforms existing approaches under the unconstrained environment of these databases.
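A minimal sketch of the skin-colour stage that such a pipeline typically begins with, written in Python with OpenCV. The abstract does not give the GSC method's actual thresholds or geometric rules, so the YCrCb bounds and the morphological cleanup below are common literature choices, not the authors' parameters.

```python
# Illustrative skin-colour segmentation; thresholds are assumed, not from the GSC paper.
import cv2
import numpy as np

def skin_mask(bgr_image: np.ndarray) -> np.ndarray:
    """Return a binary mask of likely skin pixels (hypothetical thresholds)."""
    ycrcb = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2YCrCb)
    # Widely used Cr/Cb bounds for skin; illumination is partly factored out by
    # ignoring the luma (Y) channel.
    lower = np.array([0, 133, 77], dtype=np.uint8)
    upper = np.array([255, 173, 127], dtype=np.uint8)
    mask = cv2.inRange(ycrcb, lower, upper)
    # Morphological cleanup to suppress noise from cluttered backgrounds.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
    return mask

# Usage (assumed file name):
# frame = cv2.imread("frame.jpg")
# candidates = skin_mask(frame)
```

Connected components of such a mask would then be passed to the geometric verification stage the paper describes.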

2013, Vol. 1 (4), pp. 1-15
Author(s):  
Hiroki Nomiya ◽  
Atsushi Morikuni ◽  
Teruhisa Hochin

An emotional scene detection method is proposed in order to retrieve impressive scenes from lifelog videos. The method is based on facial expression recognition, on the assumption that a wide variety of facial expressions can be observed in impressive scenes. Conventional facial expression recognition techniques, which focus on discriminating a few typical expressions, are inadequate for lifelog video retrieval because of this diversity. The authors therefore propose a more flexible and efficient emotional scene detection method that uses unsupervised facial expression recognition based on cluster ensembles. The approach does not need predefined facial expressions and can detect emotional scenes containing a wide variety of expressions. The detection performance of the proposed method is evaluated through emotional scene detection experiments.
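A hedged sketch of a cluster-ensemble consensus step of the kind the abstract describes, assuming facial-feature vectors have already been extracted per frame. The co-association construction and the hierarchical consensus clustering below are standard choices, not necessarily the authors' exact formulation.

```python
# Illustrative cluster ensemble: combine many k-means runs via a co-association matrix.
import numpy as np
from sklearn.cluster import KMeans
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def cluster_ensemble(features: np.ndarray, n_runs: int = 20, final_k: int = 5) -> np.ndarray:
    """Combine several k-means runs into one consensus labelling (unsupervised)."""
    n = len(features)
    co_assoc = np.zeros((n, n))
    rng = np.random.default_rng(0)
    for _ in range(n_runs):
        k = int(rng.integers(3, 10))              # vary k to diversify the ensemble
        labels = KMeans(n_clusters=k, n_init=5,
                        random_state=int(rng.integers(1_000_000))).fit_predict(features)
        co_assoc += (labels[:, None] == labels[None, :])
    co_assoc /= n_runs                            # fraction of runs agreeing on each pair
    distance = 1.0 - co_assoc                     # consensus distance between samples
    np.fill_diagonal(distance, 0.0)
    z = linkage(squareform(distance, checks=False), method="average")
    return fcluster(z, t=final_k, criterion="maxclust")

# labels = cluster_ensemble(facial_feature_vectors)  # one cluster label per video frame
```

Because no expression classes are predefined, frames whose consensus clusters differ from a "neutral" cluster can then be flagged as candidate emotional scenes.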


2011, pp. 5-44
Author(s):  
Daijin Kim ◽  
Jaewon Sung

Face detection is the most fundamental step in image-based automated face analysis, including face tracking, face recognition, face authentication, facial expression recognition and facial gesture recognition. When a novel face image is given, we must know where the face is located and how large it is, so that we can limit our attention to the face patch in the image and normalize its scale and orientation. Usually, face detection results are not stable: the detected face rectangle can be larger or smaller than the real face in the image. Therefore, many researchers use eye detectors to obtain stably normalized face images. Because the eyes form salient patterns in the human face, they can be located reliably and used for face image normalization. Eye detection becomes even more important when model-based face image analysis approaches are applied.
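The normalization step described above can be illustrated with a short sketch: given two detected eye centres, the face patch is rotated and scaled so the eyes land on fixed canonical positions. The canonical eye spacing and placement below are assumptions for illustration, not values from the chapter.

```python
# Illustrative eye-based face normalization (assumed canonical layout).
import cv2
import numpy as np

def normalize_face(image: np.ndarray,
                   left_eye: tuple[float, float],
                   right_eye: tuple[float, float],
                   out_size: int = 128) -> np.ndarray:
    """Rotate and scale so the eyes sit at fixed positions in an out_size x out_size patch."""
    (lx, ly), (rx, ry) = left_eye, right_eye
    angle = np.degrees(np.arctan2(ry - ly, rx - lx))   # in-plane rotation of the eye line
    eye_dist = np.hypot(rx - lx, ry - ly)
    desired_dist = 0.5 * out_size                      # assumed canonical eye spacing
    scale = desired_dist / eye_dist
    eyes_centre = ((lx + rx) / 2.0, (ly + ry) / 2.0)
    M = cv2.getRotationMatrix2D(eyes_centre, angle, scale)
    # Translate so the eye midpoint lands at a fixed canonical point in the output.
    M[0, 2] += out_size * 0.5 - eyes_centre[0]
    M[1, 2] += out_size * 0.35 - eyes_centre[1]
    return cv2.warpAffine(image, M, (out_size, out_size))
```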


2011, Vol. 268-270, pp. 471-475
Author(s):  
Sungmo Jung ◽  
Seoksoo Kim

Many 3D films use facial expression recognition technologies. To use the existing technologies, a large number of markers must be attached to a face, a camera is fixed in front of the face, and the movements of the markers are calculated. However, the markers capture only the changes in the regions where they are attached, which makes realistic recognition of facial expressions difficult. Therefore, this study extracts a preliminary eye region from a 320×240 image by defining specific location values for the eye, and then selects the final eye region from the preliminary region. The study suggests an improved method of detecting an eye region that reduces errors arising from noise.
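A minimal sketch of the two-stage idea, assuming the frame is a roughly centred 320×240 face shot: a preliminary eye band is cropped using fixed geometric location values, and the final eye region is refined inside it. The ratios and the Otsu-based refinement are illustrative assumptions, since the study's exact location values are not given here.

```python
# Illustrative two-stage eye-region detection in a 320x240 frame (assumed ratios).
import cv2
import numpy as np

FRAME_W, FRAME_H = 320, 240

def preliminary_eye_region(frame: np.ndarray) -> np.ndarray:
    """Crop a fixed band where eyes usually lie in a roughly centred 320x240 face shot."""
    assert frame.shape[1] == FRAME_W and frame.shape[0] == FRAME_H
    y0, y1 = int(0.30 * FRAME_H), int(0.55 * FRAME_H)   # assumed vertical band
    x0, x1 = int(0.15 * FRAME_W), int(0.85 * FRAME_W)   # assumed horizontal band
    return frame[y0:y1, x0:x1]

def refine_eye_region(band: np.ndarray) -> tuple[int, int, int, int]:
    """Pick the largest dark blob inside the band as the final eye region (noise-reducing step)."""
    gray = cv2.cvtColor(band, cv2.COLOR_BGR2GRAY)
    gray = cv2.GaussianBlur(gray, (5, 5), 0)            # suppress pixel noise
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return (0, 0, band.shape[1], band.shape[0])     # fall back to the whole band
    largest = max(contours, key=cv2.contourArea)
    return cv2.boundingRect(largest)                    # (x, y, w, h) inside the band
```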


2014, Vol. 490-491, pp. 1259-1266
Author(s):  
Muralindran Mariappan ◽  
Manimehala Nadarajan ◽  
Rosalyn R. Porle ◽  
Vigneswaran Ramu ◽  
Brendan Khoo Teng Thiam

Biometric identification has advanced vastly over the past decades. It has become a flourishing research area as biometric technology is used extensively in fields such as robotics, surveillance and security. Face-based technology is preferred for its reliability and accuracy. By and large, face detection is the first processing stage performed before moving on to face identification or tracking. The main challenge in face detection is the sensitivity of detection to pose, illumination, background and orientation. Thus, it is crucial to design a face detection system that can accommodate these problems. In this paper, a face detection algorithm is developed in LabVIEW that is flexible enough to adapt to changes in background and in face angle. A skin color detection method blended with edge and circle detection is used to improve the accuracy of the detected faces. The overall system designed in LabVIEW was tested in real time and achieves an accuracy of about 97%.
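The system above is implemented in LabVIEW; the sketch below is a Python/OpenCV analogue of the same idea, blending skin colour with edge and circle detection. All thresholds are assumptions rather than the authors' parameters.

```python
# Illustrative analogue of the skin + edge + circle pipeline (assumed thresholds).
import cv2
import numpy as np

def detect_face_candidates(bgr: np.ndarray) -> list[tuple[int, int, int]]:
    """Return (x, y, r) circles detected inside the skin-coloured area."""
    ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)
    skin = cv2.inRange(ycrcb, np.array([0, 133, 77], np.uint8),
                              np.array([255, 173, 127], np.uint8))
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    gray = cv2.bitwise_and(gray, gray, mask=skin)   # restrict the search to skin pixels
    gray = cv2.medianBlur(gray, 5)
    # HoughCircles runs Canny internally (param1 is the Canny high threshold),
    # so this step blends edge and circle detection on the skin-restricted image.
    circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1.5, minDist=60,
                               param1=120, param2=40, minRadius=30, maxRadius=150)
    if circles is None:
        return []
    return [tuple(int(v) for v in c) for c in circles[0]]
```

The circle check helps reject skin-coloured regions (arms, background wood tones) that do not have a roughly circular face outline.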


Author(s):  
Tudor Barbu

We propose a robust face detection approach for digital color images. Our automatic detection method is based on image skin regions, so a skin-based segmentation of RGB images is performed first. We then decide whether each skin region represents a human face, using a set of candidate criteria, an edge detection process, a correlation-based technique and a threshold-based method. A high face detection rate is obtained with the proposed method.
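A hedged sketch of the candidate-criteria stage: after skin segmentation, each connected skin region is accepted or rejected by simple geometric tests before the edge-detection and correlation checks. The thresholds are illustrative assumptions, not the author's values.

```python
# Illustrative geometric filtering of skin regions into face candidates (assumed thresholds).
import cv2
import numpy as np

def face_candidate_regions(skin_mask: np.ndarray) -> list[tuple[int, int, int, int]]:
    """Return bounding boxes of skin regions that satisfy rough face-shape criteria."""
    contours, _ = cv2.findContours(skin_mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    candidates = []
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        area = cv2.contourArea(c)
        if area < 400:                      # too small to be a face (assumed minimum)
            continue
        aspect = h / float(w)
        if not 0.8 <= aspect <= 2.0:        # faces are roughly upright ellipses
            continue
        fill = area / float(w * h)
        if fill < 0.4:                      # very sparse regions are background clutter
            continue
        candidates.append((x, y, w, h))
    return candidates
```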


2008, Vol. 2008, pp. 1-7
Author(s):  
Ce Zhan ◽  
Wanqing Li ◽  
Philip Ogunbona ◽  
Farzad Safaei

Multiplayer online games (MOGs) have become increasingly popular because of the opportunity they provide for collaboration, communication, and interaction. However, compared with ordinary human communication, MOGs still have several limitations, especially in communication using facial expressions. Although detailed facial animation has already been achieved in a number of MOGs, players have to use text commands to control the expressions of their avatars. In this paper, we propose an automatic expression recognition system that can be integrated into an MOG to control the facial expressions of avatars. To meet the specific requirements of such a system, a number of algorithms are studied, improved, and extended. In particular, the Viola-Jones face detection method is extended to detect small-scale key facial components, and fixed facial landmarks are used to reduce the computational load with little degradation in recognition accuracy.
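A minimal sketch of the Viola-Jones stage using OpenCV's stock Haar cascades, with component detection restricted to the face region to keep the computational load down. The paper's extension to small-scale components and its fixed landmark scheme are not reproduced here.

```python
# Illustrative cascade-based face and eye detection with a restricted component search.
import cv2

face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")

def detect_face_and_eyes(gray):
    """Return (face_box, [eye_boxes]) in image coordinates, or (None, []) if no face."""
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None, []
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])   # keep the largest detected face
    roi = gray[y:y + h, x:x + w]                          # search components inside the face only
    eyes = eye_cascade.detectMultiScale(roi, scaleFactor=1.05, minNeighbors=4)
    return (x, y, w, h), [(x + ex, y + ey, ew, eh) for ex, ey, ew, eh in eyes]
```

Restricting the eye search to the face box is what keeps the per-frame cost low enough for an online game setting.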


2019, Vol. 8 (2), pp. 2728-2740

Facial expressions are facial changes that reflect a person's internal emotional states, intentions, or social communication, and they are analysed by computer systems that attempt to automatically examine and recognize facial motions and facial feature changes from visual data. Facial expression recognition has at times been confused with emotion analysis in the computer vision domain, which leads to inadequate support for the stages of the recognition process (face detection, feature extraction and expression recognition) and, in turn, to problems with occlusions, illumination, pose variation, recognition accuracy, dimensionality reduction, and so on. In addition, proper computation and prediction of accurate outcomes also improves the performance of facial expression recognition. Hence, a detailed study is needed of the strategies and systems used to address these problems at the face detection, feature extraction and expression recognition stages. This paper therefore surveys various current strategies and then critically examines the work of different researchers in the area of facial expression recognition.


2019, Vol. 16 (9), pp. 3778-3782
Author(s):  
Mamta Santosh ◽  
Avinash Sharma

Facial expression recognition has become a primary research area due to its importance in human-computer interaction. Facial expressions convey a major part of information, so the field has vast applications in various domains. Many techniques have been developed in the literature, but current expression recognition methods still need to be made more efficient. This paper presents a framework for face detection and for recognizing the six universal facial expressions (happy, anger, disgust, fear, surprise and sad) along with the neutral face. The Viola-Jones method and a face landmark detection method are used for face detection. The histogram of oriented gradients (HOG) is used for feature extraction owing to its superiority over other descriptors. Principal Component Analysis is used to reduce the dimensionality of the features while preserving the maximum variation, and a Canberra distance classifier assigns the expressions to different emotions. The proposed method is applied to the Japanese Female Facial Expression database, and the evaluation shows that it outperforms many state-of-the-art techniques.
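A hedged sketch of the described pipeline in Python: HOG features, PCA for dimensionality reduction, and a nearest-neighbour decision under the Canberra distance. The HOG parameters, the number of retained components, and the nearest-neighbour rule are assumptions; the abstract does not give the exact settings.

```python
# Illustrative HOG + PCA + Canberra-distance classification (assumed parameters).
import numpy as np
from skimage.feature import hog
from sklearn.decomposition import PCA
from scipy.spatial.distance import canberra

def extract_hog(gray_face: np.ndarray) -> np.ndarray:
    """HOG descriptor of a grayscale face patch (all faces assumed resized to one shape)."""
    return hog(gray_face, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2), block_norm="L2-Hys")

def train(train_faces, train_labels, n_components=50):
    """Fit PCA on training HOG features; keep the directions of maximum variance."""
    feats = np.array([extract_hog(f) for f in train_faces])
    pca = PCA(n_components=n_components).fit(feats)
    return pca, pca.transform(feats), np.asarray(train_labels)

def classify(face, pca, train_feats, train_labels):
    """Assign the label of the Canberra-nearest training sample."""
    q = pca.transform(extract_hog(face).reshape(1, -1))[0]
    distances = [canberra(q, t) for t in train_feats]
    return train_labels[int(np.argmin(distances))]
```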


Author(s):  
Nikolaos Bourbakis

Detecting faces and facial expressions has become a common task in human-computer interaction systems. A system for detecting faces and facial expressions must be able to detect faces under various conditions and extract their expressions. Many approaches to face detection have been proposed in the literature, mainly dealing with the detection or recognition of faces in still conditions rather than with a person's facial expressions and the emotional behavior they reflect. In this paper, the author describes a synergistic methodology for detecting frontal high-resolution color faces and for recognizing their facial expressions accurately in realistic conditions, both indoor and outdoor, and under a variety of conditions (shadows, highlights, non-white lights). The methodology associates these facial expressions with emotional behavior. It extracts important facial features, such as the eyes, eyebrows, nose and mouth (lips), and defines them as the primitive elements of the alphabet of a simple formal language in order to synthesize these features and generate emotional expressions. The main goal of this effort is to monitor emotional behavior and learn from it. Illustrative examples are provided to prove the concept of the methodology.
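A toy sketch of the formal-language idea: each facial-feature state becomes a symbol of an alphabet, and a set of observed symbols is mapped to an emotional expression by simple rules. The symbols and rules below are invented for illustration; the paper's actual alphabet and grammar are not given here.

```python
# Hypothetical alphabet: one symbol per observed facial-feature state.
ALPHABET = {
    "Er": "eyebrows raised", "Ef": "eyebrows furrowed",
    "Ow": "eyes wide open",  "On": "eyes neutral",
    "Mu": "mouth corners up", "Md": "mouth corners down", "Mo": "mouth open",
}

# Hypothetical rules: a set of symbols (a "word" over the alphabet) -> emotion label.
RULES = [
    ({"Er", "Ow", "Mo"}, "surprise"),
    ({"Ef", "Md"}, "anger"),
    ({"Mu"}, "happiness"),
    ({"Ef", "On", "Md"}, "sadness"),
]

def interpret(symbols: set[str]) -> str:
    """Return the emotion of the most specific matching rule, or 'neutral' if none match."""
    matches = [(len(pattern), label) for pattern, label in RULES if pattern <= symbols]
    return max(matches)[1] if matches else "neutral"

print(interpret({"Er", "Ow", "Mo"}))   # -> surprise
```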

