Location of Eyes in Images of Human Faces through Analysis of the Variance of Shine Intensity

Author(s):  
Hussein Ali Alhamzawi

Extraction of facial features is an important step in the automatic visual interpretation and recognition of human faces. Among the facial features, the eyes play a major role in the recognition process. In this article, we present an approach to detect and locate the eyes in frontal face images. Eye regions are identified using a voucher-detection technique based on mathematical morphology. After this identification, a comparison is made among the variances of three different portions of each candidate eye region: the set of pixels belonging to the candidate region as a whole, the set of pixels contained in the minimum rectangle circumscribing the candidate region, and the set of pixels of the candidate region belonging to a horizontal band that crosses the region's center of mass. These variances are computed on the R, G, and B channels as well as on the grayscale version of the input image.
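As a rough illustration, the three variance computations can be sketched as follows (a minimal NumPy sketch; the function name, the boolean-mask representation of a candidate region, and the band height are assumptions, not the paper's implementation):

```python
import numpy as np

def region_variances(gray, mask, band_half_height=2):
    """Variances of the three pixel sets for one candidate eye region.

    gray : 2-D intensity array; mask : boolean array marking the region.
    Returns (region, bounding-box, horizontal-band) variances.
    """
    ys, xs = np.nonzero(mask)

    # 1) all pixels belonging to the candidate region as a whole
    var_region = gray[mask].var()

    # 2) all pixels inside the minimum circumscribing rectangle
    box = gray[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
    var_box = box.var()

    # 3) region pixels lying in a horizontal band through the center of mass
    cy = int(round(ys.mean()))
    band = np.zeros_like(mask)
    band[max(cy - band_half_height, 0):cy + band_half_height + 1, :] = True
    var_band = gray[mask & band].var()

    return var_region, var_box, var_band
```

The same routine would be run once per channel (R, G, B) and once on the grayscale image.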

2021 ◽  
Vol 11 (6) ◽  
pp. 734
Author(s):  
Tania Akter ◽  
Mohammad Hanif Ali ◽  
Md. Imran Khan ◽  
Md. Shahriare Satu ◽  
Md. Jamal Uddin ◽  
...  

Autism spectrum disorder (ASD) is a complex neuro-developmental disorder that affects social skills, language, speech, and communication. Early detection of ASD individuals, especially children, could help devise the right therapeutic plan at the right time. Human faces encode important markers that can be used to identify ASD by analyzing facial features, eye contact, and so on. In this work, an improved transfer-learning-based autism face recognition framework is proposed to identify children with ASD in the early stages more precisely. We collected face images of children with ASD from the Kaggle data repository and applied various machine learning and deep learning classifiers as well as other transfer-learning-based pre-trained models. Our improved MobileNet-V1 model demonstrated the best accuracy, 90.67%, and the lowest fall-out and miss rate, both 9.33%, compared to the other classifiers and pre-trained models. Furthermore, this classifier was used to identify different ASD groups by investigating only the autism image data with the k-means clustering technique; the improved MobileNet-V1 model showed the highest accuracy (92.10%) for k = 2 autism sub-types. We hope this model will help physicians detect autistic children more reliably at an early stage.
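The reported accuracy, fall-out, and miss rate all follow from the confusion matrix. The sketch below shows the standard definitions; the confusion counts are illustrative, chosen only to reproduce the reported 90.67% / 9.33% relationship, and are not the paper's actual counts:

```python
def binary_rates(tp, fp, fn, tn):
    """Accuracy, fall-out (false-positive rate), and miss rate
    (false-negative rate) from binary confusion counts."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    fall_out = fp / (fp + tn)       # non-ASD cases wrongly flagged
    miss_rate = fn / (fn + tp)      # ASD cases the model fails to flag
    return accuracy, fall_out, miss_rate

# Hypothetical counts: 150 ASD and 150 non-ASD images, 136 correct each
acc, fpr, fnr = binary_rates(tp=136, fp=14, fn=14, tn=136)
# acc ~ 0.9067, fpr = fnr ~ 0.0933
```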


Symmetry ◽  
2018 ◽  
Vol 10 (10) ◽  
pp. 442 ◽  
Author(s):  
Dongxue Liang ◽  
Kyoungju Park ◽  
Przemyslaw Krompiec

With the advent of the deep learning method, portrait video stylization has become more popular. In this paper, we present a robust method for automatically stylizing portrait videos that contain small human faces. By extending the Mask Regions with Convolutional Neural Network features (R-CNN) with a CNN branch that detects the contour landmarks of the face, we divided the input frame into three regions: the region of facial features, the region of the inner face surrounded by 36 face contour landmarks, and the region of the outer face. Besides keeping the facial features region as it is, we used two different stroke models to render the other two regions. During the non-photorealistic rendering (NPR) of the animation video, we combined the deformable strokes and optical flow estimation between adjacent frames to follow the underlying motion coherently. The experimental results demonstrated that our method could not only effectively preserve the small and distinct facial features, but also follow the underlying motion coherently.
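The motion-coherent rendering step moves stroke positions along the estimated optical flow between adjacent frames. A minimal sketch under assumptions (dense per-pixel flow, nearest-neighbour flow lookup; function and variable names are hypothetical, not the authors'):

```python
import numpy as np

def advect_strokes(points, flow):
    """Move stroke control points along a dense optical-flow field.

    points : (N, 2) array of (x, y) positions in the previous frame.
    flow   : (H, W, 2) array of per-pixel (dx, dy) displacements.
    Returns the estimated positions in the next frame, clipped to the
    frame borders.
    """
    h, w = flow.shape[:2]
    xs = np.clip(points[:, 0].round().astype(int), 0, w - 1)
    ys = np.clip(points[:, 1].round().astype(int), 0, h - 1)
    moved = points + flow[ys, xs]        # nearest-neighbour flow lookup
    moved[:, 0] = np.clip(moved[:, 0], 0, w - 1)
    moved[:, 1] = np.clip(moved[:, 1], 0, h - 1)
    return moved
```

In a real pipeline the flow would come from a dedicated optical-flow estimator and bilinear interpolation would replace the nearest-neighbour lookup.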


2017 ◽  
Author(s):  
Chi-Hsun Chang ◽  
Dan Nemrodov ◽  
Andy C. H. Lee ◽  
Adrian Nestor

Visual memory for faces has been extensively researched, especially regarding the main factors that influence face memorability. However, what exactly we remember about a face, namely the pictorial content of visual memory, remains largely unclear. The current work aims to elucidate this issue by reconstructing face images from both perceptual and memory-based behavioural data. Specifically, our work builds upon and further validates the hypothesis that visual memory and perception share a common representational basis underlying facial identity recognition. To this end, we derived facial features directly from perceptual data and then used those features for image reconstruction, separately from perception and memory data. Successful levels of reconstruction were achieved in both cases, for newly learned faces as well as for familiar faces retrieved from long-term memory. Theoretically, this work provides insights into the content of memory-based representations; practically, it opens the path to novel applications, such as computer-based ‘sketch artists’.
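One common recipe for feature-based face reconstruction is a linear combination of an average face and per-feature images weighted by the target's feature scores. The sketch below assumes that linear form, which is our reading of the abstract rather than the paper's exact procedure:

```python
import numpy as np

def reconstruct_face(avg_face, feature_images, scores):
    """Reconstruct a face as the average face plus a weighted sum of
    feature images (one image per derived facial feature).

    avg_face       : (H, W) array, the mean face.
    feature_images : sequence of K (H, W) arrays.
    scores         : K feature scores estimated from perception or
                     memory data.
    """
    feats = np.asarray(feature_images)         # (K, H, W)
    w = np.asarray(scores).reshape(-1, 1, 1)   # broadcast over pixels
    return avg_face + (w * feats).sum(axis=0)
```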


Author(s):  
Hsueh-Wu Wang ◽  
Ying-Ming Wu ◽  
Yen-Ling Lu ◽  
Ying-Tung Hsiao

Author(s):  
Pawel T. Puslecki

The aim of this chapter is an overall and comprehensive description of machine face processing and a presentation of its usefulness in security and forensic applications. The chapter overviews the methods of face processing as a field deriving from various disciplines. After a brief introduction, the conclusions concerning human processing of faces drawn by psychology researchers and neuroscientists are described. Then the most important tasks related to computer facial processing are shown: face detection, face recognition, and processing of facial features; the main strategies as well as the methods applied in the related fields are also presented. Finally, applications of digital biometric processing of human faces are presented.


Symmetry ◽  
2019 ◽  
Vol 11 (5) ◽  
pp. 664 ◽  
Author(s):  
Kun Liu ◽  
Jun-Hong Chen ◽  
Kang-Ming Chang

Many researchers think that characters in animated cartoons and comics are designed by exaggerating or reducing some features of the human face. However, the feature distribution of the human face is relatively symmetrical and uniform. Thus, to make characters look exaggerated without breaking the principle of symmetry, some questions remain: Which facial features should be exaggerated during the design process? How exaggerated are the faces of cartoon characters compared to real faces? To answer these questions, we selected 100 cartoon characters from American and Japanese animation, collected data on their facial features and the facial features of real people, and described the features using angles, lengths, and areas. Finally, we compared the cartoon characters' facial-feature values with real facial features and determined the key parts and the degree of facial exaggeration of animated characters. The results show that American and Japanese cartoon characters both exaggerate the eyes, nose, ears, forehead, and chin. Taking the eye area as an example, the eyes of American animation characters are twice as large as those of human faces, whereas those of Japanese animation characters are 3.4 times larger. The study results can serve as a reference for animation character designers and researchers.
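The degree of exaggeration reported above is simply the ratio of a cartoon measurement to the corresponding real-face measurement. A minimal sketch (feature names and all values other than the eye-area ratio of 2.0 are hypothetical placeholders):

```python
def exaggeration_ratios(cartoon, real):
    """Per-feature exaggeration: cartoon measurement / real measurement.
    Both arguments map feature names to angles, lengths, or areas;
    a ratio > 1 means the feature is exaggerated."""
    return {k: cartoon[k] / real[k] for k in real}

# Illustrative measurements only; an eye-area ratio of 2.0 matches the
# abstract's figure for American characters.
us = exaggeration_ratios({"eye_area": 8.0, "chin_len": 3.0},
                         {"eye_area": 4.0, "chin_len": 2.5})
```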


Diagnostics ◽  
2019 ◽  
Vol 9 (2) ◽  
pp. 38 ◽  
Author(s):  
Incheol Kim ◽  
Sivaramakrishnan Rajaraman ◽  
Sameer Antani

Deep learning (DL) methods are increasingly being applied to develop reliable computer-aided detection (CADe), diagnosis (CADx), and information retrieval algorithms. However, challenges in interpreting and explaining the learned behavior of DL models hinder their adoption and use in real-world systems. In this study, we propose a novel method called “Class-selective Relevance Mapping” (CRM) for localizing and visualizing discriminative regions of interest (ROI) within a medical image. Such visualizations offer an improved explanation of convolutional neural network (CNN)-based DL model predictions. We demonstrate CRM's effectiveness in classifying medical imaging modalities toward automatically labeling them for visual information retrieval applications. CRM is based on a linear sum of incremental mean squared errors (MSE) calculated at the output layer of the CNN model. It measures both positive and negative contributions of each spatial element in the feature maps produced by the last convolution layer toward the correct classification of an input image. A series of experiments on a “multi-modality” CNN model designed to classify seven different types of image modalities shows that the proposed method is significantly better at detecting and localizing the discriminative ROIs than other state-of-the-art class-activation methods. Further, to visualize its effectiveness, we generate “class-specific” ROI maps by averaging the CRM scores of images in each modality class, and we characterize the visual explanation through the maps' different sizes, shapes, and locations for our multi-modality CNN model, which achieved over 98% performance on a dataset constructed from publicly available images.
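The incremental-MSE idea can be illustrated on a toy classification head in which global average pooling over the last-conv feature maps feeds a linear output layer: zeroing one spatial element and measuring the change in output-layer MSE gives that element's score. This is a simplified reading of the method, not the authors' implementation:

```python
import numpy as np

def crm_map(feature_maps, weights, target):
    """Toy Class-selective Relevance Mapping.

    feature_maps : (H, W, C) activations of the last convolution layer.
    weights      : (C, n_classes) linear output-layer weights.
    target       : (n_classes,) desired output vector.
    Returns an (H, W) map where each entry is the increase in MSE when
    that spatial element is removed (positive = helps the class).
    """
    h, w, _ = feature_maps.shape
    gap = feature_maps.mean(axis=(0, 1))              # global average pool
    base_mse = (((gap @ weights) - target) ** 2).mean()

    scores = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            # remove one spatial element's contribution to the pooled vector
            gap_ij = gap - feature_maps[i, j] / (h * w)
            mse_ij = (((gap_ij @ weights) - target) ** 2).mean()
            scores[i, j] = mse_ij - base_mse          # incremental MSE
    return scores
```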


Author(s):  
P. S. HIREMATH ◽  
AJIT DANTI

In this paper, human faces are detected using skin color information and the Lines-of-Separability (LS) face model. Various skin color spaces based on widely used color models such as RGB, HSV, YCbCr, YUV, and YIQ are compared, and an appropriate color model is selected for skin color segmentation. The proposed skin color segmentation is based on the YCbCr color model and sigma control limits for variations in its color components, and it is found to be more efficient in terms of speed and accuracy. Each skin-segmented region is then searched for facial features using the LS face model to detect any face present in it. The LS face model is a geometric approach in which the spatial relationships among the facial features are determined for the purpose of face detection. Hence, the proposed approach, combining skin color segmentation and the LS face model, is able to detect single as well as multiple faces in a given image. The experimental results and comparative analysis demonstrate the effectiveness of this approach.
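A sigma-control-limit skin segmentation in YCbCr can be sketched as below; the BT.601 RGB-to-YCbCr conversion is standard, but the skin-tone means, sigmas, and the limit factor k are illustrative placeholders rather than the paper's fitted values:

```python
import numpy as np

def skin_mask(rgb, k=2.0, cb_stats=(120.0, 10.0), cr_stats=(150.0, 10.0)):
    """Skin segmentation in YCbCr with sigma control limits.

    rgb : (H, W, 3) uint8 image. A pixel is labeled skin when its Cb
    and Cr both fall within mean +/- k*sigma of the skin-tone
    statistics (placeholder values here, not the paper's).
    """
    r, g, b = [rgb[..., i].astype(float) for i in range(3)]
    # ITU-R BT.601 chrominance components
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b

    (mcb, scb), (mcr, scr) = cb_stats, cr_stats
    return (np.abs(cb - mcb) <= k * scb) & (np.abs(cr - mcr) <= k * scr)
```

Each connected component of the resulting mask would then be passed to the LS face model for feature search.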


2014 ◽  
Vol 2014 ◽  
pp. 1-9 ◽  
Author(s):  
Yue Liu ◽  
Yibing Li ◽  
Hong Xie ◽  
Dandan Liu

The kernel Fisher discriminant analysis (KFDA) method has demonstrated its success in extracting facial features for face recognition. Compared to linear techniques, it can better describe the complex and nonlinear variations of face images. However, a single kernel is not always suitable for face recognition applications that contain data from multiple, heterogeneous sources, such as face images under large variations of pose, illumination, and facial expression. To improve the performance of KFDA in face recognition, a novel algorithm named multiple data-dependent kernel Fisher discriminant analysis (MDKFDA) is proposed in this paper. The constructed multiple data-dependent kernel (MDK) is a combination of several base kernels with a data-dependent constraint on their weights. By solving the optimization problem based on the Fisher criterion and maximizing the margin criterion, the parameters of the data-dependent kernel and the multiple base kernels are jointly optimized. Experimental results on three face databases validate the effectiveness of the proposed algorithm.
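The multiple-kernel construction can be sketched as a weighted sum of base kernels, optionally rescaled by a conformal (data-dependent) factor; the kernel choices, weights, and factor below are illustrative, not the optimized values from the paper:

```python
import numpy as np

def rbf(X, Y, gamma=0.5):
    """Gaussian RBF base kernel."""
    d = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d)

def poly(X, Y, degree=2):
    """Polynomial base kernel."""
    return (X @ Y.T + 1.0) ** degree

def multiple_kernel(X, Y, betas=(0.7, 0.3)):
    """Weighted sum of base kernels; in MDKFDA the weights would be
    optimized under the Fisher / maximum-margin criteria."""
    return betas[0] * rbf(X, Y) + betas[1] * poly(X, Y)

def data_dependent(K, q):
    """Conformal rescaling K_dd[i, j] = q[i] * q[j] * K[i, j], the usual
    form of a data-dependent kernel."""
    return np.outer(q, q) * K
```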


Author(s):  
Tanjimul Ahad Asif ◽  
Baidya Nath Saha

Instagram is one of the most popular and fastest-growing media-sharing platforms. It allows users to share photos and videos with followers. There are plenty of ways to search for images on Instagram, but one of the most familiar is the hashtag. Hashtag search enables users to find precise search results on Instagram. However, there are no rules for using hashtags, so a hashtag often does not match the uploaded image, and users are therefore unable to find relevant search results. This research aims to filter human face images in hashtag-based search results on Instagram. Our study extends the work of [2] by implementing image processing techniques that detect human faces and separate the identified images in hashtag-based search results using a face detection technique.
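The filtering step can be sketched with a pluggable face detector; the post structure and function names below are hypothetical, and the detector could be, for example, a wrapper around an OpenCV Haar cascade:

```python
def filter_face_posts(posts, hashtag, detect_faces):
    """Keep only posts that carry the hashtag AND contain a human face.

    posts        : iterable of dicts with 'image' and 'hashtags' keys
                   (a hypothetical shape for scraped search results).
    detect_faces : callable returning the number of faces in an image.
    """
    return [p for p in posts
            if hashtag in p["hashtags"] and detect_faces(p["image"]) > 0]
```

Keeping the detector as a parameter lets the same filtering logic run with any face detection backend.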

