Hybrid Features Extraction for Adaptive Face Images Retrieval

2020 ◽  
Vol 11 (1) ◽  
pp. 17-26 ◽  
Author(s):  
Adel Alti

Existing face emotion recognition methods are limited in both recognition accuracy and execution time, so more efficient techniques are needed to improve this performance. In this article, the authors present an automatic facial image retrieval method that combines the advantages of color normalization by texture estimators with the gradient vector. Starting from a query face image, an efficient hybrid feature extraction algorithm for human faces yields very promising retrieval results.
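The abstract does not spell out the estimators, but the general recipe of fusing a color descriptor with a gradient descriptor and ranking gallery images by distance can be sketched as follows (the function names, bin counts and 128x128 resize are illustrative assumptions, not the authors' implementation):

```python
import numpy as np
import cv2  # OpenCV for image I/O and gradients

def hybrid_descriptor(path, bins=16):
    """Concatenate a normalized color histogram with a gradient-orientation
    histogram; a generic stand-in for the paper's hybrid features."""
    img = cv2.imread(path)
    img = cv2.resize(img, (128, 128))

    # Color part: per-channel histogram, L1-normalized
    color = np.concatenate(
        [cv2.calcHist([img], [c], None, [bins], [0, 256]).ravel() for c in range(3)]
    )
    color /= color.sum() + 1e-8

    # Gradient part: magnitude-weighted orientation histogram of the gray image
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY).astype(np.float32)
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)
    mag, ang = cv2.cartToPolar(gx, gy)
    grad, _ = np.histogram(ang, bins=bins, range=(0, 2 * np.pi), weights=mag)
    grad /= grad.sum() + 1e-8

    return np.concatenate([color, grad])

def retrieve(query_vec, gallery_vecs, top_k=5):
    """Rank gallery descriptors by Euclidean distance to the query."""
    d = np.linalg.norm(gallery_vecs - query_vec, axis=1)
    return np.argsort(d)[:top_k]
```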

2020 ◽  
Vol 34 (06) ◽  
pp. 10402-10409
Author(s):  
Tianying Wang ◽  
Wei Qi Toh ◽  
Hao Zhang ◽  
Xiuchao Sui ◽  
Shaohua Li ◽  
...  

Robotic drawing has become increasingly popular as an entertainment and interactive tool. In this paper we present RoboCoDraw, a real-time collaborative robot-based drawing system that draws stylized human face sketches interactively in front of human users, using Generative Adversarial Network (GAN)-based style transfer and Random-Key Genetic Algorithm (RKGA)-based path optimization. The proposed RoboCoDraw system takes a real human face image as input, converts it to a stylized avatar, and then draws it with a robotic arm. A core component of this system is our proposed AvatarGAN, which generates a cartoon avatar face image from a real human face. AvatarGAN is trained with unpaired face and avatar images only, yet generates avatar images with much better likeness to the input human faces than the vanilla CycleGAN. After the avatar image is generated, it is fed to a line extraction algorithm and converted to sketches. An RKGA-based path optimization algorithm is applied to find a time-efficient robotic drawing path to be executed by the robotic arm. We demonstrate the capability of RoboCoDraw on various face images using a lightweight, safe UR5 collaborative robot.
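The RKGA component can be illustrated with a small sketch: each individual is a vector of random keys whose argsort yields a stroke drawing order, and fitness is the pen-up travel between consecutive strokes (the population size, operators and travel-only cost below are assumptions for illustration; the paper's optimizer also considers aspects such as stroke directions):

```python
import numpy as np

def rkga_order(strokes, pop=60, gens=200, elite=0.2, mutate=0.1, seed=0):
    """Random-Key GA sketch: argsort of each key vector gives a stroke order;
    fitness is the total pen-up travel distance between strokes."""
    rng = np.random.default_rng(seed)
    n = len(strokes)
    starts = np.array([s[0] for s in strokes], dtype=float)   # first point of each stroke
    ends = np.array([s[-1] for s in strokes], dtype=float)    # last point of each stroke

    def cost(keys):
        order = np.argsort(keys)
        # Distance from the end of each drawn stroke to the start of the next one
        travel = np.linalg.norm(starts[order[1:]] - ends[order[:-1]], axis=1)
        return travel.sum()

    keys = rng.random((pop, n))
    n_elite = max(1, int(elite * pop))
    for _ in range(gens):
        fitness = np.array([cost(k) for k in keys])
        keys = keys[np.argsort(fitness)]                  # best individuals first
        children = []
        while len(children) < pop - n_elite:
            a = keys[rng.integers(0, n_elite)]            # one elite parent
            b = keys[rng.integers(0, pop)]                # one random parent
            mask = rng.random(n) < 0.5                    # uniform crossover
            child = np.where(mask, a, b)
            jitter = rng.random(n) < mutate               # mutation: resample some keys
            child = np.where(jitter, rng.random(n), child)
            children.append(child)
        keys = np.vstack([keys[:n_elite], children])
    best = keys[np.argmin([cost(k) for k in keys])]
    return np.argsort(best)
```

A usage example would pass `strokes` as a list of polylines (each a list of (x, y) points) produced by the line extraction step; the returned index array is the drawing order sent to the arm.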


2014 ◽  
Vol 989-994 ◽  
pp. 4187-4190 ◽  
Author(s):  
Lin Zhang

An adaptive gender recognition method is proposed in this paper. First, a multiwavelet transform is applied to the face image to obtain its low-frequency information; features are then extracted from this low-frequency information using compressive sensing (CS), and finally an extreme learning machine (ELM) performs the gender recognition. In the feature extraction stage, a genetic algorithm (GA) determines the number of CS measurements that yields the highest recognition rate, so the method can adaptively reach optimal performance. Experimental results show that, compared with PDA and LDA, the new method improves recognition accuracy substantially.
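A rough sketch of the CS-plus-ELM part of this pipeline is given below: features are projected onto m random Gaussian measurement vectors and classified by an extreme learning machine with closed-form output weights. In the paper a GA searches over m; with this sketch that would simply mean evaluating `cs_measure` for different m values on a validation set (the class layout, tanh activation and hidden-layer size are illustrative assumptions, not the paper's configuration):

```python
import numpy as np

class ELM:
    """Minimal extreme learning machine: random hidden layer plus
    least-squares output weights (a generic sketch)."""
    def __init__(self, n_hidden=200, seed=0):
        self.n_hidden, self.rng = n_hidden, np.random.default_rng(seed)

    def fit(self, X, y):
        n_features = X.shape[1]
        self.W = self.rng.normal(size=(n_features, self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = np.tanh(X @ self.W + self.b)          # random hidden activations
        Y = np.eye(int(y.max()) + 1)[y]           # one-hot targets
        self.beta = np.linalg.pinv(H) @ Y         # closed-form output weights
        return self

    def predict(self, X):
        H = np.tanh(X @ self.W + self.b)
        return (H @ self.beta).argmax(axis=1)

def cs_measure(features, m, seed=0):
    """Compressive-sensing style measurement: project each feature vector onto
    m random Gaussian vectors; m is the quantity the paper tunes with a GA."""
    rng = np.random.default_rng(seed)
    Phi = rng.normal(size=(m, features.shape[1])) / np.sqrt(m)
    return features @ Phi.T
```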


2020 ◽  
Vol 20 (06) ◽  
pp. 2050025 ◽  
Author(s):  
XIAOCHEN LIU ◽  
JIZHONG SHEN ◽  
WUFENG ZHAO

Electroencephalogram (EEG) signals are widely used as an effective means of epilepsy analysis and diagnosis. To build an accurate and efficient epilepsy EEG identification system, it is very important to properly extract the features of EEG signals and select an appropriate combination of them. This paper proposes an automatic epileptic EEG identification method based on hybrid feature extraction. It uses time- and frequency-domain analysis, nonlinear analysis and a one-dimensional local pattern recognition method to extract epileptic EEG features. A gradient energy operator and a local speed pattern are proposed to better reflect typical features in the active EEG signals measured during seizure-free intervals. A genetic algorithm selects among the obtained hybrid features, and an AdaBoost classifier then classifies the epileptic EEG under various classification conditions. Classification results on the dataset developed by the University of Bonn show that the proposed method can classify normal EEG, interictal EEG and seizure activity with only a few features. Compared with related studies using the same dataset, the proposed method obtains an equally satisfactory classification accuracy while the number of features is reduced by 61–95%. In particular, the classification accuracy for interictal versus normal EEG reaches 99%.
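The GA-based feature selection plus AdaBoost stage could be sketched as below, using a binary mask per individual and cross-validated AdaBoost accuracy as the fitness (scikit-learn's AdaBoostClassifier stands in for the paper's classifier; the population size, operators and fold count are illustrative assumptions):

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import cross_val_score

def ga_select(X, y, pop=30, gens=40, p_mut=0.05, seed=0):
    """Binary-mask GA for feature selection, scored with an AdaBoost
    classifier via cross-validation (a generic sketch of the GA + AdaBoost
    stage, not the paper's exact operators or parameters)."""
    rng = np.random.default_rng(seed)
    n = X.shape[1]

    def score(mask):
        if not mask.any():
            return 0.0
        clf = AdaBoostClassifier(n_estimators=50, random_state=0)
        return cross_val_score(clf, X[:, mask], y, cv=3).mean()

    masks = rng.random((pop, n)) < 0.5
    for _ in range(gens):
        fit = np.array([score(m) for m in masks])
        masks = masks[np.argsort(fit)[::-1]]       # best mask first
        children = [masks[0]]                      # elitism: keep the best
        while len(children) < pop:
            a, b = masks[rng.integers(0, pop // 2, size=2)]
            cut = rng.integers(1, n)               # one-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            child ^= rng.random(n) < p_mut         # bit-flip mutation
            children.append(child)
        masks = np.array(children)
    return masks[0]                                # best evaluated mask
```

The returned mask selects the feature columns that would then be fed to the final AdaBoost classifier for the reported classification conditions.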


2011 ◽  
pp. 5-44 ◽  
Author(s):  
Daijin Kim ◽  
Jaewon Sung

Face detection is the most fundamental step in research on image-based automated face analysis such as face tracking, face recognition, face authentication, facial expression recognition and facial gesture recognition. When a novel face image is given, we must determine where the face is located and how large it is, so that we can restrict our attention to the face patch in the image and normalize its scale and orientation. Usually, face detection results are not stable; the detected face rectangle can be larger or smaller than the real face in the image. Therefore, many researchers use eye detectors to obtain stable, normalized face images. Because the eyes form salient patterns in the human face image, they can be located reliably and used for face image normalization. Eye detection becomes even more important when model-based face image analysis approaches are to be applied.
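A minimal sketch of this detect-then-normalize step, using OpenCV's stock Haar cascades for the face and eyes and an affine warp that places the eyes at fixed positions, is shown below (the output size and canonical eye positions are illustrative assumptions, not the authors' settings):

```python
import cv2
import numpy as np

face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")

def normalized_face(gray, out_size=64, eye_y=0.35, eye_dist=0.5):
    """Detect a face, locate both eyes inside it, then rotate/scale so the
    eyes land at fixed positions: the stabilization step described above."""
    faces = face_cascade.detectMultiScale(gray, 1.1, 5)
    if len(faces) == 0:
        return None
    x, y, w, h = faces[0]
    roi = gray[y:y + h, x:x + w]
    eyes = eye_cascade.detectMultiScale(roi, 1.1, 5)
    if len(eyes) < 2:
        return None
    # Keep the two detections highest in the face and order them left/right.
    eyes = sorted(eyes, key=lambda e: e[1])[:2]
    centers = sorted([(x + ex + ew / 2, y + ey + eh / 2) for ex, ey, ew, eh in eyes])
    (lx, ly), (rx, ry) = centers
    angle = np.degrees(np.arctan2(ry - ly, rx - lx))       # in-plane rotation
    scale = (eye_dist * out_size) / np.hypot(rx - lx, ry - ly)
    M = cv2.getRotationMatrix2D((lx, ly), angle, scale)
    # Translate so the left eye lands at its canonical location.
    M[0, 2] += (0.5 - eye_dist / 2) * out_size - lx
    M[1, 2] += eye_y * out_size - ly
    return cv2.warpAffine(gray, M, (out_size, out_size))
```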


Author(s):  
Debby D. Wang ◽  
Haoran Xie ◽  
Fu Lee Wang ◽  
Ran Wang ◽  
Xuefei Zhe ◽  
...  

2019 ◽  
Vol 892 ◽  
pp. 200-209
Author(s):  
Rayner Pailus ◽  
Rayner Alfred

The AdaBoost-based Viola-Jones method is a profound advance in face detection, mainly because it is fast, lightweight and one of the simplest face detection techniques available. Viola-Jones uses Haar wavelet filters to detect faces and achieves roughly 80% detection accuracy. This paper discusses a proposed methodology and algorithms that use a larger library of filters to create more discriminative features among images, processing 15 proposed Haar rectangular features (an extension of the 4 Haar wavelet filters of Viola-Jones) and applying them in a multiple adaptive ensemble process of face detection. After face detection, the process continues with normalization, applying feature extraction such as PCA combined with LDA or LPP to extract the weak learners' wavelets for additional classification features. After feature extraction, a proposed feature selection step indexes the extracted data. These extracted vectors are used to train MADBoost (Multiple Adaptive Diversified Boost), an improvement of AdaBoost that combines multiple feature extraction methods with multiple classifiers to capture, recognize and distinguish face images faster. MADBoost applies an ensemble approach with better classification weights to produce better face recognition results. Three experiments were conducted to compare the performance of the proposed MADBoost with three other classifiers, Neural Network (NN), Support Vector Machines (SVM) and AdaBoost, using Principal Component Analysis (PCA) as the feature extraction method. These experiments were tested against the POIES obstacles (Pose, Obstruction, Illumination, Expression, Sizes). Based on the results obtained, MADBoost improves recognition performance in terms of matching failures, incorrect matches, matching success percentages and acceptable time taken to perform the classification task.
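The Haar rectangular features at the heart of both Viola-Jones and the proposed 15-feature extension reduce to rectangle sums over an integral image; a minimal sketch is given below (only the classic two-rectangle layout is shown as an example, not the 15 layouts proposed in the paper):

```python
import numpy as np

def integral_image(img):
    """Summed-area table: any rectangle sum becomes at most four lookups."""
    return np.cumsum(np.cumsum(img.astype(np.int64), axis=0), axis=1)

def rect_sum(ii, r, c, h, w):
    """Sum of pixels in the h-by-w rectangle whose top-left corner is (r, c)."""
    total = ii[r + h - 1, c + w - 1]
    if r > 0:
        total -= ii[r - 1, c + w - 1]
    if c > 0:
        total -= ii[r + h - 1, c - 1]
    if r > 0 and c > 0:
        total += ii[r - 1, c - 1]
    return total

def haar_two_rect_horizontal(ii, r, c, h, w):
    """Classic Viola-Jones feature: left half minus right half. The extended
    feature set adds more rectangle layouts, but each still reduces to
    rect_sum calls like these."""
    half = w // 2
    return rect_sum(ii, r, c, h, half) - rect_sum(ii, r, c + half, h, half)
```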


Author(s):  
Tang-Tang Yi ◽  

In order to solve the problem of low recognition accuracy when recognizing 3D face images collected by traditional sensors, a face recognition algorithm for 3D point clouds collected by mixed image sensors is proposed. The algorithm first uses the 3D wheelbase to expand the face image edges. Noise in the extended image is detected according to the 3D wheelbase and eliminated with median filtering. Secondly, the priority of the boundary pixels for recognizing the face in the denoised image is determined, and key parts such as the illuminance line are analyzed, completing the recognition of the 3D point cloud face image. Experiments show that the proposed algorithm improves the recognition accuracy of 3D face images, that its recognition time is about four times lower than that of the traditional algorithm, and that its recognition efficiency is high.
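The abstract does not detail the wheelbase-based noise criterion, but the detect-then-median-filter idea can be sketched generically: flag pixels that deviate strongly from their local median and replace only those (the threshold and window size are illustrative assumptions, and a depth map stands in for the point cloud representation):

```python
import numpy as np
from scipy.ndimage import median_filter

def selective_median_denoise(depth, threshold=10.0, size=3):
    """Replace only pixels that deviate strongly from their local median,
    leaving the rest untouched: a generic sketch of the detect-then-filter
    step described above."""
    med = median_filter(depth, size=size)
    noisy = np.abs(depth - med) > threshold    # noise detection mask
    out = depth.copy()
    out[noisy] = med[noisy]                    # median filtering only where flagged
    return out
```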


2015 ◽  
Vol 34 (3) ◽  
pp. 209 ◽  
Author(s):  
Kanthan Muthukannan ◽  
Pitchai Latha

The main objective of this paper is to segment the disease-affected portion of a plant leaf and extract hybrid features for better classification of different disease patterns. A new approach based on Particle Swarm Optimization (PSO) is proposed for image segmentation; PSO is an automatic, unsupervised and efficient algorithm used here for better segmentation and better feature extraction. The features extracted after segmentation are important for disease classification, so the hybrid feature extraction components control the accuracy of classification for different diseases. The approach, named Hybrid Feature Extraction (HFE), has three components: color-, texture- and shape-based features. The preprocessing results were compared and the best one was used for image segmentation with PSO. The hybrid feature parameters were then extracted from the gray-level co-occurrence matrices of different leaves. The proposed method was tested on different images of disease-affected leaves, and the experimental results demonstrate its effectiveness.
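A rough sketch of the three HFE components, computing color statistics, GLCM texture properties and simple shape cues from a segmented leaf region, might look like the following (the exact feature list, GLCM offsets and shape measures are illustrative assumptions, not the paper's choices):

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def hybrid_leaf_features(rgb, mask):
    """Color + texture + simple shape features from a segmented leaf region;
    a generic sketch of the HFE components named above."""
    region = rgb[mask]                                    # pixels inside the segment
    color = np.concatenate([region.mean(axis=0), region.std(axis=0)])

    # Texture: gray-level co-occurrence matrix properties of the masked region
    gray = rgb.mean(axis=2).astype(np.uint8)
    gray[~mask] = 0
    glcm = graycomatrix(gray, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    texture = np.concatenate([graycoprops(glcm, p).ravel()
                              for p in ("contrast", "homogeneity",
                                        "energy", "correlation")])

    # Shape: crude cues from the segmentation mask itself
    area = mask.sum()
    shape = np.array([area, area / mask.size])
    return np.concatenate([color, texture, shape])
```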


Author(s):  
WEI-LI FANG ◽  
YING-KUEI YANG ◽  
JUNG-KUEI PAN

Several 2DPCA-based face recognition algorithms have been proposed in the hope of improving the recognition rate, mostly at the expense of computation cost. In this paper, an approach named SI2DPCA is proposed that not only reduces the computation cost but also increases recognition performance at the same time. The approach divides a whole face image into smaller sub-images to increase the weight of features for better feature extraction. Meanwhile, the computation cost, which comes mainly from heavy and complicated matrix operations, is reduced thanks to the smaller size of the sub-images. The reduction in computation is analyzed and the integrity of the sub-images is discussed thoroughly in the paper. Experiments were conducted to compare several well-known approaches with SI2DPCA, and the results demonstrate that the proposed approach reaches the goals of reducing computation cost and improving recognition performance simultaneously.
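The sub-image idea can be sketched compactly: split each face into a grid of blocks, run standard 2DPCA on every block, and concatenate the per-block projections (the grid layout and number of projection axes below are illustrative assumptions, not the paper's settings):

```python
import numpy as np

def twodpca_basis(images, d):
    """2DPCA: top-d eigenvectors of the image covariance matrix built directly
    from image matrices (no vectorization)."""
    mean = images.mean(axis=0)
    G = sum((a - mean).T @ (a - mean) for a in images) / len(images)
    vals, vecs = np.linalg.eigh(G)
    return vecs[:, np.argsort(vals)[::-1][:d]]     # d projection axes

def si2dpca_features(images, rows=2, cols=2, d=5):
    """Sketch of the sub-image idea: per-block 2DPCA projections, concatenated.
    `images` is an (n, H, W) array of aligned gray face images."""
    n, H, W = images.shape
    h, w = H // rows, W // cols
    feats = []
    for i in range(rows):
        for j in range(cols):
            block = images[:, i * h:(i + 1) * h, j * w:(j + 1) * w]
            X = twodpca_basis(block, d)
            feats.append((block @ X).reshape(n, -1))
    return np.concatenate(feats, axis=1)
```

Working on smaller blocks shrinks the covariance matrices being eigendecomposed, which is where the computational saving claimed above comes from.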

