Modification of Happiness Expression in Face Images

2017 ◽  
Vol 6 (2) ◽  
pp. 58-69 ◽  
Author(s):  
Dao Nam Anh ◽  
Trinh Minh Duc

Facial expression detection and adjustment, touching complex psychological aspects of vision, is central to a number of visual and cognitive computing applications. This article presents an algorithm for automatically estimating the happiness expression of face images whose demographic aspects, such as race, gender, and eye direction, vary. The method is also extended to alter the level of happiness expression in face images, and a schema of weighted modification is proposed for enhancing the happiness expression. The authors employ a robust face representation that combines color patch similarity with the self-resemblance of image patches. A large set of face images exhibiting these properties is learned in a statistical model for interpreting the facial expression of happiness. The authors show experiments with such a model, using face features learned by an SVM, and analyze its performance.
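The final learning step the abstract describes, training an SVM on face feature vectors to predict a happiness label, could be sketched roughly as follows. This is a minimal stand-in, not the authors' method: the patch-similarity descriptors are replaced by plain feature vectors, and the SVM is a bare linear hinge-loss classifier trained by sub-gradient descent.

```python
import numpy as np

def train_linear_svm(X, y, lr=0.01, lam=0.01, epochs=200):
    """Minimal linear SVM via sub-gradient descent on the hinge loss.
    X: (n, d) matrix of face feature vectors; y: labels in {-1, +1}
    (e.g., +1 = happy expression, -1 = not happy)."""
    n, d = X.shape
    w = np.zeros(d)
    b = 0.0
    for _ in range(epochs):
        for i in np.random.permutation(n):
            margin = y[i] * (X[i] @ w + b)
            if margin < 1:
                # inside the margin: hinge term contributes to the gradient
                w -= lr * (lam * w - y[i] * X[i])
                b += lr * y[i]
            else:
                # outside the margin: only the regularizer acts
                w -= lr * lam * w
    return w, b

def predict(X, w, b):
    """Sign of the decision function gives the predicted class."""
    return np.sign(X @ w + b)
```

A real pipeline would replace the raw feature matrix with the paper's combined color-patch-similarity and self-resemblance descriptors, and likely use a kernel SVM rather than this linear one.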

2017 ◽  
Vol 3 (2) ◽  
pp. 811-814 ◽  
Author(s):  
Erik Rodner ◽  
Marcel Simon ◽  
Joachim Denzler

We present an automated approach for rating HER2 over-expression in given whole-slide images of breast cancer histology slides. The slides have a very high resolution, and only a small part of each is relevant for the rating. Our approach is based on Convolutional Neural Networks (CNNs), which model the whole computer vision pipeline, from feature extraction to classification, with a single parameterized model. CNN models have led to significant breakthroughs in many vision applications and have shown promising results for medical tasks. However, the required amount of training data is still an issue. Our CNN models are pre-trained on a large set of non-medical image datasets, which prevents over-fitting to the small annotated dataset available in our case. We assume the probe is selected in the data with just a single mouse click defining a point of interest, which is reasonable especially for slices acquired together with another sample. We sample image patches around the point of interest and obtain bilinear features by passing them through a CNN and encoding the output of the last convolutional layer with its second-order statistics. Our approach ranked second in the HER2 contest held by the University of Warwick, achieving 345 points compared to the 348 points of the winning team. In addition to pure classification, our approach would also allow for localization of the parts of the slice relevant for visual detection of HER2 over-expression.
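The bilinear encoding step, taking the second-order statistics of the last convolutional layer's output, can be sketched independently of any particular CNN. Below is a small numpy version using the standard bilinear-pooling recipe (averaged outer product, signed square root, L2 normalization); the feature map here is just a random array standing in for real CNN activations.

```python
import numpy as np

def bilinear_encode(feat):
    """Encode a conv feature map of shape (C, H, W) by its second-order
    statistics: average the outer products of the per-location channel
    vectors, then apply the usual signed-sqrt and L2 normalization."""
    C, H, W = feat.shape
    X = feat.reshape(C, H * W)            # one C-dim vector per spatial location
    G = X @ X.T / (H * W)                 # C x C Gram matrix (second-order stats)
    v = G.reshape(-1)                     # vectorize
    v = np.sign(v) * np.sqrt(np.abs(v))   # signed square root
    n = np.linalg.norm(v)
    return v / n if n > 0 else v
```

The resulting C² vector (symmetric, since it comes from a Gram matrix) is what would be fed to the downstream classifier for the HER2 rating.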


Sensors ◽  
2021 ◽  
Vol 21 (6) ◽  
pp. 2003 ◽  
Author(s):  
Xiaoliang Zhu ◽  
Shihao Ye ◽  
Liang Zhao ◽  
Zhicheng Dai

As a sub-challenge of EmotiW (the Emotion Recognition in the Wild challenge), improving performance on the AFEW (Acted Facial Expressions in the Wild) dataset is a popular benchmark for emotion recognition tasks under various constraints, including uneven illumination, head deflection, and facial posture. In this paper, we propose a convenient facial expression recognition cascade network comprising spatial feature extraction, hybrid attention, and temporal feature extraction. First, in a video sequence, faces in each frame are detected, and the corresponding face ROI (region of interest) is extracted to obtain the face images. The face images in each frame are then aligned based on the positions of the facial feature points in the images. Second, the aligned face images are input to a residual neural network to extract the spatial features of the corresponding facial expressions. The spatial features are input to the hybrid attention module to obtain fused expression features. Finally, the fused features are input to a gated recurrent unit to extract the temporal features of facial expressions, and the temporal features are input to a fully connected layer to classify and recognize the expressions. Experiments on the CK+ (Extended Cohn-Kanade), Oulu-CASIA (Institute of Automation, Chinese Academy of Sciences), and AFEW datasets obtained recognition accuracy rates of 98.46%, 87.31%, and 53.44%, respectively. This demonstrates that the proposed method not only achieves performance competitive with state-of-the-art methods but also a greater than 2% improvement on the AFEW dataset, a significant gain for facial expression recognition in natural environments.
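The temporal stage of the cascade, accumulating per-frame expression features with a gated recurrent unit, can be illustrated with a bare numpy GRU cell. This is a toy sketch, not the paper's network: the weights are random, bias terms are omitted, and the per-frame inputs stand in for the attention-fused spatial features.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(h, x, Wz, Uz, Wr, Ur, Wh, Uh):
    """One step of a bias-free GRU cell: the gating mechanism used to
    accumulate temporal expression features over the frames of a clip."""
    z = sigmoid(Wz @ x + Uz @ h)              # update gate
    r = sigmoid(Wr @ x + Ur @ h)              # reset gate
    h_tilde = np.tanh(Wh @ x + Uh @ (r * h))  # candidate state
    return (1 - z) * h + z * h_tilde          # gated interpolation

def gru_encode(frames, d_hidden, seed=0):
    """Run a randomly initialized GRU over per-frame feature vectors
    (frames: (T, d_in)) and return the final hidden state as the
    clip-level temporal feature."""
    rng = np.random.default_rng(seed)
    d_in = frames.shape[1]
    mats = [rng.normal(scale=0.1, size=s)
            for s in [(d_hidden, d_in), (d_hidden, d_hidden)] * 3]
    h = np.zeros(d_hidden)
    for x in frames:
        h = gru_step(h, x, *mats)
    return h
```

In the actual cascade, this final hidden state would be passed to the fully connected layer for expression classification.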


2018 ◽  
Vol 2018 ◽  
pp. 1-10 ◽  
Author(s):  
Muhammad Sajid ◽  
Nouman Ali ◽  
Saadat Hanif Dar ◽  
Naeem Iqbal Ratyal ◽  
Asif Raza Butt ◽  
...  

Recently, face datasets containing photos of celebrities with facial makeup have been growing at exponential rates, making their recognition very challenging. Existing face recognition methods rely on feature extraction and reference re-ranking to improve performance. However, face images with facial makeup carry inherent ambiguity due to artificial colors, shading, contouring, and varying skin tones, making the recognition task more difficult. The problem becomes more confounded as makeup alters the bilateral size and symmetry of certain face components, such as the eyes and lips, affecting the distinctiveness of faces. The ambiguity becomes even worse when different days bring different facial makeup for celebrities, owing to the context of interpersonal situations and current societal makeup trends. To cope with these artificial effects, we propose to use a deep convolutional neural network (dCNN) trained on an augmented face dataset to extract discriminative features from face images containing synthetic makeup variations. The augmented dataset, containing original face images and those with synthetic makeup variations, allows the dCNN to learn face features across a variety of facial makeup. We also evaluate the role of partial and full makeup in face images in improving recognition performance. Experimental results on two challenging face datasets show that the proposed approach can compete with the state of the art.
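The dataset-augmentation idea, extending the training set with synthetic makeup-like variations so the dCNN sees each identity under varied appearance, can be sketched crudely. The example below is a hypothetical stand-in, not the paper's synthesis method: a mirror flip plus per-channel color shifts loosely mimic the color and symmetry changes makeup introduces.

```python
import numpy as np

def augment_face(img, rng):
    """Toy makeup-style augmentation for one face image (H, W, 3) uint8:
    random horizontal flip plus a random per-channel color shift."""
    out = img.astype(np.float32)
    if rng.random() < 0.5:
        out = out[:, ::-1, :]                 # mirror the face
    shift = rng.uniform(-30, 30, size=3)      # per-channel color shift
    out = np.clip(out + shift, 0, 255)
    return out.astype(np.uint8)

def augment_dataset(images, n_copies, seed=0):
    """Return the original images plus n_copies synthetic variants each,
    forming the augmented training set for the feature extractor."""
    rng = np.random.default_rng(seed)
    aug = [augment_face(img, rng) for img in images for _ in range(n_copies)]
    return list(images) + aug
```

The paper's actual augmentation applies synthetic partial and full makeup; the point shown here is only the structure of pairing originals with generated variants before training.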


Author(s):  
Daniel Riccio ◽  
Andrea Casanova ◽  
Gianni Fenu

Face recognition in real-world applications is a very difficult task because of image misalignment, pose and illumination variations, and occlusions. Many researchers in this field have investigated both face representation and classification techniques able to deal with these drawbacks. However, none of them is free from limitations. Early algorithms were generally holistic, in the sense that they consider the face as a whole. Recently, challenging benchmarks have demonstrated that such methods are not adequate for unconstrained environments, despite their good performance in more controlled conditions. Researchers' attention is therefore now turning to local features, which have been demonstrated to be more robust to a large set of non-monotonic distortions. Nevertheless, though local operators partially overcome some drawbacks, they still raise new questions (e.g., which criteria should be used to select the most representative features?). This is why, among all the others, hybrid approaches show high potential in terms of recognition accuracy when applied in uncontrolled settings, as they integrate complementary information from both local and global features. This chapter explores local, global, and hybrid approaches.
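The local/global/hybrid distinction the chapter draws can be made concrete with a minimal descriptor sketch, assuming the simplest possible features: one intensity histogram over the whole face (global) concatenated with histograms over a grid of sub-blocks (local). Real systems use far richer operators (e.g., LBP or Gabor responses), but the fusion structure is the same.

```python
import numpy as np

def hybrid_descriptor(img, grid=2, bins=16):
    """Hybrid face descriptor: a global histogram over the whole grayscale
    image (H, W) concatenated with local histograms from a grid x grid
    partition, so global shape and local texture are both represented."""
    def hist(patch):
        h, _ = np.histogram(patch, bins=bins, range=(0, 256))
        return h / max(patch.size, 1)         # normalize to a distribution
    feats = [hist(img)]                       # global component
    H, W = img.shape
    for i in range(grid):
        for j in range(grid):
            block = img[i * H // grid:(i + 1) * H // grid,
                        j * W // grid:(j + 1) * W // grid]
            feats.append(hist(block))         # local components
    return np.concatenate(feats)
```

An occlusion corrupts only the affected local blocks while the remaining components stay informative, which is the robustness argument made for local and hybrid features above.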


Symmetry ◽  
2020 ◽  
Vol 12 (2) ◽  
pp. 190
Author(s):  
Zuodong Niu ◽  
Handong Li ◽  
Yao Li ◽  
Yingjie Mei ◽  
Jing Yang

Face image inpainting technology is an important research direction in image restoration. When current image restoration methods repair damaged areas of face images with weak texture, problems arise such as low accuracy of face image decomposition, unreasonable restoration structure, and degraded image quality after inpainting. This paper therefore proposes an adaptive face image inpainting algorithm based on feature symmetry. First, we locate the facial feature points and segment the face into four feature parts based on the feature point distribution to define the feature search range. Then, we construct a new mathematical model, introducing feature symmetry to improve the priority calculation and increase its reliability. After that, in the process of searching for matching blocks, we accurately locate similar feature blocks according to the relative position and symmetry criteria of the target block and the various feature parts of the face. Finally, we introduce the HSV (Hue, Saturation, Value) color space to determine the best matching block according to the chroma and brightness of the sample, reducing the repair error and completing the face image inpainting. During the experiments, we first performed visual evaluation and texture analysis on the inpainted face images; the results show that face images inpainted by our algorithm maintained the consistency of the face structure, and visual observation was closer to real face features. We then used the peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) as objective evaluation indicators. Among the five sample inpainting results given in this paper, our method was better than the reference methods, and the average PSNR value improved by 2.881–5.776 dB with our method when inpainting 100 face images. Additionally, we used the time required to inpaint a unit pixel to evaluate inpainting efficiency, which improved by 12%–49% with our method on the same 100 face images. Finally, by comparing face image inpainting experiments with a generative adversarial network (GAN) algorithm, we discuss some of the problems of this paper's graphics-based method in repairing face images with large areas of missing features.
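The two objective metrics used above, PSNR and SSIM, are standard and easy to state exactly. Below is a minimal numpy implementation; the SSIM version is the single-window (whole-image) form of the standard formula, whereas evaluation toolkits usually average it over sliding windows.

```python
import numpy as np

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio in dB between two images of equal shape."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    return float('inf') if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def ssim_global(x, y, peak=255.0):
    """Single-window SSIM (no sliding window): luminance, contrast, and
    structure terms computed once over the whole image pair."""
    x = x.astype(np.float64)
    y = y.astype(np.float64)
    c1, c2 = (0.01 * peak) ** 2, (0.03 * peak) ** 2  # standard stabilizers
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

Identical images give infinite PSNR and SSIM of 1; an inpainting result is scored by how far it falls below those ceilings against the ground-truth face.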


Optik ◽  
2016 ◽  
Vol 127 (15) ◽  
pp. 6195-6203 ◽  
Author(s):  
Sajid Ali Khan ◽  
Ayyaz Hussain ◽  
Muhammad Usman
