Facial Expression Generated Automatically Using Triangular Segmentation

2013 ◽  
Vol 479-480 ◽  
pp. 834-838
Author(s):  
Jia Shing Sheu ◽  
Tsu Shien Hsieh ◽  
Ho Nien Shou

Advances in computer technology have produced instant messaging software that makes human interaction possible and dynamic. However, such software cannot convey actual emotions and lacks a realistic depiction of feelings. Instant messaging would be more engaging if users’ facial images were integrated into a virtual portrait that can automatically create images with different expressions. This study uses triangular segmentation to generate facial expressions. An image editing technique is introduced to automatically create expressive images from a single expressionless facial image. Probable facial regions are separated from the background of the facial image through skin segmentation and morphological noise filtering. Control points of feature shapes are marked on the image to create facial expressions, and triangular segmentation, image correction, and image interpolation techniques are applied. Image processing technology is also used to transform the feature space, thus generating a new expression.
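The core of triangle-based warping is one affine map per mesh triangle: each triangle of the neutral face is carried onto its displaced counterpart, and the pixels inside are interpolated. As a minimal sketch (the paper's exact control-point scheme is not given here), the affine transform for a single triangle can be solved directly from its three vertex correspondences:

```python
import numpy as np

def triangle_affine(src, dst):
    """Solve for the 2x3 affine transform that maps the three source
    triangle vertices onto the destination vertices."""
    src = np.asarray(src, dtype=float)   # shape (3, 2)
    dst = np.asarray(dst, dtype=float)   # shape (3, 2)
    # Homogeneous source coordinates: [x, y, 1] per vertex.
    A = np.hstack([src, np.ones((3, 1))])
    # Solve A @ M.T = dst for the affine matrix M (2x3); three
    # non-collinear vertices determine it exactly.
    M, *_ = np.linalg.lstsq(A, dst, rcond=None)
    return M.T

# Move the apex of a triangle upward, as one might when raising an eyebrow.
src = [(0, 0), (10, 0), (5, 5)]
dst = [(0, 0), (10, 0), (5, 8)]
M = triangle_affine(src, dst)
# Applying M to the source apex lands exactly on the destination apex.
apex = M @ np.array([5, 5, 1])
print(apex)  # [5. 8.]
```

In a full pipeline this transform is computed for every triangle of the facial mesh, and destination pixels are filled by sampling the source image through the inverse map.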

2021 ◽  
Vol 6 (4) ◽  
pp. 345
Author(s):  
Tanmoy Sarkar Pias ◽  
Ashikur Rahman ◽  
Md. Shahedul Haque Shawon ◽  
Nirob Arefin ◽  
Sagor Biswas

2017 ◽  
Author(s):  
Haoming Guan ◽  
Honxu Wei ◽  
Xingyuan He ◽  
Zhibin Ren ◽  
Xin Chen ◽  
...  

Urban forests attract visitors through their well-being benefits, which can be evaluated by analyzing big data from social networking services (SNS). In this study, 935 facial images of visitors to nine urban forest parks were screened and downloaded from check-in records on the SNS platform Sina Micro-Blog for the cities of Changchun, Harbin, and Shenyang in Northeast China. The images were analyzed with FaceReaderTM to score eight emotional expressions: neutral, happy, sad, angry, surprised, scared, disgusted, and contempt. More images were posted by women than by men. Compared with images from Changchun, those from Shenyang showed a higher neutral degree, which was positively related to the distance of the forest park from downtown. In Changchun, the angry, surprised, and disgusted degrees decreased as the distance of the forest park from downtown increased, while the happy and disgusted degrees showed the same trend in Shenyang. In forest parks at the city center and in remote rural areas, the neutral degree was positively correlated with the angry, surprised, and contempt degrees but negatively correlated with the happy and disgusted degrees. In the suburban area, the correlation of the neutral degree with both the surprised and disgusted degrees disappeared. Our findings can inform urban planning efforts to evaluate perceived well-being in urban forests by analyzing facial expressions in SNS images.


2021 ◽  
Vol 2021 (3) ◽  
pp. 136-1-136-9
Author(s):  
Franziska Schwarz ◽  
Klaus Schwarz ◽  
Reiner Creutzburg

In recent years, ID controllers have observed an increase in the use of fraudulently obtained ID documents [1]. This often involves deception during the application process to obtain a genuine document with a manipulated passport photo. One of the methods used by fraudsters is the presentation of a morphed facial image. Face morphing is used to assign multiple identities to a biometric passport photo. The photo can be modified so that two or more persons, usually the known applicant and one or more unknown companions, can use the passport to pass through border control [2]. In this way, persons prohibited from crossing a border can do so unnoticed using a face morphing attack and thus acquire a different identity. The face morphing attack exploits a weakness in the identity card application process to have a genuine identity document issued with a morphed facial image. A survey among experts at the Security Printers Conference revealed that at least 1,000 passports with morphed facial images had been detected in the last five years in Germany alone [1]. Furthermore, there are indications of a high number of unreported cases, which can be explained by the lack of capability to detect morphed photographs. Such identity cards would be identified if controllers could recognize the morphed facial images, yet various studies have shown that the human eye has only minimal ability to recognize morphed faces as such [2], [3], [4], [5], [6]. This work consists of two parts, both based on the complete development of a training course for passport control officers to detect morphed facial images. Part one covers the conception and first test trials of how the training course must be structured to achieve the desired goals and thus improve the detection of morphed facial images by passport inspectors.
The second part of this thesis will include the complete training course and the evaluation of its effectiveness.
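A morphing attack combines two aligned face images into a single plausible photo. A full pipeline first warps both faces onto averaged landmark positions (e.g. with a triangular mesh); the sketch below omits that step and shows only the final pixelwise cross-dissolve, assuming the two inputs are already landmark-aligned:

```python
import numpy as np

def morph_aligned(img_a, img_b, alpha=0.5):
    """Blend two landmark-aligned face images with weight alpha.
    alpha=0.5 gives the symmetric morph typically used in attacks,
    so that both contributing persons resemble the result."""
    return ((1.0 - alpha) * img_a + alpha * img_b).astype(img_a.dtype)

# Two toy 2x2 "images": a 50/50 morph is their pixelwise average.
a = np.full((2, 2), 100, dtype=np.uint8)
b = np.full((2, 2), 200, dtype=np.uint8)
m = morph_aligned(a, b)
print(m)  # every pixel is 150
```

It is precisely because this blend preserves facial structure from both contributors that human inspectors find the result so hard to flag.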


Author(s):  
Guozhu Peng ◽  
Shangfei Wang

Current works on facial action unit (AU) recognition typically require fully AU-labeled training samples. To reduce the reliance on time-consuming manual AU annotations, we propose a novel semi-supervised AU recognition method leveraging two kinds of readily available auxiliary information. The first is the dependencies between AUs and expressions, as well as the dependencies among AUs, which arise from facial anatomy and are therefore embedded in all facial images, independent of their AU annotation status. The other is facial image synthesis given AUs, the dual task of AU recognition from facial images, which therefore has intrinsic probabilistic connections with AU recognition, regardless of AU annotations. Specifically, we propose a dual semi-supervised generative adversarial network for AU recognition from partially AU-labeled and fully expression-labeled facial images. The proposed network consists of an AU classifier C, an image generator G, and a discriminator D. In addition to minimizing the supervised losses of the AU classifier and the face generator on labeled training data, we exploit the probabilistic duality between the tasks using adversarial learning to force convergence between the face-AU-expression tuples generated by the AU classifier and the face generator and the ground-truth distribution in labeled data, for all training data. This joint distribution also captures the inherent AU dependencies. Furthermore, we reconstruct the facial image by using the output of the AU classifier as the input of the face generator, and create AU labels by feeding the output of the face generator to the AU classifier. We minimize these reconstruction losses for all training data, thus exploiting the informative feedback provided by the dual tasks. Within-database and cross-database experiments on three benchmark databases demonstrate the superiority of our method in both AU recognition and face synthesis compared to state-of-the-art works.


2020 ◽  
Vol 20 (1) ◽  
pp. 66-77
Author(s):  
Francisco Adelton Alves Ribeiro ◽  
Álvaro Itauna Schalcher Pereira ◽  
Miguel de Sousa Freitas ◽  
Dina Karla Plácido Nascimento

The game is an educational tool developed by a multidisciplinary team at the Federal Institute of Education, Science and Technology of Maranhão, composed of professors, students, and volunteers, to be applied in the daily life of children with autism spectrum disorder. It follows a differentiated teaching model: it aims to assist in the treatment of autistic children through the recognition and interpretation of facial expressions across the various spectra (mild, moderate, or severe), exercising their stimuli and cognitive ability to recognize distinct facial expressions through mobile devices in a multiple-choice environment. This allows a gradual increase in the child's sensitivity to external stimuli and a preference for facial images handled repetitively, developing the user's motor skills and improving their interpersonal relationships. The methodology used is Applied Behavior Analysis (ABA), commonly associated with the treatment of people with autism spectrum disorders using positive reinforcement, thus contributing to effective evidence-based teaching and practice, as it draws on basic, applied, and theoretical research into social behaviors and patterns. The research was recognized in the academic and scientific community with three works published at international events.


2021 ◽  
Vol 12 ◽  
Author(s):  
Yu-hang Li ◽  
Xin Tan ◽  
Wei Zhang ◽  
Qing-bin Jiao ◽  
Yu-xing Xu ◽  
...  

This paper focuses on image segmentation, image correction, and spatial-spectral denoising in hyperspectral image preprocessing to improve the classification accuracy of hyperspectral images. Firstly, the images were filtered and segmented using the spectral angle and principal component analysis; the segmented results were intersected and then used to mask the hyperspectral images, yielding an excellent segmentation result. Secondly, standard reflectance plates with reflectances of 2% and 98% were used as a priori spectral information for image correction of samples with known true spectral information. The mean square error between the corrected and calibrated spectra is less than 0.0001. Compared with the black-and-white correction method, the classification model constructed from this method has higher classification accuracy. Finally, the convolution kernel of the one-dimensional Savitzky-Golay (SG) filter was extended into a two-dimensional convolution kernel to perform joint spatial-spectral filtering (TSG) on the hyperspectral images. The SG filter (m = 7, n = 3) and the TSG filter (m = 3, n = 4) were applied to the hyperspectral image of Pavia University and the image quality was evaluated. The TSG filter retained most of the original features while leaving less noise in the filtered hyperspectral image. The hyperspectral images of samples 1–1 and 1–2 were processed with the segmentation and correction methods proposed in this paper, and classification models based on SG-filtered and TSG-filtered hyperspectral images were then constructed, respectively. The results showed that the TSG-filter-based model had higher classification accuracy, exceeding 98%.
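One plausible way to extend a 1-D Savitzky-Golay kernel to two dimensions, in the spirit of the TSG construction, is the outer product of the 1-D coefficient vector with itself; the resulting 2-D kernel still sums to one, so constant regions pass through unchanged. The window and order below are illustrative and not the paper's (m, n) settings:

```python
import numpy as np

def sg_coeffs(window, order):
    """1-D Savitzky-Golay smoothing taps via least-squares polynomial
    fitting over an odd-length window centred at zero."""
    half = window // 2
    x = np.arange(-half, half + 1)
    A = np.vander(x, order + 1, increasing=True)  # (window, order+1)
    # Row 0 of the pseudo-inverse evaluates the fitted polynomial at
    # the window centre, i.e. it holds the smoothing filter taps.
    return np.linalg.pinv(A)[0]

def tsg_kernel(window, order):
    """A sketch of a 2-D SG kernel as the outer product of the 1-D
    taps; the paper's exact TSG construction may differ."""
    c = sg_coeffs(window, order)
    return np.outer(c, c)

k = tsg_kernel(5, 2)
# The taps sum to 1, so convolving a constant image changes nothing.
print(round(k.sum(), 6))  # 1.0
```

Convolving each band with such a kernel smooths spatially, and stacking the 1-D taps along the spectral axis as well would give the joint spatial-spectral filtering the abstract describes.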


2022 ◽  
Vol 8 ◽  
Author(s):  
Niyati Rawal ◽  
Dorothea Koert ◽  
Cigdem Turan ◽  
Kristian Kersting ◽  
Jan Peters ◽  
...  

The ability of a robot to generate appropriate facial expressions is a key aspect of perceived sociability in human-robot interaction. Yet many existing approaches rely on a set of fixed, preprogrammed joint configurations for expression generation. Automating this process offers the potential to scale better across robot types and expressions. To this end, we introduce ExGenNet, a novel deep generative approach for facial expressions on humanoid robots. ExGenNets connect a generator network, which reconstructs simplified facial images from robot joint configurations, with a classifier network for state-of-the-art facial expression recognition. The robots’ joint configurations are optimized for various expressions by backpropagating the loss between the predicted and intended expression through the classification network and the generator network. To improve transfer between human training images and images of different robots, we propose to use extracted features in both the classifier and the generator network. Unlike most studies on facial expression generation, ExGenNets can produce multiple configurations for each facial expression and can be transferred between robots. Experimental evaluations on two robots with highly human-like faces, Alfie (Furhat Robot) and the android robot Elenoide, show that ExGenNet can successfully generate sets of joint configurations for predefined facial expressions on both robots. This ability to generate realistic facial expressions was further validated in a pilot study in which the majority of human subjects accurately recognized most of the generated facial expressions on both robots.
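The key idea, optimizing joint configurations by backpropagating an expression loss through a fixed generator and classifier, can be illustrated with linear stand-ins for the two networks. Everything below (the matrices G and C, the dimensions, the step size) is hypothetical; the real ExGenNet networks are deep and nonlinear, but the gradient flow to the joints is analogous:

```python
import numpy as np

rng = np.random.default_rng(0)
G = rng.normal(size=(8, 4))          # "generator": 4 joints -> 8 image features
C = rng.normal(size=(3, 8))          # "classifier": features -> 3 expression scores
target = np.array([1.0, 0.0, 0.0])   # intended expression (one-hot)

A = C @ G                            # composed map from joints to scores
lr = 1.0 / np.linalg.norm(A, 2) ** 2 # safe step size for this quadratic loss

q = np.zeros(4)                      # joint configuration being optimized
for _ in range(10000):
    err = A @ q - target             # forward pass through both networks
    q -= lr * A.T @ err              # gradient step on the joints themselves

# The optimized joints now produce (nearly) the intended expression.
print(np.linalg.norm(A @ q - target) < 1e-3)
```

With nonlinear networks the loop is the same shape, only the gradient comes from automatic differentiation instead of the closed-form `A.T @ err`.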


Sensors ◽  
2021 ◽  
Vol 21 (6) ◽  
pp. 2026
Author(s):  
Jung Hwan Kim ◽  
Alwin Poulose ◽  
Dong Seog Han

Facial emotion recognition (FER) systems play a significant role in identifying driver emotions. Accurate facial emotion recognition of drivers in autonomous vehicles can reduce road rage. However, training even an advanced FER model without proper datasets causes poor performance in real-time testing. FER system performance is affected more heavily by the quality of the datasets than by the quality of the algorithms. To improve FER system performance for autonomous vehicles, we propose a facial image threshing (FIT) machine that uses advanced features of pre-trained facial recognition together with training based on the Xception algorithm. The FIT machine removes irrelevant facial images, collects facial images, corrects misplaced face data, and merges original datasets on a massive scale, in addition to applying data augmentation. The final FER results of the proposed method improved validation accuracy by 16.95% over the conventional approach on the FER 2013 dataset. A confusion matrix evaluation on an unseen private dataset shows a 5% improvement over the original approach with the FER 2013 dataset, confirming the real-time testing results.
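The "threshing" step can be pictured as a filter over candidate images: keep only those in which a face detector finds exactly one sufficiently confident face. The detector below is a hypothetical stub standing in for a real pre-trained model; only the filtering logic is the point:

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Detection:
    confidence: float  # detector score in [0, 1]

def thresh_images(images: List[str],
                  detect: Callable[[str], List[Detection]],
                  min_conf: float = 0.9) -> List[str]:
    """Keep images containing exactly one face above the confidence
    threshold; everything else is discarded as irrelevant or ambiguous."""
    kept = []
    for img in images:
        faces = [d for d in detect(img) if d.confidence >= min_conf]
        if len(faces) == 1:
            kept.append(img)
    return kept

# Hypothetical detector results for three candidate images.
fake_results = {
    "driver.jpg":    [Detection(0.97)],                  # one clear face: keep
    "crowd.jpg":     [Detection(0.95), Detection(0.93)], # two faces: drop
    "dashboard.jpg": [],                                 # no face: drop
}
kept = thresh_images(list(fake_results), fake_results.get)
print(kept)  # ['driver.jpg']
```

In practice the detector would be the pre-trained facial recognition model the abstract mentions, and the kept crops would feed the Xception-based training stage.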

