On age prediction from facial images in presence of facial expressions

2021 ◽  
Vol 6 (4) ◽  
pp. 345
Author(s):  
Tanmoy Sarkar Pias ◽  
Ashikur Rahman ◽  
Md. Shahedul Haque Shawon ◽  
Nirob Arefin ◽  
Sagor Biswas

2017 ◽  
Author(s):  
Haoming Guan ◽  
Honxu Wei ◽  
Xingyuan He ◽  
Zhibin Ren ◽  
Xin Chen ◽  
...  

Urban forests attract visitors in part because they improve well-being, an effect that can be evaluated by analyzing big data from social networking services (SNS). In this study, 935 facial images of visitors to nine urban forest parks were screened and downloaded from check-in records on the Sina Micro-Blog SNS platform in the cities of Changchun, Harbin, and Shenyang in Northeast China. Facial expressions were recognized with FaceReader™, which scores eight emotional expressions: neutral, happy, sad, angry, surprised, scared, disgusted, and contempt. More images were posted by women than by men. Compared with images from Changchun, those from Shenyang showed a higher neutral degree, which was positively related to the distance of the forest park from downtown. In Changchun, the angry, surprised, and disgusted degrees decreased as the distance of the forest park from downtown increased, while the happy and disgusted degrees showed the same trend in Shenyang. In forest parks in city-center and remote rural areas, the neutral degree was positively correlated with the angry, surprised, and contempt degrees but negatively correlated with the happy and disgusted degrees. In the suburban area, the correlations of the neutral degree with the surprised and disgusted degrees disappeared. These findings can inform urban planning by evaluating perceived well-being in urban forests through the analysis of facial expressions in SNS images.
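The distance–expression relationships reported above come down to correlation analysis between per-image expression scores and park distance. A minimal sketch with numpy; the numbers below are illustrative, not the study's data:

```python
import numpy as np

# Hypothetical per-image records: FaceReader-style expression "degrees"
# (intensities in [0, 1]) and each park's distance from downtown in km.
# All values are illustrative, not the study's data.
neutral_degree = np.array([0.62, 0.55, 0.71, 0.48, 0.80, 0.66])
distance_km    = np.array([3.0,  5.5,  9.0,  2.0, 14.0, 8.0])

def pearson_r(x, y):
    """Pearson correlation coefficient between two 1-D samples."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xc, yc = x - x.mean(), y - y.mean()
    return float((xc @ yc) / np.sqrt((xc @ xc) * (yc @ yc)))

r = pearson_r(neutral_degree, distance_km)
print(f"neutral degree vs. distance from downtown: r = {r:+.3f}")
```

A positive `r` would correspond to the reported pattern of the neutral degree rising with distance from downtown.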


2020 ◽  
Vol 20 (1) ◽  
pp. 66-77
Author(s):  
Francisco Adelton Alves Ribeiro ◽  
Álvaro Itauna Schalcher Pereira ◽  
Miguel de Sousa Freitas ◽  
Dina Karla Plácido Nascimento

The game is an educational tool developed by a multidisciplinary team at the Federal Institute of Education, Science and Technology of Maranhão, composed of professors, students, and volunteers, to be used in the daily life of children with Autism Spectrum Disorder. It follows a differentiated teaching model: it aims to support the treatment of autistic children, across the various spectra (mild, moderate, or severe), through the recognition and interpretation of facial expressions, exercising their stimuli and cognitive ability to distinguish facial expressions on mobile devices in a multiple-choice environment. This allows a gradual increase in the child's sensitivity to external stimuli and a preference for facial images that are handled repetitively, developing the user's motor skills and improving their interpersonal relationships. The methodology used is Applied Behavior Analysis (ABA), commonly associated with the treatment of people with autism spectrum disorders through positive reinforcement, thus contributing to evidence-based teaching and practice, since it consists of basic, applied, and theoretical research on social behaviors and patterns. The research was recognized in the academic and scientific community with three works published at international events.


2022 ◽  
Vol 8 ◽  
Author(s):  
Niyati Rawal ◽  
Dorothea Koert ◽  
Cigdem Turan ◽  
Kristian Kersting ◽  
Jan Peters ◽  
...  

The ability of a robot to generate appropriate facial expressions is a key aspect of perceived sociability in human–robot interaction. Yet many existing approaches rely on a set of fixed, preprogrammed joint configurations for expression generation. Automating this process offers the potential to scale better across different robot types and expressions. To this end, we introduce ExGenNet, a novel deep generative approach for facial expression generation on humanoid robots. ExGenNets connect a generator network, which reconstructs simplified facial images from robot joint configurations, with a classifier network for state-of-the-art facial expression recognition. The robots' joint configurations are optimized for various expressions by backpropagating the loss between the predicted expression and the intended expression through the classifier and the generator networks. To improve transfer between human training images and images of different robots, we propose using extracted features in both the classifier and the generator network. Unlike most studies on facial expression generation, ExGenNets can produce multiple configurations for each facial expression and can be transferred between robots. Experimental evaluations on two robots with highly human-like faces, Alfie (Furhat robot) and the android robot Elenoide, show that ExGenNet can successfully generate sets of joint configurations for predefined facial expressions on both robots. This ability to generate realistic facial expressions was further validated in a pilot study in which the majority of human subjects accurately recognized most of the generated facial expressions on both robots.
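The core ExGenNet idea, holding the generator and classifier fixed and optimizing only the joint configuration by propagating the expression loss back to the joints, can be sketched with tiny stand-in linear "networks" and finite-difference gradients. The real system uses trained deep networks and automatic differentiation; everything below (sizes, maps, hyperparameters) is illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "generator" (joints -> simplified face-image features) and
# "classifier" (features -> logits over 3 expressions). Both are frozen
# random linear maps here, purely for illustration; only the joint
# vector is optimized, mirroring ExGenNet's training-through-the-loss idea.
G = 0.3 * rng.normal(size=(16, 5))   # 5 joints -> 16 image features
C = 0.3 * rng.normal(size=(3, 16))   # 16 features -> 3 expression logits

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def loss(joints, target):
    """Cross-entropy between the predicted and the intended expression."""
    p = softmax(C @ (G @ joints))
    return -np.log(p[target])

def optimize_joints(target, steps=500, lr=0.2, eps=1e-5):
    """Gradient descent on the joint configuration alone; the gradient is
    estimated by central finite differences, standing in for
    backpropagation through the classifier and generator networks."""
    joints = rng.normal(size=5)
    for _ in range(steps):
        grad = np.zeros_like(joints)
        for i in range(joints.size):
            d = np.zeros_like(joints)
            d[i] = eps
            grad[i] = (loss(joints + d, target) - loss(joints - d, target)) / (2 * eps)
        joints -= lr * grad
    return joints

joints = optimize_joints(target=1)        # optimize for expression index 1
pred = int(np.argmax(C @ (G @ joints)))   # classifier's verdict on the result
```

Because different random initializations can settle on different joint vectors for the same target expression, this setup also illustrates how multiple configurations per expression arise.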


2021 ◽  
pp. 2090-2098
Author(s):  
Wasan Maddah Alaluosi

Facial expression is a term for the movements of the facial muscles that relate to a person's emotions. Human–computer interaction (HCI) is one of the most attractive and fastest-growing fields, and adding recognition of emotional expressions, to anticipate users' feelings and emotional state, can drastically improve HCI. This paper addresses the three most important facial expressions (happiness, sadness, and surprise). The method contains three stages: first, a preprocessing stage enhances the facial images; second, a feature-extraction stage applies the Discrete Wavelet Transform (DWT) and Discrete Cosine Transform (DCT); third, a recognition stage uses an artificial neural network, the Back-Propagation Neural Network (BPNN), on images from the Cohn-Kanade database. The method proved very efficient, with a total recognition rate of 92.9% across the three facial expressions.
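The DCT part of the feature-extraction stage can be sketched as follows. The image size and the number of retained low-frequency coefficients are assumptions for illustration, not the paper's settings:

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix: row k, column j holds
    cos(pi * (2j + 1) * k / (2n)) with the usual orthonormal scaling."""
    k = np.arange(n)
    M = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    M[0] *= np.sqrt(1.0 / n)
    M[1:] *= np.sqrt(2.0 / n)
    return M

def dct2(img):
    """2-D DCT-II of a square image via the separable matrix form."""
    M = dct_matrix(img.shape[0])
    return M @ img @ M.T

def dct_features(img, k=8):
    """Keep only the k x k low-frequency coefficients as the feature
    vector, as is typical for DCT-based feature extraction; these
    compact features would then feed the BPNN classifier."""
    return dct2(img)[:k, :k].ravel()

img = np.random.default_rng(1).random((64, 64))  # stand-in face image
feats = dct_features(img)                        # 64-dimensional features
```

The top-left (low-frequency) coefficients carry most of the image energy, which is why truncating there yields a compact yet discriminative feature vector.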


2017 ◽  
Vol 4 (3) ◽  
pp. 16
Author(s):  
ZABEEN K.T. RESHMA ◽  
SAVITHRI V ◽  

2013 ◽  
Vol 479-480 ◽  
pp. 834-838
Author(s):  
Jia Shing Sheu ◽  
Tsu Shien Hsieh ◽  
Ho Nien Shou

Advances in computer technology have produced instant messaging software that makes human interaction immediate and dynamic. However, such software cannot convey actual emotions and lacks a realistic depiction of feelings. Instant messaging would be more engaging if users' facial images were integrated into a virtual portrait that can automatically create images with different expressions. This study uses triangular segmentation to generate facial expressions. An image-editing technique is introduced to automatically create expressive images from an expressionless facial image. Probable facial regions are separated from the background through skin segmentation and morphological noise filtering. Control points for the feature shapes are marked on the image to create facial expressions, applying triangular segmentation, image correction, and image interpolation. Image processing is also used to transform the feature space, thus generating a new expression.
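The skin-segmentation and morphological noise-filtering steps can be sketched roughly as below. The RGB thresholds are one common heuristic, not necessarily the rule the authors used, and the 3x3 structuring element is likewise an assumption:

```python
import numpy as np

def skin_mask(rgb):
    """Very rough skin segmentation by RGB thresholding (a common
    heuristic; the paper's actual rule is not specified here)."""
    r = rgb[..., 0].astype(int)
    g = rgb[..., 1].astype(int)
    b = rgb[..., 2].astype(int)
    return ((r > 95) & (g > 40) & (b > 20) & (r > g) & (r > b)
            & (r - np.minimum(g, b) > 15))

def erode(mask):
    """3x3 binary erosion: a pixel survives only if its whole
    neighborhood is set (the image border counts as background)."""
    m = np.pad(mask, 1, constant_values=False)
    out = np.ones_like(mask)
    h, w = mask.shape
    for dy in (0, 1, 2):
        for dx in (0, 1, 2):
            out &= m[dy:dy + h, dx:dx + w]
    return out

def dilate(mask):
    """3x3 binary dilation, the dual of erosion."""
    m = np.pad(mask, 1, constant_values=False)
    out = np.zeros_like(mask)
    h, w = mask.shape
    for dy in (0, 1, 2):
        for dx in (0, 1, 2):
            out |= m[dy:dy + h, dx:dx + w]
    return out

def open_mask(mask):
    """Morphological opening (erosion then dilation) drops isolated
    noise pixels while keeping larger skin regions."""
    return dilate(erode(mask))

# Synthetic example: one skin-coloured patch plus one isolated noise pixel.
img = np.zeros((10, 10, 3), dtype=np.uint8)
img[2:8, 2:8] = (200, 120, 90)   # skin-like patch
img[0, 0] = (200, 120, 90)       # isolated false positive

mask = skin_mask(img)
opened = open_mask(mask)         # noise pixel removed, patch retained
```

Opening is the standard way to suppress speckle false positives after color thresholding; the cleaned mask then bounds the probable facial region for control-point placement.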


2002 ◽  
Vol 45 (1) ◽  
pp. 298-305
Author(s):  
Hiroshi KOBAYASHI ◽  
Seiji SUZUKI ◽  
Hisanori TAKAHASHI ◽  
Akira TANGE ◽  
Kohki KIKUCHI


