A Real Time Gender Recognition System Using Facial Images and CNN

2019 ◽  
Vol 7 (9) ◽  
pp. 122-126
Author(s):  
Taran Rishit Undru ◽  
CVNS Anuradha
2019 ◽  
Vol 11 (6) ◽  
pp. 2407-2419 ◽  
Author(s):  
Vincenzo Carletti ◽  
Antonio Greco ◽  
Alessia Saggese ◽  
Mario Vento

2021 ◽  
Vol 6 (1) ◽  
pp. 27-45
Author(s):  
Martins E. Irhebhude ◽  
Adeola O. Kolawole ◽  
Hauwa K. Goma

Gender recognition is an interesting research area that plays important roles in many fields of study. Studies from MIT and Microsoft showed that the female gender was poorly recognised, especially among dark-skinned people. The focus of this paper is to present a technique that categorises gender among dark-skinned people. Classification was done using an SVM on sets of images gathered locally and publicly. The analysis includes face detection using the Viola-Jones algorithm, extraction of Histogram of Oriented Gradients (HOG) and Rotation-Invariant LBP (RILBP) features, and training with an SVM classifier. PCA was performed on both the HOG and RILBP descriptors to reduce the dimensionality of the features. Various success rates were recorded; PCA on RILBP performed best, with accuracies of 99.6% and 99.8% on the public and local datasets, respectively. This system will be of immense benefit in application areas such as social interaction and targeted advertisement.
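The rotation-invariant LBP descriptor named in the abstract can be sketched in a few lines of numpy. This is an illustrative implementation, not the authors' code: the function names, the `>=` tie-breaking rule, and the 36-bin histogram (there are exactly 36 distinct rotation-invariant 8-bit patterns) are all assumptions made here for the sketch.

```python
import numpy as np

def rotation_invariant_lbp(image):
    """Compute a rotation-invariant LBP code for each interior pixel.

    The 8 neighbours of each pixel are thresholded against the centre to
    form an 8-bit pattern; the pattern is then rotated to its minimal
    value so all rotations of the same local texture map to one code.
    """
    # Offsets of the 8 neighbours, listed in circular order.
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]
    c = image[1:-1, 1:-1].astype(np.int32)
    codes = np.zeros_like(c)
    for i, (dy, dx) in enumerate(offs):
        nb = image[1 + dy: image.shape[0] - 1 + dy,
                   1 + dx: image.shape[1] - 1 + dx]
        codes |= (nb >= c).astype(np.int32) << i
    # Lookup table mapping each 8-bit code to the minimum over its
    # 8 circular bit rotations (the rotation-invariant canonical form).
    def min_rot(v):
        return min(((v >> r) | (v << (8 - r))) & 0xFF for r in range(8))
    lut = np.array([min_rot(v) for v in range(256)], dtype=np.int32)
    return lut[codes]

def rilbp_histogram(image, bins=36):
    """Normalised histogram of RILBP codes, usable as a feature vector."""
    codes = rotation_invariant_lbp(image)
    hist, _ = np.histogram(codes, bins=bins, range=(0, 256), density=True)
    return hist
```

In a pipeline like the one described, these histograms (computed on Viola-Jones face crops) would then be reduced with PCA and classified with an SVM, e.g. via scikit-learn's `PCA` and `SVC`.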


In the current scenario of technological advancement, human-machine interaction is increasingly sought after, and machines need to comprehend human emotions and feelings. The productivity of an exercise can be improved considerably if a machine can distinguish human emotions by understanding human conduct. Emotions can be understood from text, vocal, verbal and facial expressions; the major deciding factor in the identification of human emotions is facial expression. Working with facial images and emotions in real time is a demanding task, and only a limited amount of work has been done in this field. In this paper, we propose a technique for facial landmark detection and feature extraction, the most crucial prerequisites for an emotion recognition system, by capturing facial images in real time. The proposed system is divided into three tightly coupled stages: face detection, landmark detection and feature extraction. This is done with a HOG and linear SVM-based face detector using dlib and OpenCV. The novelty of our proposed strategy lies in the implementation stage: the system runs on a Raspberry Pi 3 B+, and an average accuracy of 99.9% is achieved in real time. This paper can serve as the basis of real-time emotion recognition in a majority of applications.
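The dlib detector mentioned above is a library call built on HOG features and a linear SVM. To show what those features look like, here is a minimal numpy sketch of HOG; the cell size, bin count and single global normalisation are simplifications chosen here (real HOG, including dlib's, adds overlapping block normalisation), and none of it is taken from the paper.

```python
import numpy as np

def hog_descriptor(image, cell=8, bins=9):
    """Minimal HOG: gradient-orientation histograms over non-overlapping
    cells, concatenated and L2-normalised into one feature vector."""
    img = image.astype(np.float64)
    gy, gx = np.gradient(img)                     # image gradients
    mag = np.hypot(gx, gy)                        # gradient magnitude
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0  # unsigned orientation
    h, w = img.shape
    feats = []
    for i in range(h // cell):
        for j in range(w // cell):
            m = mag[i * cell:(i + 1) * cell, j * cell:(j + 1) * cell]
            a = ang[i * cell:(i + 1) * cell, j * cell:(j + 1) * cell]
            # Magnitude-weighted orientation histogram for this cell.
            hist, _ = np.histogram(a, bins=bins, range=(0.0, 180.0), weights=m)
            feats.append(hist)
    v = np.concatenate(feats)
    return v / (np.linalg.norm(v) + 1e-12)
```

A detector in this style slides a window over the image, computes such a descriptor per window, and scores it with a linear SVM (e.g. scikit-learn's `LinearSVC`).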


2021 ◽  
Vol 11 (11) ◽  
pp. 4758
Author(s):  
Ana Malta ◽  
Mateus Mendes ◽  
Torres Farinha

Maintenance professionals and other technical staff regularly need to learn to identify new parts in car engines and other equipment. The present work proposes a model of a task assistant based on a deep learning neural network. A YOLOv5 network is used for recognizing some of the constituent parts of an automobile. A dataset of car engine images was created and eight car parts were marked in the images. Then, the neural network was trained to detect each part. The results show that YOLOv5s is able to successfully detect the parts in real-time video streams with high accuracy, making it useful as an aid for training professionals learning to deal with new equipment using augmented reality. The architecture of an object recognition system using augmented-reality glasses is also designed.
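Marking the eight parts for YOLOv5 training implies annotations in the standard YOLO label format: one line per object with a class index and a centre-normalised box. The converter below illustrates that format; it is a generic sketch, not the authors' annotation tooling.

```python
def to_yolo_label(class_id, box, img_w, img_h):
    """Convert a pixel-space box (x_min, y_min, x_max, y_max) into a YOLO
    annotation line: 'class x_center y_center width height', where the
    four geometry values are normalised to [0, 1] by the image size."""
    x0, y0, x1, y1 = box
    xc = (x0 + x1) / 2.0 / img_w   # normalised box centre, x
    yc = (y0 + y1) / 2.0 / img_h   # normalised box centre, y
    w = (x1 - x0) / img_w          # normalised box width
    h = (y1 - y0) / img_h          # normalised box height
    return f"{class_id} {xc:.6f} {yc:.6f} {w:.6f} {h:.6f}"
```

One such line per marked part, in a `.txt` file per image, is what YOLOv5's training pipeline consumes alongside the images.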


Sensors ◽  
2021 ◽  
Vol 21 (2) ◽  
pp. 405
Author(s):  
Marcos Lupión ◽  
Javier Medina-Quero ◽  
Juan F. Sanjuan ◽  
Pilar M. Ortigosa

Activity Recognition (AR) is an active research topic focused on detecting human actions and behaviours in smart environments. In this work, we present the on-line activity recognition platform DOLARS (Distributed On-line Activity Recognition System), where data from heterogeneous sensors, including binary, wearable and location sensors, are evaluated in real time. Different descriptors and metrics from the heterogeneous sensor data are integrated in a common feature vector whose extraction is performed by a sliding-window approach under real-time conditions. DOLARS provides a distributed architecture where: (i) stages for processing data in AR are deployed in distributed nodes; (ii) temporal cache modules compute metrics which aggregate sensor data for computing feature vectors in an efficient way; (iii) publish-subscribe models are integrated both to spread data from sensors and to orchestrate the nodes (communication and replication) for computing AR; and (iv) machine learning algorithms are used to classify and recognize the activities. A successful case study of daily activity recognition developed in the Smart Lab of the University of Almería (UAL) is presented in this paper. Results show encouraging performance in recognition of sequences of activities and demonstrate the need for distributed architectures to achieve real-time recognition.
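The sliding-window feature extraction described above can be sketched as follows. The window length, the particular aggregate statistics, and the function name are all illustrative assumptions for the sketch; DOLARS's actual descriptors and temporal-cache mechanics are not specified in this abstract.

```python
import numpy as np

def sliding_window_features(timestamps, values, t_now, window=30.0):
    """Aggregate the sensor readings that fall inside the trailing time
    window (t_now - window, t_now] into a small feature vector:
    [event count, mean value, std of values, age of the latest event]."""
    ts = np.asarray(timestamps, dtype=float)
    vs = np.asarray(values, dtype=float)
    mask = (ts > t_now - window) & (ts <= t_now)
    recent = vs[mask]
    if recent.size == 0:
        return np.zeros(4)  # no activity in the window
    return np.array([recent.size,
                     recent.mean(),
                     recent.std(),
                     t_now - ts[mask].max()])
```

In a distributed setting like DOLARS, vectors of this kind would be computed per node from cached sensor events and passed to the classifier stage over the publish-subscribe layer.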


2021 ◽  
Vol 11 (4) ◽  
pp. 1933
Author(s):  
Hiroomi Hikawa ◽  
Yuta Ichikawa ◽  
Hidetaka Ito ◽  
Yutaka Maeda

In this paper, a real-time dynamic hand gesture recognition system with a gesture spotting function is proposed. In the proposed system, input video frames are converted to feature vectors, which are used to form a posture sequence vector that represents the input gesture. Gesture identification and gesture spotting are then carried out in the self-organizing map (SOM)-Hebb classifier. The gesture spotting function detects the end of the gesture by using the vector distance between the posture sequence vector and the winner neuron’s weight vector. The proposed gesture recognition method was tested by simulation and by a real-time gesture recognition experiment. Results revealed that the system could recognize nine types of gestures with an accuracy of 96.6%, and that it successfully output the recognition result at the end of each gesture using the spotting result.
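The distance-based spotting rule described above can be illustrated with a few lines of numpy: find the winner neuron for the current posture-sequence vector and declare the gesture spotted when the winner distance falls below a threshold. The SOM-Hebb training itself is omitted, and the function name and threshold convention are assumptions made for this sketch.

```python
import numpy as np

def spot_gesture(posture_seq, weights, threshold):
    """Return (winner index, winner distance, spotted?).

    'weights' holds one trained SOM weight vector per row; the winner is
    the row closest to the posture-sequence vector, and the gesture is
    'spotted' (declared finished) once that distance drops below the
    threshold, mirroring the spotting rule described in the abstract."""
    d = np.linalg.norm(weights - posture_seq, axis=1)  # distance to each neuron
    winner = int(np.argmin(d))
    return winner, float(d[winner]), bool(d[winner] < threshold)
```

At runtime this check would run once per frame as the posture sequence vector accumulates, emitting the winner's gesture label at the moment spotting fires.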

