Detecting Human and Classification of Gender using Facial Images MSIFT Features Based GSVM

2019 ◽  
Vol 8 (3) ◽  
pp. 1466-1471 ◽  

Classification of gender using a face recognition system is an essential task for many applications in human-computer interaction and computer-aided systems. It relies on a wide range of features extracted from human images to detect male, female and other classes using real-time data. Different machine learning approaches have been implemented to classify gender and to detect non-human images during the classification phase, based on features extracted from human image datasets. These existing techniques mostly depend on controlled conditions such as the features and other representations of the human image. Because of significant and uncertain variations in a given image, gender classification (male, female or other) can be a challenging task for real-time image processing applications. In this paper, we therefore propose a Human Detection and Face-based Gender Recognition system (HDFGR) to investigate male/female classification on real-life faces using real-world face databases. Our approach uses the Multi-Scale Invariant Feature Transform (MSIFT) to describe faces, and a Gaussian distance-based support vector machine (GSVM) classifier to assign gender and object labels, i.e. male, female or other, from the features extracted from human image datasets. We obtain an experimental accuracy of 98.7% by applying GSVM to the boosted MSIFT features. Our approach achieves better classification accuracy and other performance measures than existing approaches when benchmarked and evaluated on the available databases.
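The abstract does not include implementation details; the following Python sketch merely illustrates the general idea of pooling multi-scale SIFT descriptors into a fixed-length vector and classifying it with a Gaussian (RBF) kernel SVM, using OpenCV and scikit-learn as stand-ins. The function names, scales and hyperparameters are illustrative assumptions, not the authors' configuration.

```python
# Hypothetical sketch of a multi-scale SIFT + Gaussian-kernel SVM pipeline,
# approximating the MSIFT/GSVM idea with OpenCV and scikit-learn.
import cv2
import numpy as np
from sklearn.svm import SVC

def msift_features(image_path, scales=(1.0, 0.75, 0.5), dims=128):
    """Extract SIFT descriptors at several image scales and average-pool them."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    sift = cv2.SIFT_create()
    pooled = []
    for s in scales:
        resized = cv2.resize(gray, None, fx=s, fy=s)
        _, desc = sift.detectAndCompute(resized, None)
        if desc is None:                      # no keypoints found at this scale
            desc = np.zeros((1, dims), dtype=np.float32)
        pooled.append(desc.mean(axis=0))      # one 128-D vector per scale
    return np.concatenate(pooled)             # fixed-length feature vector

def train_gsvm(image_paths, labels):
    """labels: 0 = male, 1 = female, 2 = other/non-human (assumed encoding)."""
    X = np.vstack([msift_features(p) for p in image_paths])
    clf = SVC(kernel="rbf", gamma="scale", C=10.0)   # Gaussian (RBF) kernel SVM
    clf.fit(X, labels)
    return clf
```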

2009 ◽  
Vol 14 (2) ◽  
pp. 109-119 ◽  
Author(s):  
Ulrich W. Ebner-Priemer ◽  
Timothy J. Trull

Convergent experimental data, autobiographical studies, and investigations of daily life have all demonstrated that gathering information retrospectively is a highly dubious methodology. Retrospection is subject to multiple systematic distortions (e.g., the affective valence effect, mood-congruent memory effect, duration neglect, and the peak-end rule) because it is based on (often biased) storage and recollection of memories of the original experience or behavior of interest. The method of choice to circumvent these biases is the use of electronic diaries to collect self-reported symptoms, behaviors, or physiological processes in real time. Different terms have been used for this kind of methodology: ambulatory assessment, ecological momentary assessment, experience sampling method, and real-time data capture. Even though the terms differ, they have in common the use of computer-assisted methodology to assess self-reported symptoms, behaviors, or physiological processes while the participant undergoes normal daily activities. In this review we discuss the main features and advantages of ambulatory assessment for clinical psychology and psychiatry: (a) the use of real-time assessment to circumvent biased recollection, (b) assessment in real life to enhance generalizability, (c) repeated assessment to investigate within-person processes, (d) multimodal assessment, including psychological, physiological and behavioral data, (e) the opportunity to assess and investigate context-specific relationships, and (f) the possibility of giving feedback in real time. Using prototypic examples from the clinical psychology and psychiatry literature, we demonstrate that ambulatory assessment can answer specific research questions better than laboratory or questionnaire studies.


Sensors ◽  
2021 ◽  
Vol 21 (2) ◽  
pp. 405
Author(s):  
Marcos Lupión ◽  
Javier Medina-Quero ◽  
Juan F. Sanjuan ◽  
Pilar M. Ortigosa

Activity Recognition (AR) is an active research topic focused on detecting human actions and behaviours in smart environments. In this work, we present the on-line activity recognition platform DOLARS (Distributed On-line Activity Recognition System), where data from heterogeneous sensors, including binary, wearable and location sensors, are evaluated in real time. Different descriptors and metrics from the heterogeneous sensor data are integrated into a common feature vector, which is extracted with a sliding-window approach under real-time conditions. DOLARS provides a distributed architecture where: (i) the stages for processing data in AR are deployed in distributed nodes; (ii) temporal cache modules compute metrics that aggregate sensor data for computing feature vectors in an efficient way; (iii) publish-subscribe models are integrated both to spread data from sensors and to orchestrate the nodes (communication and replication) for computing AR; and (iv) machine learning algorithms are used to classify and recognize the activities. A successful case study of daily activity recognition developed in the Smart Lab of the University of Almería (UAL) is presented in this paper. The results show encouraging performance in recognizing sequences of activities and demonstrate the need for distributed architectures to achieve real-time recognition.
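As a rough illustration of the sliding-window feature extraction described above (not the DOLARS implementation), the following Python sketch keeps a time-bounded window of heterogeneous sensor events and aggregates simple per-sensor metrics into a common feature vector; the event format, window length and metrics are assumptions.

```python
# Illustrative sketch of sliding-window feature extraction over a stream of
# heterogeneous sensor events, assuming a simple (timestamp, sensor_id, value) format.
from collections import deque
import time

WINDOW_SECONDS = 30          # length of the sliding window (assumed value)

class SlidingWindowExtractor:
    def __init__(self, window=WINDOW_SECONDS):
        self.window = window
        self.events = deque()          # (timestamp, sensor_id, value)

    def add_event(self, sensor_id, value, ts=None):
        ts = ts if ts is not None else time.time()
        self.events.append((ts, sensor_id, value))
        self._evict(ts)

    def _evict(self, now):
        # Drop events that have fallen out of the time window.
        while self.events and now - self.events[0][0] > self.window:
            self.events.popleft()

    def feature_vector(self, sensor_ids):
        """Aggregate per-sensor metrics (count and last value) into one vector."""
        feats = []
        for sid in sensor_ids:
            values = [v for _, s, v in self.events if s == sid]
            feats.append(len(values))                      # activation count
            feats.append(values[-1] if values else 0.0)    # most recent reading
        return feats
```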


2019 ◽  
pp. 245-256
Author(s):  
Chiranji Lal Chowdhary ◽  
Rachit Bhalla ◽  
Esha Kumar ◽  
Gurpreet Singh ◽  
K. Bhagyashree ◽  
...  

2021 ◽  
Vol 11 (22) ◽  
pp. 10540
Author(s):  
Navjot Rathour ◽  
Zeba Khanam ◽  
Anita Gehlot ◽  
Rajesh Singh ◽  
Mamoon Rashid ◽  
...  

There is significant interest in facial emotion recognition in the fields of human–computer interaction and the social sciences. With the advancements in artificial intelligence (AI), the field of human behavioral prediction and analysis, especially human emotion, has evolved significantly. The most common emotion recognition methods currently rely on models deployed on remote servers. We believe that reducing the distance between the input device and the model can lead to better efficiency and effectiveness in real-life applications. For this purpose, computational methodologies such as edge computing can be beneficial, and they can also enable time-critical applications in sensitive fields. In this study, we propose a Raspberry Pi-based standalone edge device that can detect facial emotions in real time. Although this edge device can be used in a variety of applications where human facial emotions play an important role, this article focuses on a dataset of employees working in organizations. The device has been implemented using the Mini-Xception deep network because of its computational efficiency and shorter inference time compared with other networks. It achieved 100% accuracy for detecting faces in real time and 68% accuracy for emotion recognition, which is higher than the accuracy reported in the state of the art on the FER-2013 dataset. Future work will implement a deep network on the Raspberry Pi with an Intel Movidius Neural Compute Stick to reduce the processing time and achieve a fast real-time implementation of the facial emotion recognition system.
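A minimal sketch of such an edge inference loop is shown below, assuming OpenCV Haar-cascade face detection and a Mini-Xception-style Keras model trained on FER-2013; the weights file name, input size and label order are assumptions rather than the authors' artefacts.

```python
# Hedged sketch of an edge-device inference loop: OpenCV face detection followed
# by a Mini-Xception-style Keras model. Model file and labels are hypothetical.
import cv2
import numpy as np
from tensorflow.keras.models import load_model

EMOTIONS = ["angry", "disgust", "fear", "happy", "sad", "surprise", "neutral"]

face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
emotion_model = load_model("mini_xception_fer2013.h5")     # hypothetical weights file

cap = cv2.VideoCapture(0)                                  # Pi camera / USB webcam
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_detector.detectMultiScale(gray, 1.3, 5):
        roi = cv2.resize(gray[y:y + h, x:x + w], (64, 64)) / 255.0
        probs = emotion_model.predict(roi[np.newaxis, ..., np.newaxis], verbose=0)
        label = EMOTIONS[int(np.argmax(probs))]
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.putText(frame, label, (x, y - 10),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 255, 0), 2)
    cv2.imshow("emotion", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
```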


2020 ◽  
Author(s):  
Eleonora De Filippi ◽  
Mara Wolter ◽  
Bruno Melo ◽  
Carlos J. Tierra-Criollo ◽  
Tiago Bortolini ◽  
...  

During the last decades, neurofeedback training for emotional self-regulation has received significant attention from both the scientific and clinical communities. However, most studies have focused on broader emotional states such as "negative vs. positive", primarily due to our poor understanding of the functional anatomy of more complex emotions at the electrophysiological level. Our proof-of-concept study investigates the feasibility of classifying two complex emotions that have been implicated in mental health, namely tenderness and anguish, using features extracted from the electroencephalogram (EEG) signal in healthy participants. Electrophysiological data were recorded from fourteen participants during a block-designed experiment consisting of emotional self-induction trials combined with a multimodal virtual scenario. For the within-subject classification, a linear Support Vector Machine was trained with two sets of samples: 1) random cross-validation of the sliding windows of all trials; and 2) strategic cross-validation, assigning all the windows of one trial to the same fold. Spectral features, together with frontal alpha asymmetry, were extracted using Complex Morlet Wavelet analysis. Classification with these features showed an average accuracy of 79.3% with random cross-validation and 73.3% with strategic cross-validation. We extracted a second set of features from an amplitude time-series correlation analysis, which significantly enhanced random cross-validation accuracy while showing performance similar to the spectral features under strategic cross-validation. These results suggest that complex emotions have distinct electrophysiological correlates, which paves the way for future EEG-based, real-time neurofeedback training of complex emotional states.

Significance statement: There is still little understanding of the correlates of high-order emotions (i.e., anguish and tenderness) in the physiological signals recorded with EEG. Most studies have investigated emotions using functional magnetic resonance imaging (fMRI), including real-time applications in neurofeedback training. However, for therapeutic application, EEG is a more suitable tool with regard to cost and practicability. Therefore, our proof-of-concept study aims at establishing a method for classifying complex emotions that can later be used for EEG-based neurofeedback on emotion regulation. We recorded EEG signals during a multimodal, near-immersive emotion-elicitation experiment. The results demonstrate that intraindividual classification of discrete emotions with features extracted from the EEG is feasible and may be implemented in real time to enable neurofeedback.
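A minimal sketch of the classification idea follows. The study extracted spectral features with Complex Morlet Wavelets; the sketch below substitutes Welch band power for simplicity, adds frontal alpha asymmetry, and uses a linear SVM with trial-wise ("strategic") cross-validation. The sampling rate, channel indices and band limits are assumptions.

```python
# Minimal sketch of spectral-feature extraction and trial-wise SVM classification.
# Welch band power stands in for the paper's Complex Morlet Wavelet analysis.
import numpy as np
from scipy.signal import welch
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score, GroupKFold

FS = 250                      # sampling rate in Hz (assumed)
BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_powers(window, fs=FS):
    """window: channels x samples EEG segment -> per-channel band powers."""
    freqs, psd = welch(window, fs=fs, nperseg=fs)
    feats = []
    for lo, hi in BANDS.values():
        mask = (freqs >= lo) & (freqs < hi)
        feats.extend(psd[:, mask].mean(axis=1))
    return np.asarray(feats)

def frontal_alpha_asymmetry(window, left_ch=0, right_ch=1, fs=FS):
    """Log alpha-power difference between two frontal channels (indices assumed)."""
    freqs, psd = welch(window[[left_ch, right_ch]], fs=fs, nperseg=fs)
    alpha = (freqs >= 8) & (freqs < 13)
    left, right = psd[:, alpha].mean(axis=1)
    return np.log(right) - np.log(left)

def classify(windows, labels, groups):
    """groups: trial index per window, so all windows of a trial stay in one fold."""
    X = np.vstack([np.append(band_powers(w), frontal_alpha_asymmetry(w))
                   for w in windows])
    scores = cross_val_score(SVC(kernel="linear"), X, labels,
                             cv=GroupKFold(n_splits=5), groups=groups)
    return scores.mean()
```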


Author(s):  
M. Asif Naeem ◽  
Gillian Dobbie ◽  
Gerald Weber

In order to make timely and effective decisions, businesses need the latest information from big data warehouse repositories. To keep these repositories up to date, real-time data integration is required. An important phase in real-time data integration is data transformation where a stream of updates, which is huge in volume and infinite, is joined with large disk-based master data. Stream processing is an important concept in Big Data, since large volumes of data are often best processed immediately. A well-known algorithm called Mesh Join (MESHJOIN) was proposed to process stream data with disk-based master data, which uses limited memory. MESHJOIN is a candidate for a resource-aware system setup. The problem that the authors consider in this chapter is that MESHJOIN is not very selective. In particular, the performance of the algorithm is always inversely proportional to the size of the master data table. As a consequence, the resource consumption is in some scenarios suboptimal. They present an algorithm called Cache Join (CACHEJOIN), which performs asymptotically at least as well as MESHJOIN but performs better in realistic scenarios, particularly if parts of the master data are used with different frequencies. In order to quantify the performance differences, the authors compare both algorithms with a synthetic dataset of a known skewed distribution as well as TPC-H and real-life datasets.
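As a conceptual illustration only (not the authors' implementation), the Python sketch below captures the core intuition behind CACHEJOIN: frequently hit master-data rows are served from a small in-memory cache, so disk access is concentrated on cold keys. The full algorithm additionally buffers unmatched stream tuples for a MESHJOIN-like phase, which is omitted here; the names and cache size are illustrative.

```python
# Conceptual sketch of a cache-assisted stream-to-master-data join.
from collections import OrderedDict

class CacheJoin:
    def __init__(self, lookup_master_row, cache_size=10_000):
        self.lookup = lookup_master_row      # function: key -> master row (disk access)
        self.cache = OrderedDict()           # LRU cache of hot master rows
        self.cache_size = cache_size

    def probe(self, stream_tuple):
        key = stream_tuple["key"]
        if key in self.cache:                # fast path: hot key served from memory
            self.cache.move_to_end(key)
            master = self.cache[key]
        else:                                # slow path: fetch from disk, then cache
            master = self.lookup(key)
            self.cache[key] = master
            if len(self.cache) > self.cache_size:
                self.cache.popitem(last=False)
        return {**stream_tuple, **master} if master else None
```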


2019 ◽  
Vol 10 (1) ◽  
pp. 43-54
Author(s):  
Karthik Sudhakaran Menon ◽  
Brinzel Rodrigues ◽  
Akash Prakash Barot ◽  
Prasad Avinash Gharat

Air pollution has become a common phenomenon everywhere, and in urban areas in particular it is a real-life problem. The growing number of hydrocarbon and diesel vehicles and the presence of industrial zones on the outskirts of major cities are the main causes of air pollution, and the problem is especially severe in metropolitan cities. Governments around the world are taking measures within their capabilities. The main aim of this project is to develop a system that monitors and measures air pollutants in real time, reports the air quality, and logs the real-time data to a remote server (cloud service). If a parameter exceeds its threshold value, an alert message with GPS coordinates is sent to the registered number of the responsible authority or person so that the necessary action can be taken. The Arduino board connects to the ThingSpeak cloud service platform using an ESP8266 Wi-Fi module. The device uses multiple sensors, such as the MQ-135, MQ-7 and DHT-22, together with a sound sensor and an LCD display, to monitor the air pollution parameters.
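The firmware itself runs on the Arduino/ESP8266; purely to illustrate the logging-and-alert logic (threshold check, ThingSpeak update, alert with GPS coordinates), a hedged Python sketch is given below. The field mapping, thresholds and SMS hook are placeholders rather than the project's actual configuration; only the ThingSpeak update endpoint is a documented public API.

```python
# Sketch of the monitoring logic: push readings to ThingSpeak, raise an alert
# with GPS coordinates when any reading exceeds its (assumed) threshold.
import requests

THINGSPEAK_URL = "https://api.thingspeak.com/update"
API_KEY = "YOUR_WRITE_API_KEY"                          # placeholder write key
THRESHOLDS = {"air_quality": 300.0, "co_ppm": 50.0}     # assumed limits

def log_and_alert(readings, gps, send_sms):
    """readings: dict of sensor values; gps: (lat, lon); send_sms: alert callback."""
    requests.get(THINGSPEAK_URL, params={
        "api_key": API_KEY,
        "field1": readings["air_quality"],   # MQ-135 reading
        "field2": readings["co_ppm"],        # MQ-7 reading
        "field3": readings["temperature"],   # DHT-22 temperature
        "field4": readings["humidity"],      # DHT-22 humidity
    }, timeout=10)
    for name, limit in THRESHOLDS.items():
        if readings.get(name, 0.0) > limit:
            send_sms(f"ALERT: {name}={readings[name]} exceeds {limit} "
                     f"at lat={gps[0]}, lon={gps[1]}")
```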


2002 ◽  
Vol 36 (1) ◽  
pp. 29-38 ◽  
Author(s):  
Ray Berkelmans ◽  
Jim C. Hendee ◽  
Paul A. Marshall ◽  
Peter V. Ridd ◽  
Alan R. Orpin ◽  
...  

With recent technological advances and a reduction in the cost of automatic weather stations and data buoys, the potential exists for significant advances in science and environmental management through the use of near real-time, high-resolution data to predict biological and/or physical events. However, real-world examples of how this potential wealth of data has been used in environmental management are few and far between. We describe in detail two examples where near real-time data are being used for the benefit of science and management: a prediction of coral bleaching events using temperature, light and wind as the primary predictor variables, and the management of a coastal development where dynamic discharge quality limits are maintained with the aid of wind data as a proxy for turbidity in receiving waters. We argue that the limiting factor for the use of near real-time environmental data in management is frequently not the availability of the data, but the lack of knowledge of the quantitative relationships between biological/physical processes or events and environmental variables. We advocate renewed research into this area and an integrated approach to the use of a wide range of data types to deal with management issues in an innovative, cost-effective manner.

