human behavior analysis
Recently Published Documents


TOTAL DOCUMENTS: 97 (FIVE YEARS: 26)
H-INDEX: 15 (FIVE YEARS: 2)

2021 ◽  
Vol 11 (18) ◽  
pp. 8324
Author(s):  
Bruno Degardin ◽  
Hugo Proença

The visual recognition and understanding of human actions remains an active research domain in computer vision and has been the focus of numerous works over the last two decades. The problem is challenging due to large interpersonal variations in appearance and motion dynamics between humans, as well as environmental heterogeneity across different video footage. This complexity splits the problem into two major categories: action classification, which recognises the action being performed in a scene, and spatiotemporal action localisation, which recognises and localises multiple human actions present in a scene. Whereas previous surveys mainly trace the evolution of the field from handcrafted features to deep learning architectures, this survey presents an overview of both categories and of the evolution within each, together with the guidelines that should be followed and the current benchmarks used for performance comparison between state-of-the-art methods.
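
To make the two categories concrete, the following minimal Python sketch contrasts their output formats; all class and function names are illustrative and are not taken from the surveyed methods.

from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class ClipLabel:
    # action classification: a single label for the whole clip
    clip_id: str
    action: str
    score: float

@dataclass
class ActionTube:
    # spatiotemporal localisation: a label plus one box per frame
    action: str
    score: float
    boxes: List[Tuple[int, int, int, int, int]]  # (frame, x1, y1, x2, y2)

def classify_clip(clip_id: str) -> ClipLabel:
    # placeholder for a model that pools features over the entire clip
    return ClipLabel(clip_id, "running", 0.91)

def localise_actions(clip_id: str) -> List[ActionTube]:
    # placeholder for a detector that links per-frame boxes into tubes
    return [ActionTube("running", 0.88,
                       [(0, 10, 20, 50, 120), (1, 12, 20, 52, 121)])]

print(classify_clip("clip_0001"))
for tube in localise_actions("clip_0001"):
    print(tube)

The point of the contrast is that classification answers "what is happening in this clip?", while localisation additionally answers "where, and over which frames?".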


2020 ◽  
Vol 24 (4) ◽  
Author(s):  
Alicia Martinez Rebollar ◽  
Miguel Gonzalez Mendoza ◽  
Hugo Estrada Esquivel ◽  
Wilfrido Campos Francisco ◽  
Virginia Campos Ortiz

Information ◽  
2020 ◽  
Vol 11 (10) ◽  
pp. 468
Author(s):  
Yuhi Kaihoko ◽  
Phan Xuan Tan ◽  
Eiji Kamioka

Nowadays, with smartphones, people can easily take photos, post them to social networks, and use them for various purposes. This creates a social problem: appearing unintentionally in other people's photos may threaten the facial privacy of the photographed person. Several solutions for protecting facial privacy in photos have already been proposed. However, most rely on de-identification techniques that can only be applied by the photographer, leaving the photographed person with no choice. To address this, we propose an approach that allows a photographed person to proactively detect whether someone is intentionally or unintentionally trying to take pictures of them, so that they can react appropriately to protect their facial privacy. In this approach, the photographed person is assumed to wear a camera that records the surrounding environment in real time. The skeleton information of potential photographers captured in the monitoring video is then extracted and used to compute a dynamic-programming score, which is compared against a threshold to recognise photo-taking behavior. Experimental results demonstrate that the proposed approach recognises photo-taking behavior with a high accuracy of 92.5%.
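
The abstract does not define the dynamic-programming score itself, so the sketch below assumes one common dynamic-programming choice, dynamic time warping (DTW), aligning an observed sequence of skeleton keypoints against a photo-taking template; the threshold value is likewise illustrative.

import numpy as np

def dtw_score(observed: np.ndarray, template: np.ndarray) -> float:
    # observed, template: (frames, features) arrays of flattened 2D joint coordinates
    n, m = len(observed), len(template)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(observed[i - 1] - template[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m] / (n + m)  # length-normalised alignment cost

def is_photo_taking(observed: np.ndarray, template: np.ndarray,
                    threshold: float = 0.5) -> bool:
    # illustrative threshold: a lower cost means the observed motion
    # matches the photo-taking template more closely
    return dtw_score(observed, template) < threshold

Under this reading, the score-versus-threshold comparison works exactly as the abstract describes: a sufficiently close alignment between the observed pose sequence and the template triggers a photo-taking detection.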


2020 ◽  
Vol 10 (15) ◽  
pp. 5333
Author(s):  
Anam Manzoor ◽  
Waqar Ahmad ◽  
Muhammad Ehatisham-ul-Haq ◽  
Abdul Hannan ◽  
Muhammad Asif Khan ◽  
...  

Emotions are a fundamental part of human behavior and can be stimulated in numerous ways. In everyday life we encounter many kinds of objects, such as cakes, crabs, televisions, and trees, that may excite certain emotions. Likewise, the object images we see and share on different platforms are capable of expressing or inducing human emotions. Inferring emotion tags from these object images has great significance, as it can play a vital role in recommendation systems, image retrieval, human behavior analysis, and advertising applications. Existing schemes for emotion-tag perception are based on visual features, such as the color and texture of an image, which are adversely affected by lighting conditions. The main objective of our study is to address this problem by introducing a novel idea: inferring emotion tags from images based on object-related features. To this end, we first created an emotion-tagged dataset from the publicly available object-detection dataset "Caltech-256" using subjective evaluation by 212 users. Next, we used a convolutional neural network (CNN) based model to automatically extract high-level features from object images for recognising nine emotion categories: amusement, awe, anger, boredom, contentment, disgust, excitement, fear, and sadness. Experimental results on our emotion-tagged dataset confirm the effectiveness of the proposed idea in terms of accuracy, precision, recall, specificity, and F1-score. Overall, the proposed scheme achieved accuracy rates of approximately 85% and 79% for top-level and bottom-level emotion tagging, respectively. We also performed a gender-based analysis of the inferred emotion tags and observed that male and female subjects differ in their emotion perception with respect to different object categories.
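
As a rough illustration of the CNN-based tagging step, here is a minimal transfer-learning sketch in PyTorch; the ResNet-18 backbone, the frozen features, and the input size are assumptions, since the abstract does not name a specific architecture.

import torch
import torch.nn as nn
from torchvision import models

EMOTIONS = ["amusement", "awe", "anger", "boredom", "contentment",
            "disgust", "excitement", "fear", "sadness"]

# assumed backbone: any ImageNet-pretrained CNN would fit the description
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for p in model.parameters():
    p.requires_grad = False  # keep the pretrained high-level features fixed
model.fc = nn.Linear(model.fc.in_features, len(EMOTIONS))  # new 9-way head

def predict_emotion(image: torch.Tensor) -> str:
    # image: (3, 224, 224) tensor, normalised with ImageNet statistics
    model.eval()
    with torch.no_grad():
        logits = model(image.unsqueeze(0))
    return EMOTIONS[int(logits.argmax(dim=1))]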

