Group Activity Recognition
Recently Published Documents

TOTAL DOCUMENTS: 85 (FIVE YEARS: 46)
H-INDEX: 12 (FIVE YEARS: 5)

2021, Vol. 2021, pp. 1-10
Author(s): Smita S. Kulkarni, Sangeeta Jadhav

This paper addresses the recognition of group activity in public areas from a computer vision perspective, considering both individual actions and the interactions between people. Modeling the interaction relationships among multiple people is essential for recognizing group activity in a video scene, and identifying group activities from human interaction is often a challenging task in artificial intelligence applications. The paper proposes a model that formulates a group action context (GAC) descriptor, built by integrating the focal person's action descriptor with an interaction joint context descriptor of nearby people in the video frame. The model uses an efficient machine learning optimization principle to learn discriminative interaction context relations between multiple persons. The resulting GAC descriptor is classified with a support vector machine (SVM) to recognize the group activity. The effectiveness of the proposed technique is evaluated through experiments on a publicly available collective activity dataset. The approach infers a group action class when multiple persons appear together in a video sequence, especially when the interaction between people is ambiguous. The overall group action recognition model is compared against a baseline model to estimate the contribution of the interaction context information. The experimental results show that the proposed group activity recognition model is comparable to and outperforms previous methods.
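A minimal sketch of the descriptor-plus-SVM pipeline described above: the function name `group_action_context`, the mean-pooling aggregation of nearby people, the feature dimension, and the use of scikit-learn's SVC are all illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np
from sklearn.svm import SVC

def group_action_context(focal_desc, neighbor_descs):
    """Form a GAC-style descriptor by appending an aggregated interaction
    context of nearby people to the focal person's action descriptor
    (mean pooling of neighbors is an assumption)."""
    if len(neighbor_descs) > 0:
        context = np.mean(np.stack(neighbor_descs), axis=0)
    else:
        context = np.zeros_like(focal_desc)
    return np.concatenate([focal_desc, context])

# Hypothetical per-frame action descriptors and group-activity labels.
rng = np.random.default_rng(0)
X = np.stack([
    group_action_context(rng.normal(size=64),
                         [rng.normal(size=64) for _ in range(3)])
    for _ in range(200)
])
y = rng.integers(0, 5, size=200)  # e.g. 5 collective activity classes

clf = SVC(kernel="rbf").fit(X, y)  # classify GAC descriptors with an SVM
print(clf.predict(X[:5]))
```

In practice the focal and neighbor descriptors would come from per-person action features extracted from the video frame rather than random vectors.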


2021
Author(s): Lifang Wu, Zeyu Li, Ye Xiang, Meng Jian, Jialie Shen

Author(s):  
Pranjal Kumar

In this paper, we propose a robust video understanding model for activity recognition that learns the actors' pair-wise correlations and performs relational reasoning, exploiting spatial and temporal information. To measure the similarity between pairs of actor appearances and construct an actor relation map, Zero Mean Normalized Cross-Correlation (ZNCC) and the Zero Mean Sum of Absolute Differences (ZSAD) are proposed, allowing the Graph Convolutional Network (GCN) to learn how to distinguish group actions. We recommend MNASNet as the backbone for feature extraction. Experiments show 38.50% and 23.7% reductions in training time for the two-stage training process, along with a 1.52% improvement in accuracy over traditional methods.
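A small sketch of the similarity measures named in the abstract and how they might populate an actor relation map for a GCN. The ZNCC and ZSAD formulas are standard; clamping negative correlations to zero, the row normalization, and the function and variable names are assumptions for illustration, not the paper's exact construction.

```python
import numpy as np

def zncc(a, b, eps=1e-8):
    """Zero Mean Normalized Cross-Correlation between two feature vectors."""
    a0, b0 = a - a.mean(), b - b.mean()
    return float(np.dot(a0, b0) / (np.linalg.norm(a0) * np.linalg.norm(b0) + eps))

def zsad(a, b):
    """Zero Mean Sum of Absolute Differences between two feature vectors."""
    return float(np.abs((a - a.mean()) - (b - b.mean())).sum())

def actor_relation_map(features):
    """Pairwise ZNCC similarities between actor features, usable as a
    row-normalized adjacency matrix for a GCN layer (normalization scheme
    is an assumption)."""
    n = len(features)
    A = np.eye(n)
    for i in range(n):
        for j in range(i + 1, n):
            A[i, j] = A[j, i] = max(zncc(features[i], features[j]), 0.0)
    return A / A.sum(axis=1, keepdims=True)

# Hypothetical per-actor features, e.g. pooled from a backbone such as MNASNet.
rng = np.random.default_rng(0)
feats = [rng.normal(size=128) for _ in range(6)]
A = actor_relation_map(feats)
print(A.shape, zsad(feats[0], feats[1]))
```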

