Human Action Recognition using STIP Techniques

Human activities can be classified into human actions, human-human interactions, human-object interactions and group actions. Recognizing the actions present in an input video is very useful in computer vision, and this work provides a basis for developing a model that can detect and recognize such actions. HAR applications include surveillance systems, healthcare systems, military systems and patient monitoring systems (PMS), all of which involve interactions between persons and electronic devices such as human-computer interfaces. First, videos containing actions or interactions performed by humans are collected. Each input video is converted into a sequence of frames, and these frames undergo a preprocessing stage in which a median filter is applied; the median filter identifies the noise present in a frame and replaces it with the median of the neighbouring pixels. The desired features are then extracted from the frames, and the action performed by the person in the video is recognized using these extracted features. Three spatio-temporal interest point (STIP) techniques, Harris STIP, Gabor STIP and HOG STIP, are used for feature extraction from the video frames, and an SVM classifier is applied to the extracted features. The action is recognized from the coloured label assigned by the classifier. System performance is measured through the classifier metrics accuracy, sensitivity and specificity: accuracy represents the reliability of the classifier, sensitivity represents how precisely the classifier assigns features to their correct category, and specificity represents how well the classifier rejects features that do not belong to that category.
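A minimal sketch of this kind of pipeline is given below, assuming OpenCV and scikit-learn; the HOG descriptor stands in for the HOG STIP stage, and the video paths, labels, filter size and frame-sampling step are hypothetical placeholders rather than the configuration used in the paper.

```python
# Sketch of a frame-based HAR pipeline: median filtering, HOG features, SVM.
# Assumes OpenCV (cv2) and scikit-learn; file lists and labels are hypothetical.
import cv2
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics import confusion_matrix

# Hypothetical file lists and labels; replace with the actual dataset.
train_video_paths = ["walk_01.avi", "wave_01.avi"]
train_labels = ["walk", "wave"]
test_video_paths = ["walk_02.avi"]
test_labels = ["walk"]

hog = cv2.HOGDescriptor()  # stand-in for the HOG STIP feature stage

def frame_features(frame_bgr):
    """Denoise a frame with a median filter and describe it with HOG."""
    denoised = cv2.medianBlur(frame_bgr, 5)              # replace noise by neighbourhood median
    gray = cv2.cvtColor(denoised, cv2.COLOR_BGR2GRAY)
    resized = cv2.resize(gray, (64, 128))                # HOGDescriptor's default window size
    return hog.compute(resized).ravel()

def video_features(path, step=5):
    """Average per-frame descriptors of a video into one feature vector."""
    cap = cv2.VideoCapture(path)
    feats, idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % step == 0:
            feats.append(frame_features(frame))
        idx += 1
    cap.release()
    return np.mean(feats, axis=0)

# Train an SVM on one feature vector per video.
X = np.stack([video_features(p) for p in train_video_paths])
clf = SVC(kernel="rbf").fit(X, np.array(train_labels))

# Evaluate with accuracy, sensitivity and specificity (per class, one-vs-rest).
pred = clf.predict(np.stack([video_features(p) for p in test_video_paths]))
cm = confusion_matrix(test_labels, pred)
accuracy = np.trace(cm) / cm.sum()
for k in range(cm.shape[0]):
    tp = cm[k, k]
    fn = cm[k].sum() - tp
    fp = cm[:, k].sum() - tp
    tn = cm.sum() - tp - fn - fp
    print(k, "sensitivity", tp / (tp + fn), "specificity", tn / (tn + fp))
```

Under the same structure, a Harris or Gabor interest-point stage could replace the HOG descriptor as the feature extractor.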

Micromachines, 2021, Vol. 13 (1), pp. 72
Author(s): Dengshan Li, Rujing Wang, Peng Chen, Chengjun Xie, Qiong Zhou, ...

Video object and human action detection are applied in many fields, such as video surveillance, face recognition, etc. Video object detection includes object classification and object location within the frame. Human action recognition is the detection of human actions. Usually, video detection is more challenging than image detection, since video frames are often blurrier than still images. Moreover, video detection faces other difficulties, such as video defocus, motion blur, part occlusion, etc. Nowadays, video detection technology is able to perform real-time detection, or highly accurate detection of blurry video frames. In this paper, various video object and human action detection approaches are reviewed and discussed, many of which have achieved state-of-the-art results. We mainly review and discuss classic video detection methods based on supervised learning. In addition, the frequently used video object detection and human action recognition datasets are reviewed. Finally, a summary of video detection is presented: video object and human action detection methods can be classified into frame-by-frame (frame-based) detection, extracting-key-frame detection and using-temporal-information detection, and the main methods for exploiting the temporal information of adjacent video frames are optical flow, Long Short-Term Memory and convolution across adjacent frames.
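As an illustration of the "using-temporal-information" category mentioned above, the sketch below computes dense optical flow between adjacent frames with OpenCV's Farnebäck method; the video path and the downstream use of the flow field are assumptions, not a method taken from any of the reviewed papers.

```python
# Sketch: dense optical flow between adjacent frames (Farnebäck), one way of
# injecting temporal information into frame-based detection. Video path is hypothetical.
import cv2

cap = cv2.VideoCapture("example_video.mp4")
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # 2-channel flow field: displacement (dx, dy) of every pixel between the two frames.
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    magnitude, angle = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    # The magnitude/angle maps could be stacked with the RGB frame or fed to a
    # temporal model (e.g. an LSTM) as motion features.
    prev_gray = gray
cap.release()
```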


2021
Author(s): Shibin Xuan, Kuan Wang, Lixia Liu, Chang Liu, Jiaxiang Li

Skeleton-based human action recognition has been a research hotspot in recent years, but most of the research focuses on spatio-temporal feature extraction with convolutional neural networks. To improve the recognition rate of these models, this paper proposes three strategies: using an algebraic method to remove redundant video frames, adding auxiliary edges to the joint adjacency graph to improve the skeleton graph structure, and adding virtual classes to disperse the misclassification rate. Experimental results on the NTU-RGB-D60, NTU-RGB-D120 and Kinetics Skeleton 400 databases show that the proposed strategies effectively improve the accuracy of the original algorithm.
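A minimal sketch of the second strategy, adding auxiliary edges to the joint adjacency graph, is given below; the joint indices, the bone list and the choice of extra edges are hypothetical and only illustrate how the adjacency matrix consumed by a graph convolutional model could be extended.

```python
# Sketch: build a skeleton joint adjacency matrix and add auxiliary edges.
# Joint indices and the extra edges are hypothetical, not the paper's actual graph.
import numpy as np

NUM_JOINTS = 25                                          # e.g. NTU RGB+D skeletons have 25 joints
bone_edges = [(0, 1), (1, 20), (20, 2), (2, 3),          # spine and head (illustrative subset)
              (20, 4), (4, 5), (5, 6), (6, 7),           # left arm
              (20, 8), (8, 9), (9, 10), (10, 11)]        # right arm

# Auxiliary (non-bone) edges linking joints that often move together.
auxiliary_edges = [(7, 11),    # left hand <-> right hand
                   (3, 7),     # head <-> left hand
                   (3, 11)]    # head <-> right hand

def adjacency(edges, num_joints=NUM_JOINTS, self_loops=True):
    """Symmetric, normalized adjacency matrix for a set of joint-joint edges."""
    A = np.zeros((num_joints, num_joints), dtype=np.float32)
    for i, j in edges:
        A[i, j] = A[j, i] = 1.0
    if self_loops:
        A += np.eye(num_joints, dtype=np.float32)
    # D^{-1/2} A D^{-1/2} normalization, as commonly used in graph convolutions.
    d = A.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(d, 1e-6)))
    return D_inv_sqrt @ A @ D_inv_sqrt

A_original = adjacency(bone_edges)
A_augmented = adjacency(bone_edges + auxiliary_edges)    # skeleton graph with auxiliary edges
```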


2017, Vol. 10 (13), pp. 406
Author(s): Ankush Rai, Jagadeesh Kannan R

Human action recognition is a vital field of computer vision research. Its applications include surveillance systems, patient monitoring systems, and a variety of systems that involve interactions between persons and electronic devices, such as human-computer interfaces. The vast majority of these applications require automated recognition of abnormal or anomalous action states, composed of various simple (or atomic) actions of persons. This study gives an overview of state-of-the-art research papers on human action recognition. Public datasets intended for the evaluation of recognition techniques are also discussed, so that the results of several methodologies can be compared on these datasets. We examine both the approaches developed for basic human actions and those for abnormal action states. These methodologies are classified taxonomically by examining the advantages and limitations of each. Space-time volume approaches and sequential methodologies that represent actions and recognize such action sets directly from images are discussed. Next, hierarchical recognition approaches for abnormal action states are introduced and compared. Statistics-based, syntactic and description-based methodologies for hierarchical recognition are examined in the paper.


2013, Vol. 18 (2-3), pp. 49-60
Author(s): Damian Dudziński, Tomasz Kryjak, Zbigniew Mikrut

Abstract In this paper a human action recognition algorithm is described which uses background generation with shadow elimination, silhouette description based on simple geometrical features, and a finite state machine for recognizing particular actions. The performed tests indicate that this approach achieves an 81% correct recognition rate while allowing real-time processing of a 360 × 288 video stream.
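A hedged sketch of this kind of pipeline is shown below; it combines OpenCV's MOG2 background subtractor (with shadow detection enabled) with a single geometrical silhouette feature and a toy two-state machine. The specific feature, states and thresholds are illustrative assumptions, not the authors' algorithm.

```python
# Sketch: background subtraction with shadow suppression, a simple geometric
# silhouette feature, and a tiny finite state machine. States/thresholds are hypothetical.
import cv2

# MOG2 marks shadow pixels as gray (value 127) in the mask when detectShadows=True.
bg = cv2.createBackgroundSubtractorMOG2(detectShadows=True)
state = "standing"

cap = cv2.VideoCapture("example_stream.avi")
while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = bg.apply(frame)
    _, mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)   # drop shadow pixels (127)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        continue
    silhouette = max(contours, key=cv2.contourArea)
    x, y, w, h = cv2.boundingRect(silhouette)
    aspect = h / float(w)

    # Toy finite state machine: tall silhouette -> standing, wide silhouette -> lying.
    if state == "standing" and aspect < 0.8:
        state = "lying"          # e.g. could signal a "fall" action
    elif state == "lying" and aspect > 1.5:
        state = "standing"
cap.release()
```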


2018, Vol. 6 (10), pp. 323-328
Author(s): K. Kiruba, D. Shiloah Elizabeth, C. Sunil Retmin Raj

ROBOT, 2012, Vol. 34 (6), pp. 745
Author(s): Bin WANG, Yuanyuan WANG, Wenhua XIAO, Wei WANG, Maojun ZHANG
