Deep Image-to-Video Adaptation and Fusion Networks for Action Recognition

2020, Vol 29, pp. 3168-3182
Author(s): Yang Liu, Zhaoyang Lu, Jing Li, Tao Yang, Chao Yao
2017, Vol 47 (4), pp. 960-973
Author(s): Jianguang Zhang, Yahong Han, Jinhui Tang, Qinghua Hu, Jianmin Jiang

Author(s): Rohan Munshi

Given a sequence of images, i.e. a video, the task of action recognition is to identify the most similar action among the action sequences learned by the system. Such human action recognition is based on evidence gathered from videos and has many applications, including surveillance, video indexing, biometrics, telehealth, and human-computer interaction. Vision-based human activity recognition is plagued by numerous challenges, including viewpoint changes, occlusion, variation in execution rate, camera motion, and background clutter. In this survey, we provide an overview of existing methods based on their ability to handle these challenges, as well as how these methods generalize and whether they can detect abnormal actions. Such a systematic classification can help researchers identify the appropriate methods available for dealing with each of these challenges, along with their limitations. In addition, we identify the public datasets and the challenges posed by them. From this survey, we draw conclusions regarding how well each challenge has been resolved, and we identify potential research areas that require further work.


2013, Vol 18 (2-3), pp. 49-60
Author(s): Damian Dudziński, Tomasz Kryjak, Zbigniew Mikrut

Abstract: In this paper, a human action recognition algorithm is described which uses background generation with shadow elimination, silhouette description based on simple geometric features, and a finite state machine for recognizing particular actions. The tests performed indicate that this approach achieves an 81% correct recognition rate while allowing real-time processing of a 360 × 288 video stream.
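The three-stage pipeline described in this abstract maps naturally onto a few lines of code. Below is a minimal sketch, assuming OpenCV's MOG2 subtractor as the background-generation step (the paper's own method may differ); the geometric features, the threshold, and the state names are illustrative, not the authors' exact choices.

```python
# Minimal sketch: silhouette extraction via background subtraction with
# shadow suppression, simple geometric features, and a finite state machine.
import cv2
import numpy as np

# MOG2 marks shadow pixels with the value 127, which we discard below.
bg = cv2.createBackgroundSubtractorMOG2(detectShadows=True)

def silhouette_features(frame):
    """Return (aspect_ratio, fill_ratio) of the largest silhouette, or None."""
    mask = bg.apply(frame)
    mask = np.where(mask == 255, 255, 0).astype(np.uint8)  # drop shadows (127)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    c = max(contours, key=cv2.contourArea)
    x, y, w, h = cv2.boundingRect(c)
    if w == 0 or h == 0:
        return None
    return h / w, cv2.contourArea(c) / (w * h)

class ActionFSM:
    """Toy two-state machine: tall silhouettes read as 'standing'."""
    def __init__(self, upright_ratio=1.8):
        self.state = "standing"
        self.upright_ratio = upright_ratio

    def step(self, feats):
        if feats is not None:
            aspect, _ = feats
            self.state = "standing" if aspect >= self.upright_ratio else "bending"
        return self.state

# Usage: feed frames of the 360 x 288 stream one by one.
# fsm = ActionFSM()
# label = fsm.step(silhouette_features(frame))
```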


2018, Vol 6 (10), pp. 323-328
Author(s): K. Kiruba, D. Shiloah Elizabeth, C. Sunil Retmin Raj

2019
Author(s): Giacomo De Rossi, Nicola Piccinelli, Francesco Setti, Riccardo Muradore, ...

2011, Vol 31 (2), pp. 406-409
Author(s): Ying-jie Li, Yi-xin Yin, Fei Deng

ROBOT, 2012, Vol 34 (6), pp. 745
Author(s): Bin Wang, Yuanyuan Wang, Wenhua Xiao, Wei Wang, Maojun Zhang

Author(s): Rajat Khurana, Alok Kumar Singh Kushwaha

Background & Objective: Identification of human actions from video has gathered much attention in past few years. Most of the computer vision tasks such as Health Care Activity Detection, Suspicious Activity detection, Human Computer Interactions etc. are based on the principle of activity detection. Automatic labelling of activity from videos frames is known as activity detection. Motivation of this work is to use most out of the data generated from sensors and use them for recognition of classes. Recognition of actions from videos sequences is a growing field with the upcoming trends of deep neural networks. Automatic learning capability of Convolutional Neural Network (CNN) make them good choice as compared to traditional handcrafted based approaches. With the increasing demand of RGB-D sensors combination of RGB and depth data is in great demand. This work comprises of the use of dynamic images generated from RGB combined with depth map for action recognition purpose. We have experimented our approach on pre trained VGG-F model using MSR Daily activity dataset and UTD MHAD Dataset. We achieve state of the art results. To support our research, we have calculated different parameters apart from accuracy such as precision, F score, recall. Conclusion: Accordingly, the investigation confirms improvement in term of accuracy, precision, F-Score and Recall. The proposed model is 4 Stream model is prone to occlusion, used in real time and also the data from the RGB-D sensor is fully utilized.

