A New Use of Doppler Spectrum for Action Recognition with the Help of Optical Flow

Author(s):  
Meropi Pavlidou ◽  
George Zioutas
Author(s):  
André Souza Brito ◽  
Marcelo Bernardes Vieira ◽  
Saulo Moraes Villela ◽  
Hemerson Tacon ◽  
Hugo Lima Chaves ◽  
...  

2015 ◽  
Vol 2015 ◽  
pp. 1-11 ◽  
Author(s):  
Shaoping Zhu ◽  
Limin Xia

A novel method based on a hybrid feature is proposed for human action recognition in video sequences, comprising two stages: feature extraction and action recognition. First, an adaptive background subtraction algorithm extracts a global silhouette feature, and an optical flow model extracts a local optical flow feature. The global silhouette feature vector and the local optical flow feature vector are then combined into a hybrid feature vector. Second, to improve recognition accuracy, an optimized Multiple Instance Learning algorithm is used to recognize human actions, with an Iterative Querying Heuristic (IQH) optimization algorithm used to train the Multiple Instance Learning model. We demonstrate that this hybrid-feature action representation can effectively classify novel actions on two different datasets. Experiments show that our results are significantly better than those of two state-of-the-art approaches on these datasets, meeting requirements for stability, reliability, high precision, and robustness to interference.
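The two-stage feature extraction described above can be sketched in miniature. This is an illustrative toy, not the authors' pipeline: frames are flattened lists of pixel intensities, the silhouette comes from simple thresholded background subtraction, and per-pixel temporal differencing stands in for a real optical-flow model such as Lucas-Kanade. All function names here are hypothetical.

```python
# Toy sketch of a hybrid feature vector: global silhouette feature
# (background subtraction) concatenated with a local motion feature
# (a stand-in for optical flow). Frames are flat lists of intensities.

def silhouette_feature(frame, background, threshold=30):
    """Binary silhouette mask from simple background subtraction."""
    return [1 if abs(p - b) > threshold else 0
            for p, b in zip(frame, background)]

def flow_feature(prev_frame, frame):
    """Per-pixel temporal difference as a crude motion feature
    (a real system would use an optical flow model here)."""
    return [p - q for p, q in zip(frame, prev_frame)]

def hybrid_feature(prev_frame, frame, background):
    """Concatenate the global silhouette and local motion features."""
    return silhouette_feature(frame, background) + flow_feature(prev_frame, frame)

# Toy 4-pixel frames: a bright "actor" pixel moves one position left.
background = [10, 10, 10, 10]
prev_frame = [10, 10, 80, 10]
frame      = [10, 80, 10, 10]
feat = hybrid_feature(prev_frame, frame, background)
print(feat)  # first half: silhouette mask; second half: motion differences
```

The concatenated vector would then be fed to the Multiple Instance Learning classifier in place of either feature alone.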


Author(s):  
Satoshi Hoshino ◽  
Kyohei Niimura

Mobile robots equipped with camera sensors must perceive humans and their actions for safe autonomous navigation. For simultaneous human detection and action recognition, real-time performance of the robot vision is an important issue. In this paper, we propose a robot vision system in which the original images captured by a camera sensor are described by optical flow and then used as inputs for human and action classification. Two classifiers based on convolutional neural networks are developed for these image inputs. Moreover, we describe a novel detector (a local search window) for clipping partial images around the target human from the original image. Since the camera sensor moves together with the robot, the camera movement influences the optical flow computed in the image; we address this by modifying the optical flow to account for changes caused by the camera movement. Through experiments, we show that the robot vision system can detect humans and recognize their actions in real time, and that a moving robot can achieve human detection and action recognition by means of the modified optical flow.
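The idea of modifying optical flow for camera movement can be illustrated with a minimal sketch. This is an assumption-laden simplification, not the paper's method: it treats the camera-induced component as roughly uniform across the frame and estimates it as the median flow vector, so that residual motion highlights independently moving objects such as humans.

```python
# Hedged sketch: compensating optical flow for camera ego-motion by
# subtracting a dominant global component, estimated as the median
# flow vector. The paper's actual modification may differ; this only
# illustrates that camera movement adds a near-uniform flow component.

from statistics import median

def compensate_flow(flow):
    """flow: list of (dx, dy) vectors, one per pixel or patch.
    Returns the flow with the estimated camera component removed."""
    cam_dx = median(dx for dx, _ in flow)
    cam_dy = median(dy for _, dy in flow)
    return [(dx - cam_dx, dy - cam_dy) for dx, dy in flow]

# A static scene seen from a camera panning right yields uniform
# leftward flow; after compensation the background residual is near
# zero, while a genuinely moving object stands out.
flow = [(-3, 0), (-3, 0), (-3, 0), (2, 1)]   # last vector: moving human
print(compensate_flow(flow))
```

In a real system the global component would be estimated robustly (e.g. from background regions only), since large moving objects can bias a simple median.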


Drones ◽  
2021 ◽  
Vol 5 (3) ◽  
pp. 87
Author(s):  
Ketan Kotecha ◽  
Deepak Garg ◽  
Balmukund Mishra ◽  
Pratik Narang ◽  
Vipul Kumar Mishra

Visual data collected from drones have opened a new direction for surveillance applications and have recently attracted considerable attention among computer vision researchers. Given the availability and increasing use of drones in both the public and private sectors, they are a critical emerging technology for solving surveillance problems in remote areas. One fundamental challenge in recognizing human actions in crowd-monitoring videos is the precise modeling of an individual's motion features. Most state-of-the-art methods rely heavily on optical flow for motion modeling and representation, yet computing optical flow is time-consuming. This article addresses this issue and provides a novel architecture that eliminates the dependency on optical flow. The proposed architecture uses two sub-modules, FMFM (faster motion feature modeling) and AAR (accurate action recognition), to accurately classify aerial surveillance actions. Another critical issue in aerial surveillance is the scarcity of datasets. Of the few datasets proposed recently, most show multiple humans performing different actions in the same scene, as in crowd-monitoring video, and hence are not directly suitable for training action recognition models. Given this, we propose a novel dataset captured from top-view aerial surveillance with good variety in actors, time of day, and environment. The proposed architecture can be applied across different terrains because it removes the background before applying the action recognition model. It is validated through experiments at varying levels of investigation and achieves a remarkable 0.90 validation accuracy in aerial action recognition.
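The background-removal step that makes the architecture terrain-invariant can be sketched generically. This does not reproduce FMFM or AAR; it only shows the idea of masking out pixels that match a background model so a downstream action classifier sees the actor rather than the terrain. The function name and thresholds are hypothetical.

```python
# Generic sketch of background removal before action recognition:
# pixels close to the background model are zeroed out, leaving only
# foreground (actor) pixels for the downstream classifier.

def remove_background(frame, background, threshold=25):
    """Keep pixels that differ from the background model; zero the rest."""
    return [p if abs(p - b) > threshold else 0
            for p, b in zip(frame, background)]

background = [50, 50, 50, 50, 50]
frame      = [52, 49, 200, 199, 51]   # bright actor pixels in the middle
print(remove_background(frame, background))
```

Because the terrain is suppressed before classification, the same action model can, in principle, be reused over grass, sand, or pavement without retraining on each background.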


2020 ◽  
Vol 14 (6) ◽  
pp. 378-390
Author(s):  
Cheng Peng ◽  
Haozhi Huang ◽  
Ah-Chung Tsoi ◽  
Sio-Long Lo ◽  
Yun Liu ◽  
...  

Author(s):  
Laura Sevilla-Lara ◽  
Yiyi Liao ◽  
Fatma Güney ◽  
Varun Jampani ◽  
Andreas Geiger ◽  
...  
