Movement Human Actions Recognition Based on Machine Learning

2018 · Vol 14 (04) · pp. 193
Author(s): Honghua Xu, Li Li, Ming Fang, Fengrong Zhang

In this paper, the main technologies of foreground detection, feature description and extraction, and movement behavior classification and recognition are introduced. Based on optical flow for moving-object detection, an optical flow energy image is put forward for expressing movement features, and a region convolutional neural network is adopted to select features and reduce dimensionality. A support vector machine classifier is then trained and used to classify and recognize actions. After training and testing on a public human actions database, the experimental results show that the method effectively distinguishes human actions and significantly improves recognition accuracy. The approach also remains effective when the camera zooms in, zooms out, or moves slightly. Finally, the scheme is applied in an intelligent video surveillance system to identify abnormal behavior and raise alarms. The abnormal behaviors of fainting, car smashing, robbery, and fighting are defined in the system, and in operation the system obtains satisfactory recognition results.
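The paper does not give code, but the core idea of an "optical flow energy image" — accumulating per-pixel flow magnitudes over a clip so that regions of sustained motion light up — can be sketched as follows. This is a minimal illustration, assuming flow fields are already available as 2D grids of (dx, dy) vectors; the function name is hypothetical.

```python
import math

def flow_energy_image(flow_fields):
    """Accumulate per-pixel optical-flow magnitudes over a clip.

    flow_fields: list of H x W grids, each cell an (dx, dy) tuple.
    Returns an H x W grid of summed magnitudes (the 'energy image').
    """
    h, w = len(flow_fields[0]), len(flow_fields[0][0])
    energy = [[0.0] * w for _ in range(h)]
    for field in flow_fields:
        for y in range(h):
            for x in range(w):
                dx, dy = field[y][x]
                energy[y][x] += math.hypot(dx, dy)
    return energy

# Toy example: two 2x2 flow fields; the bottom-right pixel moves in both frames.
f1 = [[(1, 0), (0, 0)], [(0, 1), (3, 4)]]
f2 = [[(0, 0), (0, 0)], [(0, 0), (3, 4)]]
e = flow_energy_image([f1, f2])
print(e[1][1])  # 5.0 + 5.0 = 10.0
```

In the paper's pipeline, such an energy image would then be fed to the region CNN for feature selection before SVM classification.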

2018 · Vol 7 (3.34) · pp. 156
Author(s): Basavaraj G.M, Dr Ashok Kusagur

Many studies have been carried out in the field of crowd behavior recognition. Recognizing crowd behavior in videos is highly challenging because of occlusions and irregular human movement. This paper presents an optical flow model combined with an SVM (Support Vector Machine) classification model. The proposed approach evaluates sudden changes in the motion of an event and classifies the event into one of two categories: Normal or Abnormal. Geometric means of the location, direction, and displacement of the feature points in each frame are estimated. The Harris corner detector is used in each frame to track a set of feature points. The proposed approach is well suited to real-time scenarios such as public places, where security is paramount. After analyzing the results, an ROC (receiver operating characteristic) curve is plotted, which gives the classification accuracy. We also present a frame-level comparison with ground truth and the social force model (SFM). Our proposed approach gives promising results compared to state-of-the-art methods.
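The frame-level statistic the abstract describes — a geometric mean of feature-point displacements, with sudden jumps flagged as abnormal — might be sketched like this. The threshold value and function names are illustrative assumptions, not taken from the paper.

```python
import math

def mean_displacement(prev_pts, curr_pts):
    """Geometric mean of feature-point displacement magnitudes between
    two frames. Points are (x, y) tuples tracked across the pair."""
    mags = [math.hypot(cx - px, cy - py)
            for (px, py), (cx, cy) in zip(prev_pts, curr_pts)]
    mags = [m for m in mags if m > 0]   # geometric mean needs positive values
    if not mags:
        return 0.0
    return math.exp(sum(math.log(m) for m in mags) / len(mags))

def label_frame(displacement, threshold=5.0):
    """Flag a frame as Abnormal when motion jumps past a threshold."""
    return "Abnormal" if displacement > threshold else "Normal"

# Two tracked corners: one moves 5 px, the other 1 px.
d = mean_displacement([(0, 0), (1, 1)], [(3, 4), (1, 2)])
print(label_frame(d))  # geometric mean is sqrt(5) ~ 2.24 -> "Normal"
```

In practice the tracked points would come from the Harris corner detector mentioned above, and the decision boundary would be learned by the SVM rather than fixed by hand.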


2005 · Vol 02 (04) · pp. 353-365
Author(s): Yongsheng Ou, Huihuan Qian, Xinyu Wu, Yangsheng Xu

This paper introduces a real-time video surveillance system which can track people and detect abnormal human behaviors. In the blob detection part, an optical flow algorithm for crowded environments is studied experimentally, and a comparison study with respect to the traditional background subtraction approach is carried out. The different approaches to segmentation and tracking enable the system to track persons even when they change movement unpredictably under occlusion. We developed two methods for abnormal human behavior analysis. The first employs Principal Component Analysis for feature selection and a Support Vector Machine for classification of human behaviors; the proposed feature selection method is based on the border information of four consecutive blobs. The second approach computes optical flow to obtain the velocity of each pixel and determine whether a behavior is normal. Both algorithms were successfully deployed in crowded environments to detect the following abnormal behaviors: (1) a person running in a crowded environment; (2) a person falling down while most others are walking or standing; (3) a person carrying an abnormal bar in a square; (4) a person waving a hand in the crowd. Experimental results demonstrate that these two methods are robust in detecting abnormal human behaviors.
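The second method above — per-pixel velocities from optical flow, thresholded to decide whether motion is abnormal — admits a very small sketch. The threshold and ratio are illustrative assumptions; the paper learns its decision rules from data.

```python
import math

def is_abnormal_motion(flow_field, speed_threshold=4.0, ratio=0.2):
    """Flag abnormal motion (e.g. running in a crowd) when the fraction
    of pixels moving faster than speed_threshold exceeds ratio.

    flow_field: H x W grid of (dx, dy) per-pixel velocities.
    """
    fast = total = 0
    for row in flow_field:
        for dx, dy in row:
            total += 1
            if math.hypot(dx, dy) > speed_threshold:
                fast += 1
    return fast / total > ratio

# One of four pixels moves at speed 5 (> 4), so 25% of pixels are "fast".
running = [[(5, 0), (0, 0)], [(0, 0), (0, 0)]]
calm = [[(0, 0), (0, 0)], [(0, 0), (0, 0)]]
print(is_abnormal_motion(running), is_abnormal_motion(calm))  # True False
```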


Action recognition (AR) plays a fundamental role in computer vision and video analysis. We are witnessing an astronomical increase of video data on the web, and recognizing actions in video is difficult due to varying camera viewpoints. AR in a video sequence depends on both the appearance within frames and the optical flow between frames; the spatial and temporal components of the frame features play an integral role in better classification of actions. In the proposed system, RGB frames and optical flow frames are used for AR with the pre-trained Convolutional Neural Network (CNN) model AlexNet, which extracts features from its fc7 layer. A support vector machine (SVM) classifier is used for classification. For evaluation, the HMDB51 dataset is used, which comprises 51 classes of human actions. Using the SVM classifier on the extracted features, a best accuracy of 95.6% is achieved compared to other state-of-the-art techniques.
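At inference time, the pipeline above reduces to scoring a fixed fc7 feature vector against one linear SVM per action class and taking the argmax (one-vs-rest). A minimal sketch of that final step, assuming the features and per-class weights are already computed; all names here are hypothetical:

```python
def predict_action(features, class_weights, class_biases):
    """One-vs-rest linear SVM over CNN features: the predicted class is
    argmax_c (w_c . x + b_c).

    features: flat feature vector (fc7 would be 4096-d; tiny here).
    class_weights / class_biases: one weight vector and bias per class.
    """
    scores = [sum(wi * xi for wi, xi in zip(w, features)) + b
              for w, b in zip(class_weights, class_biases)]
    return scores.index(max(scores))

# Toy 2-d features, two classes ("run" = 0, "wave" = 1).
weights = [[1.0, 0.0], [0.0, 1.0]]
biases = [0.0, 0.0]
print(predict_action([2.0, 1.0], weights, biases))  # 0
```

In the real system the weight vectors would come from training the SVM on fc7 features of the HMDB51 training split.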


Human Activity Identification (HAI) in videos is one of the most active research fields in computer vision. Among various HAI techniques, Joints-pooled 3D-Deep convolutional Descriptors (JDD) have achieved effective performance by learning body joints and capturing spatiotemporal characteristics concurrently. However, the time required to estimate the locations of body joints on large-scale datasets and the computational cost of the skeleton estimation algorithm were high, and the recognition accuracy of traditional approaches needs to be improved by considering body joints and trajectory points together. Therefore, the key goal of this work is to improve recognition accuracy using optical flow integrated with a two-stream bilinear model, namely Joints and Trajectory-pooled 3D-Deep convolutional Descriptors (JTDD). In this model, optical flow/trajectory points between video frames are also extracted at the body joint positions as input to the proposed JTDD. Two streams of a Convolutional 3D network (C3D), combined via a bilinear product, are used to extract features, generate joint descriptors for video sequences, and capture spatiotemporal features. The whole network is then trained end-to-end on the two-stream bilinear C3D model to obtain video descriptors. These video descriptors are classified by a linear Support Vector Machine (SVM) to recognize human activities. Based on both body joints and trajectory points, action recognition is achieved efficiently. Finally, the recognition accuracies of the JTDD and JDD models are compared.
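The "bilinear product" that fuses the two C3D streams is, in its standard form, an outer product of the two streams' descriptors at each location, pooled (summed) over locations and flattened into one vector. A toy sketch of that operation, assuming each stream is given as a list of per-location feature vectors:

```python
def bilinear_pool(stream_a, stream_b):
    """Bilinear fusion of two feature streams: at each location, take the
    outer product of the two descriptors, sum over locations, flatten.

    stream_a: list of locations, each a length-Da feature vector.
    stream_b: matching list of length-Db feature vectors.
    Returns a flat Da*Db vector.
    """
    da, db = len(stream_a[0]), len(stream_b[0])
    pooled = [[0.0] * db for _ in range(da)]
    for fa, fb in zip(stream_a, stream_b):
        for i, a in enumerate(fa):
            for j, b in enumerate(fb):
                pooled[i][j] += a * b
    return [v for row in pooled for v in row]

# One location, 2-d descriptors per stream -> 4-d fused descriptor.
fused = bilinear_pool([[1, 2]], [[3, 4]])
print(fused)  # [3.0, 4.0, 6.0, 8.0]
```

In JTDD the two inputs would be C3D feature maps pooled at joint and trajectory positions, and the fused vector is what the linear SVM classifies.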


2012 · Vol 19 (2) · pp. 257-268
Author(s): Maciej Smiatacz

Liveness Measurements Using Optical Flow for Biometric Person Authentication

Biometric identification systems, i.e. the systems that are able to recognize humans by analyzing their physiological or behavioral characteristics, have gained a lot of interest in recent years. They can be used to raise the security level in certain institutions or can be treated as a convenient replacement for PINs and passwords for regular users. Automatic face recognition is one of the most popular biometric technologies, widely used even by many low-end consumer devices such as netbooks. However, even the most accurate face identification algorithm would be useless if it could be cheated by presenting a photograph of a person instead of the real face. Therefore, proper liveness measurement is extremely important. In this paper we present a method that differentiates between video sequences showing real persons and their photographs. First we calculate the optical flow of the face region using the Farnebäck algorithm. Then we convert the motion information into images and perform the initial data selection. Finally, we apply the Support Vector Machine to distinguish between real faces and photographs. The experimental results confirm that the proposed approach could be successfully applied in practice.
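The intuition exploited here is that a flat photograph moves rigidly, so its flow field is nearly uniform, while a live face produces spatially varied flow (eyes, mouth, and head move differently). One simple, illustrative liveness cue along these lines — not the paper's actual feature — is the variance of flow magnitudes over the face region:

```python
import math

def flow_variance(flow_field):
    """Variance of optical-flow magnitudes over a face region.

    A photograph held in front of the camera tends to move as one rigid
    plane (low variance); a live face does not.
    """
    mags = [math.hypot(dx, dy) for row in flow_field for dx, dy in row]
    mean = sum(mags) / len(mags)
    return sum((m - mean) ** 2 for m in mags) / len(mags)

photo_like = [[(1, 0), (1, 0)], [(1, 0), (1, 0)]]   # uniform rigid shift
live_like = [[(1, 0), (3, 0)], [(0, 0), (0, 2)]]    # spatially varied motion
print(flow_variance(photo_like), flow_variance(live_like))
```

The paper instead converts the Farnebäck flow into images and lets an SVM learn the real/photo boundary, which is more robust than any single hand-picked statistic.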


2018 · Vol 06 (04) · pp. 267-275
Author(s): Ajay Shankar, Mayank Vatsa, P. B. Sujit

Development of low-cost robots with the capability to detect and avoid obstacles along their path is essential for autonomous navigation. These robots have limited computational resources and payload capacity, and existing direct range-finding methods trade complexity against range. In this paper, we propose a vision-based system for obstacle detection which is lightweight and useful for low-cost robots. Monocular vision approaches in the literature currently suffer from various environmental constraints such as texture and color. To mitigate these limitations, a novel algorithm is proposed, termed Pyramid Histogram of Oriented Optical Flow ([Formula: see text]-HOOF), which distinctly captures motion vectors from local image patches and provides a robust descriptor capable of discriminating obstacles from nonobstacles. A support vector machine (SVM) classifier that uses [Formula: see text]-HOOF for real-time obstacle classification is utilized. To avoid obstacles, a behavior-based collision avoidance mechanism is designed that updates the probability of encountering an obstacle while navigating. The proposed approach depends only on the relative motion of the robot with respect to its surroundings, is therefore suitable for both indoor and outdoor applications, and has been validated through simulated and hardware experiments.
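The building block of the descriptor above is a histogram of oriented optical flow (HOOF) for one image patch: each flow vector votes into an angle bin, weighted by its magnitude, and the histogram is normalized; the pyramid variant repeats this over patches at several scales and concatenates. A minimal single-patch sketch (bin count and names are illustrative):

```python
import math

def hoof(flow_vectors, n_bins=8):
    """Histogram of Oriented Optical Flow for one patch.

    Each (dx, dy) flow vector votes into one of n_bins angle bins,
    weighted by its magnitude; the result is L1-normalized.
    """
    hist = [0.0] * n_bins
    for dx, dy in flow_vectors:
        mag = math.hypot(dx, dy)
        if mag == 0:
            continue
        angle = math.atan2(dy, dx) % (2 * math.pi)
        hist[int(angle / (2 * math.pi) * n_bins) % n_bins] += mag
    total = sum(hist) or 1.0
    return [h / total for h in hist]

# A patch whose flow all points right lands entirely in bin 0.
print(hoof([(1, 0), (2, 0)]))
```

Concatenated over a pyramid of patches, such histograms form the feature vector the SVM uses to separate obstacle from nonobstacle motion.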


2019 · Vol 29 (01) · pp. 2050006
Author(s): Qiuyu Li, Jun Yu, Toru Kurihara, Haiyan Zhang, Shu Zhan

Micro-expressions are brief facial movements that cannot be controlled by the nervous system; they indicate that a person is consciously hiding a true emotion. Micro-expression recognition has various potential applications in public security and clinical medicine. Research has focused on automatic micro-expression recognition, because micro-expressions are hard for people themselves to recognize. This research proposes a novel algorithm for automatic micro-expression recognition that combines a deep multi-task convolutional network for detecting facial landmarks with a fused deep convolutional network for estimating the optical flow features of the micro-expression. First, the deep multi-task convolutional network, with its manifold-related tasks, detects facial landmarks to divide the facial region. A fused convolutional network is then applied to extract optical flow features from the facial regions that contain muscle changes when the micro-expression appears. Because each video clip has many frames, the original optical flow features of a whole clip have a high number of dimensions and much redundant information; this research therefore revises the optical flow features to reduce the redundant dimensions. Finally, the revised optical flow features are applied to refine the information, and a support vector machine classifier is adopted to recognize the micro-expression. The main contributions of this work are combining the deep multi-task learning network with the fused optical flow network for micro-expression recognition, and revising the optical flow features to reduce redundant dimensions. Experiments on two spontaneous micro-expression databases show that our method achieves competitive performance in micro-expression recognition.
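The abstract does not specify how the per-frame optical flow features are "revised" to remove redundant dimensions. One common, simple way to collapse near-identical per-frame vectors into a single clip-level vector is temporal average pooling; the sketch below is that generic technique, offered only as an illustration of the dimensionality problem, not as the paper's method.

```python
def temporal_pool(frame_features):
    """Collapse per-frame feature vectors into one clip-level vector by
    averaging, so a clip of N frames yields one D-dim vector instead of
    an N*D-dim concatenation.

    frame_features: list of equal-length per-frame feature vectors.
    """
    n = len(frame_features)
    dim = len(frame_features[0])
    return [sum(f[i] for f in frame_features) / n for i in range(dim)]

# Two 2-d per-frame vectors become a single 2-d clip descriptor.
clip_vec = temporal_pool([[1.0, 2.0], [3.0, 4.0]])
print(clip_vec)  # [2.0, 3.0]
```

Whatever the exact revision scheme, the resulting compact clip descriptor is what the SVM classifier operates on.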

