Integral Optical Flow and its Application for Monitoring Dynamic Objects from a Video Sequence

2017, Vol 84 (1), pp. 120-128
Author(s): Ch. Chen, Sh. Ye, H. Chen, O. V. Nedzvedz, S. V. Ablameyko

2019, Vol 86 (3), pp. 435-442
Author(s): H. Chen, A. Nedzvedz, O. Nedzvedz, Sh. Ye, Ch. Chen, ...

2020, Vol 86 (6), pp. 1146-1146
Author(s): H. Chen, A. Nedzvedz, O. Nedzvedz, Sh. Ye, Ch. Chen, ...

Action recognition (AR) plays a fundamental role in computer vision and video analysis. With the astronomical growth of video data on the web, recognizing actions in video remains difficult because of varying camera viewpoints. AR in a video sequence depends on both the appearance within individual frames and the optical flow between frames; the spatial and temporal components of frame features are therefore integral to accurate action classification. In the proposed system, RGB frames and optical flow frames are processed by the pre-trained Convolutional Neural Network (CNN) AlexNet, and features are extracted from its fc7 layer. A support vector machine (SVM) classifier is then used to classify the actions. Experiments are performed on the HMDB51 dataset, which contains 51 classes of human action. Using the extracted features, the SVM classifier achieves a best accuracy of 95.6%, which compares favorably with state-of-the-art techniques.
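
A minimal sketch of this feature-extraction and classification pipeline, assuming PyTorch/torchvision for the pre-trained AlexNet and scikit-learn for the SVM; the clip loader (load_hmdb51_clips) and the clip-level average pooling are illustrative assumptions, not the authors' implementation:

```python
# Hypothetical pipeline: fc7 features from a pre-trained AlexNet, pooled per clip,
# then classified with a linear SVM. The loader and pooling scheme are assumptions.
import numpy as np
import torch
import torchvision.models as models
import torchvision.transforms as T
from sklearn.svm import SVC

alexnet = models.alexnet(pretrained=True).eval()
# Keep everything up to and including fc7 (drop the final fc8 classification layer).
fc7_extractor = torch.nn.Sequential(
    alexnet.features,
    alexnet.avgpool,
    torch.nn.Flatten(),
    *list(alexnet.classifier.children())[:-1],
)

preprocess = T.Compose([
    T.Resize((224, 224)),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def clip_descriptor(frames):
    """Return one 4096-D fc7 descriptor for a clip (a list of PIL RGB or flow images)."""
    with torch.no_grad():
        batch = torch.stack([preprocess(f) for f in frames])
        feats = fc7_extractor(batch)        # (num_frames, 4096)
    return feats.mean(dim=0).numpy()        # average over frames as a simple clip pooling

# load_hmdb51_clips() is a hypothetical helper that yields (frames, label) pairs.
# X, y = zip(*[(clip_descriptor(frames), label) for frames, label in load_hmdb51_clips()])
# clf = SVC(kernel="linear").fit(np.array(X), np.array(y))
```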


Sensors, 2020, Vol 20 (18), pp. 5076
Author(s): Huijiao Qiao, Xue Wan, Youchuan Wan, Shengyang Li, Wanfeng Zhang

Change detection (CD) is critical for natural disaster detection, monitoring, and evaluation. Video satellites, a new type of satellite launched recently, are able to record motion changes during natural disasters. This poses a new problem for traditional CD methods, which can only detect areas with strongly changed radiometric and geometric information. Optical flow-based methods can track pixel-level motion at high speed; however, it is difficult to determine an optimal threshold that separates changed from unchanged regions in CD problems. To overcome these problems, this paper proposes a novel automatic change detection framework, OFATS (optical flow-based adaptive thresholding segmentation). Exploiting the characteristics of optical flow data, a new objective function based on the ratio of maximum between-class variance to minimum within-class variance is constructed. The framework has two key steps: motion detection based on optical flow estimation with a deep learning (DL) method, and changed-area segmentation based on adaptive threshold selection. Experiments carried out on two groups of video sequences demonstrate that the proposed method achieves high accuracy, with F1 values of 0.98 and 0.94, respectively.
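
The adaptive threshold selection step can be sketched as follows, assuming the deep-learning flow estimator has already produced a dense flow magnitude map; the search over candidate thresholds and the exact variance-ratio scoring are an interpretation of the objective described above, not the authors' code:

```python
# Sketch of an OFATS-style adaptive threshold: score each candidate threshold by the
# ratio of between-class variance to within-class variance of the flow magnitudes,
# and keep the threshold with the best score. Assumes `flow_mag` is a dense magnitude map.
import numpy as np

def ofats_threshold(flow_mag, num_bins=256):
    """Pick the threshold that maximises between-class / within-class variance."""
    mags = flow_mag.ravel().astype(np.float64)
    candidates = np.linspace(mags.min(), mags.max(), num_bins)[1:-1]  # skip endpoints
    best_t, best_score = candidates[0], -np.inf
    for t in candidates:
        changed, unchanged = mags[mags > t], mags[mags <= t]
        if changed.size == 0 or unchanged.size == 0:
            continue
        w1, w0 = changed.size / mags.size, unchanged.size / mags.size
        # Otsu-style between-class variance and pooled within-class variance.
        between = w0 * w1 * (changed.mean() - unchanged.mean()) ** 2
        within = w0 * unchanged.var() + w1 * changed.var()
        score = between / within if within > 0 else -np.inf
        if score > best_score:
            best_t, best_score = t, score
    return best_t

# Usage: flow_mag = np.linalg.norm(flow, axis=-1) for a dense (H, W, 2) flow field,
# then change_mask = flow_mag > ofats_threshold(flow_mag).
```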


2019, Vol 43 (4), pp. 647-652
Author(s): H. Chen, S. Ye, A. Nedzvedz, O. Nedzvedz, H. Lv, ...

Road traffic analysis is an important task in many applications and can be used in video surveillance systems to prevent many undesirable events. In this paper, we propose a new method based on integral optical flow to analyze car movement in video and to detect extreme traffic-flow situations in real-world videos. First, integral optical flow is calculated for the video sequence from the frame-by-frame optical flow, which eliminates random background motion; second, pixel-level motion maps describing car movement from different perspectives are created from the integral optical flow; third, region-level indicators are defined and calculated; finally, threshold segmentation is used to identify different kinds of car movement. We also define and calculate several parameters of the moving car flow, including direction, speed, density, and intensity, without detecting or counting cars. Experimental results show that our method can effectively identify directional car movement, car divergence, and car accumulation.
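
As a rough illustration of the first and last steps, the following sketch accumulates dense Farneback optical flow (an assumption; the abstract does not name the per-frame flow estimator) into an integral flow field and applies a simple magnitude threshold; the function names and the mean-plus-two-sigma threshold are illustrative, not the authors' implementation:

```python
# Sketch: sum per-frame dense optical flow into an "integral" flow field, then
# threshold the accumulated motion magnitude to isolate strongly moving regions.
import cv2
import numpy as np

def integral_optical_flow(video_path, num_frames=50):
    """Accumulate dense optical flow over a video into a single integral flow field."""
    cap = cv2.VideoCapture(video_path)
    ok, prev = cap.read()
    prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
    integral = np.zeros((*prev_gray.shape, 2), dtype=np.float64)
    for _ in range(num_frames):
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        integral += flow   # summing over time suppresses random background motion
        prev_gray = gray
    cap.release()
    return integral

# Pixel-level motion map and a simple threshold segmentation of moving regions.
# flow_int = integral_optical_flow("traffic.mp4")
# magnitude = np.linalg.norm(flow_int, axis=-1)
# moving_mask = magnitude > magnitude.mean() + 2 * magnitude.std()
```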


2018, Vol 24 (8), pp. 427-431
Author(s): Jungbeom Lee, Sungroh Yoon
