A Novel Change Detection Method for Natural Disaster Detection and Segmentation from Video Sequence

Sensors ◽  
2020 ◽  
Vol 20 (18) ◽  
pp. 5076
Author(s):  
Huijiao Qiao ◽  
Xue Wan ◽  
Youchuan Wan ◽  
Shengyang Li ◽  
Wanfeng Zhang

Change detection (CD) is critical for natural disaster detection, monitoring, and evaluation. Video satellites, a new class of satellite launched in recent years, can record motion during natural disasters. This poses a new problem for traditional CD methods, which can only detect areas whose radiometric and geometric information has changed substantially. Optical flow-based methods can track pixel-level motion quickly; however, it is difficult to determine an optimal threshold for separating the changed from the unchanged parts in CD problems. To overcome these problems, this paper proposes a novel automatic change detection framework, OFATS (optical flow-based adaptive thresholding segmentation). Drawing on the characteristics of optical flow data, a new objective function based on the ratio of maximum between-class variance to minimum within-class variance is constructed. The framework has two key steps: motion detection based on optical flow estimation with a deep learning (DL) method, and changed-area segmentation based on adaptive threshold selection. Experiments on two groups of video sequences demonstrate that the proposed method achieves high accuracy, with F1 values of 0.98 and 0.94, respectively.
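Below is a minimal sketch of the kind of adaptive threshold selection the abstract describes: an Otsu-like search over a flow-magnitude histogram for the split that maximizes the ratio of between-class variance to within-class variance. The function name, binning, and histogram details are illustrative assumptions, not the authors' OFATS implementation.

```python
import numpy as np

def adaptive_flow_threshold(flow_magnitude, n_bins=256):
    """Illustrative sketch: pick the threshold on optical-flow magnitudes
    that maximizes between-class variance / within-class variance
    (the Otsu-like criterion described in the abstract)."""
    hist, edges = np.histogram(flow_magnitude.ravel(), bins=n_bins)
    p = hist.astype(float) / hist.sum()          # bin probabilities
    centers = 0.5 * (edges[:-1] + edges[1:])     # bin midpoints

    best_t, best_score = edges[0], -np.inf
    for k in range(1, n_bins):                   # candidate split after bin k-1
        w0, w1 = p[:k].sum(), p[k:].sum()
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (p[:k] * centers[:k]).sum() / w0   # class means
        mu1 = (p[k:] * centers[k:]).sum() / w1
        var0 = (p[:k] * (centers[:k] - mu0) ** 2).sum() / w0
        var1 = (p[k:] * (centers[k:] - mu1) ** 2).sum() / w1
        between = w0 * w1 * (mu0 - mu1) ** 2     # between-class variance
        within = w0 * var0 + w1 * var1           # within-class variance
        score = between / within if within > 0 else -np.inf
        if score > best_score:
            best_score, best_t = score, edges[k]
    return best_t

# changed_mask = flow_magnitude > adaptive_flow_threshold(flow_magnitude)
```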

Author(s):  
H. J. Qiao ◽  
X. Wan ◽  
J. Z. Xu ◽  
S. Y. Li ◽  
P. P. He

Abstract. Real-time change detection and analysis of natural disasters is of great importance to emergency response and disaster rescue. Recently, a number of video satellites that can record the whole course of a natural disaster have been launched. These satellites capture high-resolution video image sequences and provide researchers with large numbers of image frames, enabling rapid, deep learning-based detection of changes as a disaster unfolds. In this paper, pixel change in image sequences is estimated by optical flow based on FlowNet 2.0 for quick change detection in natural disasters. Experiments are carried out on image frames from DigitalGlobe WorldView of the Indonesia earthquake that took place on September 28, 2018. To test the efficiency of FlowNet 2.0 on a natural disaster dataset, seven state-of-the-art optical flow estimation methods are compared. The experimental results show that FlowNet 2.0 is robust not only to large displacements but also to small displacements in the natural disaster dataset. Two evaluation indicators, Root Mean Square Error (RMSE) and Mean Value, are used to record the accuracy. In RMSE, FlowNet 2.0 achieves estimation errors of 0.30 and 0.11 pixels in the horizontal and vertical directions, respectively; the horizontal error is similar to that of the other algorithms, while the vertical error is significantly lower. The Mean Values are 1.50 and 0.09 pixels in the horizontal and vertical directions, the closest to the ground truth among all the algorithms compared. Combined with its advantage in computing time, this shows that, among the methods tested, only the approach based on FlowNet 2.0 can achieve real-time change detection with high accuracy in the case of natural disasters.
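As a reading aid, here is a minimal sketch of how the two reported indicators might be computed per flow component; the (H, W, 2) array layout and the interpretation of "Mean Value" as the mean estimated displacement are assumptions, not details from the paper.

```python
import numpy as np

def flow_error_stats(flow_est, flow_gt):
    """Per-component RMSE and Mean Value for flow fields of shape (H, W, 2),
    where channel 0 is the horizontal (u) and channel 1 the vertical (v)
    component. Returns ((rmse_u, rmse_v), (mean_u, mean_v))."""
    diff = flow_est - flow_gt
    rmse = np.sqrt(np.mean(diff ** 2, axis=(0, 1)))  # RMSE per component
    mean = np.mean(flow_est, axis=(0, 1))            # mean estimated motion
    return tuple(rmse), tuple(mean)
```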


2014 ◽  
pp. 112-117
Author(s):  
Rauf Sadykhov ◽  
Denis Lamovsky

This paper describes a new algorithm for calculating the cross-correlation function. We combine a box filtering technique for computing cross-correlation coefficients with parallel processing using the MMX/SSE technology of modern general-purpose processors. We have used this algorithm for real-time optical flow estimation between frames of a video sequence. The algorithm was tested on real-world video sequences obtained from the cameras of a video surveillance system.
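The paper's implementation relies on MMX/SSE intrinsics; the NumPy sketch below illustrates only the box-filtering idea, computing every windowed sum from an integral image so that each correlation coefficient costs O(1) per pixel regardless of window size. The names and the zero-displacement formulation are assumptions for illustration; for motion estimation, the second frame would be shifted over candidate displacements.

```python
import numpy as np

def box_sum(img, r):
    """Local sum over a (2r+1)x(2r+1) window at every pixel, computed with
    an integral image so each window costs O(1) regardless of its size."""
    h, w = img.shape
    p = np.pad(img, r, mode="edge").astype(np.float64)  # handle borders
    ii = np.zeros((h + 2 * r + 1, w + 2 * r + 1))
    ii[1:, 1:] = p.cumsum(0).cumsum(1)                  # integral image
    k = 2 * r + 1
    return ii[k:, k:] - ii[:-k, k:] - ii[k:, :-k] + ii[:-k, :-k]

def ncc(f0, f1, r):
    """Per-pixel normalized cross-correlation between two frames over
    (2r+1)x(2r+1) windows, built entirely from box sums."""
    f0, f1 = f0.astype(np.float64), f1.astype(np.float64)
    n = (2 * r + 1) ** 2
    s0, s1 = box_sum(f0, r), box_sum(f1, r)
    s00, s11 = box_sum(f0 * f0, r), box_sum(f1 * f1, r)
    s01 = box_sum(f0 * f1, r)
    cov = s01 - s0 * s1 / n                             # windowed covariance
    var0, var1 = s00 - s0 * s0 / n, s11 - s1 * s1 / n   # windowed variances
    return cov / np.sqrt(np.maximum(var0 * var1, 1e-12))
```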


Author(s):  
Antonis Ioannidis ◽  
Vasileios Chasanis ◽  
Aristidis Likas

Most existing approaches to camera motion detection are based on optical flow analysis and the affine motion model. However, these methods are computationally expensive due to the cost of optical flow estimation, and may be inefficient in the presence of moving objects whose motion is independent of the camera motion. We present an effective approach to detecting camera motion by considering four trapezoidal regions in each frame and computing the horizontal and vertical translations of those regions. Simple decision rules based on the translations of the regions are then employed to decide on the existence and type of camera motion in each frame. In this way, three signals (pan, tilt, zoom) are constructed and subsequently filtered to improve the robustness of the method. Comparative experiments on a variety of videos indicate that our method efficiently detects any type of camera motion (pan, tilt, zoom), even when moving objects exist in the video sequence.
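The abstract does not spell out the decision rules, so the following is a speculative toy sketch under assumed conventions: each of the four regions yields a (dx, dy) translation, coherent horizontal motion reads as pan, coherent vertical motion as tilt, and opposing motion of the side regions as zoom. The thresholds and rules are illustrative, not the authors' published ones.

```python
def classify_camera_motion(left, right, top, bottom, t=1.0):
    """Toy decision rules on per-region translations (dx, dy) in pixels;
    the four regions are assumed to flank the frame (hypothetical layout)."""
    regions = (left, right, top, bottom)
    dxs = [r[0] for r in regions]
    dys = [r[1] for r in regions]
    # zoom: the side regions move apart (zoom in) or together (zoom out)
    spread = right[0] - left[0]
    if abs(spread) > t and abs(sum(dxs)) < t:
        return "zoom-in" if spread > 0 else "zoom-out"
    # pan: all regions translate coherently along x
    if all(dx > t for dx in dxs) or all(dx < -t for dx in dxs):
        return "pan"
    # tilt: all regions translate coherently along y
    if all(dy > t for dy in dys) or all(dy < -t for dy in dys):
        return "tilt"
    return "static/other"
```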


2021 ◽  
Vol 114 ◽  
pp. 107861
Author(s):  
Mingliang Zhai ◽  
Xuezhi Xiang ◽  
Ning Lv ◽  
Xiangdong Kong

2020 ◽  
Vol 34 (07) ◽  
pp. 10713-10720
Author(s):  
Mingyu Ding ◽  
Zhe Wang ◽  
Bolei Zhou ◽  
Jianping Shi ◽  
Zhiwu Lu ◽  
...  

A major challenge for video semantic segmentation is the lack of labeled data. In most benchmark datasets, only one frame per video clip is annotated, which prevents most supervised methods from utilizing information in the remaining frames. To exploit the spatio-temporal information in videos, many previous works use pre-computed optical flow, which encodes temporal consistency, to improve video segmentation; however, video segmentation and optical flow estimation are still treated as two separate tasks. In this paper, we propose a novel framework for joint video semantic segmentation and optical flow estimation. Semantic segmentation brings semantic information that helps handle occlusion for more robust optical flow estimation, while the non-occluded optical flow provides accurate pixel-level temporal correspondences that guarantee the temporal consistency of the segmentation. Moreover, our framework can utilize both labeled and unlabeled frames in a video through joint training, while requiring no additional computation at inference. Extensive experiments show that the proposed model lets video semantic segmentation and optical flow estimation benefit from each other, and that it outperforms existing methods in both tasks under the same settings.
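As a schematic of the joint-training idea only: the sketch below supervises segmentation on the labeled frame and adds a flow-based warping consistency loss on the (possibly unlabeled) frame pair. The module names, the MSE consistency term, and the warping details are assumptions based on the abstract, not the paper's actual architecture or losses.

```python
import torch
import torch.nn.functional as F

def joint_step(seg_net, flow_net, frame_t, frame_tp1, label_t=None):
    """One illustrative joint-training step (hypothetical modules/losses)."""
    seg_t = seg_net(frame_t)                 # (B, C, H, W) class logits
    seg_tp1 = seg_net(frame_tp1)
    flow = flow_net(frame_t, frame_tp1)      # (B, 2, H, W) forward flow

    # warp the next frame's segmentation back to frame t using the flow
    b, _, h, w = flow.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    base = torch.stack((xs, ys), dim=-1).float().to(flow.device)  # (H, W, 2)
    grid = base.unsqueeze(0) + flow.permute(0, 2, 3, 1)
    gx = 2 * grid[..., 0] / (w - 1) - 1      # normalize to [-1, 1]
    gy = 2 * grid[..., 1] / (h - 1) - 1
    seg_warped = F.grid_sample(seg_tp1, torch.stack((gx, gy), dim=-1),
                               align_corners=True)

    # temporal consistency on all frames; cross-entropy only where labeled
    loss = F.mse_loss(seg_t, seg_warped)
    if label_t is not None:
        loss = loss + F.cross_entropy(seg_t, label_t)
    return loss
```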

