DEEP LEARNING BASED OPTICAL FLOW ESTIMATION FOR CHANGE DETECTION: A CASE STUDY IN INDONESIA EARTHQUAKE

Author(s):  
H. J. Qiao ◽  
X. Wan ◽  
J. Z. Xu ◽  
S. Y. Li ◽  
P. P. He

Abstract. Real-time change detection and analysis of natural disasters is of great importance to emergency response and disaster rescue. Recently, a number of video satellites that can record the whole process of a natural disaster have been launched. These satellites capture high-resolution video image sequences and provide researchers with a large number of image frames, which allows for the implementation of a rapid, deep learning based change detection approach for the disaster process. In this paper, pixel change in image sequences is estimated by optical flow based on FlowNet 2.0 for quick change detection in natural disasters. Experiments are carried out using image frames from DigitalGlobe WorldView of the Indonesia earthquake that took place on Sept. 28, 2018. In order to test the efficiency of FlowNet 2.0 on a natural disaster dataset, 7 state-of-the-art optical flow estimation methods are compared. The experimental results show that FlowNet 2.0 is robust not only to large displacements but also to small displacements in the natural disaster dataset. Two evaluation indicators, Root Mean Square Error (RMSE) and Mean Value, are used to record the accuracy. For the RMSE of the estimation error, FlowNet 2.0 achieves 0.30 and 0.11 pixels in the horizontal and vertical directions, respectively. The horizontal error is similar to that of the other algorithms, but the vertical error is significantly lower. The Mean Values are 1.50 and 0.09 pixels in the horizontal and vertical directions, which are the closest to the ground truth among the compared algorithms. Taking its advantage in computing time into account as well, the paper shows that only the approach based on FlowNet 2.0 is able to achieve real-time change detection with high accuracy in the case of natural disasters.
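The two evaluation indicators above can be computed for any estimated flow field with a short sketch. The function names and the (H, W, 2) per-pixel (horizontal, vertical) flow layout are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def flow_rmse(flow_est, flow_gt):
    """Per-direction RMSE between an estimated and a ground-truth flow field.

    flow_est, flow_gt: arrays of shape (H, W, 2) holding per-pixel
    (horizontal u, vertical v) displacements in pixels.
    Returns (rmse_u, rmse_v).
    """
    diff = flow_est - flow_gt
    rmse_u = np.sqrt(np.mean(diff[..., 0] ** 2))  # horizontal direction
    rmse_v = np.sqrt(np.mean(diff[..., 1] ** 2))  # vertical direction
    return rmse_u, rmse_v

def flow_mean(flow):
    """Per-direction mean displacement (the paper's 'Mean Value' indicator)."""
    return flow[..., 0].mean(), flow[..., 1].mean()
```

A lower per-direction RMSE and a Mean Value close to the ground-truth mean together indicate an accurate flow field.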

Sensors ◽  
2020 ◽  
Vol 20 (18) ◽  
pp. 5076
Author(s):  
Huijiao Qiao ◽  
Xue Wan ◽  
Youchuan Wan ◽  
Shengyang Li ◽  
Wanfeng Zhang

Change detection (CD) is critical for natural disaster detection, monitoring and evaluation. Video satellites, a new type of satellite launched in recent years, are able to record motion change during natural disasters. This raises a new problem for traditional CD methods, as they can only detect areas with highly changed radiometric and geometric information. Optical flow-based methods are able to perform pixel-based motion tracking at high speed; however, it is difficult for them to determine an optimal threshold for separating the changed from the unchanged part in CD problems. To overcome these problems, this paper proposes a novel automatic change detection framework: OFATS (optical flow-based adaptive thresholding segmentation). Combining the characteristics of optical flow data, a new objective function based on the ratio of maximum between-class variance to minimum within-class variance is constructed. The framework comprises two key steps: motion detection based on optical flow estimation using a deep learning (DL) method, and changed-area segmentation based on adaptive threshold selection. Experiments are carried out using two groups of video sequences, which demonstrate that the proposed method is able to achieve high accuracy, with F1 values of 0.98 and 0.94, respectively.
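The adaptive threshold selection step can be sketched as an Otsu-like search that maximizes the ratio of between-class variance to within-class variance over candidate thresholds on the flow magnitudes. The function name, candidate grid, and exact objective details are illustrative assumptions based on the abstract, not the paper's implementation:

```python
import numpy as np

def adaptive_flow_threshold(magnitudes, n_candidates=256):
    """Pick a threshold on per-pixel flow magnitudes by maximizing the
    ratio of between-class variance to within-class variance, in the
    spirit of the OFATS objective described in the abstract."""
    m = np.asarray(magnitudes, dtype=float).ravel()
    # Candidate thresholds strictly inside the observed magnitude range.
    candidates = np.linspace(m.min(), m.max(), n_candidates + 2)[1:-1]
    best_t, best_score = candidates[0], -np.inf
    for t in candidates:
        lo, hi = m[m <= t], m[m > t]          # unchanged / changed classes
        if lo.size == 0 or hi.size == 0:
            continue
        w0, w1 = lo.size / m.size, hi.size / m.size
        between = w0 * w1 * (lo.mean() - hi.mean()) ** 2
        within = w0 * lo.var() + w1 * hi.var()
        if within == 0:
            continue
        score = between / within               # ratio to maximize
        if score > best_score:
            best_t, best_score = t, score
    return best_t
```

Pixels whose flow magnitude exceeds the returned threshold would be labeled as changed; the rest as unchanged.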


Author(s):  
Shanshan Zhao ◽  
Xi Li ◽  
Omar El Farouk Bourahla

As an important and challenging problem in computer vision, learning-based optical flow estimation aims to discover the intrinsic correspondence structure between two adjacent video frames through statistical learning. A key issue in this area is therefore how to effectively model multi-scale correspondence structure properties in an adaptive, end-to-end learning fashion. Motivated by this observation, we propose an end-to-end multi-scale correspondence structure learning (MSCSL) approach for optical flow estimation. In principle, the proposed MSCSL approach is capable of effectively capturing the multi-scale inter-image-correlation correspondence structures within a multi-level feature space learned by deep networks. Moreover, the proposed MSCSL approach builds a spatial Conv-GRU neural network model to adaptively model the intrinsic dependency relationships among these multi-scale correspondence structures. Finally, the above procedures for correspondence structure learning and multi-scale dependency modeling are implemented in a unified end-to-end deep learning framework. Experimental results on several benchmark datasets demonstrate the effectiveness of the proposed approach.
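The inter-image correlation that such networks compute can be illustrated with a minimal NumPy sketch of a dense correlation (cost) volume between two feature maps, of the kind used by FlowNet-style correlation layers. The function name, window size, and normalization are assumptions for illustration, not the MSCSL architecture itself:

```python
import numpy as np

def correlation_volume(f1, f2, max_disp=3):
    """Dense correlation between two feature maps f1, f2 of shape (H, W, C).

    For each pixel of f1, the dot product with f2 is taken over a
    (2*max_disp+1)^2 search window, producing a cost volume of shape
    (H, W, (2*max_disp+1)**2); the argmax channel indicates the most
    likely displacement of that pixel between the two frames.
    """
    H, W, C = f1.shape
    d = max_disp
    f2_pad = np.pad(f2, ((d, d), (d, d), (0, 0)))  # zero-pad the borders
    vol = np.empty((H, W, (2 * d + 1) ** 2))
    k = 0
    for dy in range(-d, d + 1):
        for dx in range(-d, d + 1):
            shifted = f2_pad[d + dy:d + dy + H, d + dx:d + dx + W, :]
            vol[:, :, k] = (f1 * shifted).sum(axis=-1) / C  # normalized dot product
            k += 1
    return vol
```

In a multi-scale scheme such as the one described above, a volume like this would be computed at several feature levels and the results fused by the learned model.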


IEEE Access ◽  
2020 ◽  
Vol 8 ◽  
pp. 144122-144135
Author(s):  
Ang Li ◽  
Baoyu Zheng ◽  
Lei Li ◽  
Chen Zhang

2021 ◽  
Vol 3 (3) ◽  
Author(s):  
Syed Tafseer Haider Shah ◽  
Xiang Xuezhi

Abstract. Optical flow estimation is an essential component of many image processing techniques. This field of computer vision research has seen remarkable development in recent years. In particular, the introduction of convolutional neural networks for optical flow estimation has shifted the research paradigm from classical approaches to deep learning. At present, state-of-the-art techniques for optical flow are based on convolutional neural networks, and almost all top-performing methods incorporate deep learning architectures in their schemes. This paper presents a brief analysis of optical flow estimation techniques and highlights the most recent developments in the field. A comparison of the majority of pertinent traditional and deep learning methodologies is undertaken, resulting in a detailed account of the respective advantages and disadvantages of the two categories. An insight is provided into the significant factors that affect the success or failure of the two classes of optical flow estimation. After establishing the foremost challenges inherent in traditional and deep learning schemes, probable solutions are proposed.

