Analysis of Recent Advances in Optical Flow Estimation Methods

Author(s):  
Javier Sánchez
Author(s):  
Martha Cejudo-Torres ◽  
Enrique Escamilla-Hernandez ◽  
Mariko Nakano-Miyatake ◽  
Hector Perez Meana

2019 ◽  
Vol 277 ◽  
pp. 02002
Author(s):  
Song Wang ◽  
Zengfu Wang

Traditional image warping methods used in optical flow estimation usually adopt simple interpolation strategies to obtain the warped images. However, because they do not account for occluded regions, these methods may produce undesirable ghosting artifacts. To tackle this problem, we propose a novel image warping method that effectively removes ghosting artifacts. Specifically, given a warped image, the ghost regions are first identified using the optical flow information; a new image compensation technique is then applied to eliminate the ghosting artifacts. The proposed method avoids severe distortion in the warped images and therefore prevents error propagation in coarse-to-fine optical flow estimation schemes. Moreover, our approach can be easily integrated into various optical flow estimation methods. Experimental results on popular datasets such as Flying Chairs and MPI-Sintel demonstrate that the proposed method improves the performance of current optical flow estimation methods.
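The abstract does not specify how ghost regions are discriminated or compensated. A minimal numpy sketch of the general idea, under my own assumptions (occlusions detected by a forward-backward flow consistency check, ghost pixels filled from the co-located reference pixels; the paper's actual discrimination and compensation rules may differ), might look like:

```python
import numpy as np

def bilinear_sample(img, ys, xs):
    """Sample a 2-D image at fractional (row, col) positions with bilinear
    interpolation, clamping coordinates at the image border."""
    h, w = img.shape
    ys = np.clip(ys, 0, h - 1)
    xs = np.clip(xs, 0, w - 1)
    y0 = np.floor(ys).astype(int); x0 = np.floor(xs).astype(int)
    y1 = np.minimum(y0 + 1, h - 1); x1 = np.minimum(x0 + 1, w - 1)
    wy = ys - y0; wx = xs - x0
    top = img[y0, x0] * (1 - wx) + img[y0, x1] * wx
    bot = img[y1, x0] * (1 - wx) + img[y1, x1] * wx
    return top * (1 - wy) + bot * wy

def backward_warp(img2, flow):
    """Backward-warp img2 toward frame 1 using a per-pixel flow (H, W, 2)."""
    h, w = img2.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    return bilinear_sample(img2, ys + flow[..., 1], xs + flow[..., 0])

def occlusion_mask(flow_fwd, flow_bwd, tol=0.5):
    """Flag pixels that fail the forward-backward consistency check: for
    non-occluded pixels, the forward flow and the warped backward flow
    should roughly cancel."""
    bwd_u = backward_warp(flow_bwd[..., 0], flow_fwd)
    bwd_v = backward_warp(flow_bwd[..., 1], flow_fwd)
    err = np.hypot(flow_fwd[..., 0] + bwd_u, flow_fwd[..., 1] + bwd_v)
    return err > tol

def warp_without_ghosts(img1, img2, flow_fwd, flow_bwd):
    """Warp img2 toward img1, replacing occluded (ghost-prone) pixels with
    the co-located pixels of img1 as a simple compensation."""
    warped = backward_warp(img2, flow_fwd)
    occ = occlusion_mask(flow_fwd, flow_bwd)
    warped[occ] = img1[occ]
    return warped
```

With a consistent flow pair the mask stays empty and the warp reduces to plain backward warping; injecting an inconsistent flow region marks it occluded and falls back to the reference image there.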


Author(s):  
R. Feng ◽  
X. Li ◽  
H. Shen

<p><strong>Abstract.</strong> Registration of mountainous remote sensing images is more complicated than in other areas because of the geometric distortion caused by topographic relief, which cannot be precisely handled by constructing local mapping functions in the feature-based framework. Optical flow algorithms, which estimate the motion between consecutive frames pixel by pixel in computer vision, are therefore introduced for mountainous remote sensing image registration. However, optical flow is sensitive to land cover changes, which are inevitable in remote sensing images and result in incorrect displacements. To address this problem, we propose an improved optical flow estimation scheme that concentrates on post-processing, namely displacement modification. First, the Laplacian of Gaussian (LoG) operator is employed to detect abnormal values in the color map of the displacement field. Then, each abnormal displacement is recalculated on an interpolation surface constructed from the remaining accurate displacements. After coordinate transformation and resampling, the registration result is generated. Experiments demonstrate that the proposed method is insensitive to changed regions of mountainous remote sensing images and generates precise registrations, outperforming other local transformation model estimation methods in both visual judgment and quantitative evaluation.</p>
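The displacement-modification step can be sketched in numpy under stated assumptions: here a discrete LoG kernel flags displacement outliers by thresholding the filter response, and each flagged value is replaced by the mean of nearby inliers (a crude stand-in for the interpolation surface the abstract describes; the kernel size, threshold, and interpolation scheme are my choices, not the paper's):

```python
import numpy as np

def convolve2d(img, kernel):
    """Small-kernel 2-D correlation with edge-replicate padding."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)), mode='edge')
    out = np.zeros(img.shape, dtype=float)
    for i in range(kh):
        for j in range(kw):
            out += kernel[i, j] * padded[i:i + img.shape[0], j:j + img.shape[1]]
    return out

def log_kernel(size=5, sigma=1.0):
    """Discrete Laplacian-of-Gaussian kernel, shifted to zero mean so that
    constant (and, by symmetry, linear) fields give zero response."""
    r = size // 2
    y, x = np.mgrid[-r:r + 1, -r:r + 1].astype(float)
    s2 = sigma ** 2
    k = (x ** 2 + y ** 2 - 2 * s2) / (s2 ** 2) * np.exp(-(x ** 2 + y ** 2) / (2 * s2))
    return k - k.mean()

def repair_displacement(disp, sigma=1.0, k=3.0, win=2):
    """Flag abnormal displacements via the LoG response, then recompute each
    flagged value from the surrounding inlier displacements."""
    resp = convolve2d(disp, log_kernel(5, sigma))
    bad = np.abs(resp) > k * resp.std()
    fixed = disp.astype(float).copy()
    h, w = disp.shape
    for y, x in zip(*np.nonzero(bad)):
        y0, y1 = max(0, y - win), min(h, y + win + 1)
        x0, x1 = max(0, x - win), min(w, x + win + 1)
        patch = disp[y0:y1, x0:x1]
        good = ~bad[y0:y1, x0:x1]
        if good.any():
            fixed[y, x] = patch[good].mean()  # local surrogate for the surface fit
    return fixed, bad
```

On a smooth displacement ramp with a single spurious spike, the spike is flagged and pulled back onto the surrounding surface while untouched pixels keep their values.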


Author(s):  
S. Hosseinyalamdary ◽  
A. Yilmaz

Most photogrammetry and computer vision tasks require finding corresponding points among images. Among many approaches, Lucas-Kanade optical flow estimation has been employed for tracking interest points as well as for motion vector field estimation. This paper uses IMU measurements to reconstruct the epipolar geometry and integrates the epipolar geometry constraint with the brightness constancy assumption in the Lucas-Kanade method. The proposed method has been tested on the KITTI dataset. The results show an improvement in motion vector field estimation compared to standard Lucas-Kanade optical flow estimation. The same approach has been applied to the KLT tracker, and it has been shown that the epipolar geometry constraint also improves the KLT tracker. We recommend using the epipolar geometry constraint in advanced variational optical flow estimation methods as well.
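The combination of brightness constancy and the epipolar constraint can be written as one least-squares system: the usual Lucas-Kanade rows I_x u + I_y v = -I_t over a window, plus a soft row forcing the matched point onto its epipolar line l = F x. A minimal sketch, assuming a known fundamental (or essential) matrix F and a hand-picked weight `lam` (the paper derives F from IMU measurements; that reconstruction is omitted here):

```python
import numpy as np

def lk_epipolar_flow(I1, I2, x, y, F, win=2, lam=10.0):
    """Estimate flow (u, v) at integer pixel (x, y) by least squares over a
    (2*win+1)^2 window, augmenting the brightness-constancy rows of
    Lucas-Kanade with a soft epipolar constraint l^T (x+u, y+v, 1) = 0,
    where l = F @ (x, y, 1) is the epipolar line in the second image."""
    Iy, Ix = np.gradient(I1)          # central-difference spatial gradients
    It = I2 - I1                      # temporal difference
    ys = slice(y - win, y + win + 1)
    xs = slice(x - win, x + win + 1)
    A = np.stack([Ix[ys, xs].ravel(), Iy[ys, xs].ravel()], axis=1)
    b = -It[ys, xs].ravel()
    # Epipolar line through the matching point in image 2:
    # l0*(x+u) + l1*(y+v) + l2 = 0  ->  l0*u + l1*v = -(l0*x + l1*y + l2)
    l = F @ np.array([x, y, 1.0])
    A = np.vstack([A, lam * l[:2]])
    b = np.append(b, -lam * (l[0] * x + l[1] * y + l[2]))
    (u, v), *_ = np.linalg.lstsq(A, b, rcond=None)
    return u, v
```

For a pure sideways translation with essential matrix E = [t]_x, the epipolar row reduces to v = 0, which resolves the aperture ambiguity that plain Lucas-Kanade would face on a linear intensity ramp.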


2020 ◽  
Vol 34 (07) ◽  
pp. 10713-10720
Author(s):  
Mingyu Ding ◽  
Zhe Wang ◽  
Bolei Zhou ◽  
Jianping Shi ◽  
Zhiwu Lu ◽  
...  

A major challenge for video semantic segmentation is the lack of labeled data. In most benchmark datasets, only one frame per video clip is annotated, so most supervised methods cannot utilize information from the remaining frames. To exploit the spatio-temporal information in videos, many previous works use pre-computed optical flow, which encodes temporal consistency to improve video segmentation. However, video segmentation and optical flow estimation are still treated as two separate tasks. In this paper, we propose a novel framework for joint video semantic segmentation and optical flow estimation. Semantic segmentation brings semantic information that helps handle occlusion for more robust optical flow estimation, while the non-occluded optical flow provides accurate pixel-level temporal correspondences that guarantee the temporal consistency of the segmentation. Moreover, our framework can utilize both labeled and unlabeled frames in a video through joint training, while requiring no additional computation at inference. Extensive experiments show that the proposed model lets video semantic segmentation and optical flow estimation benefit from each other and outperforms existing methods in both tasks under the same settings.
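The core coupling mechanism — using non-occluded flow to propagate labels across frames — can be illustrated in a few lines of numpy. This is a generic sketch, not the paper's architecture: I assume a flow field mapping frame-t pixels back to frame t-1, nearest-neighbor warping (labels are categorical), and an `ignore` value of 255 for occluded pixels so they drop out of any consistency term:

```python
import numpy as np

def warp_labels(labels_prev, flow, occ_mask=None, ignore=255):
    """Propagate a segmentation label map from frame t-1 to frame t by
    backward-warping along the flow (nearest neighbor, since labels are
    categorical). Occluded pixels receive an 'ignore' label so they do not
    contribute to any temporal-consistency loss."""
    h, w = labels_prev.shape
    ys, xs = np.mgrid[0:h, 0:w]
    src_y = np.clip(np.rint(ys + flow[..., 1]).astype(int), 0, h - 1)
    src_x = np.clip(np.rint(xs + flow[..., 0]).astype(int), 0, w - 1)
    warped = labels_prev[src_y, src_x]
    if occ_mask is not None:
        warped = np.where(occ_mask, ignore, warped)
    return warped

def consistency_ratio(pred_t, warped_t, ignore=255):
    """Fraction of non-occluded pixels where the frame-t prediction agrees
    with the label propagated from frame t-1."""
    valid = warped_t != ignore
    return (pred_t[valid] == warped_t[valid]).mean()
```

In a joint-training setup, the propagated labels can supervise unlabeled frames, while disagreement between `pred_t` and `warped_t` in non-occluded regions signals either a segmentation error or a flow error — the mutual-benefit signal the abstract describes.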

