Ghost-removal image warping for optical flow estimation

2019 ◽  
Vol 277 ◽  
pp. 02002
Author(s):  
Song Wang ◽  
Zengfu Wang

Traditional image warping methods used in optical flow estimation usually adopt simple interpolation strategies to obtain the warped images. However, because they do not account for the characteristics of occluded regions, these methods may produce undesirable ghosting artifacts. To tackle this problem, in this paper we propose a novel image warping method that effectively removes ghosting artifacts. To be specific, given a warped image, the ghost regions are first identified using the optical flow information. Then, a new image compensation technique is applied to eliminate the ghosting artifacts. The proposed method avoids serious distortion in the warped images and therefore prevents error propagation in coarse-to-fine optical flow estimation schemes. Meanwhile, our approach can be easily integrated into various optical flow estimation methods. Experimental results on popular datasets such as Flying Chairs and MPI-Sintel demonstrate that the proposed method improves the performance of current optical flow estimation methods.
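A common way to identify occluded ("ghost") pixels from flow information is a forward-backward consistency check, after which flagged pixels can be compensated from the reference frame. The following is a minimal 1-D, integer-flow sketch of that general idea; the function names and the simple copy-based compensation are illustrative assumptions, not the paper's exact discrimination and compensation steps.

```python
def warp_1d(img, flow):
    """Backward warp: out[x] = img[x + flow[x]], with index clamping."""
    n = len(img)
    return [img[min(max(x + flow[x], 0), n - 1)] for x in range(n)]

def ghost_mask(fwd, bwd, tol=0):
    """Flag pixel x as a ghost when forward and backward flows disagree:
    |fwd[x] + bwd[x + fwd[x]]| > tol (forward-backward consistency)."""
    n = len(fwd)
    mask = []
    for x in range(n):
        xr = min(max(x + fwd[x], 0), n - 1)
        mask.append(abs(fwd[x] + bwd[xr]) > tol)
    return mask

def compensate(warped, reference, mask):
    """Replace flagged ghost pixels with the reference frame's values."""
    return [r if m else w for w, r, m in zip(warped, reference, mask)]
```

On a toy pair of flows, pixels where the round trip does not return to the start are flagged and filled from the reference image, which is the essence of avoiding distortion propagation in a coarse-to-fine scheme.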

Author(s):  
Martha Cejudo-Torres ◽  
Enrique Escamilla-Hernandez ◽  
Mariko Nakano-Miyatake ◽  
Hector Perez Meana

Author(s):  
R. Feng ◽  
X. Li ◽  
H. Shen

Abstract. Registration of mountainous remote sensing images is more complicated than in other areas because of the geometric distortion caused by topographic relief, which cannot be precisely handled by constructing local mapping functions in the feature-based framework. The optical flow algorithm, which in computer vision estimates the motion between consecutive frames pixel by pixel, is introduced here for mountainous remote sensing image registration. However, it is sensitive to land cover changes, which are inevitable in remote sensing images, resulting in incorrect displacements. To address this problem, we propose an improved optical flow estimation scheme centered on post-processing, namely displacement modification. First, the Laplacian of Gaussian (LoG) algorithm is employed to detect abnormal values in the displacement map. Then, each abnormal displacement is recalculated on an interpolation surface constructed from the remaining accurate displacements. After coordinate transformation and resampling, the registration result is generated. Experiments demonstrate that the proposed method is insensitive to changeable regions of mountainous remote sensing images, produces precise registration, and outperforms other local transformation model estimation methods in both visual judgment and quantitative evaluation.
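The post-processing described above can be illustrated in miniature: apply a LoG-style response to the displacement field, flag large responses as abnormal, and recompute them from trusted neighbours. This is a 1-D toy sketch under stated assumptions: a 3-tap smoother stands in for the Gaussian, nearest-neighbour averaging stands in for the paper's interpolation surface, and the threshold is illustrative.

```python
def log_response(d):
    """Smooth with [1,2,1]/4 (a tiny Gaussian), then apply the
    discrete Laplacian [1,-2,1], with edge clamping."""
    n = len(d)
    s = [d[max(i - 1, 0)] * 0.25 + d[i] * 0.5 + d[min(i + 1, n - 1)] * 0.25
         for i in range(n)]
    return [s[max(i - 1, 0)] - 2 * s[i] + s[min(i + 1, n - 1)] for i in range(n)]

def repair(d, thresh=1.0):
    """Replace displacements whose |LoG| exceeds thresh by the mean of
    their nearest trusted neighbours (a stand-in for surface interpolation)."""
    r = log_response(d)
    bad = [abs(v) > thresh for v in r]
    out = list(d)
    for i, b in enumerate(bad):
        if b:
            left = next((out[j] for j in range(i - 1, -1, -1) if not bad[j]), None)
            right = next((d[j] for j in range(i + 1, len(d)) if not bad[j]), None)
            vals = [v for v in (left, right) if v is not None]
            out[i] = sum(vals) / len(vals)
    return out
```

A single spurious displacement in an otherwise smooth field produces a strong LoG response and is pulled back to its neighbours' value, mimicking how a land-cover change would be suppressed.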


Author(s):  
S. Hosseinyalamdary ◽  
A. Yilmaz

In most photogrammetry and computer vision tasks, finding corresponding points among images is required. Among many methods, Lucas-Kanade optical flow estimation has been employed for tracking interest points as well as for motion vector field estimation. This paper uses IMU measurements to reconstruct the epipolar geometry and integrates the epipolar geometry constraint with the brightness constancy assumption in the Lucas-Kanade method. The proposed method has been tested on the KITTI dataset. The results show an improvement in motion vector field estimation compared to standard Lucas-Kanade optical flow estimation. The same approach has been applied to the KLT tracker, and it has been shown that the epipolar geometry constraint can improve the KLT tracker as well. It is recommended that the epipolar geometry constraint be used in advanced variational optical flow estimation methods.
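One simple way to picture combining the two cues is: solve the standard Lucas-Kanade normal equations from the brightness constancy assumption, then restrict the displacement to the epipolar line direction predicted from the external (e.g. IMU-derived) geometry. The sketch below is a toy single-window illustration of that idea, not the paper's exact formulation; the projection step is an assumed simplification of a full epipolar-constrained solve.

```python
def lucas_kanade_step(ix, iy, it):
    """Single-window Lucas-Kanade: solve the 2x2 normal equations
    [sum IxIx  sum IxIy; sum IxIy  sum IyIy] d = -[sum IxIt; sum IyIt]."""
    a = sum(x * x for x in ix)
    b = sum(x * y for x, y in zip(ix, iy))
    c = sum(y * y for y in iy)
    p = -sum(x * t for x, t in zip(ix, it))
    q = -sum(y * t for y, t in zip(iy, it))
    det = a * c - b * b  # assumed well-conditioned (textured window)
    return ((c * p - b * q) / det, (a * q - b * p) / det)

def project_to_epipolar(d, direction):
    """Keep only the flow component along the epipolar line direction,
    enforcing the geometric constraint on the photometric estimate."""
    dx, dy = d
    ux, uy = direction
    norm = (ux * ux + uy * uy) ** 0.5
    ux, uy = ux / norm, uy / norm
    s = dx * ux + dy * uy
    return (s * ux, s * uy)
```

With purely horizontal image evidence but a diagonal epipolar direction, the constrained result splits the motion along the line, which is how the geometric prior reshapes the photometric estimate.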


2020 ◽  
Author(s):  
Hengli Wang ◽  
Rui Fan ◽  
Ming Liu

The interpretation of ego motion and scene change is a fundamental task for mobile robots. Optical flow information can be employed to estimate motion in the surroundings. Recently, unsupervised optical flow estimation has become a research hotspot. However, unsupervised approaches are often unreliable on partially occluded or texture-less regions. To deal with this problem, in this paper we propose CoT-AMFlow, an unsupervised optical flow estimation approach. In terms of the network architecture, we develop an adaptive modulation network that employs two novel module types, flow modulation modules (FMMs) and cost volume modulation modules (CMMs), to remove outliers in challenging regions. As for the training paradigm, we adopt a co-teaching strategy, where two networks simultaneously teach each other about challenging regions to further improve accuracy. Experimental results on the MPI Sintel, KITTI Flow and Middlebury Flow benchmarks demonstrate that our CoT-AMFlow outperforms all other state-of-the-art unsupervised approaches, while still running in real time. Our project page is available at https://sites.google.com/view/cot-amflow.
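The co-teaching idea in the training paradigm can be sketched abstractly: each of two models passes only its smallest-loss samples to its peer for the update, so unreliable samples (e.g. from occluded regions) are filtered out of each other's gradients. This toy treats per-sample losses as plain numbers; the function names and the keep ratio are illustrative assumptions, not CoT-AMFlow's actual training code.

```python
def select_for_peer(losses, keep_ratio=0.5):
    """Indices of the lowest-loss samples, to be handed to the *other*
    model as its training subset (small-loss selection)."""
    k = max(1, int(len(losses) * keep_ratio))
    return sorted(range(len(losses)), key=lambda i: losses[i])[:k]

def co_teaching_step(losses_a, losses_b, keep_ratio=0.5):
    """Model A trains on the samples B trusts, and vice versa; here we
    just report the mean loss each model would actually train on."""
    sel_a = select_for_peer(losses_b, keep_ratio)  # B picks for A
    sel_b = select_for_peer(losses_a, keep_ratio)  # A picks for B
    mean = lambda idx, l: sum(l[i] for i in idx) / len(idx)
    return mean(sel_a, losses_a), mean(sel_b, losses_b)
```

Because each model's selection is made by its peer, a sample that only one model fits poorly (a likely outlier) is less likely to dominate either update.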



Author(s):  
Claudio S. Ravasio ◽  
Theodoros Pissas ◽  
Edward Bloch ◽  
Blanca Flores ◽  
Sepehr Jalali ◽  
...  

Abstract
Purpose: Sustained delivery of regenerative retinal therapies by robotic systems requires intra-operative tracking of the retinal fundus. We propose a supervised deep convolutional neural network to densely predict semantic segmentation and optical flow of the retina as mutually supportive tasks, implicitly inpainting retinal flow information missing due to occlusion by surgical tools.
Methods: As manual annotation of optical flow is infeasible, we propose a flexible algorithm for generation of large synthetic training datasets on the basis of given intra-operative retinal images. We evaluate optical flow estimation by tracking a grid and sparsely annotated ground truth points on a benchmark of challenging real intra-operative clips obtained from an extensive internally acquired dataset encompassing representative vitreoretinal surgical cases.
Results: The U-Net-based network trained on the synthetic dataset is shown to generalise well to the benchmark of real surgical videos. When used to track retinal points of interest, our flow estimation outperforms variational baseline methods on clips containing tool motions which occlude the points of interest, as is routinely observed in intra-operatively recorded surgery videos.
Conclusions: The results indicate that complex synthetic training datasets can be used to specifically guide optical flow estimation. Our proposed algorithm therefore lays the foundation for a robust system which can assist with intra-operative tracking of moving surgical targets even when occluded.
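The core trick behind synthetic flow supervision is that applying a known displacement to a real image yields a second frame whose ground-truth flow is the displacement itself, and tool occlusion can be simulated by masking part of a frame. The 1-D integer-shift toy below illustrates that construction only; the function names, the clamping border handling, and the rectangular "tool" mask are simplifying assumptions, not the paper's generator.

```python
def make_pair(img, shift):
    """Build a training pair: frame2[x] = img[x - shift] (clamped), so
    the ground-truth forward flow is +shift everywhere by construction."""
    n = len(img)
    frame2 = [img[min(max(x - shift, 0), n - 1)] for x in range(n)]
    return frame2, [shift] * n

def occlude(frame, lo, hi, value=0):
    """Simulate a surgical tool covering pixels [lo, hi) with a flat value;
    the known flow labels under the mask are what the network must inpaint."""
    return [value if lo <= x < hi else v for x, v in enumerate(frame)]
```

Since the labels are exact by construction even under the mask, the network can be supervised to predict flow for occluded pixels, which is the behaviour the abstract describes as implicit inpainting.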

