An Empirical Study of Exhaustive Matching for Improving Motion Field Estimation

Information ◽  
2018 ◽  
Vol 9 (12) ◽  
pp. 320 ◽  
Author(s):  
Vanel Lazcano

Optical flow is defined as the motion field of pixels between two consecutive images. Traditionally, the pixel motion field (or optical flow) is estimated by minimizing an energy model composed of (i) a data term and (ii) a regularization term. The data term measures the optical flow error, and the regularization term imposes spatial smoothness. Traditional variational models use a linearization in the data term, which fails when an object's displacement is larger than the object itself. Recently, the precision of optical flow methods has been improved by using additional information obtained from correspondences computed between the two images by methods such as SIFT, deep matching, and exhaustive search. This work presents an empirical study evaluating different strategies for placing exhaustive-search correspondences to improve flow estimation. We considered different matching locations: random locations, uniform locations, and locations of maximum gradient magnitude. Additionally, we tested the combination of large and medium gradients with uniform locations. We evaluated our methodology on the MPI-Sintel database, a state-of-the-art evaluation benchmark. Our results on MPI-Sintel show that our proposal outperforms classical methods such as Horn-Schunck, TV-L1, and LDOF, and performs similarly to MDP-Flow.
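The abstract does not give code for the exhaustive search; a minimal sketch of patch matching at selected candidate locations, assuming grayscale float images and an SSD patch cost (function names and parameters here are illustrative, not the authors'), might look like this:

```python
import numpy as np

def exhaustive_match(I0, I1, points, patch=7, search=32):
    """Exhaustively match patches from I0 into I1 around candidate points.

    Returns sparse correspondences (x, y, dx, dy) that could feed a
    variational flow solver as an extra fidelity term. SSD cost and the
    parameter defaults are assumptions for illustration.
    """
    r, s = patch // 2, search
    H, W = I0.shape
    matches = []
    for (y, x) in points:
        if y - r < 0 or x - r < 0 or y + r >= H or x + r >= W:
            continue  # skip points whose patch leaves the image
        template = I0[y - r:y + r + 1, x - r:x + r + 1]
        best, best_dxy = np.inf, (0, 0)
        for dy in range(-s, s + 1):          # exhaustive scan of the
            for dx in range(-s, s + 1):      # (2s+1)^2 search window
                yy, xx = y + dy, x + dx
                if yy - r < 0 or xx - r < 0 or yy + r >= H or xx + r >= W:
                    continue
                cand = I1[yy - r:yy + r + 1, xx - r:xx + r + 1]
                cost = np.sum((template - cand) ** 2)  # SSD patch cost
                if cost < best:
                    best, best_dxy = cost, (dx, dy)
        matches.append((x, y, *best_dxy))
    return matches

def gradient_locations(I0, n):
    """Pick the n locations with the largest gradient magnitude."""
    gy, gx = np.gradient(I0)
    mag = np.hypot(gx, gy)
    idx = np.argsort(mag.ravel())[-n:]
    return [np.unravel_index(i, I0.shape) for i in idx]
```

Here `gradient_locations` stands in for the "maximum gradient magnitude" placement strategy; a random sample or a uniform grid of points can be substituted to reproduce the other strategies the study compares.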

2020 ◽  
Vol 20 (04) ◽  
pp. 2050027
Author(s):  
Luiz Maurílio da Silva Maciel ◽  
Marcelo Bernardes Vieira

Identification of motion in videos is a fundamental task for several computer vision problems. One of the main tools for motion identification is optical flow, which estimates the projection of the 3D velocity of objects onto the camera plane. In this work, we propose a differential optical flow method based on the wave equation. The optical flow is computed by minimizing a functional energy composed of two terms: a data term based on brightness constancy and a regularization term based on the energy of the wave. Flow is determined by solving a system of linear equations. The decoupling of the pixels in the solution allows the system to be solved by a direct or iterative approach and makes the method suitable for parallelization. We present convergence conditions for our method, since it does not converge at all image points. For comparison purposes, we create a global video descriptor based on histograms of optical flow for the problem of action recognition. Despite its sparsity, our method improves average motion estimation compared with classical methods. We also evaluate optical flow error measures on image sequences of a classical dataset for method comparison.
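The wave-equation regularizer itself is not reproduced in the abstract; as a hedged illustration of the decoupled per-pixel linear solve it describes, the sketch below aggregates the brightness-constancy constraint over a small window, Lucas-Kanade style, so each pixel reduces to an independent 2x2 system (names, window size, and the regularizer substitution are assumptions):

```python
import numpy as np

def per_pixel_flow(I0, I1, win=5, eps=1e-6):
    """Solve a small 2x2 linear system per pixel for (u, v).

    Data term: brightness constancy Ix*u + Iy*v + It = 0, aggregated
    over a local window so each pixel decouples into its own
    least-squares problem. This Lucas-Kanade-style system stands in
    for the paper's wave-equation regularizer, which is not given here.
    """
    Iy, Ix = np.gradient(I0)   # spatial derivatives (axis 0 = y)
    It = I1 - I0               # temporal derivative
    r = win // 2
    H, W = I0.shape
    u = np.zeros((H, W))
    v = np.zeros((H, W))
    for y in range(r, H - r):
        for x in range(r, W - r):
            ix = Ix[y - r:y + r + 1, x - r:x + r + 1].ravel()
            iy = Iy[y - r:y + r + 1, x - r:x + r + 1].ravel()
            it = It[y - r:y + r + 1, x - r:x + r + 1].ravel()
            A = np.stack([ix, iy], axis=1)
            AtA = A.T @ A + eps * np.eye(2)  # damp ill-conditioned pixels
            b = -A.T @ it
            u[y, x], v[y, x] = np.linalg.solve(AtA, b)
    return u, v
```

Because each pixel's 2x2 system is independent of its neighbors, the double loop parallelizes trivially, which mirrors the decoupling property the abstract highlights.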


2013 ◽  
Vol 2013 ◽  
pp. 1-12
Author(s):  
Fang-Hsuan Cheng ◽  
Tze-Yun Sung

A method is proposed for estimating depth information from a general monocular image sequence and then creating a 3D stereo video. Foreground and background can be distinguished without additional information, and foreground pixels are then shifted to create the binocular image. The proposed depth estimation method is based on a coarse-to-fine strategy. Applying the CID method in the spatial domain, the distance of each region is estimated from its color using the sharpness and contrast of the image, and a coarse depth map of the image is generated. An optical-flow method based on temporal information is then used to search for and compare block motion between the previous and current frames, and the distance of a block is estimated according to the amount of its motion. Finally, the static and motion depth information are integrated to create the fine depth map. By shifting foreground pixels based on the depth information, a binocular image pair can be created, and a sense of 3D stereo can be obtained without glasses on an autostereoscopic 3D display.
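As a rough illustration of the final pixel-shifting step, the sketch below forward-warps pixels horizontally in proportion to depth to synthesize the second view of the binocular pair; the paper's exact disparity mapping is not specified, so a linear mapping (smaller depth value = nearer = larger shift) and a naive hole fill are assumed:

```python
import numpy as np

def shift_by_depth(image, depth, max_disp=16):
    """Synthesize the second view of a binocular pair by forward-warping
    pixels horizontally in proportion to depth.

    Assumes `depth` is a single-channel map with positive values where
    smaller means nearer; the linear disparity mapping and left-fill
    hole handling are illustrative choices, not the paper's.
    """
    H, W = depth.shape
    right = np.zeros_like(image)
    filled = np.zeros((H, W), dtype=bool)
    # nearer pixels (small depth) receive the largest shift
    disp = (max_disp * (1.0 - depth / depth.max())).astype(int)
    for y in range(H):
        for x in range(W):
            xs = x - disp[y, x]
            if 0 <= xs < W:
                right[y, xs] = image[y, x]
                filled[y, xs] = True
    # naive hole filling: propagate the nearest filled pixel from the left
    for y in range(H):
        for x in range(1, W):
            if not filled[y, x]:
                right[y, x] = right[y, x - 1]
    return right
```

Pairing the original frame with `shift_by_depth(frame, depth_map)` gives the left/right views an autostereoscopic display would interleave; disocclusion handling in practice would need something better than the left-fill used here.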


2007 ◽  
Vol 188 (3) ◽  
pp. W276-W280 ◽  
Author(s):  
Drew A. Torigian ◽  
Warren B. Gefter ◽  
John D. Affuso ◽  
Kiarash Emami ◽  
Lawrence Dougherty
