Bridging Stereo Matching and Optical Flow via Spatiotemporal Correspondence

Author(s):  
Hsueh-Ying Lai ◽  
Yi-Hsuan Tsai ◽  
Wei-Chen Chiu
Author(s):  
Yong Deng ◽  
Jimin Xiao ◽  
Steven Zhiying Zhou ◽  
Jiashi Feng

Author(s):  
V. V. Kniaz ◽  
V. A. Mizginov ◽  
L. V. Grodzitkiy ◽  
N. A. Fomin ◽  
V. A. Knyaz

Abstract. Structured light scanners are intensively exploited in applications such as non-destructive quality control at an assembly line, optical metrology, and cultural heritage documentation. While more than 20 companies develop commercially available structured light scanners, the accuracy of structured light technology remains limited for fast systems. Discrepancies in the model surface often appear if the object's texture has severe changes in brightness or reflective properties. The primary source of such discrepancies is errors in stereo matching caused by complex surface texture; these errors result in ridge-like structures on the surface of the reconstructed 3D model. This paper focuses on the development of a deep neural network, LineMatchGAN, for error reduction in 3D models produced by a structured light scanner. We use the pix2pix model as a starting point for our research. The aim of our LineMatchGAN is the refinement of a rough optical flow A and the generation of an error-free optical flow B̂. We collected a dataset (which we term ZebraScan) consisting of 500 samples to train our LineMatchGAN model. Each sample includes image sequences (Sl, Sr), a ground-truth optical flow B, and a ground-truth 3D model. We evaluate our LineMatchGAN on a test split of our ZebraScan dataset that includes 50 samples. The evaluation shows that our LineMatchGAN improves the stereo matching accuracy (optical flow end point error, EPE) from 0.05 pixels to 0.01 pixels.
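The end point error (EPE) used for evaluation above is simply the mean Euclidean distance between predicted and ground-truth flow vectors. A minimal NumPy sketch (the function name is illustrative, not from the paper):

```python
import numpy as np

def endpoint_error(flow_pred, flow_gt):
    """Mean end point error (EPE) between two optical flow fields
    of shape (H, W, 2): per-pixel Euclidean distance, averaged."""
    diff = flow_pred - flow_gt
    return float(np.sqrt((diff ** 2).sum(axis=-1)).mean())
```

For example, if every predicted vector is off by (3, 4) pixels, the EPE is 5.0.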


Sensors ◽  
2018 ◽  
Vol 19 (1) ◽  
pp. 53 ◽  
Author(s):  
Abiel Aguilar-González ◽  
Miguel Arias-Estrada ◽  
François Berry

Applications such as autonomous navigation, robot vision, and autonomous flying require depth map information of a scene. Depth can be estimated by using a single moving camera (depth from motion). However, traditional depth from motion algorithms have low processing speeds and high hardware requirements that limit their embedded capabilities. In this work, we propose a hardware architecture for depth from motion that consists of a flow/depth transformation and a new optical flow algorithm. Our optical flow formulation is an extension of the stereo matching problem: we propose a pixel-parallel/window-parallel approach in which a correlation function based on the sum of absolute differences (SAD) computes the optical flow. Further, to improve the SAD, we propose using the curl of the intensity gradient as a preprocessing step. Experimental results demonstrate higher accuracy (90%) compared with previous Field Programmable Gate Array (FPGA)-based optical flow algorithms. For depth estimation, our algorithm delivers dense maps with motion and depth information for all image pixels, with a processing speed up to 128 times faster than that of previous work, making it possible to achieve high performance in the context of embedded applications.
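The SAD correlation at the core of this formulation can be sketched as a sequential search over candidate disparities; all names and parameters below are illustrative, and the paper's FPGA pipeline evaluates the windows in parallel rather than in a loop:

```python
import numpy as np

def sad_match(left, right, row, col, window=3, max_disp=16):
    """Find the disparity minimising the sum of absolute differences
    (SAD) between a window in `left` centred at (row, col) and
    horizontally shifted windows in `right`."""
    h = window // 2
    patch = left[row - h:row + h + 1, col - h:col + h + 1].astype(np.int32)
    best_d, best_cost = 0, np.inf
    for d in range(max_disp):
        c = col - d
        if c - h < 0:          # candidate window would leave the image
            break
        cand = right[row - h:row + h + 1, c - h:c + h + 1].astype(np.int32)
        cost = np.abs(patch - cand).sum()
        if cost < best_cost:   # keep the lowest-cost disparity
            best_cost, best_d = cost, d
    return best_d
```

The same matcher applies to optical flow by searching a 2D neighbourhood instead of a single scanline.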


2021 ◽  
Author(s):  
Yunpeng Li ◽  
Baozhen Ge ◽  
Qingguo Tian ◽  
Lu Qieni ◽  
Jianing Quan ◽  
...  

Author(s):  
Qiwei Xie ◽  
Qian Long ◽  
Seiichi Mita

This paper proposes a novel stereo matching algorithm to solve environment sensing problems. It integrates a non-convex optical flow and a Viterbi process. The non-convex optical flow employs a new adaptive weighted non-convex Total Generalized Variation (TGV) model, which can obtain sharp disparity maps. Structural similarity, a total variation constraint, and a specific merging strategy are combined with four bi-directional Viterbi processes to improve robustness. In the fusion of the optical flow and the Viterbi process, a new occlusion handling method is incorporated in order to obtain sharper disparities and more robust results. Extensive experiments compare this algorithm with other state-of-the-art methods, and the results show the superiority of our algorithm.
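A Viterbi process for stereo is, at its core, dynamic-programming aggregation of a matching cost volume along a scanline, with a penalty on disparity jumps. The sketch below shows a single-direction pass under assumed names and a simple linear penalty; the paper's method fuses four bi-directional passes with TGV-based flow, which is not reproduced here:

```python
import numpy as np

def viterbi_scanline(cost, smooth=1.0):
    """Minimum-cost disparity path along one scanline.
    `cost` has shape (width, ndisp); `smooth` scales a linear
    penalty on disparity changes between neighbouring pixels."""
    w, nd = cost.shape
    acc = cost.copy()                      # accumulated costs
    back = np.zeros((w, nd), dtype=np.int64)
    d_idx = np.arange(nd)
    penalty = smooth * np.abs(d_idx[:, None] - d_idx[None, :])
    for x in range(1, w):
        # total[d_cur, d_prev]: best cost of arriving at d_cur from d_prev
        total = acc[x - 1][None, :] + penalty
        back[x] = total.argmin(axis=1)
        acc[x] += total.min(axis=1)
    # Backtrack the optimal disparity path
    path = np.empty(w, dtype=np.int64)
    path[-1] = acc[-1].argmin()
    for x in range(w - 1, 0, -1):
        path[x - 1] = back[x, path[x]]
    return path
```

Running such a pass in both directions and merging the results is what makes the aggregation "bi-directional".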


2005 ◽  
Vol 44 (S 01) ◽  
pp. S46-S50 ◽  
Author(s):  
M. Dawood ◽  
N. Lang ◽  
F. Büther ◽  
M. Schäfers ◽  
O. Schober ◽  
...  

Summary: Motion in PET/CT leads to artifacts in the reconstructed PET images due to the different acquisition times of positron emission tomography and computed tomography. This study evaluates the effect of motion on cardiac PET/CT images and outlines a novel approach to motion correction based on optical flow methods. The Lucas-Kanade optical flow algorithm is used to calculate the motion vector field on both simulated phantom data and measured human PET data. The motion of the myocardium is corrected by non-linear registration techniques, and the results are compared to uncorrected images.
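The Lucas-Kanade step estimates a flow vector per pixel by solving a small least-squares system over a local window. A minimal sketch of the classic formulation (illustrative names; not the study's full motion-corrected reconstruction pipeline):

```python
import numpy as np

def lucas_kanade_window(frame0, frame1, row, col, window=5):
    """Estimate the flow vector (vx, vy) at one pixel by solving
    the Lucas-Kanade least-squares system  A v = -It  over a
    square window, where A stacks the spatial gradients."""
    h = window // 2
    f0 = frame0.astype(np.float64)
    f1 = frame1.astype(np.float64)
    Iy, Ix = np.gradient(f0)      # spatial gradients (rows, cols)
    It = f1 - f0                  # temporal difference
    sl = (slice(row - h, row + h + 1), slice(col - h, col + h + 1))
    A = np.stack([Ix[sl].ravel(), Iy[sl].ravel()], axis=1)
    b = -It[sl].ravel()
    v, *_ = np.linalg.lstsq(A, b, rcond=None)
    return v                      # (vx, vy)
```

Evaluating this at every pixel yields the dense motion vector field used for the subsequent non-linear registration.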

