3D reconstruction by a combined structure tensor and Hough transform light field approach

2017 ◽  
Vol 84 (7-8) ◽  
Author(s):  
Alessandro Vianello ◽  
Giulio Manfredi ◽  
Maximilian Diebold ◽  
Bernd Jähne

Disparity estimation using the structure tensor is a local approach to determining orientation in Epipolar Plane Images. A global extension would lead to more precise and robust estimates. In this work, a novel algorithm for 3D reconstruction from linear light fields is proposed. The method uses a modified version of the Progressive Probabilistic Hough Transform to extract orientations from Epipolar Plane Images, which makes it possible to obtain high-quality disparity maps. To this end, the structure tensor estimates are used to speed up computation and to improve disparity estimation near occlusion boundaries. The new algorithm is evaluated on both synthetic and real light field datasets and compared with classical local disparity estimation techniques based on the structure tensor.
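As an illustration of the local baseline this work builds on, the following is a minimal NumPy/SciPy sketch of structure-tensor orientation estimation on an EPI. It is not the authors' implementation; the function name, the smoothing parameter, and the sign convention of the slope are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def structure_tensor_disparity(epi, sigma=1.5):
    """Local disparity from EPI line orientation (illustrative sketch).

    epi: 2D array with the angular dimension s on axis 0 and the
         spatial dimension u on axis 1.
    Returns a per-pixel slope estimate and a coherence measure in [0, 1].
    """
    epi = epi.astype(float)
    gu = sobel(epi, axis=1)            # gradient along the spatial axis
    gs = sobel(epi, axis=0)            # gradient along the angular axis

    # Smoothed structure tensor components.
    Juu = gaussian_filter(gu * gu, sigma)
    Jss = gaussian_filter(gs * gs, sigma)
    Jus = gaussian_filter(gu * gs, sigma)

    # Orientation of the dominant gradient; the EPI line slope (and hence the
    # disparity) follows from it, up to the sign/axis convention of the EPI.
    angle = 0.5 * np.arctan2(2.0 * Jus, Juu - Jss)
    slope = np.tan(angle)

    # Coherence is ~1 for well-defined orientations and ~0 in flat regions;
    # such a reliability measure is what a global method can use for guidance.
    coherence = np.sqrt((Juu - Jss) ** 2 + 4.0 * Jus ** 2) / (Juu + Jss + 1e-12)
    return slope, coherence
```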

Author(s):  
Rui M. Lourenco ◽  
Luis M. N. Tavora ◽  
Pedro A. A. Assuncao ◽  
Lucas A. Thomaz ◽  
Rui Fonseca-Pinto ◽  
...  

During the last decade, there has been an increasing number of applications dealing with multidimensional visual information, either for 3D object representation or for feature extraction purposes. In this context, recent advances in light field technology have been driving research efforts in disparity estimation methods. Among the existing ones, those based on the structure tensor have emerged as very promising for estimating disparity maps from Epipolar Plane Images. However, this approach is known to have two intrinsic limitations: (i) silhouette enlargement and (ii) irregularity of the surface normal maps computed from the estimated disparity. To address these problems, this work proposes a new method for improving disparity maps obtained from the structure-tensor approach by enhancing silhouettes and reducing the noise of planar surfaces in light fields. An edge-based approach is first used for silhouette improvement through refinement of the estimated disparity values around object edges. Then, a plane detection algorithm based on a seed-growth strategy is used to estimate planar regions, which in turn guide the correction of erroneous disparity values detected at object boundaries. The proposed algorithm shows an average improvement of 98.3% in terms of median angle error for planar surfaces when compared to regular structure-tensor-based methods, outperforming state-of-the-art methods. The proposed framework also presents very competitive results in terms of mean square error between the disparity maps and their ground truth when compared with competing methods.
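For context on the planar-surface evaluation mentioned above, here is a small illustrative sketch (not the authors' code) of how surface normals can be derived from a disparity map and compared against ground truth with a median angle error; the helper names disparity_to_normals and median_angle_error are hypothetical.

```python
import numpy as np

def disparity_to_normals(disp):
    """Approximate surface normals of z = disp(u, v) via finite differences."""
    dz_dv, dz_du = np.gradient(disp.astype(float))    # gradients along rows, then columns
    n = np.dstack([-dz_du, -dz_dv, np.ones_like(disp, dtype=float)])
    return n / np.linalg.norm(n, axis=2, keepdims=True)

def median_angle_error(normals, gt_normals, mask):
    """Median angular deviation (degrees) between estimated and reference normals."""
    cos = np.clip(np.sum(normals * gt_normals, axis=2), -1.0, 1.0)
    return np.degrees(np.median(np.arccos(cos[mask])))
```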


Author(s):  
Rui Lourenco ◽  
Pedro A.A. Assuncao ◽  
Luis M.N. Tavora ◽  
Rui Fonseca-Pinto ◽  
Sergio M.M. Faria

Sensors ◽  
2021 ◽  
Vol 21 (22) ◽  
pp. 7734
Author(s):  
Wei Feng ◽  
Junhui Gao ◽  
Tong Qu ◽  
Shiqi Zhou ◽  
Daxing Zhao

Light field imaging plays an increasingly important role in the field of three-dimensional (3D) reconstruction because of its ability to quickly capture four-dimensional (angular and spatial) information about a scene. In this paper, a 3D reconstruction method for light fields based on phase similarity is proposed to increase the accuracy of depth estimation and to broaden the applicability of epipolar plane image (EPI) analysis. A light field camera calibration method was used to obtain the relationship between disparity and depth, and projector calibration was eliminated, making the experimental procedure more flexible. A disparity estimation algorithm based on phase similarity was then designed to improve the reliability and accuracy of the disparity calculation, in which phase information was used instead of the structure tensor, and a morphological processing method was used to denoise and optimize the disparity map. Finally, 3D reconstruction of the light field was achieved by combining the disparity information with the calibrated relationship. Experimental results showed that the reconstruction standard deviations of the two measured objects were 0.3179 mm and 0.3865 mm, respectively, compared with the ground truth. Compared with the traditional EPI method, the proposed method not only performs well in single-scene or blurred-texture situations but also maintains good reconstruction accuracy.
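A minimal sketch of the last two steps described above, assuming an inverse-linear disparity-to-depth model with calibrated coefficients a and b (the exact calibrated relationship is not given in the abstract) and a simple grey-scale morphological clean-up of the disparity map:

```python
import numpy as np
from scipy.ndimage import grey_closing, grey_opening

def refine_disparity(disp, size=3):
    """Suppress small speckles and holes in the disparity map (grey-scale morphology)."""
    return grey_opening(grey_closing(disp, size=(size, size)), size=(size, size))

def disparity_to_depth(disp, a, b):
    """Assumed inverse-linear model: depth = a / (disparity + b), with calibrated a, b."""
    return a / (disp + b)
```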


2018 ◽  
Vol 26 (6) ◽  
pp. 7598 ◽  
Author(s):  
Zewei Cai ◽  
Xiaoli Liu ◽  
Xiang Peng ◽  
Bruce Z. Gao

2020 ◽  
Vol 34 (07) ◽  
pp. 12095-12103
Author(s):  
Yu-Ju Tsai ◽  
Yu-Lun Liu ◽  
Ming Ouhyoung ◽  
Yung-Yu Chuang

This paper introduces a novel deep network for estimating depth maps from a light field image. To utilize the views more effectively and to reduce redundancy within views, we propose a view selection module that generates an attention map indicating the importance of each view and its potential contribution to accurate depth estimation. By exploiting the symmetric property of light field views, we enforce symmetry in the attention map and further improve accuracy. With the attention map, our architecture utilizes all views more effectively and efficiently. Experiments show that the proposed method achieves state-of-the-art accuracy and ranks first on a popular benchmark for disparity estimation from light field images.
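A conceptual NumPy sketch of the symmetric view-attention idea described above (not the authors' network): raw per-view scores are symmetrized around the centre view and softmax-normalized before weighting the views. The scoring itself, which is learned in the paper, is replaced here by random values.

```python
import numpy as np

def symmetric_view_attention(scores):
    """scores: (n_views,) raw importance scores along a 1D view axis (odd length)."""
    sym = 0.5 * (scores + scores[::-1])   # share weights between mirrored views
    w = np.exp(sym - sym.max())
    return w / w.sum()                    # softmax-normalised attention map

views = np.random.rand(9, 64, 64)         # toy 1D light field: 9 views of 64x64 pixels
att = symmetric_view_attention(np.random.randn(9))
fused = np.tensordot(att, views, axes=1)  # attention-weighted aggregation of the views
```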


2010 ◽  
Vol 53 (10) ◽  
pp. 1917-1930 ◽  
Author(s):  
Yu Wang ◽  
XiangYang Ji ◽  
QiongHai Dai

2015 ◽  
Vol 8 (1) ◽  
pp. 371-378 ◽  
Author(s):  
H. Waruna H. Premachandra ◽  
Chinthaka Premachandra ◽  
Chandana Dinesh Parape ◽  
Hiroharu Kawanaka

2021 ◽  
Author(s):  
Luca Palmieri

Microlens-array-based plenoptic cameras capture the light field in a single shot, enabling new potential applications but also introducing additional challenges. A plenoptic image consists of thousands of microlens images. Estimating the disparity for each microlens makes it possible to render conventional images, to change the perspective and focal settings, and to reconstruct the three-dimensional geometry of the scene. The work includes a blur-aware calibration method to model plenoptic cameras, an optimization method to accurately select the best microlens combination for disparity estimation, an overview of the different types of plenoptic cameras, an analysis of disparity estimation algorithms, and a robust depth estimation approach for light field microscopy. The research led to the creation of a full framework for plenoptic cameras, which contains implementations of the algorithms discussed in the work, together with datasets of both real and synthetic images for comparison, benchmarking and future research.
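As a toy illustration of per-microlens disparity estimation (purely hypothetical and not part of the described framework), the following block-matching sketch finds the horizontal shift that best aligns two neighbouring, rectified microlens images:

```python
import numpy as np

def microlens_disparity(ref, neigh, max_shift=5):
    """Integer horizontal shift that best aligns `neigh` to `ref` (SAD cost)."""
    best_shift, best_cost = 0, np.inf
    for d in range(-max_shift, max_shift + 1):
        cost = np.abs(ref - np.roll(neigh, d, axis=1)).mean()  # toy cost; ignores wrap-around
        if cost < best_cost:
            best_shift, best_cost = d, cost
    return best_shift
```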

