Efficient and Accurate Disparity Estimation from MLA-Based Plenoptic Cameras

2021 ◽  
Author(s):  
Luca Palmieri

Microlens-array-based plenoptic cameras capture the light field in a single shot, enabling new potential applications but also introducing additional challenges. A plenoptic image consists of thousands of microlens images. Estimating the disparity for each microlens makes it possible to render conventional images, to change the perspective and focal settings, and to reconstruct the three-dimensional geometry of the scene. The work includes a blur-aware calibration method to model plenoptic cameras, an optimization method to accurately select the best combination of microlenses for disparity estimation, an overview of the different types of plenoptic cameras, an analysis of disparity estimation algorithms, and a robust depth estimation approach for light field microscopy. The research led to a full framework for plenoptic cameras, which contains implementations of the algorithms discussed in the work and datasets of both real and synthetic images for comparison, benchmarking, and future research.
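To make the per-microlens idea concrete, the following is a minimal sketch, not the author's framework: disparity for one microlens is estimated by block matching a central patch against a horizontally adjacent microlens image. The function name, patch size, and the assumption of a purely horizontal baseline are illustrative only.

```python
# Minimal sketch (not the author's method): per-microlens disparity by
# block matching against a horizontally adjacent microlens image.
# `lens_a` and `lens_b` are assumed grayscale numpy arrays of the same size,
# cut out of the raw plenoptic image, with a purely horizontal baseline.
import numpy as np

def microlens_disparity(lens_a, lens_b, patch=7, max_disp=10):
    h, w = lens_a.shape
    cy, cx = h // 2, w // 2
    r = patch // 2
    ref = lens_a[cy - r:cy + r + 1, cx - r:cx + r + 1].astype(np.float32)
    best_d, best_cost = 0, np.inf
    for d in range(max_disp + 1):        # slide the candidate window along the baseline
        x = cx - d
        if x - r < 0:
            break
        cand = lens_b[cy - r:cy + r + 1, x - r:x + r + 1].astype(np.float32)
        cost = np.sum((ref - cand) ** 2)  # SSD matching cost
        if cost < best_cost:
            best_cost, best_d = cost, d
    return best_d
```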


Sensors ◽  
2019 ◽  
Vol 19 (3) ◽  
pp. 500 ◽  
Author(s):  
Luca Palmieri ◽  
Gabriele Scrofani ◽  
Nicolò Incardona ◽  
Genaro Saavedra ◽  
Manuel Martínez-Corral ◽  
...  

Light field technologies have seen a rise in recent years, and microscopy is a field where such technology has had a deep impact. The possibility of providing spatial and angular information at the same time and in a single shot brings several advantages and allows for new applications. A common goal in these applications is the calculation of a depth map to reconstruct the three-dimensional geometry of the scene. Many approaches are applicable, but most of them cannot achieve high accuracy because of the nature of such images: biological samples are usually poor in features and do not exhibit sharp colors like natural scenes. Under such conditions, standard approaches produce noisy depth maps. In this work, a robust approach is proposed in which accurate depth maps are produced by exploiting the information recorded in the light field, in particular in images produced with a Fourier integral microscope. The proposed approach can be divided into three main parts. First, it creates two cost volumes using different focus cues, namely correspondence and defocus. Second, it applies filtering methods that exploit multi-scale and super-pixel cost aggregation to reduce noise and enhance accuracy. Finally, it merges the two cost volumes and extracts a depth map through multi-label optimization.
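The three-stage structure can be sketched as follows. This is a simplified stand-in, not the paper's pipeline: a box filter replaces the multi-scale and super-pixel aggregation, and a winner-take-all argmin replaces the multi-label optimization. The cost volumes `corr_cost` and `defocus_cost` and the weight `w_corr` are assumed inputs.

```python
# Sketch of the three stages: (1) two cost volumes are given, (2) each slice is
# spatially aggregated (box filter as a stand-in for multi-scale/super-pixel
# aggregation), (3) the volumes are merged and a per-pixel depth label extracted
# (argmin as a stand-in for multi-label optimization).
import numpy as np
from scipy.ndimage import uniform_filter

def fuse_and_extract(corr_cost, defocus_cost, w_corr=0.6, radius=5):
    """corr_cost, defocus_cost: (H, W, L) cost volumes over L depth labels."""
    smoothed = []
    for vol in (corr_cost, defocus_cost):
        # aggregate each depth slice spatially to suppress per-pixel noise
        smoothed.append(np.stack(
            [uniform_filter(vol[..., l], size=2 * radius + 1)
             for l in range(vol.shape[-1])], axis=-1))
    merged = w_corr * smoothed[0] + (1.0 - w_corr) * smoothed[1]
    return np.argmin(merged, axis=-1)    # depth label per pixel
```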



Sensors ◽  
2021 ◽  
Vol 21 (22) ◽  
pp. 7734
Author(s):  
Wei Feng ◽  
Junhui Gao ◽  
Tong Qu ◽  
Shiqi Zhou ◽  
Daxing Zhao

Light field imaging plays an increasingly important role in the field of three-dimensional (3D) reconstruction because of its ability to quickly obtain four-dimensional information (angle and space) about a scene. In this paper, a 3D reconstruction method for light fields based on phase similarity is proposed to increase the accuracy of depth estimation and the range of applicability of epipolar plane images (EPIs). A light field camera calibration method was used to obtain the relationship between disparity and depth, and projector calibration was removed to make the experimental procedure more flexible. Then, a disparity estimation algorithm based on phase similarity was designed to effectively improve the reliability and accuracy of the disparity calculation, in which phase information was used instead of the structure tensor and morphological processing was used to denoise and optimize the disparity map. Finally, 3D reconstruction of the light field was realized by combining the disparity information with the calibrated relationship. The experimental results showed that the reconstruction standard deviations of the two measured objects were 0.3179 mm and 0.3865 mm, respectively, compared with the ground truth. Compared with the traditional EPI method, our method not only performs well in single-texture or blurred-texture situations but also maintains good reconstruction accuracy.
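A hedged sketch of the general flow follows. Phase correlation between an EPI and its copy shifted by one view row stands in for the paper's phase-similarity measure; the disparity-to-depth mapping form and its coefficients `a`, `b` are assumptions, as is the use of a morphological closing for denoising.

```python
# Sketch only: EPI disparity via phase correlation, an assumed calibrated
# disparity-to-depth relation, and morphological denoising of the disparity map.
import cv2
import numpy as np

def epi_disparity(epi):
    """epi: (n_views, width) epipolar plane image.
    A line with slope d pixels/view means row v+1 is row v shifted by d,
    so phase correlation between the EPI and its one-row-shifted copy
    recovers the average horizontal disparity per view step."""
    a = epi[:-1].astype(np.float64)
    b = epi[1:].astype(np.float64)
    (dx, _dy), _response = cv2.phaseCorrelate(a, b)
    return dx

def disparity_to_depth(disp_map, a, b):
    return a / (disp_map + b)            # assumed calibrated relation

def denoise(disp_map, ksize=5):
    kernel = np.ones((ksize, ksize), np.uint8)
    return cv2.morphologyEx(disp_map.astype(np.float32), cv2.MORPH_CLOSE, kernel)
```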



2020 ◽  
Vol 34 (07) ◽  
pp. 12095-12103
Author(s):  
Yu-Ju Tsai ◽  
Yu-Lun Liu ◽  
Ming Ouhyoung ◽  
Yung-Yu Chuang

This paper introduces a novel deep network for estimating depth maps from a light field image. To utilize the views more effectively and reduce redundancy between views, we propose a view selection module that generates an attention map indicating the importance of each view and its potential contribution to accurate depth estimation. By exploiting the symmetric property of light field views, we enforce symmetry in the attention map and further improve accuracy. With the attention map, our architecture utilizes all views more effectively and efficiently. Experiments show that the proposed method achieves state-of-the-art performance in terms of accuracy and ranks first on a popular benchmark for disparity estimation from light field images.
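A toy sketch of the idea, not the authors' architecture: each view gets a learned attention score, symmetry is imposed by averaging the scores of mirrored view pairs, and the scores reweight the per-view features. The module name, the pairing of view i with view n_views-1-i, and the feature layout are assumptions for illustration.

```python
# Toy sketch of attention-based view selection with a symmetry constraint.
# `feats` is assumed to have shape (batch, n_views, C, H, W).
import torch
import torch.nn as nn

class SymmetricViewAttention(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.score = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),   # (B*V, C, 1, 1): global pooling per view
            nn.Flatten(),              # (B*V, C)
            nn.Linear(channels, 1),    # one scalar score per view
        )

    def forward(self, feats):
        b, v, c, h, w = feats.shape
        s = self.score(feats.reshape(b * v, c, h, w)).reshape(b, v)
        s = 0.5 * (s + s.flip(dims=[1]))   # tie scores of mirrored view pairs
        attn = torch.softmax(s, dim=1)     # importance of each view
        weighted = feats * attn.view(b, v, 1, 1, 1)
        return weighted, attn
```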



Author(s):  
Ole Johannsen ◽  
Katrin Honauer ◽  
Bastian Goldluecke ◽  
Anna Alperovich ◽  
Federica Battisti ◽  
...  


Author(s):  
Faezeh Sadat Zakeri ◽  
M. Batz ◽  
Tobias Jaschke ◽  
Joachim Keinert ◽  
Alexandra Chuchvara


2016 ◽  
Vol 10 (2) ◽  
pp. 172-178 ◽  
Author(s):  
Shin Usuki ◽  
Masaru Uno ◽  
Kenjiro T. Miura ◽  
...  

In this paper, we propose a digital shape reconstruction method for micro-sized 3D (three-dimensional) objects based on the shape from silhouette (SFS) method, which reconstructs the shape of a 3D model from silhouette images taken from multiple viewpoints. In the proposed method, the images used in the SFS method are depth images acquired with a light-field microscope by digital refocusing (DR) of a stacked image along the axial direction. DR can generate refocused images from a single acquired image by an inverse ray tracing technique using a microlens array, and therefore provides fast image stacking with different focal planes. Our proposed method can reconstruct micro-sized object models including edges, convex shapes, and concave shapes on the surface of an object, such as micro-sized defects, so that damaged structures in the objects can be visualized. First, we introduce the SFS method and the light-field microscope for 3D shape reconstruction, which is required in the field of micro-sized manufacturing. Second, we present the experimental equipment developed for microscopic image acquisition. Depth calibration using a USAF1951 test target is carried out to convert relative values into actual lengths. Then, 3D modeling techniques including image processing are implemented for digital shape reconstruction. Finally, 3D shape reconstruction results for micro-sized machining tools are shown and discussed.
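The core SFS step can be illustrated with a minimal voxel-carving sketch, not the paper's pipeline: a voxel is kept only if its projection falls inside the silhouette in every view. The 3x4 projection matrices `P` and binary silhouettes `sils` are assumed to be given by the calibration stage.

```python
# Minimal shape-from-silhouette sketch: carve a voxel cloud with silhouettes.
import numpy as np

def carve(voxels, sils, P):
    """voxels: (N, 3) world coordinates; sils: list of (H, W) bool silhouettes;
    P: list of 3x4 projection matrices, one per view."""
    keep = np.ones(len(voxels), dtype=bool)
    homog = np.hstack([voxels, np.ones((len(voxels), 1))])   # (N, 4) homogeneous
    for sil, proj in zip(sils, P):
        uvw = homog @ proj.T                                  # (N, 3) image coordinates
        u = (uvw[:, 0] / uvw[:, 2]).round().astype(int)
        v = (uvw[:, 1] / uvw[:, 2]).round().astype(int)
        inside = (u >= 0) & (u < sil.shape[1]) & (v >= 0) & (v < sil.shape[0])
        hit = np.zeros(len(voxels), dtype=bool)
        hit[inside] = sil[v[inside], u[inside]]
        keep &= hit                                           # must lie in every silhouette
    return voxels[keep]
```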



2016 ◽  
Vol 28 (4) ◽  
pp. 523-532 ◽  
Author(s):  
Akihiro Obara ◽  
Xu Yang ◽  
Hiromasa Oku ◽  

[Figure: Concept of SLF generated by two projectors] Triangulation is commonly used to restore 3D scenes, but its frame rate of less than 30 fps, due to time-consuming stereo matching, is an obstacle for applications that require results to be fed back in real time. The structured light field (SLF) our group proposed previously reduces the amount of calculation in 3D restoration, realizing high-speed measurement. Specifically, the SLF estimates depth information by projecting information on distance directly onto a target. The SLF synthesized as previously reported, however, makes it difficult to extract image features for depth estimation. In this paper, we propose synthesizing the SLF using two projectors in a specific layout. The basic properties of the proposed SLF are derived from an optical model. We evaluated the SLF's performance using a prototype and applied it to high-speed depth estimation, at 1000 Hz, of a randomly moving target. We demonstrate high-speed tracking of the target based on high-speed depth information feedback.
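As a purely hypothetical illustration of the "depth read directly from the projected information" idea (not the authors' SLF synthesis), a per-pixel image feature can be converted to depth through a pre-calibrated lookup, avoiding stereo matching entirely. The feature image and calibration arrays below are assumed inputs.

```python
# Hypothetical sketch: per-pixel depth from a projected feature via a
# pre-calibrated lookup curve; no stereo matching involved.
import numpy as np

def depth_from_feature(feature_img, calib_features, calib_depths):
    """feature_img: (H, W) measured feature values (e.g. a pattern intensity ratio);
    calib_features: increasing 1D array of calibrated feature values;
    calib_depths: corresponding 1D array of depths."""
    flat = feature_img.ravel()
    depth = np.interp(flat, calib_features, calib_depths)  # per-pixel lookup
    return depth.reshape(feature_img.shape)
```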



2020 ◽  
Vol 45 (12) ◽  
pp. 3256
Author(s):  
Zewei Cai ◽  
Giancarlo Pedrini ◽  
Wolfgang Osten ◽  
Xiaoli Liu ◽  
Xiang Peng

