Enhancement of light field disparity maps by reducing the silhouette effect and plane noise

Author(s):  
Rui M. Lourenco
Luis M. N. Tavora
Pedro A. A. Assuncao
Lucas A. Thomaz
Rui Fonseca-Pinto
...  

Abstract: During the last decade, there has been an increasing number of applications dealing with multidimensional visual information, either for 3D object representation or feature extraction purposes. In this context, recent advances in light field technology have been driving research efforts in disparity estimation methods. Among the existing ones, those based on the structure tensor have emerged as very promising for estimating disparity maps from Epipolar Plane Images. However, this approach is known to have two intrinsic limitations: (i) silhouette enlargement and (ii) irregularity of surface normal maps as computed from the estimated disparity. To address these problems, this work proposes a new method for improving disparity maps obtained from the structure-tensor approach by enhancing silhouettes and reducing the noise of planar surfaces in light fields. An edge-based approach is first used for silhouette improvement, refining the estimated disparity values around object edges. Then, a plane detection algorithm based on a seed-growth strategy is used to estimate planar regions, which in turn guide the correction of erroneous disparity values detected at object boundaries. The proposed algorithm shows an average improvement of 98.3% in terms of median angle error for planar surfaces when compared to regular structure-tensor-based methods, outperforming state-of-the-art methods. The proposed framework also presents very competitive results in terms of mean square error between disparity maps and their ground truth, when compared with its counterparts.
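To make the structure-tensor baseline concrete, the sketch below estimates per-pixel line orientation on a single Epipolar Plane Image with numpy. It is only an illustration of the general technique, not the authors' implementation: the 3x3 box smoothing, the axis convention (angular axis first), and the sign convention are simplifying assumptions.

```python
import numpy as np

def structure_tensor_disparity(epi):
    """Estimate per-pixel EPI line orientation via the 2-D structure tensor.

    epi: 2-D array indexed as (angular axis s, spatial axis u).
    Returns the slope du/ds of the dominant local orientation, which for
    a Lambertian scene point equals its disparity.
    """
    gs, gu = np.gradient(epi.astype(float))  # derivatives along s and u

    def box(a):
        # 3x3 box smoothing of the tensor components (edge-padded)
        p = np.pad(a, 1, mode="edge")
        return sum(p[i:i + a.shape[0], j:j + a.shape[1]]
                   for i in range(3) for j in range(3)) / 9.0

    Jss, Juu, Jsu = box(gs * gs), box(gu * gu), box(gs * gu)
    # orientation of the local gradient structure; the EPI line is its
    # perpendicular, so the slope is -tan(theta) under this convention
    theta = 0.5 * np.arctan2(2.0 * Jsu, Juu - Jss)
    return -np.tan(theta)
```

On an EPI containing a single slanted line (one scene point), the returned map approximates the line's slope wherever the image gradient is non-zero.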

2017
Vol 84 (7-8)
Author(s):
Alessandro Vianello
Giulio Manfredi
Maximilian Diebold
Bernd Jähne

Abstract: Disparity estimation using the structure tensor is a local approach to determining orientation in Epipolar Plane Images. A global extension would lead to more precise and robust estimations. In this work, a novel algorithm for 3D reconstruction from linear light fields is proposed. This method uses a modified version of the Progressive Probabilistic Hough Transform to extract orientations from Epipolar Plane Images, making it possible to achieve high-quality disparity maps. To this end, the structure tensor estimates are used to speed up computation and to improve disparity estimation near occlusion boundaries. The new algorithm is evaluated on both synthetic and real light field datasets and compared with classical local disparity estimation techniques based on the structure tensor.
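The core idea of Hough-based orientation extraction on an EPI can be sketched with a plain (non-progressive, non-probabilistic) voting scheme: every edge pixel votes for candidate line slopes, and the slope collecting the most collinear votes wins. This is a deliberately simplified stand-in for the modified Progressive Probabilistic Hough Transform described in the abstract; the slope grid and intercept quantization are assumptions.

```python
import numpy as np

def epi_hough_disparity(edges, slopes):
    """Hough-style voting for the dominant EPI line slope (disparity).

    edges: boolean 2-D array (s, u) marking edge pixels of one EPI.
    slopes: 1-D array of candidate disparities to vote over.
    Returns (best_slope, best_intercept).
    """
    S, U = edges.shape
    ss, uu = np.nonzero(edges)
    best = (0.0, 0.0, -1)
    for d in slopes:
        # each edge pixel votes for the intercept u0 = u - d*s of the
        # line with slope d passing through it, quantized to pixel bins
        u0 = np.round(uu - d * ss).astype(int)
        u0 = u0[(u0 >= 0) & (u0 < U)]
        if u0.size == 0:
            continue
        counts = np.bincount(u0, minlength=U)
        k = int(np.argmax(counts))
        if counts[k] > best[2]:
            best = (float(d), float(k), int(counts[k]))
    return best[0], best[1]
```

A progressive probabilistic variant would sample edge pixels randomly and stop voting for a line once enough support is found, which is where the speed-up from structure-tensor priors comes in.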


Author(s):  
Rui Lourenco
Pedro A.A. Assuncao
Luis M.N. Tavora
Rui Fonseca-Pinto
Sergio M.M. Faria

2021
Vol 13 (3)
pp. 455
Author(s):
Md Nazrul Islam
Murat Tahtali
Mark Pickering

Multispectral polarimetric light field imagery (MSPLFI) contains significant information about a transparent object's distribution over spectra, the inherent properties of its surface and its directional movement, as well as intensity, which together can distinguish its specular reflection. Because multispectral polarimetric signatures are limited to an object's properties, specular pixel detection in a transparent object is a difficult task, as the object lacks its own texture. In this work, we propose a two-fold approach for specular reflection detection (SRD) and specular reflection inpainting (SRI) in a transparent object. Firstly, we capture and decode 18 different transparent objects with specularity signatures using a light field (LF) camera. In our image acquisition system, we place multispectral filters from visible bands and polarimetric filters at different orientations to capture images from multisensory cues containing MSPLFI features. We then propose a change detection algorithm for detecting specular reflected pixels across different spectra. To this end, a Mahalanobis distance is calculated from the mean and the covariance of the polarized and unpolarized images of an object. Secondly, an inpainting algorithm that captures pixel movements among the sub-aperture images of the LF is proposed. Here, a distance matrix for the four-connected neighboring pixels is computed from the common pixel intensities of each color channel of both the polarized and the unpolarized images. The most correlated pixel pattern is selected for inpainting in each sub-aperture image. This process is repeated for all sub-aperture images to complete the final SRI task. The experimental results demonstrate that the proposed two-fold approach significantly improves the accuracy of detection and the quality of inpainting.
Furthermore, the proposed approach improves the SRD metrics (mean F1-score, G-mean, and accuracy of 0.643, 0.656, and 0.981, respectively) and the SRI metrics (mean structural similarity index (SSIM), peak signal-to-noise ratio (PSNR), mean squared error (IMMSE), and mean absolute deviation (MAD) of 0.966, 0.735, 0.073, and 0.226, respectively) for all sub-apertures of the 18 transparent objects in the MSPLFI dataset, as compared with the methods in the literature considered in this paper. Future work will exploit the integration of machine learning for better SRD accuracy and SRI quality.
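The Mahalanobis-distance step of the SRD stage can be illustrated as follows: treat the per-band difference between the unpolarized and polarized responses as a feature vector per pixel, and flag pixels whose distance from the global difference distribution is large. This is a minimal numpy sketch under assumed array shapes and a hypothetical threshold, not the paper's pipeline.

```python
import numpy as np

def specular_mask(unpol, pol, thresh=6.0):
    """Flag candidate specular pixels via the Mahalanobis distance of the
    per-band difference between unpolarized and polarized responses.

    unpol, pol: (H, W, C) arrays of per-band intensities.
    Returns a boolean (H, W) mask.
    """
    diff = (unpol - pol).reshape(-1, unpol.shape[-1]).astype(float)
    mu = diff.mean(axis=0)
    # small ridge term keeps the covariance invertible for flat bands
    cov = np.cov(diff, rowvar=False) + 1e-6 * np.eye(diff.shape[1])
    inv = np.linalg.inv(cov)
    centered = diff - mu
    # squared Mahalanobis distance per pixel: c_i . inv . c_i
    d2 = np.einsum("ij,jk,ik->i", centered, inv, centered)
    return np.sqrt(d2).reshape(unpol.shape[:2]) > thresh
```

Pixels where polarization strongly suppresses the reflected component produce large band differences relative to the background distribution, and so exceed the threshold.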


2014
Vol 571-572
pp. 729-734
Author(s):
Jia Li
Huan Lin
Duo Qiang Zhang
Xiao Lu Xue

The normal vector of a 3D surface is an important differential geometric property over a localized neighborhood, and its abrupt change along the surface directly reflects variations in geometric morphology. Based on this observation, this paper presents a novel edge detection algorithm for 3D point clouds which utilizes the change intensity and change direction of adjacent normal vectors, and is composed of three steps. First, a two-dimensional grid is constructed according to the inherent data acquisition sequence so as to build up the topology of the points. Second, preliminary edge points are retrieved through this topological structure, and the potential directions of edges passing through them are estimated according to the change of normal vectors between adjacent points. Finally, an edge growth strategy is designed to recover missing edge points and connect them into complete edge lines. Experimental results in a real scene demonstrate that the proposed algorithm can extract geometric edges from 3D point clouds robustly, and reduces the dependence of edge quality on user-defined parameters.
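The first two steps above can be sketched on an organized point grid: estimate a normal per grid point from the local tangents, then mark points whose normal deviates sharply from a neighbor's. This is an illustrative numpy sketch with an assumed angle threshold; the paper's edge-direction estimation and growth step are omitted.

```python
import numpy as np

def grid_normals(pts):
    """Per-vertex normals on an organized (H, W, 3) point grid, taken as
    the cross product of the two local grid tangents."""
    du = np.gradient(pts, axis=1)  # tangent along grid columns
    dv = np.gradient(pts, axis=0)  # tangent along grid rows
    n = np.cross(du, dv)
    n /= np.linalg.norm(n, axis=-1, keepdims=True) + 1e-12
    return n

def edge_points(pts, angle_deg=30.0):
    """Mark grid points whose normal differs from a right or lower
    neighbor's by more than angle_deg."""
    n = grid_normals(pts)
    cos_t = np.cos(np.radians(angle_deg))
    edge = np.zeros(pts.shape[:2], dtype=bool)
    # dot products between horizontally and vertically adjacent normals
    edge[:, :-1] |= np.einsum("ijk,ijk->ij", n[:, :-1], n[:, 1:]) < cos_t
    edge[:-1, :] |= np.einsum("ijk,ijk->ij", n[:-1, :], n[1:, :]) < cos_t
    return edge
```

On a "roof" made of two planes meeting at a ridge, only the points along the ridge are flagged, while the flat interiors stay clean.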


1999
Author(s):
ZhuLiang Cai
John Dill
Shahram Payandeh

Abstract: 3D collision detection and modeling techniques can be used to develop haptic rendering schemes for applications such as surgical training, virtual assembly, and games. Based on a fast collision detection algorithm (RAPID) and a 3D object representation, a practical haptic rendering system has been developed. A sub-system determines detailed collision information. Simulation results are presented to demonstrate the practicality of our approach.


2020
Vol 34 (07)
pp. 12095-12103
Author(s):
Yu-Ju Tsai
Yu-Lun Liu
Ming Ouhyoung
Yung-Yu Chuang

This paper introduces a novel deep network for estimating depth maps from a light field image. To utilize the views more effectively and reduce redundancy among them, we propose a view selection module that generates an attention map indicating the importance of each view and its potential for contributing to accurate depth estimation. By exploiting the symmetric property of light field views, we enforce symmetry in the attention map and further improve accuracy. With the attention map, our architecture utilizes all views more effectively and efficiently. Experiments show that the proposed method achieves state-of-the-art accuracy and ranks first on a popular benchmark for disparity estimation from light field images.
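The symmetry constraint on the attention map can be illustrated outside a deep network: tie each view's raw score to that of its mirror about the central view before normalizing, so mirrored views receive identical weights. This toy numpy sketch assumes one angular row of views and score averaging as the tying mechanism; it is not the paper's learned module.

```python
import numpy as np

def symmetric_view_weights(scores):
    """Turn raw per-view scores into attention weights in which a view
    and its mirror about the central view share one weight.

    scores: 1-D array, one raw score per view along an angular row.
    Returns weights that sum to 1.
    """
    s = np.asarray(scores, dtype=float)
    sym = 0.5 * (s + s[::-1])   # tie each view to its mirrored view
    e = np.exp(sym - sym.max())  # numerically stable softmax
    return e / e.sum()
```

In a network, the same effect is obtained by sharing or averaging attention logits across symmetric view pairs before the softmax.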


2013
Vol 846-847
pp. 1092-1097
Author(s):
Xiao Guang Hu
Cheng Qi Cheng
De Ren Li

In this paper, motivated by the retina's strong response to high-contrast visual stimuli and by the mechanism through which visual information is generated in the primary visual cortex, we propose a method for generating saliency maps and detecting ship objects in optical satellite images. The method can detect objects of significant contrast without requiring shape, edge, or other forms of prior knowledge about the objects. In ship detection experiments, the results show that the contrast-based detection method can effectively concentrate on objects with greater contrast and achieve good detection results.
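A center-surround contrast map is one simple way to realize this kind of retina-inspired saliency: each pixel is scored by how much its small-neighborhood mean deviates from the mean of a larger surround. The window radii below are illustrative assumptions; the paper's biologically motivated model is more elaborate.

```python
import numpy as np

def box_mean(a, r):
    """Mean over a (2r+1) x (2r+1) window, edge-padded."""
    p = np.pad(a.astype(float), r, mode="edge")
    acc = np.zeros(a.shape, dtype=float)
    for i in range(2 * r + 1):
        for j in range(2 * r + 1):
            acc += p[i:i + a.shape[0], j:j + a.shape[1]]
    return acc / (2 * r + 1) ** 2

def contrast_saliency(img, r_in=1, r_out=4):
    """Center-surround contrast saliency, normalized to [0, 1]: pixels
    whose small-window mean differs strongly from the surround mean
    (e.g. a bright ship on dark sea) score high."""
    sal = np.abs(box_mean(img, r_in) - box_mean(img, r_out))
    return sal / (sal.max() + 1e-12)
```

A bright ship-like block on a uniform sea background produces a strong saliency response at the ship and near-zero response over open water.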


2021
Author(s):  
Luca Palmieri

Microlens-array-based plenoptic cameras capture the light field in a single shot, enabling new potential applications but also introducing additional challenges. A plenoptic image consists of thousands of microlens images. Estimating the disparity for each microlens makes it possible to render conventional images, to change the perspective and focal settings, and to reconstruct the three-dimensional geometry of the scene. The work includes a blur-aware calibration method to model plenoptic cameras, an optimization method to accurately select the best microlens combination for disparity estimation, an overview of the different types of plenoptic cameras, an analysis of disparity estimation algorithms, and a robust depth estimation approach for light field microscopy. The research led to the creation of a full framework for plenoptic cameras, which contains implementations of the algorithms discussed in the work and datasets of both real and synthetic images for comparison, benchmarking and future research.

