depth from focus
Recently Published Documents


TOTAL DOCUMENTS: 74 (five years: 12)

H-INDEX: 12 (five years: 2)

Author(s):  
Yoichi Matsubara ◽  
Keiichiro Shirai ◽  
Yuya Ito ◽  
Kiyoshi Tanaka

Abstract: Depth-from-focus methods estimate depth from a set of images taken with different focus settings. We recently proposed a method that uses the ratio between the luminance value of a target pixel and the mean value of its neighboring pixels, a ratio that follows a Poisson distribution. Despite its good performance, the method requires a large amount of memory and computation time: it must store focus measure values for each depth and each window radius on a pixel-wise basis, and the filtering used to compute the mean value, performed twice, couples neighboring pixels too strongly to allow pixel-wise parallelization. In this paper, we propose an approximate calculation that gives almost the same results with a single filtering operation and enables pixel-wise parallelization. This pixel-wise processing does not require the aforementioned focus measure values to be stored, which reduces memory consumption. Additionally, exploiting the pixel-wise processing, we propose a method for determining the processing window size that improves noise tolerance and depth estimation in texture-less regions. Experiments show that the new method estimates depth values more accurately in a much shorter time.
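A minimal sketch of this style of depth-from-focus pipeline (not the authors' exact formulation: the ratio-based focus measure, the uniform-filter window, and the running argmax are illustrative assumptions) shows how a single filtering pass per slice plus an incremental maximum avoids storing per-depth focus measures:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def depth_from_focus(stack, radius=3, eps=1e-6):
    """Estimate a depth-index map from a focal stack.

    stack: float array of shape (num_slices, H, W), one grayscale image
    per focus setting.  The focus measure below is a hypothetical
    stand-in for the paper's Poisson-distributed luminance ratio:
    the deviation of each pixel from its local mean, computed with a
    single filtering pass per slice.
    """
    best_measure = np.full(stack.shape[1:], -np.inf)
    depth = np.zeros(stack.shape[1:], dtype=np.int32)
    for i, img in enumerate(stack):
        local_mean = uniform_filter(img, size=2 * radius + 1)
        # Ratio of pixel value to neighborhood mean; values far from 1
        # indicate sharp, in-focus texture.
        measure = np.abs(img / (local_mean + eps) - 1.0)
        # Running argmax: per-slice measures never need to be stored,
        # which is the memory saving the abstract describes.
        update = measure > best_measure
        best_measure[update] = measure[update]
        depth[update] = i
    return depth
```

Because each pixel's update depends only on its own running maximum, the inner loop body is trivially parallelizable per pixel, which is the property the abstract attributes to the single-filtering approximation.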


Author(s):  
Sabato Ceruso ◽  
Sergio Bonaque-González ◽  
Ricardo Oliva-García ◽  
José Manuel Rodríguez-Ramos

Micron ◽  
2021 ◽  
Vol 144 ◽  
pp. 103035
Author(s):  
Yan He ◽  
Na Deng ◽  
Binjie Xin ◽  
Lulu Liu

Author(s):  
Sherzod Salokhiddinov ◽  
Seungkyu Lee

Traditional depth-from-focus (DFF) methods obtain a depth image from a set of differently focused color images by detecting the in-focus region in each image through the sharpness of the observed color texture. However, estimating the sharpness of an arbitrary color texture is not trivial, especially when an image has limited color or intensity variation. Recent deep-learning-based DFF approaches have shown that collectively estimating sharpness across a set of focus images, learned from a large body of training samples, outperforms traditional DFF on challenging target objects with textureless or glaring surfaces. In this article, we propose a deep spatial-focal convolutional neural network that encodes the correlations between consecutive focused images fed to the network in order. In this way, the network learns the pattern of blur changes at each image pixel from a volumetric input spanning the spatial-focal three-dimensional space. Extensive quantitative and qualitative evaluations on three existing public data sets show that our proposed method outperforms prior methods in depth estimation.
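A hypothetical PyTorch sketch of a spatial-focal network of this kind (the layer sizes, the soft-argmax head, and the name SpatialFocalNet are assumptions, not the paper's architecture): 3D convolutions slide over both the spatial and focal dimensions of an ordered focal stack, so the learned filters can respond to how blur evolves from slice to slice.

```python
import torch
import torch.nn as nn

class SpatialFocalNet(nn.Module):
    """Minimal sketch of a spatial-focal 3D CNN.

    Input: (batch, 3, num_slices, H, W), a color focal stack whose
    slices are ordered by focus setting, so 3D convolutions can encode
    blur changes along the focal dimension.
    """
    def __init__(self, channels=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv3d(3, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )
        # 1x1x1 head scores each focal slice at each pixel.
        self.head = nn.Conv3d(channels, 1, kernel_size=1)

    def forward(self, stack):
        feat = self.encoder(stack)            # (B, C, S, H, W)
        score = self.head(feat).squeeze(1)    # (B, S, H, W)
        # Soft-argmax over slice scores gives a differentiable
        # per-pixel depth index.
        weight = torch.softmax(score, dim=1)
        idx = torch.arange(score.shape[1], dtype=score.dtype,
                           device=score.device).view(1, -1, 1, 1)
        return (weight * idx).sum(dim=1)      # (B, H, W) depth indices
```

Feeding the slices in focus order, as the abstract emphasizes, is what lets the kernels along the focal axis pick up the monotone sharpen-then-blur pattern at each pixel.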


2020 ◽  
Vol 10 (23) ◽  
pp. 8522
Author(s):  
Sherzod Salokhiddinov ◽  
Seungkyu Lee

Estimating the 3D shape of a scene from a set of differently focused images is a practical approach to 3D reconstruction with color cameras. However, depth reconstructed with existing depth-from-focus (DFF) methods still suffers from poor quality in textureless regions and at object boundaries. In this paper, we propose an improved depth-from-focus method that iteratively refines the 3D shape estimated from a uniformly focused image set (UFIS), investigating appearance changes in the spatial and frequency domains at each iteration. To achieve sub-frame accuracy in depth estimation, the optimal location of the focused frame is found by fitting a polynomial curve to the dissimilarity measurements. To avoid wrong depth values in texture-less regions, we build a confidence map and use it to identify erroneous depth estimates. We evaluated our method on public data sets and on our own data sets captured with different types of devices, such as smartphones and medical and ordinary color cameras. Quantitative and qualitative evaluations on various test image sets show the promising performance of the proposed method in depth estimation.
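The sub-frame refinement step can be illustrated with a small sketch (an illustrative quadratic fit through the discrete peak and its two neighbors; the paper's actual polynomial fitting procedure may differ):

```python
import numpy as np

def subframe_peak(measures):
    """Refine an integer focus-frame index to sub-frame accuracy.

    measures: 1D array of per-frame focus values for one pixel, with
    higher meaning better focused.  A quadratic is fit through the
    discrete peak and its two neighbors, and the parabola's vertex
    gives the fractional frame index.
    """
    i = int(np.argmax(measures))
    if i == 0 or i == len(measures) - 1:
        return float(i)  # peak at the boundary: no neighbors to fit
    x = np.array([i - 1, i, i + 1], dtype=float)
    # np.polyfit returns coefficients [a, b, c] of a*x^2 + b*x + c;
    # the vertex of the parabola lies at x = -b / (2a).
    a, b, _ = np.polyfit(x, measures[i - 1:i + 2], 2)
    return -b / (2.0 * a) if a != 0 else float(i)
```

For example, subframe_peak(np.array([0.1, 0.5, 0.9, 0.7, 0.2])) returns about 2.17: the discrete peak is at frame 2, and the vertex is pulled toward frame 3 because its measure (0.7) exceeds that of frame 1 (0.5).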


2020 ◽  
Vol 29 ◽  
pp. 1045-1060 ◽  
Author(s):  
Hae-Gon Jeon ◽  
Jaeheung Surh ◽  
Sunghoon Im ◽  
In So Kweon

Author(s):  
Xinqing Guo ◽  
Zhang Chen ◽  
Siyuan Li ◽  
Yang Yang ◽  
Jingyi Yu
