Estimating 3D Surface Depth Based on Depth-of-Field Image Fusion

Image Fusion ◽  
10.5772/14661 ◽  
2011 ◽  
Author(s):  
Marcin Denkowski ◽  
Pawel Mikolajczak ◽  
Michal Chlebiej

2021 ◽  
Author(s):  
Pol Martínez ◽  
Carlos Bermudez ◽  
Roger Artigas ◽  
Guillem Carles

2014 ◽  
Vol 900 ◽  
pp. 547-553 ◽  
Author(s):  
Xin Xu ◽  
Rong Wu Wang

It is difficult to capture a completely sharp image of a nonwoven web that is thicker than the depth of field of a light microscope, which leads to data loss and test errors. In this paper, a region-based image fusion algorithm based on the natural boundaries of fibers was proposed. First, one-pixel-wide boundaries were extracted during a point-based image fusion process. Then, the image fusion regions were formed by diffusing sharpness from the source points with the highest local gradient sharpness within the fiber boundaries. Finally, a fused, sharp image of the nonwoven web was constructed by replacing each region with the corresponding region of highest sharpness gradient from the series of images captured at different focus positions of the light microscope.
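A minimal sketch of the region-based fusion idea described above, assuming the fiber boundaries have already been converted into an integer label map covering the image. For each region, the focus slice with the highest mean gradient magnitude (a simple sharpness proxy) supplies the pixels. All function and variable names are illustrative, not taken from the paper's implementation.

```python
import numpy as np


def gradient_sharpness(gray):
    """Gradient magnitude as a simple per-pixel sharpness measure."""
    gy, gx = np.gradient(gray.astype(np.float64))
    return np.sqrt(gx**2 + gy**2)


def fuse_by_regions(stack, regions):
    """Fuse a focus stack (list of 2-D grayscale arrays) region by region."""
    sharpness = [gradient_sharpness(img) for img in stack]
    fused = np.zeros_like(stack[0], dtype=np.float64)
    for label in np.unique(regions):
        mask = regions == label
        # Pick the slice whose pixels inside this region are sharpest on average.
        best = int(np.argmax([s[mask].mean() for s in sharpness]))
        fused[mask] = stack[best][mask]
    return fused


# Example with synthetic data: three focus slices and a two-region label map.
rng = np.random.default_rng(0)
stack = [rng.random((64, 64)) for _ in range(3)]
regions = np.zeros((64, 64), dtype=int)
regions[:, 32:] = 1
result = fuse_by_regions(stack, regions)
```

In the paper, the regions are obtained by sharpness diffusion within fiber boundaries rather than supplied directly; the sketch only illustrates the final region-wise selection step.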


Sensors ◽  
2019 ◽  
Vol 19 (6) ◽  
pp. 1409 ◽  
Author(s):  
Hang Liu ◽  
Hengyu Li ◽  
Jun Luo ◽  
Shaorong Xie ◽  
Yu Sun

Multi-focus image fusion is a technique for obtaining an all-in-focus image, in which all objects are in focus, to extend the limited depth of field (DoF) of an imaging system. Unlike traditional RGB-based methods, this paper presents a new multi-focus image fusion method assisted by depth sensing. In this work, a depth sensor is used together with a colour camera to capture images of a scene. A graph-based segmentation algorithm segments the depth map from the depth sensor, and the segmented regions guide a focus algorithm to locate in-focus image blocks among the multi-focus source images and construct the reference all-in-focus image. Five test scenes and six evaluation metrics were used to compare the proposed method with representative state-of-the-art algorithms. Experimental results quantitatively demonstrate that the method outperforms existing methods in both speed and quality (in terms of comprehensive fusion metrics). The generated images can potentially be used as reference all-in-focus images.
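A minimal sketch of depth-guided multi-focus fusion, assuming a depth map registered to the colour images is available. The paper uses a graph-based segmentation of the depth map; here the depth map is simply quantised into bands as a stand-in, and each band takes its pixels from the source image with the highest variance-of-Laplacian focus measure inside that band. All names are illustrative assumptions, not the paper's implementation.

```python
import numpy as np
from scipy.ndimage import laplace


def focus_measure(gray, mask):
    """Variance of the Laplacian restricted to the masked pixels."""
    lap = laplace(gray.astype(np.float64))
    return lap[mask].var()


def depth_guided_fusion(stack, depth, n_bands=4):
    """Fuse grayscale focus slices using depth bands as fusion regions."""
    # Quantise the depth map into bands (placeholder for graph-based segmentation).
    edges = np.quantile(depth, np.linspace(0.0, 1.0, n_bands + 1))
    bands = np.clip(np.digitize(depth, edges[1:-1]), 0, n_bands - 1)
    fused = np.zeros_like(stack[0], dtype=np.float64)
    for b in range(n_bands):
        mask = bands == b
        if not mask.any():
            continue
        # The slice that is sharpest inside this depth band supplies its pixels.
        best = int(np.argmax([focus_measure(img, mask) for img in stack]))
        fused[mask] = stack[best][mask]
    return fused


# Example with synthetic data: a tilted depth plane and three focus slices.
rng = np.random.default_rng(1)
depth = np.tile(np.linspace(0.0, 1.0, 64), (64, 1))
stack = [rng.random((64, 64)) for _ in range(3)]
result = depth_guided_fusion(stack, depth)
```

Selecting per-region rather than per-pixel is what lets the depth segmentation suppress the blocking and halo artefacts that pixel-wise focus measures tend to produce near object boundaries.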


2013 ◽  
Author(s):  
Boris Ajdin ◽  
Timo Ahonen
