Combining Focus Measures for Three Dimensional Shape Estimation Using Genetic Programming

Author(s):  
Muhammad Tariq Mahmood ◽  
Tae-Sun Choi

Three-dimensional (3D) shape reconstruction is a fundamental problem in machine vision applications. Shape from focus (SFF) is a passive optical method for 3D shape recovery that uses the degree of focus as a cue to estimate depth. In this approach, a single focus measure operator is usually applied to measure the focus quality of each pixel in the image sequence. However, a single focus measure is of limited use for accurately estimating the depth maps of diverse real objects. To address this problem, we develop an optimal composite depth (OCD) function through genetic programming (GP) for accurate depth estimation. The OCD function is constructed by optimally combining primary information extracted with a single focus measure (homogeneous features) or with several (heterogeneous features). The genetically developed composite function is then used to compute the optimal depth map of an object. Its performance is investigated using both synthetic and real-world image sequences. Experimental results demonstrate that the proposed estimator is more accurate than existing SFF methods, and that the heterogeneous function is more effective than the homogeneous one.
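The basic SFF step this abstract builds on — scoring per-pixel focus across the image stack and picking the best-focused frame — can be sketched as follows. The sum-modified-Laplacian is used here purely as one illustrative focus measure (the paper's contribution is to combine several such measures via GP; function names are my own):

```python
import numpy as np

def modified_laplacian(img):
    """Sum-modified-Laplacian (SML), a classic focus measure.
    Larger values mean sharper local detail."""
    ml_y = np.abs(2 * img - np.roll(img, 1, axis=0) - np.roll(img, -1, axis=0))
    ml_x = np.abs(2 * img - np.roll(img, 1, axis=1) - np.roll(img, -1, axis=1))
    return ml_y + ml_x

def depth_from_focus(stack):
    """stack: (n_frames, H, W) image sequence taken at different focal
    planes; returns the per-pixel index of the best-focused frame."""
    fm = np.stack([modified_laplacian(f) for f in stack])
    return fm.argmax(axis=0)
```

In a real SFF pipeline the focus measure is usually aggregated over a small window before the argmax, and the discrete index is refined (e.g., by Gaussian interpolation) into a continuous depth value.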

Author(s):  
AAMIR SAEED MALIK ◽  
TAE-SUN CHOI

There are many factors affecting depth estimation for 3D shape recovery using passive optical methods. In this paper, we consider the effects of noise, source illumination, and texture reflectance on the shape from focus technique. We present a focus measure that performs consistently across varying noise levels, source illumination levels, and texture reflectances. The focus measure is based on an optical transfer function implemented in the Fourier domain, and its results are compared with four other focus measures. Additive Gaussian noise is used for the noise analysis, three illumination levels are considered for source illumination, and three different textures are studied for the reflectance analysis.
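A Fourier-domain focus measure of the general kind described here can be sketched by scoring the fraction of spectral energy above a cutoff frequency — sharper images carry more high-frequency energy. This is an illustrative stand-in, not the paper's specific OTF-based measure; the cutoff value and function name are assumptions:

```python
import numpy as np

def fourier_focus_measure(img, cutoff=0.1):
    """Fraction of spectral energy above normalized radial frequency
    `cutoff`. Returns a value in [0, 1]; higher means better focus."""
    F = np.fft.fftshift(np.fft.fft2(img))
    fy = np.fft.fftshift(np.fft.fftfreq(img.shape[0]))
    fx = np.fft.fftshift(np.fft.fftfreq(img.shape[1]))
    r = np.hypot(*np.meshgrid(fy, fx, indexing="ij"))  # radial frequency
    energy = np.abs(F) ** 2
    return energy[r > cutoff].sum() / energy.sum()
```

Dividing by the total energy makes the measure insensitive to global brightness scaling, which is one way such a measure can stay consistent across illumination levels.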


Sensors ◽  
2019 ◽  
Vol 19 (3) ◽  
pp. 500 ◽  
Author(s):  
Luca Palmieri ◽  
Gabriele Scrofani ◽  
Nicolò Incardona ◽  
Genaro Saavedra ◽  
Manuel Martínez-Corral ◽  
...  

Light field technologies have risen in recent years, and microscopy is a field where they have had a deep impact. The ability to capture spatial and angular information simultaneously, in a single shot, brings several advantages and enables new applications. A common goal in these applications is calculating a depth map to reconstruct the three-dimensional geometry of the scene. Many approaches are applicable, but most cannot achieve high accuracy because of the nature of such images: biological samples are usually poor in features and do not exhibit sharp colors like natural scenes. Under such conditions, standard approaches produce noisy depth maps. In this work, a robust approach is proposed that produces accurate depth maps by exploiting the information recorded in the light field, in particular images produced with a Fourier integral microscope. The proposed approach can be divided into three main parts. First, it creates two cost volumes using different focal cues, namely correspondence and defocus. Second, it applies filtering methods that exploit multi-scale and super-pixel cost aggregation to reduce noise and enhance accuracy. Finally, it merges the two cost volumes and extracts a depth map through multi-label optimization.
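The final step — merging the two cost volumes and reading out a depth label per pixel — can be sketched minimally as below. A winner-take-all argmin is used here as a simple stand-in for the paper's multi-label optimization, and the blending weight is an assumption:

```python
import numpy as np

def merge_and_extract(cost_defocus, cost_corresp, w=0.5):
    """Blend two (n_labels, H, W) cost volumes (lower cost = better
    match) and take the per-pixel label with minimal merged cost."""
    merged = w * cost_defocus + (1 - w) * cost_corresp
    return merged.argmin(axis=0)
```

In practice the merged volume would be smoothed by the multi-scale / super-pixel aggregation the abstract mentions before the label extraction, so that weakly textured regions borrow evidence from their neighborhoods.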


Sensors ◽  
2019 ◽  
Vol 19 (20) ◽  
pp. 4434 ◽  
Author(s):  
Sangwon Kim ◽  
Jaeyeal Nam ◽  
Byoungchul Ko

Depth estimation is a crucial and fundamental problem in computer vision. Conventional methods reconstruct scenes using feature points extracted from multiple images; however, these approaches require multiple images and thus are not easily implemented in many real-time applications. Moreover, the special equipment required by hardware-based approaches using 3D sensors is expensive. Therefore, software-based methods that estimate depth from a single image using machine learning or deep learning are emerging as alternatives. In this paper, we propose an algorithm that generates a depth map in real time from a single image using an optimized lightweight efficient neural network (L-ENet) instead of physical equipment such as an infrared sensor or multi-view camera. Because depth values are continuous and can produce locally ambiguous results, pixel-wise prediction with ordinal depth range classification is applied. In addition, various convolution techniques are applied to extract a dense feature map, and the number of parameters is greatly reduced by shrinking the network layers. With the proposed L-ENet, an accurate depth map can be generated quickly from a single image, producing depth values close to the ground truth with small errors. Experiments confirmed that L-ENet significantly outperforms state-of-the-art algorithms for depth estimation from a single image.
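The ordinal depth range classification idea — treating continuous depth as a sequence of "does depth exceed threshold k?" decisions rather than a flat regression target — can be sketched as an encode/decode pair. The bin edges and function names here are illustrative assumptions, not L-ENet's actual discretization:

```python
import numpy as np

def ordinal_encode(depth, edges):
    """Cumulative ordinal labels for a (H, W) depth map:
    channel k is 1 wherever depth > edges[k]."""
    return (depth[None] > edges[:, None, None]).astype(float)

def ordinal_decode(probs):
    """Decode (K, H, W) per-threshold probabilities into an ordinal
    rank by counting thresholds believed to be exceeded."""
    return (probs > 0.5).sum(axis=0)
```

The appeal of this encoding is that an off-by-one prediction only flips one threshold decision, so the classification loss penalizes errors roughly in proportion to their depth distance.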


2016 ◽  
Vol 10 (2) ◽  
pp. 172-178 ◽  
Author(s):  
Shin Usuki ◽  
Masaru Uno ◽  
Kenjiro T. Miura ◽  
...  

In this paper, we propose a digital shape reconstruction method for micro-sized three-dimensional (3D) objects based on the shape from silhouette (SFS) method, which reconstructs a 3D model from silhouette images taken from multiple viewpoints. In the proposed method, the images used for SFS are depth images acquired with a light-field microscope by digital refocusing (DR) of an image stack along the axial direction. DR can generate refocused images from a single acquired image through an inverse ray-tracing technique using a microlens array, and thus provides fast image stacking at different focal planes. Our method can reconstruct models of micro-sized objects, including edges, convex shapes, and concave surface features such as micro-sized defects, so that damaged structures can be visualized. First, we introduce the SFS method and the light-field microscope for the 3D shape reconstruction required in micro-sized manufacturing. Second, we describe the experimental equipment developed for microscopic image acquisition; depth calibration using a USAF 1951 test target is carried out to convert relative values into actual lengths. Then, 3D modeling techniques including image processing are implemented for digital shape reconstruction. Finally, 3D shape reconstruction results for micro-sized machining tools are shown and discussed.
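The carving step at the heart of SFS — a voxel survives only if its projection falls inside every silhouette — can be sketched with two orthographic views. This toy version ignores the perspective projection and calibration a real setup needs; the view geometry and names are assumptions:

```python
import numpy as np

def visual_hull(sil_top, sil_front):
    """Orthographic shape-from-silhouette sketch. sil_top is a boolean
    (X, Y) silhouette seen along Z, sil_front a boolean (X, Z)
    silhouette seen along Y. Returns the carved (X, Y, Z) voxel grid."""
    X, Y = sil_top.shape
    _, Z = sil_front.shape
    vox = np.ones((X, Y, Z), dtype=bool)
    vox &= sil_top[:, :, None]    # carve along the Z direction
    vox &= sil_front[:, None, :]  # carve along the Y direction
    return vox
```

More views tighten the hull toward the true shape; concavities that never appear on any silhouette boundary cannot be recovered by SFS alone, which is why the paper supplements silhouettes with depth images from digital refocusing.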


Sensors ◽  
2020 ◽  
Vol 20 (13) ◽  
pp. 3718 ◽  
Author(s):  
Hieu Nguyen ◽  
Yuzeng Wang ◽  
Zhaoyang Wang

Single-shot 3D imaging and shape reconstruction have seen a surge of interest due to the continuing evolution of sensing technologies. In this paper, a robust single-shot 3D shape reconstruction technique integrating structured light with deep convolutional neural networks (CNNs) is proposed. The input of the technique is a single fringe-pattern image, and the output is the corresponding depth map for 3D shape reconstruction. The training and validation datasets, with high-quality 3D ground-truth labels, are prepared using a multi-frequency fringe projection profilometry technique. Unlike conventional 3D shape reconstruction methods, which involve complex algorithms and intensive computation to determine phase distributions or pixel disparities as well as the depth map, the proposed approach uses an end-to-end network architecture to transform a 2D image directly into its corresponding 3D depth map without extra processing. Three CNN-based models are adopted for comparison. Furthermore, the accurate structured-light-based 3D imaging dataset used in this paper is made publicly available. Experiments demonstrate the validity and robustness of the proposed technique, which can satisfy various 3D shape reconstruction demands in scientific research and engineering applications.
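The fringe projection profilometry used to prepare the ground-truth labels rests on phase-shifting relations such as the classic three-step formula, where three fringe images shifted by 2π/3 yield the wrapped phase that encodes depth. A minimal sketch (the paper's multi-frequency pipeline adds phase unwrapping on top of this):

```python
import numpy as np

def three_step_phase(i1, i2, i3):
    """Wrapped phase from three fringe images I_n = A + B*cos(phi + d_n)
    with shifts d = (-2*pi/3, 0, +2*pi/3). Returns phi in (-pi, pi]."""
    return np.arctan2(np.sqrt(3.0) * (i1 - i3), 2.0 * i2 - i1 - i3)
```

Because the formula is a ratio of intensity differences, the background term A and the modulation B cancel, which is what makes phase-based depth labels robust to surface reflectivity.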


2020 ◽  
Vol 10 (4) ◽  
pp. 306-315
Author(s):  
Tianting Lai ◽  
Pu Cheng ◽  
Congliao Yan ◽  
Chi Li ◽  
Wenbin Hu ◽  
...  

Abstract A fiber-optic shape sensor based on 7-core fiber Bragg gratings (FBGs) is proposed and experimentally demonstrated. Two-dimensional (2D) and three-dimensional (3D) shape reconstruction are investigated by distinguishing the bending and twisting of a 7-core optical fiber inscribed with FBGs. The curvature and bending orientation can be calculated from the FBG wavelengths of any two of the six outer cores, and the shape in 3D space is computed using analytic geometry. Experiments on 2D and 3D shape sensing were conducted to verify the theoretical principles. The curvature resolution is about 0.1 m⁻¹ for 2D measurement, and the angular error in shape reconstruction is about 1.89° for 3D measurement. The proposed sensing technique based on 7-core FBGs promises high feasibility, stability, and repeatability, especially in its ability to distinguish bending orientation thanks to the six symmetrical cores on the cross-section.
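The two-core recovery of curvature and bending orientation can be sketched from the standard bending-strain model ε_i = −κ·r·cos(φ − θ_i), where r is the core offset radius and θ_i the angular position of core i: two cores give a linear system in (κ cos φ, κ sin φ). The geometry, sign convention, and names here are assumptions, not the paper's exact formulation:

```python
import numpy as np

def curvature_from_two_cores(eps1, eps2, theta1, theta2, r):
    """Recover curvature k and bending orientation phi from the bending
    strains of two outer cores at angles theta1, theta2 on radius r,
    using eps_i = -k * r * cos(phi - theta_i)."""
    # Expand: eps_i = a*cos(theta_i) + b*sin(theta_i)
    # with a = -k*r*cos(phi), b = -k*r*sin(phi).
    A = np.array([[np.cos(theta1), np.sin(theta1)],
                  [np.cos(theta2), np.sin(theta2)]])
    a, b = np.linalg.solve(A, np.array([eps1, eps2]))
    k = np.hypot(a, b) / r
    phi = np.arctan2(-b, -a)
    return k, phi
```

In practice the strains come from FBG wavelength shifts (Δλ/λ proportional to strain), and the two chosen cores must not sit diametrically opposite, or the system becomes singular.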


Electronics ◽  
2018 ◽  
Vol 7 (11) ◽  
pp. 336 ◽  
Author(s):  
Myeongseob Ko ◽  
Donghyun Kim ◽  
Mingi Kim ◽  
Kwangtaek Kim

Depth estimation has been widely studied since the emergence of the Lytro camera. However, skin depth estimation with a Lytro camera is highly sensitive to illumination due to its low image quality; consequently, when three-dimensional reconstruction is attempted, either the skin texture information is not properly expressed or considerable errors occur in the reconstructed shape. To address these issues, we propose a method that enhances texture information and generates illumination-robust images using a deep learning method, conditional generative adversarial networks (CGANs), in order to estimate the depth of the skin surface more accurately. Because it is difficult to estimate the depth of wrinkles with very few features, we build two cost volumes from the differences in pixel intensity and gradient. Furthermore, we demonstrate that our method generates a more precise skin depth map by preserving the skin texture effectively and by reducing noise in the final depth map through a final depth-refinement step (CGAN guidance image filtering), making it suitable for haptic interfaces that are sensitive to small surface noise.
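The two matching cues feeding the cost volumes — intensity difference and gradient difference — can be sketched per pixel pair as below; a full pipeline would evaluate these over a range of candidate disparities to build each volume. Function names and the forward-difference gradient are illustrative assumptions:

```python
import numpy as np

def intensity_and_gradient_costs(ref, cand):
    """Per-pixel matching costs between two (H, W) views:
    absolute intensity difference and absolute gradient difference."""
    ci = np.abs(ref - cand)
    gx = lambda im: np.diff(im, axis=1, append=im[:, -1:])  # forward diff, x
    gy = lambda im: np.diff(im, axis=0, append=im[-1:, :])  # forward diff, y
    cg = np.abs(gx(ref) - gx(cand)) + np.abs(gy(ref) - gy(cand))
    return ci, cg
```

The gradient cue is what lets matching survive on low-contrast wrinkle texture where raw intensities are nearly uniform.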


Photonics ◽  
2021 ◽  
Vol 8 (11) ◽  
pp. 459
Author(s):  
Hieu Nguyen ◽  
Zhaoyang Wang

Accurate three-dimensional (3D) shape reconstruction of objects from a single image is a challenging task, yet it is highly demanded by numerous applications. This paper presents a novel 3D shape reconstruction technique integrating a high-accuracy structured-light method with a deep neural network learning scheme. The proposed approach employs a convolutional neural network (CNN) to transform a color structured-light fringe image into multiple triple-frequency phase-shifted grayscale fringe images, from which the 3D shape can be accurately reconstructed. The robustness of the proposed technique is verified, and it can be a promising 3D imaging tool in future scientific and industrial applications.


2017 ◽  
Vol 34 (8) ◽  
pp. 1763-1781 ◽  
Author(s):  
Haruya Minda ◽  
Norio Tsuda ◽  
Yasushi Fujiyoshi

Abstract This paper describes a Multiangle Snowflake Imager (MSI) designed to capture the pseudo-three-dimensional (3D) shape and the fall velocity of individual snowflakes larger than 1.5 mm in size. Four height-offset line-image scanners estimate fall velocities, and the four-angle silhouettes are used to reconstruct the 3D snowflake shapes. The 3D shape reconstruction is tested using reference objects (spheres, spheroids, cubes, and plates). The four-silhouette method of the MSI improves the representation of the particle shape and volume compared to two-silhouette methods, such as the two-dimensional video disdrometer (2DVD). The volume (equivolumetric diameters) of snowflakes estimated by the four-silhouette method is approximately 44% (13%) smaller than that estimated by the two-silhouette method. The ability of the imager to measure the fall velocity and particle size distributions based on the silhouette width and the equivolumetric diameter of 3D-shaped particles is verified via a comparison with the 2DVD in three snowfall events.
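The equivolumetric diameter used here for the size comparison is simply the diameter of the sphere with the same volume as the reconstructed particle, D = (6V/π)^(1/3):

```python
import numpy as np

def equivolumetric_diameter(volume):
    """Diameter of the sphere having the same volume as the particle
    (same length unit as volume^(1/3))."""
    return (6.0 * volume / np.pi) ** (1.0 / 3.0)
```

Because the two-silhouette hull overestimates particle volume, its equivolumetric diameters come out correspondingly larger, consistent with the ~44% volume (~13% diameter) differences reported above.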


2021 ◽  
Vol 2127 (1) ◽  
pp. 012030
Author(s):  
E V Shmatko ◽  
V V Pinchukov ◽  
A D Bogachev ◽  
A Yu Poroykov

Abstract Optical methods for deformation diagnostics and surface shape measurement are widely used in scientific research and industry. Most of these methods are based on triangulating sets of two-dimensional image points that correspond to the same three-dimensional points of the object in space, and various algorithms are used to find such points. This work considers the possibility of finding them by cross-correlation processing of digital images. Algorithms based on computing the correlation function are widely employed in PIV, a popular flow diagnostic method. The cameras of a stereo system for surface shape measurement can be widely spaced, and their tilt angles relative to the surface can differ significantly. Consequently, the images from the cameras cannot be processed directly with the correlation function, which is not invariant to rotation. To solve this problem, fiducial markers are used to obtain an initial estimate of the displacement of the images relative to each other. This approach makes it possible to apply correlation processing successfully to stereo-system images with a large stereo base.
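The PIV-style correlation search the abstract refers to can be sketched as a brute-force normalized cross-correlation: slide a template window over a search region and keep the offset with the highest correlation score. Real implementations use FFT-based correlation and subpixel peak fitting; the names here are mine:

```python
import numpy as np

def ncc_displacement(template, search):
    """Return the (dy, dx) offset in `search` where `template` matches
    best under zero-mean normalized cross-correlation."""
    th, tw = template.shape
    sh, sw = search.shape
    t = template - template.mean()
    best_score, best_off = -np.inf, (0, 0)
    for dy in range(sh - th + 1):
        for dx in range(sw - tw + 1):
            win = search[dy:dy + th, dx:dx + tw]
            w = win - win.mean()
            denom = np.sqrt((t ** 2).sum() * (w ** 2).sum())
            score = (t * w).sum() / denom if denom > 0 else -1.0
            if score > best_score:
                best_score, best_off = score, (dy, dx)
    return best_off
```

The zero-mean normalization makes the score robust to brightness and contrast differences between cameras, but not to rotation — which is exactly why the fiducial-marker pre-alignment described above is needed for widely spaced stereo views.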

