Illumination-Insensitive Skin Depth Estimation from a Light-Field Camera Based on CGANs toward Haptic Palpation

Electronics ◽  
2018 ◽  
Vol 7 (11) ◽  
pp. 336 ◽  
Author(s):  
Myeongseob Ko ◽  
Donghyun Kim ◽  
Mingi Kim ◽  
Kwangtaek Kim

Depth estimation has been widely studied since the emergence of the Lytro camera. However, skin depth estimation with a Lytro camera is highly sensitive to illumination because of the camera's low image quality; when three-dimensional reconstruction is attempted, either the skin texture information is not properly expressed or considerable numbers of errors occur in the reconstructed shape. To address these issues, we propose a method that enhances texture information and generates illumination-robust images using a deep learning method, conditional generative adversarial networks (CGANs), in order to estimate the depth of the skin surface more accurately. Because it is difficult to estimate the depth of wrinkles with very few distinguishing features, we build two cost volumes, one from differences in pixel intensity and one from differences in gradient. Furthermore, we demonstrate that our method generates a skin depth map more precisely by preserving the skin texture effectively, and by reducing noise in the final depth map through a final depth-refinement step (CGAN guidance image filtering), making the result suitable for a haptic interface that is sensitive to small surface noise.
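The two-cost-volume idea can be illustrated with a minimal sketch: for each candidate disparity, compare a reference view against a shifted second view using both an intensity cue and a gradient cue, then pick the per-pixel disparity with the lowest combined cost. The function and array names below are hypothetical, and winner-take-all selection stands in for the paper's full refinement pipeline.

```python
import numpy as np

def build_cost_volumes(ref, target, max_disp):
    """Two cost volumes between a reference view and a shifted target view:
    one from absolute intensity differences, one from gradient differences."""
    h, w = ref.shape
    cost_intensity = np.zeros((max_disp, h, w))
    cost_gradient = np.zeros((max_disp, h, w))
    grad_ref = np.gradient(ref, axis=1)
    for d in range(max_disp):
        shifted = np.roll(target, -d, axis=1)            # undo a horizontal shift of d pixels
        grad_shift = np.gradient(shifted, axis=1)
        cost_intensity[d] = np.abs(ref - shifted)        # intensity cue
        cost_gradient[d] = np.abs(grad_ref - grad_shift) # gradient cue
    return cost_intensity, cost_gradient

np.random.seed(0)
ref = np.random.rand(32, 32)
target = np.roll(ref, 3, axis=1)                 # synthetic view shifted by 3 pixels
ci, cg = build_cost_volumes(ref, target, 8)
depth = np.argmin(ci + cg, axis=0)               # winner-take-all over combined costs
```

On this synthetic pair the combined cost is exactly zero at disparity 3, so the recovered map is constant; real light-field data needs the filtering and refinement steps the abstract describes.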

Author(s):  
Muhammad Tariq Mahmood ◽  
Tae-Sun Choi

Three-dimensional (3D) shape reconstruction is a fundamental problem in machine vision applications. Shape from focus (SFF) is a passive optical method for 3D shape recovery that uses the degree of focus as a cue to estimate 3D shape. In this approach, a single focus measure operator is usually applied to measure the focus quality of each pixel in an image sequence. However, a single focus measure is of limited use for accurately estimating depth maps of diverse types of real objects. To address this problem, we introduce the development of an optimal composite depth (OCD) function through genetic programming (GP) for accurate depth estimation. The OCD function is developed by optimally combining primary information extracted using one focus measure (homogeneous features) or several (heterogeneous features). The genetically developed composite function is then used to compute the optimal depth map of objects. The performance of this function is investigated using both synthetic and real-world image sequences. Experimental results demonstrate that the proposed estimator is more accurate than existing SFF methods. Further, the heterogeneous function is found to be more effective than the homogeneous function.
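The single-focus-measure baseline that the OCD function improves on can be sketched briefly: apply one focus measure (here the sum-modified-Laplacian, a common SFF choice) to every frame of the focal stack and take, per pixel, the index of the sharpest frame as depth. The GP-evolved composite function itself is not reproduced; this only shows the cue it combines.

```python
import numpy as np

def modified_laplacian(img):
    """Sum-modified-Laplacian (SML), a standard SFF focus measure."""
    dxx = np.abs(2 * img - np.roll(img, 1, axis=0) - np.roll(img, -1, axis=0))
    dyy = np.abs(2 * img - np.roll(img, 1, axis=1) - np.roll(img, -1, axis=1))
    return dxx + dyy

def depth_from_focus(stack):
    """Per-pixel depth = index of the frame with the maximum focus response."""
    focus = np.stack([modified_laplacian(f) for f in stack])
    return np.argmax(focus, axis=0)

# Toy focal stack: frame 2 carries texture (in focus), the rest are flat.
rng = np.random.default_rng(0)
stack = np.zeros((5, 16, 16))
stack[2] = rng.random((16, 16))
depth = depth_from_focus(stack)
```

On real objects with weak texture, the response of a single measure like SML becomes unreliable, which is exactly the gap the composite (heterogeneous) function targets.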


Sensors ◽  
2019 ◽  
Vol 19 (3) ◽  
pp. 500 ◽  
Author(s):  
Luca Palmieri ◽  
Gabriele Scrofani ◽  
Nicolò Incardona ◽  
Genaro Saavedra ◽  
Manuel Martínez-Corral ◽  
...  

Light field technologies have risen in prominence in recent years, and microscopy is a field where this technology has had a deep impact. The ability to capture spatial and angular information at the same time, in a single shot, brings several advantages and enables new applications. A common goal in these applications is the calculation of a depth map to reconstruct the three-dimensional geometry of the scene. Many approaches are applicable, but most cannot achieve high accuracy because of the nature of such images: biological samples are usually poor in features and do not exhibit the sharp colors of natural scenes. Under these conditions, standard approaches produce noisy depth maps. In this work, a robust approach is proposed in which accurate depth maps are produced by exploiting the information recorded in the light field, in particular images produced with a Fourier integral microscope. The proposed approach can be divided into three main parts. First, it creates two cost volumes using different focal cues, namely correspondence and defocus. Second, it applies filtering methods that exploit multi-scale and super-pixel cost aggregation to reduce noise and enhance accuracy. Finally, it merges the two cost volumes and extracts a depth map through multi-label optimization.
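The aggregate-then-merge structure of the three parts can be sketched as follows. A plain box filter stands in for the multi-scale / super-pixel aggregation, and winner-take-all selection stands in for the multi-label optimization; the weight `w` and all function names are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def aggregate(cost, radius=1):
    """Box-filter each slice of a cost volume (a simple stand-in for
    multi-scale / super-pixel cost aggregation)."""
    k = 2 * radius + 1
    padded = np.pad(cost, ((0, 0), (radius, radius), (radius, radius)), mode="edge")
    out = np.zeros_like(cost)
    for dy in range(k):
        for dx in range(k):
            out += padded[:, dy:dy + cost.shape[1], dx:dx + cost.shape[2]]
    return out / (k * k)

def merge_and_extract(cost_corr, cost_defocus, w=0.5):
    """Weighted merge of the correspondence and defocus cost volumes,
    then winner-take-all depth extraction."""
    merged = w * aggregate(cost_corr) + (1 - w) * aggregate(cost_defocus)
    return np.argmin(merged, axis=0)

# Toy volumes: label 2 is cheapest everywhere in both cues.
cost = np.ones((5, 8, 8))
cost[2] = 0.0
labels = merge_and_extract(cost, cost)
```

In the actual method the merged volume feeds a multi-label optimizer, which enforces spatial smoothness that winner-take-all cannot.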


Sensors ◽  
2019 ◽  
Vol 19 (7) ◽  
pp. 1708 ◽  
Author(s):  
Daniel Stanley Tan ◽  
Chih-Yuan Yao ◽  
Conrado Ruiz ◽  
Kai-Lung Hua

Depth is a valuable piece of information for perception tasks such as robot grasping, obstacle avoidance, and navigation, which are essential for developing smart homes and smart cities. However, not all applications have the luxury of depth sensors or multiple cameras to obtain depth information. In this paper, we tackle the problem of estimating per-pixel depth from a single image. Inspired by recent work on generative neural network models, we formulate depth estimation as a generative task in which we synthesize an image of the depth map from a single red, green, and blue (RGB) input image. We propose a novel generative adversarial network with an encoder-decoder generator built from residual transposed-convolution blocks and trained with an adversarial loss. Quantitative and qualitative experimental results demonstrate the effectiveness of our approach over several existing depth estimation methods.
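The abstract does not give the exact training objective, but conditional GANs for image-to-image depth synthesis are commonly trained with an adversarial term plus a weighted reconstruction term of roughly this form (with $x$ the RGB input, $y$ the ground-truth depth map, $G$ the generator, $D$ the discriminator, and $\lambda$ an assumed reconstruction weight):

```latex
\min_G \max_D \;
  \mathbb{E}_{x,y}\!\left[\log D(x, y)\right]
+ \mathbb{E}_{x}\!\left[\log\bigl(1 - D(x, G(x))\bigr)\right]
+ \lambda \, \mathbb{E}_{x,y}\!\left[\lVert y - G(x) \rVert_1\right]
```

The adversarial term pushes $G(x)$ toward the distribution of real depth maps, while the $L_1$ term anchors it to the paired ground truth.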


IEEE Access ◽  
2021 ◽  
pp. 1-1
Author(s):  
Xin Yang ◽  
Qingling Chang ◽  
Xinglin Liu ◽  
Siyuan He ◽  
Yan Cui

2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Barbara Helena Barcaro Machado ◽  
Ivy Dantas De Melo E. Silva ◽  
Walter Marou Pautrat ◽  
James Frame ◽  
Mohammad Najlah

Measuring outcomes of skin treatments relies either on patients' subjective feedback or on scale-based peer assessments. Three-dimensional stereophotogrammetry aims to accurately quantify skin microtopography before and after treatment. The objective of this study was to compare the accuracy of stereophotogrammetry with a scale-based peer evaluation in assessing topographical changes to the skin surface following laser treatment. A 3D stereophotogrammetry (3D SPM) system photographed the skin surface of 48 patients with facial wrinkles or scars before and three months after laser resurfacing, followed immediately by topical application of vitamin C. The software measured changes in skin roughness, wrinkle depth, and scar volume. Images were presented to three observers, each independently scoring cutaneous improvement according to the Investigator Global Aesthetic Improvement Scale (IGAIS). A trend reflecting skin/scar improvement was reported by both the 3D SPM measurements and the raters, and the percentage of topographical change given by the raters matched the 3D SPM findings. Agreement was highest when observers analysed 3D images. However, observers overestimated skin improvement in a non-treatment control, whereas 3D SPM was precise in detecting the absence of intervention. This study confirmed a direct correlation between the IGAIS clinical scale and 3D SPM, and confirmed the efficacy and accuracy of the latter in assessing cutaneous microtopography changes in response to laser treatment.


1999 ◽  
Vol 391 ◽  
pp. 249-292 ◽  
Author(s):  
ALEXANDER Z. ZINCHENKO ◽  
MICHAEL A. ROTHER ◽  
ROBERT H. DAVIS

A three-dimensional boundary-integral algorithm for interacting deformable drops in Stokes flow is developed. The algorithm is applicable to very large deformations and extreme cases, including cusped interfaces and drops closely approaching breakup. A new, curvatureless boundary-integral formulation is used, containing only the normal vectors, which are usually much less sensitive than is the curvature to discretization errors. A proper regularization makes the method applicable to small surface separations and arbitrary λ, where λ is the ratio of the viscosities of the drop and medium. The curvatureless form eliminates the difficulty with the concentrated capillary force inherent in two-dimensional cusps and allows simulation of three-dimensional drop/bubble motions with point and line singularities, while the conventional form can only handle point singularities. A combination of the curvatureless form and a special, passive technique for adaptive mesh stabilization allows three-dimensional simulations for high aspect ratio drops closely approaching breakup, using highly stretched triangulations with fixed topology. The code is applied to study relative motion of two bubbles or drops under gravity for moderately high Bond numbers ℬ, when cusping and breakup are typical. The deformation-induced capture efficiency of bubbles and low-viscosity drops is calculated and found to be in reasonable agreement with available experiments of Manga & Stone (1993, 1995b). Three-dimensional breakup of the smaller drop due to the interaction with a larger one for λ=O(1) is also considered, and the algorithm is shown to accurately simulate both the primary breakup moment and the volume partition by extrapolation for moderately supercritical conditions. Calculations of the breakup efficiency suggest that breakup due to interactions is significant in a sedimenting emulsion with narrow size distribution at λ=O(1) and ℬ ≥ 5–10. A combined capture and breakup phenomenon, when the smaller drop starts breaking without being released from the dimple formed on the larger one, is also observed in the simulations. A general classification of possible modes of two-drop interactions for λ=O(1) is made.


2021 ◽  
Author(s):  
◽  
Alistair Stronach

New Zealand's capital city of Wellington lies in an area of high seismic risk, which is further increased by the sedimentary basin beneath the Central Business District (CBD). Ground motion data and damage patterns from the 2013 Cook Strait and 2016 Kaikōura earthquakes indicate that two- and three-dimensional amplification effects due to the Wellington sedimentary basin may be significant. These effects are not currently accounted for in the New Zealand Building Code. In order for this to be done, three-dimensional simulations of earthquake shaking need to be undertaken, which requires detailed knowledge of basin geometry. This is currently lacking, primarily because of a dearth of deep boreholes in the CBD area, particularly in Thorndon and Pipitea where sediment depths are estimated to be greatest.

A new basin depth map for the Wellington CBD has been created by conducting a gravity survey using a modern Scintrex CG-6 gravity meter. Across the study area, 519 new high precision gravity measurements were made and a residual anomaly map created, showing a maximum amplitude anomaly of -6.2 mGal with uncertainties better than ±0.1 mGal. Thirteen two-dimensional geological profiles were modelled to fit the anomalies, then combined with existing borehole constraints to construct the basin depth map.

Results indicate on average greater depths than in existing models, particularly in Pipitea where depths are interpreted to be as great as 450 m, a difference of 250 m. Within 1 km of shore, depths are interpreted to increase further, to 600 m. The recently discovered basin-bounding Aotea Fault is resolved in the gravity data, where the basement is offset by up to 13 m, gravity anomaly gradients up to 8 mGal/km are observed, and possible multiple fault strands identified. A secondary strand of the Wellington Fault is also identified in the north of Pipitea, where gravity anomaly gradients up to 18 mGal/km are observed.
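The link between a residual gravity anomaly and sediment thickness can be illustrated, to first order, with the standard Bouguer slab approximation h = Δg / (2πGΔρ). The density contrast used below is an assumed illustrative value, not one taken from the study, and the study's 2D profile modelling with borehole constraints is considerably more accurate than this single-slab estimate.

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
delta_g = -6.2e-5    # maximum residual anomaly, -6.2 mGal expressed in m/s^2
delta_rho = -570.0   # ASSUMED sediment-basement density contrast, kg/m^3

# Bouguer slab approximation: anomaly of an infinite slab of thickness h
depth_m = delta_g / (2 * math.pi * G * delta_rho)
```

With these numbers the slab estimate comes out at roughly 260 m of sediment, i.e. the right order of magnitude for the hundreds-of-metres depths reported, while underestimating the modelled 450 m because a real basin is finite and laterally bounded.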


2021 ◽  
Vol 8 ◽  
Author(s):  
Qi Zhao ◽  
Ziqiang Zheng ◽  
Huimin Zeng ◽  
Zhibin Yu ◽  
Haiyong Zheng ◽  
...  

Underwater depth prediction plays an important role in underwater vision research. Because of the complex underwater environment, it is extremely difficult and expensive to obtain underwater datasets with reliable depth annotation. Thus, underwater depth map estimation in a data-driven manner is still a challenging task. To tackle this problem, we propose an end-to-end system including two different modules for underwater image synthesis and underwater depth map estimation, respectively. The former module aims to translate hazy in-air RGB-D images to multi-style realistic synthetic underwater images while retaining the objects and the structural information of the input images. We then construct a semi-real RGB-D underwater dataset using the synthesized underwater images and the original corresponding depth maps. We conduct supervised learning to perform depth estimation through the pseudo-paired underwater RGB-D images. Comprehensive experiments have demonstrated that the proposed method can generate multiple realistic underwater images with high fidelity, which can be applied to enhance the performance of monocular underwater image depth estimation. Furthermore, the trained depth estimation model can be applied to real underwater image depth map estimation. We will release our code and experimental settings at https://github.com/ZHAOQIII/UW_depth.
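The synthesis half of such a pipeline is often grounded in the classic underwater image-formation model, I = J·e^(-βd) + B·(1 - e^(-βd)), which turns an in-air RGB image J and its depth map d into an underwater-looking image. The sketch below uses that model with assumed per-channel attenuation (β) and backscatter (B) values; it is an illustration of the physics, not the paper's learned translation module.

```python
import numpy as np

def synthesize_underwater(rgb, depth, beta, backscatter):
    """Crude underwater rendering of an in-air RGB-D pair using the
    attenuation + backscatter model I = J*exp(-beta*d) + B*(1 - exp(-beta*d)).
    beta and backscatter are ASSUMED per-channel water parameters."""
    t = np.exp(-beta[None, None, :] * depth[..., None])   # per-channel transmission
    return rgb * t + backscatter[None, None, :] * (1 - t)

rgb = np.full((4, 4, 3), 0.8)      # flat in-air image
depth = np.full((4, 4), 5.0)       # 5 m everywhere
img = synthesize_underwater(rgb, depth,
                            beta=np.array([0.8, 0.3, 0.1]),        # red attenuates fastest
                            backscatter=np.array([0.05, 0.3, 0.35]))
```

Because red light attenuates fastest, the rendered pixels come out blue-green, which is the color shift a learned synthesis module reproduces with style variation; pairing such images with the original depth maps yields the pseudo-paired training data described above.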

