Vari-Focal Light Field Camera for Extended Depth of Field

Micromachines ◽  
2021 ◽  
Vol 12 (12) ◽  
pp. 1453
Author(s):  
Hyun Myung Kim ◽  
Min Seok Kim ◽  
Sehui Chang ◽  
Jiseong Jeong ◽  
Hae-Gon Jeon ◽  
...  

The light field camera provides a robust way to capture both spatial and angular information in a single shot. One of its important applications is 3D depth sensing, which extracts depth information from the acquired scene. However, conventional light field cameras suffer from a shallow depth of field (DoF). Here, a vari-focal light field camera (VF-LFC) with an extended DoF is proposed for mid-range 3D depth sensing applications. A vari-focal lens with four different focal lengths is adopted as the main lens of the system to extend the DoF up to ~15 m. The focal length of the micro-lens array (MLA) is optimized by considering the DoF in both the image plane and the object plane for each focal length. By dividing the measurement range into regions covered by each focal length, reliable depth estimation is achieved across the entire DoF. The proposed VF-LFC is evaluated using the disparity data extracted from images at different distances. Moreover, depth measurement in an outdoor environment demonstrates that the VF-LFC could be applied in various fields such as delivery robots, autonomous vehicles, and remote sensing drones.
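As a rough illustration of why several focal lengths extend the usable range, the thin-lens DoF limits for a few focus settings can be sketched as below. This is a generic paraxial approximation; every numeric value is illustrative and not taken from the paper.

```python
# Sketch: extending depth of field (DoF) with a vari-focal main lens.
# Thin-lens approximation; all numeric values below are illustrative.

def dof_limits(f, N, c, s):
    """Near and far limits of acceptable focus.
    f: focal length [m], N: f-number, c: circle of confusion [m],
    s: focus distance [m]."""
    H = f * f / (N * c) + f                      # hyperfocal distance
    near = H * s / (H + (s - f))
    far = H * s / (H - (s - f)) if H > (s - f) else float("inf")
    return near, far

# Four hypothetical focal settings, each focused at a different distance;
# together their DoF intervals tile a much larger range than any single one.
for f, s in [(0.008, 1.0), (0.012, 3.0), (0.016, 7.0), (0.020, 14.0)]:
    near, far = dof_limits(f, 2.0, 5e-6, s)
    print(f"f = {f*1e3:.0f} mm, focus at {s} m -> DoF {near:.2f} m to {far:.2f} m")
```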

Sensors ◽  
2019 ◽  
Vol 19 (4) ◽  
pp. 866 ◽  
Author(s):  
Tanguy Ophoff ◽  
Kristof Van Beeck ◽  
Toon Goedemé

In this paper, we investigate whether fusing depth information with normal RGB data for camera-based object detection can increase the performance of current state-of-the-art single-shot detection networks. Indeed, depth information is easily acquired using depth cameras such as the Kinect or stereo setups. We investigate the optimal manner in which to perform this sensor fusion, with a special focus on lightweight single-pass convolutional neural network (CNN) architectures that enable real-time processing on limited hardware. For this, we implement a network architecture that allows us to parameterize the network layer at which the two information sources are fused. We performed exhaustive experiments to determine the optimal fusion point in the network, from which we conclude that fusing at the mid to late layers provides the best results. Our best fusion models significantly outperform the baseline RGB network in both accuracy and localization of the detections.
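The idea of a parameterized fusion point can be sketched without a deep learning framework. The pooling "stages" below stand in for real convolutional blocks, and `fuse_at` is a hypothetical layer index at which the depth branch is concatenated onto the RGB branch; this is not the authors' architecture.

```python
import numpy as np

def pool2x2(x):
    """Placeholder for one conv block: 2x2 average pooling over H and W."""
    h, w, c = x.shape
    return x[:h // 2 * 2, :w // 2 * 2].reshape(h // 2, 2, w // 2, 2, c).mean(axis=(1, 3))

def fused_forward(rgb, depth, fuse_at, n_stages=4):
    """Run two branches separately until stage `fuse_at`, concatenate the
    depth features onto the RGB features along the channel axis, then
    continue with a single joint branch."""
    for i in range(n_stages):
        if i == fuse_at:
            rgb = np.concatenate([rgb, depth], axis=-1)
        rgb = pool2x2(rgb)
        if i < fuse_at:
            depth = pool2x2(depth)
    return rgb

# fuse_at=0 is early fusion; fuse_at=n_stages-1 is late fusion.
features = fused_forward(np.zeros((64, 64, 3)), np.zeros((64, 64, 1)), fuse_at=2)
print(features.shape)   # (4, 4, 4): 4 channels = 3 RGB + 1 depth
```

Sweeping `fuse_at` over the stages is the toy analogue of the paper's exhaustive search for the best fusion layer.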


2019 ◽  
Vol 5 (4) ◽  
pp. eaav1555 ◽  
Author(s):  
A. Orth ◽  
M. Ploschner ◽  
E. R. Wilson ◽  
I. S. Maksymov ◽  
B. C. Gibson

Optical fiber bundle microendoscopes are widely used for visualizing hard-to-reach areas of the human body. These ultrathin devices often forgo tunable focusing optics because of size constraints and are therefore limited to two-dimensional (2D) imaging modalities. Ideally, microendoscopes would record 3D information for accurate clinical and biological interpretation, without bulky optomechanical parts. Here, we demonstrate that the optical fiber bundles commonly used in microendoscopy are inherently sensitive to depth information. We use the mode structure within fiber bundle cores to extract the spatio-angular description of captured light rays—the light field—enabling digital refocusing, stereo visualization, and surface and depth mapping of microscopic scenes at the distal fiber tip. Our work opens a route for minimally invasive clinical microendoscopy using standard bare fiber bundle probes. Unlike coherent 3D multimode fiber imaging techniques, our incoherent approach is single shot and resilient to fiber bending, making it attractive for clinical adoption.
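Digital refocusing from a captured light field is commonly done by shift-and-sum over the angular views. The following is a minimal generic sketch; the array layout and the meaning of the shift parameter `alpha` are assumptions for illustration, not the authors' fiber-bundle implementation.

```python
import numpy as np

def refocus(lightfield, alpha):
    """Shift-and-sum refocusing.
    lightfield: array of shape (U, V, H, W), one image per angular
    sample (u, v). Each view is shifted in proportion to its angular
    offset (scaled by alpha) and the views are averaged, bringing a
    chosen depth plane into focus."""
    U, V, H, W = lightfield.shape
    out = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            du = int(round(alpha * (u - U // 2)))
            dv = int(round(alpha * (v - V // 2)))
            out += np.roll(lightfield[u, v], shift=(du, dv), axis=(0, 1))
    return out / (U * V)
```

Varying `alpha` sweeps the synthetic focal plane through the scene, which is the basis of the depth mapping described above.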


Author(s):  
Shengjun Tang ◽  
Qing Zhu ◽  
Wu Chen ◽  
Walid Darwish ◽  
Bo Wu ◽  
...  

RGB-D sensors are novel sensing systems that capture RGB images along with pixel-wise depth information. Although they are widely used in various applications, RGB-D sensors have significant drawbacks with respect to 3D dense mapping of indoor environments. First, they only allow a limited measurement range (e.g., within 3 m) and a limited field of view. Second, the error of the depth measurement increases with increasing distance to the sensor. In this paper, we propose an enhanced RGB-D mapping method for detailed 3D modeling of large indoor environments that combines RGB image-based modeling and depth-based modeling. The scale ambiguity problem during pose estimation with RGB image sequences is resolved by integrating the depth and visual information provided by the proposed system. A robust rigid-transformation recovery method is developed to register the RGB image-based and depth-based 3D models together. The proposed method is examined with two datasets collected in indoor environments; the experimental results demonstrate its feasibility and robustness.
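One simple way to resolve the scale ambiguity of an image-only reconstruction is to ratio its depths against the metric sensor depths wherever both are valid. The median-ratio estimator below is a generic sketch of that idea, not the paper's integration scheme.

```python
import numpy as np

def recover_scale(mono_depth, sensor_depth):
    """Estimate the metric scale factor for an up-to-scale depth map.
    mono_depth: depths from RGB image-based reconstruction (arbitrary scale).
    sensor_depth: metric depths from the RGB-D sensor, 0 where invalid.
    The median ratio is robust to outliers in either source."""
    valid = (sensor_depth > 0) & (mono_depth > 0)
    return np.median(sensor_depth[valid] / mono_depth[valid])

mono = np.array([1.0, 2.0, 4.0, 8.0, 0.0])      # arbitrary units
sensor = np.array([2.5, 5.0, 10.0, 0.0, 3.0])   # metres; 0 = no reading
print(recover_scale(mono, sensor))              # 2.5
```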


2019 ◽  
Vol 9 (1) ◽  
Author(s):  
Sergio Jiménez-Gambín ◽  
Noé Jiménez ◽  
José M. Benlloch ◽  
Francisco Camarena

We report zeroth- and high-order acoustic Bessel beams with broad depth of field generated using acoustic holograms. While the transverse field distribution of Bessel beams generated using traditional passive methods is correctly described by a Bessel function, these methods present a common drawback: the axial distribution of the field is not constant, as required for ideal Bessel beams. In this work, we experimentally, numerically, and theoretically report truncated acoustic Bessel beams with flat intensity along their axis in the ultrasound regime using phase-only holograms. In particular, the beams present a uniform field distribution with an elongated focal length of about 40 wavelengths, while the transverse width of the beam remains smaller than 0.7 wavelengths. The proposed acoustic holograms were compared with 3D-printed fraxicons, a blazed version of axicons. The performance of both phase-only holograms and fraxicons was studied, and we found that both lenses produce Bessel beams over a wide range of frequencies. In addition, high-order Bessel beams were generated. We report first-order Bessel beams that show a clear phase dislocation along their axis and a vortex with a single topological charge. The proposed method may have potential applications in ultrasonic imaging, biomedical ultrasound, and particle manipulation using passive lenses.
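The phase profile of a phase-only hologram producing an order-m Bessel beam is classically the sum of a conical (axicon) term and an azimuthal vortex term. A minimal sketch with illustrative ultrasound parameters follows; the grid size, pitch, and cone angle are assumptions, not values from the paper.

```python
import numpy as np

def bessel_phase(n, pitch, wavelength, cone_angle, order):
    """Phase-only hologram for an order-`order` Bessel beam:
    phi(r, theta) = k*sin(cone_angle)*r + order*theta, wrapped to [0, 2*pi).
    n: samples per side, pitch: sample spacing [m]."""
    k = 2 * np.pi / wavelength
    coords = (np.arange(n) - n / 2) * pitch
    X, Y = np.meshgrid(coords, coords)
    r = np.hypot(X, Y)                 # radial coordinate
    theta = np.arctan2(Y, X)           # azimuthal coordinate
    return (k * np.sin(cone_angle) * r + order * theta) % (2 * np.pi)

# Illustrative values: ~1 MHz ultrasound in water (wavelength ~1.5 mm).
phase = bessel_phase(n=128, pitch=0.5e-3, wavelength=1.5e-3,
                     cone_angle=np.deg2rad(10), order=1)
```

With `order=0` the vortex term vanishes and the profile reduces to a plain axicon phase; `order=1` adds the single topological charge noted in the abstract.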


Sensors ◽  
2019 ◽  
Vol 19 (3) ◽  
pp. 500 ◽  
Author(s):  
Luca Palmieri ◽  
Gabriele Scrofani ◽  
Nicolò Incardona ◽  
Genaro Saavedra ◽  
Manuel Martínez-Corral ◽  
...  

Light field technologies have risen in prominence in recent years, and microscopy is a field where the technology has had a deep impact. The ability to provide spatial and angular information at the same time and in a single shot brings several advantages and allows for new applications. A common goal in these applications is the calculation of a depth map to reconstruct the three-dimensional geometry of the scene. Many approaches are applicable, but most cannot achieve high accuracy because of the nature of such images: biological samples are usually poor in features and do not exhibit sharp colors like natural scenes. Under such conditions, standard approaches produce noisy depth maps. In this work, a robust approach is proposed in which accurate depth maps can be produced by exploiting the information recorded in the light field, in particular images produced with a Fourier integral microscope. The proposed approach can be divided into three main parts. First, it creates two cost volumes using different focal cues, namely correspondences and defocus. Second, it applies filtering methods that exploit multi-scale and super-pixel cost aggregation to reduce noise and enhance accuracy. Finally, it merges the two cost volumes and extracts a depth map through multi-label optimization.
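The final merging step can be sketched as a weighted sum of the two cost volumes followed by a per-pixel label choice. Note the paper extracts the depth map with multi-label optimization; the winner-take-all selection and the weight `w` below are simplifications for illustration.

```python
import numpy as np

def merge_and_extract(cost_corr, cost_defocus, w=0.5):
    """Merge a correspondence cost volume and a defocus cost volume
    (both shaped (n_labels, H, W), lower cost = better match) and pick
    the cheapest depth label per pixel (winner-take-all)."""
    merged = w * cost_corr + (1.0 - w) * cost_defocus
    return np.argmin(merged, axis=0)

# Toy volumes: both cues agree that depth label 2 is cheapest everywhere.
corr = np.ones((4, 5, 5)); corr[2] = 0.1
defoc = np.ones((4, 5, 5)); defoc[2] = 0.3
depth_labels = merge_and_extract(corr, defoc)
```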


Sensors ◽  
2020 ◽  
Vol 20 (15) ◽  
pp. 4335
Author(s):  
Jeong Nyeon Kim ◽  
Tianning Liu ◽  
Thomas N. Jackson ◽  
Kyusun Choi ◽  
Susan Trolier-McKinstry ◽  
...  

Piezoelectric micromachined ultrasound transducers (PMUTs) incorporating lead zirconate titanate PbZr0.52Ti0.48O3 (PZT) thin films were investigated for miniaturized high-frequency ultrasound systems. A recently developed process to remove a PMUT from an underlying silicon (Si) substrate has enabled curved arrays to be readily formed. This research aimed to improve the design of flexible PMUT arrays using PZFlex, a finite element method software package. A 10 MHz PMUT 2D array working in 3-1 mode was designed. A circular unit cell was structured from the top, with concentric layers of platinum (Pt)/PZT/Pt/titanium (Ti) on a polyimide (PI) substrate. Pulse-echo and spectral response analyses predicted a center frequency of 10 MHz and a bandwidth of 87% under water load and air backing. A 2D array consisting of 256 (16 × 16) unit cells was created and characterized in terms of pulse-echo and spectral responses, surface displacement profiles, crosstalk, and beam profiles. The 2D array showed decreased bandwidth due to protracted oscillation decay and guided wave effects; a mechanical focal length of 2.9 mm; a -6 dB depth of field of 3.7 mm; and -55.6 dB crosstalk. Finite element-based virtual prototyping identified figures of merit (center frequency, bandwidth, depth of field, and crosstalk) that could be optimized to design robust, flexible PMUT arrays.
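Figures of merit like the quoted fractional bandwidth are read off the pulse-echo spectrum at a level below the peak. A generic sketch of that computation on a synthetic Gaussian spectrum follows; the spectrum shape and widths are illustrative, not the simulated PMUT response.

```python
import numpy as np

def fractional_bandwidth(freqs, spectrum_db, level_db=-6.0):
    """Center frequency [Hz] and fractional bandwidth [%] measured at
    `level_db` below the spectral peak."""
    band = freqs[spectrum_db >= spectrum_db.max() + level_db]
    f_lo, f_hi = band[0], band[-1]
    fc = 0.5 * (f_lo + f_hi)
    return fc, 100.0 * (f_hi - f_lo) / fc

# Synthetic pulse-echo magnitude spectrum centred at 10 MHz.
freqs = np.linspace(0.0, 20e6, 2001)
mag = np.exp(-0.5 * ((freqs - 10e6) / 2e6) ** 2)
spectrum_db = 20.0 * np.log10(mag + 1e-12)
fc, bw = fractional_bandwidth(freqs, spectrum_db)
```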


Sensors ◽  
2020 ◽  
Vol 20 (7) ◽  
pp. 2129 ◽  
Author(s):  
Hyun Myung Kim ◽  
Min Seok Kim ◽  
Gil Ju Lee ◽  
Hyuk Jae Jang ◽  
Young Min Song

The miniaturization of 3D depth camera systems to reduce cost and power consumption is essential for their application in electronic devices that are trending toward smaller sizes (such as smartphones and unmanned aerial systems) and in other applications that cannot be realized via conventional approaches. A wide range of depth-sensing approaches currently exists, including stereo vision, structured light, and time-of-flight. This paper reports on a miniaturized 3D depth camera based on a light field camera (LFC) configured with a single aperture and a micro-lens array (MLA). The single aperture and each micro-lens of the MLA serve as a multi-camera system for 3D surface imaging. To overcome the optical alignment challenge in the miniaturized LFC system, the MLA was designed to be focused by attaching it directly to the image sensor. A theoretical analysis of the optical parameters was performed using optical simulation based on Monte Carlo ray tracing to find valid optical parameters for miniaturized 3D camera systems. Moreover, we demonstrate multi-viewpoint image acquisition via a miniaturized 3D camera module integrated into a smartphone.
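Monte Carlo ray tracing through a lens can be sketched in a few lines with a paraxial thin-lens model: rays from a point source are sampled over the aperture, and the spot they form at the sensor plane shrinks to a point at the conjugate distance. This is a generic illustration with made-up parameters, not the authors' simulation.

```python
import numpy as np

def spot_size(z_obj, f, aperture_d, z_img, n_rays=10_000, seed=0):
    """RMS spot radius at the sensor plane for an on-axis point source.
    Rays are sampled uniformly over the lens aperture (Monte Carlo)
    and bent by the paraxial thin-lens rule u_out = u_in - y / f."""
    rng = np.random.default_rng(seed)
    y = (rng.random(n_rays) - 0.5) * aperture_d   # lens hit heights [m]
    u_in = y / z_obj                              # slope from the source
    u_out = u_in - y / f                          # thin-lens refraction
    return float(np.std(y + u_out * z_img))       # spread at the sensor

f, z_obj = 0.01, 1.0                              # 10 mm lens, object at 1 m
z_focus = 1.0 / (1.0 / f - 1.0 / z_obj)           # conjugate image distance
print(spot_size(z_obj, f, 0.005, z_focus))        # ~0: in focus
print(spot_size(z_obj, f, 0.005, 0.012))          # larger: defocused
```

Repeating such traces over candidate MLA and aperture parameters is the spirit of using ray-traced simulation to find valid optical parameters.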

