Single-Shot Colored Speckle Pattern for High Accuracy Depth Sensing
2019 ◽ Vol 19 (17) ◽ pp. 7591-7597 ◽ Author(s): Boxun Fu, Fu Li, Tianjiao Zhang, Jingsong Jiang, Quanlu Li, ...
Micromachines ◽ 2021 ◽ Vol 12 (12) ◽ pp. 1453 ◽ Author(s): Hyun Myung Kim, Min Seok Kim, Sehui Chang, Jiseong Jeong, Hae-Gon Jeon, ...

The light field camera provides a robust way to capture both spatial and angular information within a single shot. One of its important applications is 3D depth sensing, which extracts depth information from the acquired scene. However, conventional light field cameras suffer from a shallow depth of field (DoF). Here, a vari-focal light field camera (VF-LFC) with an extended DoF is proposed for mid-range 3D depth sensing applications. A vari-focal lens with four different focal lengths is adopted as the main lens of the system to extend the DoF up to ~15 m. The focal length of the micro-lens array (MLA) is optimized by considering the DoF in both the image plane and the object plane for each focal length. By dividing the measurement range among the focal lengths, reliable depth estimation is achieved over the entire DoF. The proposed VF-LFC is evaluated using disparity data extracted from images captured at different distances. Moreover, depth measurement in an outdoor environment demonstrates that the VF-LFC could be applied in various fields such as delivery robots, autonomous vehicles, and remote sensing drones.
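As a rough sketch of the disparity-to-depth relation such evaluations rely on, the snippet below converts a measured disparity into metric depth under a simple pinhole model. The focal length, baseline, and disparity values are illustrative, not parameters from the paper.

```python
# Sketch: converting disparity to metric depth for a rectified pair,
# assuming a pinhole model. All numeric values are illustrative.

def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Metric depth Z = f * B / d for a rectified camera pair."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# A longer focal length (one of several vari-focal settings) produces a
# larger disparity at the same distance, i.e. finer depth resolution.
z_near = depth_from_disparity(disparity_px=80.0, focal_px=1200.0, baseline_m=0.1)  # 1.5 m
z_far = depth_from_disparity(disparity_px=8.0, focal_px=1200.0, baseline_m=0.1)    # 15.0 m
```

Because depth resolution degrades quadratically with distance at fixed focal length, switching among focal lengths per measurement region (as the abstract describes) keeps disparity, and hence reliability, usable across the whole DoF.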


2014 ◽ Vol 900 ◽ pp. 617-622 ◽ Author(s): Fu Sheng Yu, Teng Fei Li, Yan Chao Wu, Zhong Guo Sun, Sheng Jiang Yin

Speckle pattern interferometry can be used to measure displacement, strain, vibration, surface deformation, and surface roughness, and high-accuracy dynamic laser speckle measurement has been widely used for measuring surface deformation. Tool breakage is the main bottleneck in the development of high-speed intermittent cutting; obtaining the stress distribution of milling tools is therefore a basis for improving tool design and tool life. A double-pulsed digital speckle measurement method based on an FPGA, involving a pulsed laser, the milling cutter, and a CCD camera, transforms the high-speed dynamic measurement into a quasi-static one. As a result, two speckle images of the front and back surfaces of the milling cutter are obtained, from which the deformation, strain, and stress distribution of the tool surface can be calculated.
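The basic step behind comparing two speckle frames is recovering the in-plane displacement between them; a common way to do this is FFT cross-correlation, sketched below on synthetic data. This is a generic illustration of speckle displacement measurement, not the paper's FPGA pipeline.

```python
import numpy as np

# Sketch: recover a rigid in-plane shift between two speckle frames by
# locating the peak of their cross-correlation, computed in the Fourier
# domain. Synthetic random speckle with a known 3-px / 5-px shift.

rng = np.random.default_rng(0)
frame1 = rng.random((64, 64))
frame2 = np.roll(frame1, shift=(3, 5), axis=(0, 1))  # shifted copy

# conj(F1) * F2 in the frequency domain is the cross-correlation; its
# peak position gives the displacement between the frames.
xcorr = np.fft.ifft2(np.fft.fft2(frame1).conj() * np.fft.fft2(frame2)).real
dy, dx = np.unravel_index(np.argmax(xcorr), xcorr.shape)  # (3, 5)
```

Real deformation measurement would do this per subregion (or via phase analysis for interferometric fringes) to build a displacement field, from which strain and stress follow.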


Sensors ◽ 2020 ◽ Vol 20 (4) ◽ pp. 1094 ◽ Author(s): Feifei Gu, Zhan Song, Zilong Zhao

Structured light (SL) involves a trade-off between acquisition time and spatial resolution. Temporally coded SL can produce high-density 3D reconstructions, yet it is not applicable to dynamic scenes. In contrast, spatially coded SL works in a single shot but achieves only sparse reconstruction. This paper aims to achieve accurate, dense, and dynamic 3D reconstruction at the same time. A speckle-based SL sensor is presented, which consists of two cameras and a diffractive optical element (DOE) projector. The two cameras record images synchronously. First, a speckle pattern was elaborately designed and projected. Second, a high-accuracy calibration method was proposed to calibrate the system; meanwhile, the stereo images were accurately aligned by an optimized epipolar rectification algorithm. Then, an improved semi-global matching (SGM) algorithm was proposed to increase the correctness of the stereo matching, yielding a high-quality depth map. Finally, dense point clouds could be recovered from the depth map. The DOE projector measures 8 mm × 8 mm, and the baseline between the stereo cameras was kept below 50 mm. Experimental results validated the effectiveness of the proposed algorithm. Compared with other single-shot 3D systems, our system showed better performance; at close range, such as 0.4 m, it achieved submillimeter accuracy.
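The per-pixel matching cost that SGM aggregates can be illustrated with a minimal 1-D example: for a patch in the left image of a rectified pair, search along the same row of the right image and take the disparity with the lowest sum of squared differences (SSD). The paper's improved SGM adds smoothness terms on top of a cost like this; the snippet is a toy sketch on synthetic data, not the authors' algorithm.

```python
import numpy as np

# Toy stereo matching on one rectified scanline: the projected speckle
# gives every pixel a locally unique texture, so SSD block matching has
# a sharp, unambiguous minimum at the true disparity.

rng = np.random.default_rng(1)
right = rng.random((1, 40))             # one rectified scanline (speckle-like)
true_d = 6
left = np.roll(right, true_d, axis=1)   # left view shifted by the disparity

win = 5                                 # matching window size
x = 20                                  # pixel of interest in the left image
patch = left[0, x:x + win]
costs = [np.sum((patch - right[0, x - d:x - d + win]) ** 2)
         for d in range(12)]            # SSD over candidate disparities
best_d = int(np.argmin(costs))          # minimum at the true disparity
```

Projecting a speckle pattern is exactly what makes this cost well behaved on otherwise textureless surfaces, which is why the DOE projector enables dense single-shot matching.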


2019 ◽ Vol 116 (46) ◽ pp. 22959-22965 ◽ Author(s): Qi Guo, Zhujun Shi, Yao-Wei Huang, Emma Alexander, Cheng-Wei Qiu, ...

Jumping spiders (Salticidae) rely on accurate depth perception for predation and navigation. They accomplish depth perception, despite their tiny brains, by using specialized optics. Each principal eye includes a multitiered retina that simultaneously receives multiple images with different amounts of defocus, and from these images, distance is decoded with relatively little computation. We introduce a compact depth sensor that is inspired by the jumping spider. It combines metalens optics, which modifies the phase of incident light at a subwavelength scale, with efficient computations to measure depth from image defocus. Instead of using a multitiered retina to transduce multiple simultaneous images, the sensor uses a metalens to split the light that passes through an aperture and concurrently form two differently defocused images at distinct regions of a single planar photosensor. We demonstrate a system that deploys a 3-mm-diameter metalens to measure depth over a 10-cm distance range, using fewer than 700 floating point operations per output pixel. Compared with previous passive depth sensors, our metalens depth sensor is compact, single-shot, and requires a small amount of computation. This integration of nanophotonics and efficient computation brings artificial depth sensing closer to being feasible on millimeter-scale, microwatt platforms such as microrobots and microsensor networks.
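The reason two differently defocused images suffice for cheap depth decoding is that Gaussian blur obeys a diffusion equation, dI/d(sigma^2) = 0.5 * laplacian(I): the difference between the two images, divided by the Laplacian, reveals the blur gap and hence depth. The 1-D sketch below verifies this relation numerically on a synthetic scene; the constants are illustrative and not the metalens parameters.

```python
import numpy as np

# Depth-from-differential-defocus principle in 1-D: recover the blur
# difference (sigma^2 gap) between two defocused views of the same
# scene from (I_b - I_a) and the Laplacian of their average.

def gauss_blur(signal, sigma, x):
    """Blur a 1-D signal with a normalized Gaussian kernel on grid x."""
    kernel = np.exp(-x**2 / (2 * sigma**2))
    kernel /= kernel.sum()
    return np.convolve(signal, kernel, mode="same")

x = np.arange(-64, 64, dtype=float)
scene = np.exp(-x**2 / 50.0)          # smooth synthetic 1-D "scene"
s2a, s2b = 4.0, 5.0                   # two nearby defocus levels (sigma^2)
ia = gauss_blur(scene, np.sqrt(s2a), x)
ib = gauss_blur(scene, np.sqrt(s2b), x)

# Diffusion relation: ib - ia ~ 0.5 * (s2b - s2a) * laplacian(I).
lap = np.gradient(np.gradient((ia + ib) / 2, x), x)
# Least-squares fit of the coefficient recovers the sigma^2 gap (~1.0).
delta_s2 = 2 * np.sum((ib - ia) * lap) / np.sum(lap * lap)
```

Per pixel this costs a subtraction, a small Laplacian stencil, and a division, which is consistent in spirit with the sub-700-FLOP budget the abstract reports.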


Sensors ◽ 2019 ◽ Vol 19 (4) ◽ pp. 866 ◽ Author(s): Tanguy Ophoff, Kristof Van Beeck, Toon Goedemé

In this paper, we investigate whether fusing depth information on top of normal RGB data for camera-based object detection can increase the performance of current state-of-the-art single-shot detection networks. Indeed, depth information is easily acquired using depth cameras such as a Kinect or a stereo setup. We investigate the optimal way to perform this sensor fusion, with a special focus on lightweight single-pass convolutional neural network (CNN) architectures that enable real-time processing on limited hardware. For this, we implement a network architecture that lets us parameterize at which network layer the two information sources are fused. We performed exhaustive experiments to determine the optimal fusion point in the network, from which we conclude that fusing at the mid-to-late layers provides the best results. Our best fusion models significantly outperform the baseline RGB network in both accuracy and localization of the detections.
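The parameterized fusion point can be sketched as two feature "towers" whose outputs are concatenated along the channel axis after a configurable number of separate layers, with the remaining layers shared. The toy numpy tower below stands in for real conv layers (the paper uses actual single-shot detection CNNs); layer counts and shapes are illustrative.

```python
import numpy as np

# Sketch of a configurable RGB + depth fusion point: run modality-specific
# towers for `fuse_at` layers, concatenate channels, then share the rest.

def tower(x, n_layers):
    """Stand-in for conv layers: 2x2 average pool, then 'double' channels."""
    for _ in range(n_layers):
        c, h, w = x.shape
        x = x.reshape(c, h // 2, 2, w // 2, 2).mean(axis=(2, 4))  # downsample
        x = np.concatenate([x, x], axis=0)                        # widen
    return x

def fused_features(rgb, depth, fuse_at, total_layers=4):
    """Fuse by channel concatenation after `fuse_at` separate layers."""
    a = tower(rgb, fuse_at)
    b = tower(depth, fuse_at)
    x = np.concatenate([a, b], axis=0)      # the sensor-fusion point
    return tower(x, total_layers - fuse_at) # shared layers after fusion

rgb = np.zeros((3, 32, 32))    # 3-channel RGB input
depth = np.zeros((1, 32, 32))  # 1-channel depth map
early = fused_features(rgb, depth, fuse_at=1)  # fuse after an early layer
late = fused_features(rgb, depth, fuse_at=3)   # fuse towards late layers
```

Sweeping `fuse_at` over the network depth is the experiment the abstract describes; the finding is that mid-to-late values work best.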


2015 ◽ Vol 54 (12) ◽ pp. 3796 ◽ Author(s): Guangming Shi, Lili Yang, Fu Li, Yi Niu, Ruodai Li, ...
