Single-Shot Structured Light Sensor for 3D Dense and Dynamic Reconstruction

Sensors ◽  
2020 ◽  
Vol 20 (4) ◽  
pp. 1094 ◽  
Author(s):  
Feifei Gu ◽  
Zhan Song ◽  
Zilong Zhao

Structured light (SL) involves a trade-off between acquisition time and spatial resolution. Temporally coded SL can produce a dense 3D reconstruction, yet it is not applicable to dynamic scenes. Spatially coded SL, by contrast, works with a single shot but achieves only sparse reconstruction. This paper aims to achieve accurate, dense, and dynamic 3D reconstruction at the same time. A speckle-based SL sensor is presented, consisting of two synchronized cameras and a diffractive optical element (DOE) projector. First, a speckle pattern is elaborately designed and projected. Second, a high-accuracy calibration method is proposed for the system, and the stereo images are accurately aligned with an optimized epipolar rectification algorithm. Then, an improved semi-global matching (SGM) algorithm raises the correctness of the stereo matching, yielding a high-quality depth map. Finally, dense point clouds are recovered from the depth map. The DOE projector measures 8 mm × 8 mm, and the baseline between the stereo cameras is kept below 50 mm. Experimental results validate the effectiveness of the proposed algorithm. Compared with other single-shot 3D systems, ours performs better; at close range, such as 0.4 m, it achieves submillimeter accuracy.

Sensors ◽  
2020 ◽  
Vol 20 (13) ◽  
pp. 3718 ◽  
Author(s):  
Hieu Nguyen ◽  
Yuzeng Wang ◽  
Zhaoyang Wang

Single-shot 3D imaging and shape reconstruction has seen a surge of interest due to the rapid evolution of sensing technologies. In this paper, a robust single-shot 3D shape reconstruction technique integrating the structured light technique with deep convolutional neural networks (CNNs) is proposed. The input of the technique is a single fringe-pattern image, and the output is the corresponding depth map for 3D shape reconstruction. The essential training and validation datasets with high-quality 3D ground-truth labels are prepared using a multi-frequency fringe projection profilometry technique. Unlike conventional 3D shape reconstruction methods, which involve complex algorithms and intensive computation to determine phase distributions or pixel disparities as well as the depth map, the proposed approach uses an end-to-end network architecture to directly transform a 2D image into its corresponding 3D depth map without extra processing. In the approach, three CNN-based models are adopted for comparison. Furthermore, the accurate structured-light-based 3D imaging dataset used in this paper is made publicly available. Experiments have been conducted to demonstrate the validity and robustness of the proposed technique, which is capable of satisfying various 3D shape reconstruction demands in scientific research and engineering applications.
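The ground-truth labels come from multi-frequency fringe projection profilometry. As a hedged sketch of that classical pipeline (the exact number of phase steps and frequencies used by the authors is not specified here), the wrapped phase can be computed by four-step phase shifting and made absolute by two-frequency temporal unwrapping:

```python
import numpy as np

def wrapped_phase(i1, i2, i3, i4):
    """Wrapped phase from four fringe images with pi/2 phase shifts:
    I_k = A + B*cos(phi + k*pi/2), so I1-I3 = 2B*cos(phi)
    and I4-I2 = 2B*sin(phi)."""
    return np.arctan2(i4 - i2, i1 - i3)

def unwrap_two_freq(phi_high, phi_low, ratio):
    """Temporal phase unwrapping: the unit-frequency phase phi_low
    predicts the fringe order k of the high-frequency phase."""
    k = np.round((ratio * phi_low - phi_high) / (2 * np.pi))
    return phi_high + 2 * np.pi * k
```

The unwrapped high-frequency phase is what gets converted to depth (or used as a ground-truth label) after calibration.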


Sensors ◽  
2021 ◽  
Vol 21 (19) ◽  
pp. 6444
Author(s):  
Junhui Mei ◽  
Xiao Yang ◽  
Zhenxin Wang ◽  
Xiaobo Chen ◽  
Juntong Xi

In this paper, a topology-based stereo matching method is proposed for 3D measurement using a single pattern of coded spot-array structured light. The spot-array pattern is designed around a central reference ring spot, and each spot in the pattern can be uniquely coded with row and column indexes according to a predefined topological search path. A rectangle-template method is proposed to find the encoded spots in the captured images when coding spots are missing, and an interpolation method is also proposed to rebuild the missing spots. Experimental results demonstrate that the proposed technique can exactly and uniquely decode each spot and establish the stereo matching relation, enabling three-dimensional (3D) reconstruction from a single shot.
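The topological search path can be pictured as a breadth-first walk over the spot grid starting from the reference spot. The sketch below is a simplified stand-in for the paper's decoding: a plain nearest-neighbour walk with a hypothetical grid step and tolerance, without the ring-spot detection, rectangle templates, or interpolation of missing spots:

```python
import numpy as np
from collections import deque

def index_spots(centroids, ref_idx, step=10.0, tol=3.0):
    """Assign (row, col) grid indices to detected spot centroids by a
    breadth-first walk from a reference spot, assuming a roughly regular
    grid with the given pixel step between neighbouring spots."""
    pts = np.asarray(centroids, dtype=float)
    codes = {ref_idx: (0, 0)}          # reference spot gets index (0, 0)
    queue = deque([ref_idx])
    while queue:
        i = queue.popleft()
        r, c = codes[i]
        for dr, dc in ((0, 1), (0, -1), (1, 0), (-1, 0)):
            # predicted position of the neighbouring spot
            target = pts[i] + np.array([dc * step, dr * step])
            dist = np.linalg.norm(pts - target, axis=1)
            j = int(dist.argmin())
            if dist[j] < tol and j not in codes:
                codes[j] = (r + dr, c + dc)
                queue.append(j)
    return codes
```

Once every spot carries a unique (row, col) code, stereo matching reduces to pairing identically coded spots between the two views.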


2021 ◽  
Vol 6 (1) ◽  
pp. 1-3
Author(s):  
Sina Farsangi ◽  
Mohamed A. Naiel ◽  
Mark Lamm ◽  
Paul Fieguth

Structured Light (SL) patterns generated from pseudo-random arrays are widely used for single-shot 3D reconstruction with projector-camera systems. These SL images consist of a set of tags with distinct appearances; the patterns are projected onto a target surface, then captured by a camera and decoded. The precision of localizing these tags in the captured camera images affects the quality of the pixel correspondences between the projector and the camera, and consequently that of the derived 3D shape. In this paper, we incorporate a quadrilateral representation of the detected SL tags that allows the construction of robust and accurate pixel correspondences, together with a spatial rectification module that leads to high tag classification accuracy. When applying the proposed method to single-shot 3D reconstruction, we show its effectiveness over a baseline in estimating denser and more accurate 3D point clouds.
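One reason a quadrilateral representation helps localization is geometric: the intersection of a quadrilateral's diagonals is the image of the square tag's true center under any homography, whereas the mean of the four corners is not projectively correct. A minimal sketch of this point (an illustration, not the paper's full correspondence module):

```python
import numpy as np

def quad_center(quad):
    """Sub-pixel tag center as the intersection of the quadrilateral's
    diagonals, computed with homogeneous coordinates. For a square tag
    seen under perspective, this is the image of the tag's true center.
    Corners must be ordered around the quad; degenerate (collinear)
    quads would give a zero homogeneous scale."""
    a, b, c, d = [np.append(p, 1.0) for p in np.asarray(quad, float)]
    l1 = np.cross(a, c)          # line through corners 0 and 2
    l2 = np.cross(b, d)          # line through corners 1 and 3
    x = np.cross(l1, l2)         # homogeneous intersection point
    return x[:2] / x[2]
```

Because lines map to lines under a homography, this center is consistent between the projected pattern and the captured image, which is what makes the resulting pixel correspondences accurate.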


Sensors ◽  
2021 ◽  
Vol 21 (14) ◽  
pp. 4819
Author(s):  
Yikang Li ◽  
Zhenzhou Wang

Single-shot 3D reconstruction techniques are important for measuring moving and deforming objects. After decades of study, a great number of interesting single-shot techniques have been proposed, yet the problem remains open. In this paper, a new approach is proposed to reconstruct deforming and moving objects with a structured light RGB line pattern. The pattern is coded with parallel red, green, and blue lines at equal intervals to facilitate line segmentation and line indexing. A slope difference distribution (SDD)-based image segmentation method is proposed to segment the lines robustly in the HSV color space. An exclusion-based method is proposed to index the red, green, and blue lines robustly. The indexed lines in the different colors are fused to obtain a phase map for 3D depth calculation. The quantitative errors in measuring a calibration grid and a ball with the proposed approach are 0.46 mm and 0.24 mm, respectively, significantly lower than those of the compared state-of-the-art single-shot techniques.
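Separating the three line colors in HSV space can be sketched as simple hue gating. The thresholds below are illustrative assumptions, not the SDD-based method of the paper, which derives its thresholds from the slope difference distribution of the image histogram:

```python
import colorsys

def classify_line_pixel(r, g, b):
    """Classify a pattern pixel as a red, green, or blue line (or
    background) by its hue in HSV space; r, g, b are in [0, 1] and
    all thresholds here are illustrative."""
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    if s < 0.3 or v < 0.2:
        return "background"       # too grey or too dark to be a line
    hue_deg = h * 360.0
    if hue_deg < 60.0 or hue_deg >= 300.0:
        return "red"
    if hue_deg < 180.0:
        return "green"
    return "blue"
```

Working in HSV decouples the color decision from illumination: hue stays roughly stable as the projected lines fall on brighter or darker surface regions, while the saturation and value gates reject the unlit background.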


2021 ◽  
Vol 21 (2) ◽  
pp. 1799-1808
Author(s):  
Guijin Wang ◽  
Chenchen Feng ◽  
Xiaowei Hu ◽  
Huazhong Yang

2021 ◽  
pp. 127507
Author(s):  
Jingtian Guan ◽  
Ji Li ◽  
Xiao Yang ◽  
Xiaobo Chen ◽  
Juntong Xi
