Separating Texture and Illumination for Single-Shot Structured Light Reconstruction

Author(s):  
Minh Vo ◽  
Srinivasa G. Narasimhan ◽  
Yaser Sheikh

Sensors ◽  
2021 ◽  
Vol 21 (14) ◽  
pp. 4819
Author(s):  
Yikang Li ◽  
Zhenzhou Wang

Single-shot 3D reconstruction is essential for measuring moving and deforming objects. After decades of study, many interesting single-shot techniques have been proposed, yet the problem remains open. In this paper, a new approach is proposed to reconstruct deforming and moving objects with a structured-light RGB line pattern. The pattern is coded as parallel red, green, and blue lines at equal intervals to facilitate line segmentation and line indexing. A slope difference distribution (SDD)-based image segmentation method is proposed to segment the lines robustly in the HSV color space, and a method of exclusion is proposed to robustly index the red, green, and blue lines. The indexed lines of the three colors are fused into a phase map for 3D depth calculation. The measurement errors on a calibration grid and a ball are 0.46 mm and 0.24 mm, respectively, significantly lower than those of the compared state-of-the-art single-shot techniques.
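The line-decoding idea above can be sketched in a few lines. This is a simplified illustration, not the paper's method: plain colour-dominance thresholding stands in for the SDD segmentation, the exclusion-based indexing is reduced to left-to-right ordering of an ideal noise-free pattern, and the margin value and line spacing are arbitrary assumptions.

```python
import numpy as np

# Illustrative decoder for a parallel RGB line pattern. Colour-dominance
# thresholding stands in for the paper's slope difference distribution
# (SDD) segmentation; the margin value is an arbitrary assumption.

def segment_lines(img, channel, margin=40):
    """Mask of pixels where one colour channel dominates the other two."""
    a, b = [c for c in range(3) if c != channel]
    return (img[..., channel] > img[..., a] + margin) & \
           (img[..., channel] > img[..., b] + margin)

def index_lines(mask):
    """Split the masked columns into contiguous runs, one per line."""
    cols = np.where(mask.any(axis=0))[0]
    if cols.size == 0:
        return []
    breaks = np.where(np.diff(cols) > 1)[0] + 1
    return np.split(cols, breaks)

# Synthetic pattern: vertical lines every 8 px, colours cycling R, G, B.
img = np.zeros((20, 60, 3), dtype=np.int32)
for i, x in enumerate(range(4, 60, 8)):
    img[:, x:x + 2, i % 3] = 255

# Index each colour independently, then fuse into one global ordering,
# which plays the role of the phase map used for depth calculation.
lines = sorted((int(run[0]), ch)
               for ch in range(3)
               for run in index_lines(segment_lines(img, ch)))
phase = {x: k for k, (x, _) in enumerate(lines)}
```

Because the colours alternate at known equal intervals, a line of one colour can only occupy positions excluded by the other two, which is what makes the per-colour indexing unambiguous before the three sets are fused.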


2016 ◽  
Vol 38 (2) ◽  
pp. 390-404 ◽  
Author(s):  
Minh Vo ◽  
Srinivasa G. Narasimhan ◽  
Yaser Sheikh

Sensors ◽  
2020 ◽  
Vol 20 (4) ◽  
pp. 1094 ◽  
Author(s):  
Feifei Gu ◽  
Zhan Song ◽  
Zilong Zhao

Structured light (SL) involves a trade-off between acquisition time and spatial resolution. Temporally coded SL produces high-density 3D reconstructions but is not applicable to dynamic scenes; spatially coded SL works with a single shot but achieves only sparse reconstruction. This paper aims at accurate 3D reconstruction that is both dense and dynamic. A speckle-based SL sensor is presented, consisting of two synchronized cameras and a diffractive optical element (DOE) projector. First, a speckle pattern is elaborately designed and projected. Second, a high-accuracy calibration method calibrates the system, while the stereo images are precisely aligned by an optimized epipolar rectification algorithm. Then, an improved semi-global matching (SGM) algorithm increases the correctness of the stereo matching, yielding a high-quality depth map from which dense point clouds are recovered. The DOE projector measures 8 mm × 8 mm, and the baseline between the stereo cameras is kept below 50 mm. Experimental results validate the effectiveness of the proposed algorithm: compared with other single-shot 3D systems, it performs better, achieving submillimeter accuracy at close range (e.g., 0.4 m).
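The stereo pipeline above can be reduced to a minimal 1-D sketch: after epipolar rectification, correspondences lie on the same image row, so disparity can be searched along that row and depth recovered by triangulation. Plain SAD block matching is used here in place of the paper's improved SGM, and the focal length and noise-free synthetic speckle row are assumed values.

```python
import numpy as np

# Minimal 1-D stand-in for a rectified speckle-stereo pipeline. SAD block
# matching replaces the paper's improved semi-global matching (SGM); the
# focal length and speckle signal are assumptions for illustration.

def disparity_row(left, right, max_disp=8, win=3):
    """Per-pixel disparity for one rectified row pair via SAD matching."""
    h = win // 2
    disp = np.zeros(left.size, dtype=int)
    for x in range(h, left.size - h):
        patch = left[x - h:x + h + 1]
        costs = [np.abs(patch - right[x - d - h:x - d + h + 1]).sum()
                 for d in range(min(max_disp, x - h) + 1)]
        disp[x] = int(np.argmin(costs))
    return disp

def depth_map(disp, focal_px=1200.0, baseline_m=0.05):
    """Triangulation Z = f * B / d, with the <50 mm baseline from the paper."""
    d = np.asarray(disp, dtype=float)
    return np.where(d > 0, focal_px * baseline_m / np.maximum(d, 1e-9), np.inf)

# Synthetic speckle row: the left view is the right view shifted by 4 px.
rng = np.random.default_rng(0)
right_row = rng.integers(0, 255, 64).astype(float)
left_row = np.roll(right_row, 4)
d = disparity_row(left_row, right_row)
```

A random speckle projection makes each local window nearly unique, which is why the matching cost has a sharp minimum at the true disparity; SGM additionally regularizes these per-pixel costs along several scanline directions to suppress outliers.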


2020 ◽  
Vol 45 (12) ◽  
pp. 3256
Author(s):  
Zewei Cai ◽  
Giancarlo Pedrini ◽  
Wolfgang Osten ◽  
Xiaoli Liu ◽  
Xiang Peng

Sensors ◽  
2020 ◽  
Vol 20 (13) ◽  
pp. 3718 ◽  
Author(s):  
Hieu Nguyen ◽  
Yuzeng Wang ◽  
Zhaoyang Wang

Single-shot 3D imaging and shape reconstruction has seen a surge of interest owing to advances in sensing technologies. In this paper, a robust single-shot 3D shape reconstruction technique integrating structured light with deep convolutional neural networks (CNNs) is proposed. The input is a single fringe-pattern image; the output is the corresponding depth map for 3D shape reconstruction. The training and validation datasets, with high-quality 3D ground-truth labels, are prepared using a multi-frequency fringe projection profilometry technique. Unlike conventional 3D shape reconstruction methods, which involve complex algorithms and intensive computation to determine phase distributions or pixel disparities before obtaining a depth map, the proposed approach uses an end-to-end network architecture to transform a 2D image directly into its corresponding 3D depth map without extra processing. Three CNN-based models are adopted for comparison. Furthermore, the accurate structured-light 3D imaging dataset used in this paper is made publicly available. Experiments demonstrate the validity and robustness of the proposed technique, which can satisfy various 3D shape reconstruction demands in scientific research and engineering applications.
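The ground-truth labels above come from multi-frequency fringe projection profilometry; its core phase recovery can be sketched as follows. This is a hedged illustration, not the paper's exact procedure: the frequency ratio, four-step phase shift, and fringe amplitudes are assumed values.

```python
import numpy as np

# Sketch of multi-frequency fringe projection label generation: four-step
# phase shifting recovers a wrapped phase, and a low/high frequency pair
# unwraps it temporally. Ratio, step count, and amplitudes are assumptions.

def wrapped_phase(i0, i1, i2, i3):
    """Four-step phase shifting, I_k = A + B*cos(phi + k*pi/2)."""
    return np.arctan2(i3 - i1, i0 - i2)

def unwrap_two_freq(phi_lo, phi_hi, ratio):
    """Pick the fringe order k of the high-frequency phase from the
    low-frequency phase, then add back 2*pi*k."""
    k = np.round((ratio * phi_lo - phi_hi) / (2 * np.pi))
    return phi_hi + 2 * np.pi * k

# Simulate a smooth phase ramp (a proxy for depth) and its fringe images.
ratio = 3
phi_true = np.linspace(0, 3 * np.pi, 200)          # high-frequency phase
fringes_hi = [100 + 50 * np.cos(phi_true + k * np.pi / 2) for k in range(4)]
fringes_lo = [100 + 50 * np.cos(phi_true / ratio + k * np.pi / 2)
              for k in range(4)]

phi = unwrap_two_freq(wrapped_phase(*fringes_lo), wrapped_phase(*fringes_hi),
                      ratio)
```

Converting the unwrapped phase to metric depth then only requires the calibrated projector-camera geometry; the CNN in the paper learns to bypass this whole chain, mapping a single fringe image straight to the depth map that such a multi-frequency procedure would label.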

