Three-dimensional reconstruction of large-scale scene based on depth camera

2020, Vol. 28 (1), pp. 234-243
Author(s):
LIU Dong-sheng (刘东生), CHEN Jian-lin (陈建林), FEI Dian (费点), ZHANG Zhi-jiang (张之江)
2021, Vol. 87 (7), pp. 479-484
Author(s):
Yu Hou, Ruifeng Zhai, Xueyan Li, Junfeng Song, Xuehan Ma, ...

Three-dimensional reconstruction from a single image has excellent prospects, and neural networks applied to the task have achieved remarkable results. However, most current point-cloud-based three-dimensional reconstruction networks are trained on synthetic data sets and therefore generalize poorly to real scenes. Based on the Karlsruhe Institute of Technology and Toyota Technological Institute at Chicago (KITTI) data set of large-scale scenes, this article proposes a method for processing real-world data sets. The data set produced in this work trains the network model more effectively and enables point cloud reconstruction from a single real-world image. Finally, the reconstructed point clouds correspond well to the underlying three-dimensional shapes, and the proposed method partially overcomes the uneven spatial distribution of point clouds obtained by light detection and ranging (LiDAR) scanning.
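The abstract does not detail its data-processing pipeline. As a minimal sketch of the kind of preprocessing it implies, the snippet below reads a raw KITTI LiDAR scan (the public float32 [x, y, z, reflectance] record layout) and applies farthest-point sampling to counter the range-dependent density of LiDAR returns; the function names and the choice of resampling strategy are illustrative assumptions, not the authors' method.

    import numpy as np

    def load_kitti_velodyne(path):
        # Raw KITTI Velodyne scans are float32 [x, y, z, reflectance] records.
        scan = np.fromfile(path, dtype=np.float32).reshape(-1, 4)
        return scan[:, :3]

    def farthest_point_sample(points, n_samples, seed=0):
        # Greedy farthest-point sampling: picks points that are mutually far
        # apart, evening out LiDAR's range-dependent point density.
        rng = np.random.default_rng(seed)
        chosen = np.empty(n_samples, dtype=np.int64)
        chosen[0] = rng.integers(points.shape[0])
        min_dist = np.full(points.shape[0], np.inf)
        for i in range(1, n_samples):
            d = np.linalg.norm(points - points[chosen[i - 1]], axis=1)
            min_dist = np.minimum(min_dist, d)
            chosen[i] = int(np.argmax(min_dist))
        return points[chosen]

A fixed-size training cloud would then be produced with, for example, farthest_point_sample(load_kitti_velodyne("000000.bin"), 2048).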


Author(s):
Chuang-Yuan Chiu, Michael Thelwell, Terry Senior, Simon Choppin, John Hart, ...

KinectFusion is a typical three-dimensional reconstruction technique that generates individual three-dimensional human models from consumer depth cameras for understanding body shapes. The aim of this study was to compare three-dimensional reconstruction results obtained using KinectFusion from data collected with two types of depth camera (time-of-flight and stereoscopic) against those of a commercial three-dimensional scanning system, to determine which type of depth camera yields better reconstructions. Torso mannequins and machined aluminium cylinders were used as test objects. Two depth cameras, the Microsoft Kinect V2 and the Intel Realsense D435, were selected as representatives of time-of-flight and stereoscopic cameras, respectively, to capture scan data for the reconstruction of three-dimensional point clouds by KinectFusion. The results showed that both cameras, used with the developed rotating camera rig, provided repeatable body-scanning data with minimal operator-induced error. However, the time-of-flight camera generated more accurate three-dimensional point clouds than the stereoscopic sensor. Applications requiring accurate three-dimensional human models generated by KinectFusion should therefore consider using a time-of-flight camera, such as the Microsoft Kinect V2, as the image-capturing sensor.
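The abstract does not state how point-cloud accuracy was quantified against the machined aluminium cylinders. One plausible metric, sketched below under the assumption that the scanned cylinder is roughly aligned with the z-axis, is a least-squares (Kåsa) circle fit on the xy projection followed by the RMS radial deviation of the points from the fitted surface; the function name and the choice of fit are illustrative, not the study's protocol.

    import numpy as np

    def cylinder_fit_accuracy(points, nominal_radius):
        # Kasa circle fit on the xy projection: x^2 + y^2 is linear in
        # (2*cx, 2*cy, r^2 - cx^2 - cy^2), so least squares recovers the
        # axis position and radius in one step (cylinder assumed parallel
        # to the z-axis).
        x, y = points[:, 0], points[:, 1]
        A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
        b = x ** 2 + y ** 2
        (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
        radius = np.sqrt(c + cx ** 2 + cy ** 2)
        residuals = np.hypot(x - cx, y - cy) - radius
        return {
            "radius_error": float(radius - nominal_radius),
            "rms_surface_deviation": float(np.sqrt(np.mean(residuals ** 2))),
        }

Comparing these two numbers across the Kinect V2 and Realsense D435 reconstructions would yield the kind of accuracy ranking the abstract reports.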


Author(s):
J. Frank, B. F. McEwen, M. Radermacher, C. L. Rieder

The tomographic reconstruction from multiple projections of cellular components within a thick section offers a way of visualizing and quantifying their three-dimensional (3D) structure. However, asymmetric objects require as many views, spanning as wide a tilt range, as possible; otherwise the reconstruction may be uninterpretable. Even in the absence of geometric obstructions, the electron path length through the specimen grows as the tilt angle increases, and this imposes the ultimate upper limit on the projection range. With the maximum tilt angle fixed, the only way to improve the faithfulness of the reconstruction is to change the tilting mode from single-axis to conical; a point within the object, projected at a tilt angle of 60° over a full 360° azimuthal range, is then reconstructed as a slightly elongated ellipsoid (axis ratio 1.2 : 1).
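For context on why conical tilting helps, the elongation factor commonly used in the tomography literature for single-axis tilting over a limited range ±α can be evaluated directly; the formula below is a standard result quoted for comparison, not part of this abstract.

    import math

    def single_axis_elongation(max_tilt_deg):
        # Point-spread elongation for single-axis tilting over +/- alpha:
        #   e = sqrt((a + sin(a)cos(a)) / (a - sin(a)cos(a))), a in radians.
        a = math.radians(max_tilt_deg)
        sc = math.sin(a) * math.cos(a)
        return math.sqrt((a + sc) / (a - sc))

    print(round(single_axis_elongation(60.0), 2))  # ~1.55

At a 60° maximum tilt this gives roughly 1.55, noticeably worse than the 1.2 : 1 axis ratio quoted above for conical tilting at the same angle.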

