Comparison of Active Sensors for 3D Modeling of Indoor Environments

Author(s):  
Shengjun Tang ◽  
Qing Zhu ◽  
Wu Chen ◽  
Walid Darwish ◽  
Bo Wu ◽  
...  

RGB-D sensors are novel sensing systems that capture RGB images along with pixel-wise depth information. Although they are widely used in various applications, RGB-D sensors have significant drawbacks with respect to 3D dense mapping of indoor environments. First, they have a limited measurement range (e.g., within 3 m) and a limited field of view. Second, the error of the depth measurement increases with increasing distance to the sensor. In this paper, we propose an enhanced RGB-D mapping method for detailed 3D modeling of large indoor environments by combining RGB image-based modeling and depth-based modeling. The scale ambiguity problem during pose estimation with RGB image sequences can be resolved by integrating the depth and visual information provided by the proposed system. A robust rigid-transformation recovery method is developed to register the RGB image-based and depth-based 3D models together. The proposed method is evaluated on two datasets collected in indoor environments; the experimental results demonstrate the feasibility and robustness of the proposed method.
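The scale-ambiguity resolution described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes we already have up-to-scale depths for matched feature points from image-based pose estimation, plus the corresponding metric depths from the depth sensor, and recovers a single global scale factor as a robust median of per-point ratios (the function name `recover_scale` is hypothetical).

```python
from statistics import median

def recover_scale(visual_depths, sensor_depths):
    """Estimate the global scale factor mapping up-to-scale visual depths
    onto metric RGB-D depths, using the median of per-point ratios.
    Hypothetical helper for illustration only."""
    ratios = [s / v for v, s in zip(visual_depths, sensor_depths)
              if v > 0 and s > 0]  # discard invalid (non-positive) depths
    if not ratios:
        raise ValueError("no valid depth pairs")
    return median(ratios)

# Up-to-scale depths from image-based pose estimation ...
visual = [1.0, 2.0, 4.0, 0.5]
# ... and the corresponding metric depths from the RGB-D sensor.
sensor = [2.0, 4.1, 7.9, 1.0]
scale = recover_scale(visual, sensor)  # close to 2.0
```

The median makes the estimate robust to a few outlier matches, which is why it is preferred here over a plain mean.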


Author(s):  
P. Biber ◽  
S. Fleck ◽  
T. Duckett ◽  
M. Wand

2019 ◽  
Vol 33 (1) ◽  
pp. 04018055 ◽  
Author(s):  
H. Tran ◽  
K. Khoshelham ◽  
A. Kealy ◽  
L. Díaz-Vilariño

2012 ◽  
Vol 31 (5) ◽  
pp. 647-663 ◽  
Author(s):  
Peter Henry ◽  
Michael Krainin ◽  
Evan Herbst ◽  
Xiaofeng Ren ◽  
Dieter Fox

Author(s):  
N. Mostofi ◽  
A. Moussa ◽  
M. Elhabiby ◽  
N. El-Sheimy

A 3D model of indoor environments provides rich information that can facilitate the disambiguation of different places and help remote users become familiar with any indoor environment. In this research work, we describe a system for visual odometry and 3D modeling using information from an RGB-D sensor (camera). The visual odometry method estimates the relative pose of consecutive RGB-D frames through feature extraction and matching techniques. The pose estimated by the visual odometry algorithm is then refined with the iterative closest point (ICP) method. Switching between ICP and visual odometry when no features are visible suppresses inconsistency in the final map. Finally, we apply loop closure to remove the drift between the first and last frames. To give the 3D model semantic meaning, planar patches are segmented from the RGB-D point cloud data using a region growing technique, followed by a convex hull method to assign boundaries to the extracted patches. To build the final semantic 3D model, the segmented patches are merged using the relative pose information obtained in the first step.
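The boundary-assignment step for the segmented planar patches can be sketched with a standard convex-hull routine. This is a minimal stand-in, not the authors' code: it assumes each patch's points have already been projected onto their supporting plane as 2D coordinates, and uses Andrew's monotone-chain algorithm to extract the boundary vertices.

```python
def convex_hull(points):
    """Andrew's monotone-chain convex hull for 2D points (e.g. planar
    patch points projected onto their supporting plane).
    Returns hull vertices in counter-clockwise order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        # z-component of (a - o) x (b - o); > 0 means a left turn
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:                    # build lower hull left to right
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):          # build upper hull right to left
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    # drop the last point of each half (it repeats the other half's start)
    return lower[:-1] + upper[:-1]

# Points of one segmented patch, projected to 2D: the hull keeps only
# the boundary and discards the interior point (1, 1).
patch = [(0, 0), (2, 0), (2, 2), (0, 2), (1, 1)]
boundary = convex_hull(patch)
```

In the pipeline described above, the resulting hull polygon serves as the boundary of each extracted planar patch before the patches are merged into the semantic model.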


2014 ◽  
Vol 47 (3) ◽  
pp. 7604-7609 ◽  
Author(s):  
A. Aouina ◽  
M. Devy ◽  
A. Marin Hernandez
