3D Modelling of Interior Spaces: Learning the Language of Indoor Architecture

Author(s): K. Khoshelham, L. Díaz-Vilariño

3D models of indoor environments are important in many applications, but they usually exist only for newly constructed buildings. Automated approaches to modelling indoor environments from imagery and/or point clouds can make the process easier, faster and cheaper. We present an approach to 3D indoor modelling based on a shape grammar. We demonstrate that interior spaces can be modelled by iteratively placing, connecting and merging cuboid shapes. We also show that the parameters and sequence of grammar rules can be learned automatically from a point cloud. Experiments with simulated and real point clouds show promising results, and indicate the potential of the method in 3D modelling of large indoor environments.
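
The core grammar operations can be illustrated with a short sketch. The following Python snippet is an illustrative reading of the iterative placement and merging of cuboid shapes described in the abstract; the `Cuboid` class and the rule signatures are assumptions for exposition, not the authors' implementation.

```python
from dataclasses import dataclass

@dataclass
class Cuboid:
    # Axis-aligned cuboid defined by its minimum and maximum corners.
    xmin: float
    ymin: float
    zmin: float
    xmax: float
    ymax: float
    zmax: float

def place(base: Cuboid, dx: float, dy: float) -> Cuboid:
    # Grammar rule: place a new cuboid next to an existing one, offset by
    # (dx, dy) in the floor plane and sharing the same height.
    return Cuboid(base.xmin + dx, base.ymin + dy, base.zmin,
                  base.xmax + dx, base.ymax + dy, base.zmax)

def merge(a: Cuboid, b: Cuboid) -> Cuboid:
    # Grammar rule: merge two adjoining cuboids into their bounding box.
    return Cuboid(min(a.xmin, b.xmin), min(a.ymin, b.ymin), min(a.zmin, b.zmin),
                  max(a.xmax, b.xmax), max(a.ymax, b.ymax), max(a.zmax, b.zmax))

# Iteratively grow an interior model from a seed space.
seed = Cuboid(0.0, 0.0, 0.0, 5.0, 4.0, 3.0)
rooms = [seed, place(seed, 5.0, 0.0)]   # two adjoining spaces sharing a wall
```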

Author(s): H. Tran, K. Khoshelham

Abstract. Automated reconstruction of 3D interior models has recently been a topic of intensive research due to its wide range of applications in Architecture, Engineering, and Construction. However, generation of 3D models from LiDAR and/or RGB-D data is challenged not only by the complexity of building geometries, but also by the presence of clutter and the inevitable defects of the input data. In this paper, we propose a stochastic approach for automatic reconstruction of 3D models of interior spaces from point clouds, which is applicable to both Manhattan and non-Manhattan world buildings. The building interior is first partitioned into a set of 3D shapes as an arrangement of permanent structures. An optimization process then searches for the most probable model as the optimal configuration of the 3D shapes, using reversible jump Markov Chain Monte Carlo (rjMCMC) sampling with the Metropolis-Hastings algorithm. This optimization is based not only on the input data, but also takes into account the intermediate stages of the model during the modelling process, which enhances the robustness of the proposed approach to inaccuracy and incompleteness of the point cloud. The feasibility of the proposed approach is evaluated on a synthetic dataset and an ISPRS benchmark dataset.
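
The sampling step at the heart of the optimization can be sketched as follows. This is a generic Metropolis-Hastings loop in Python, with the model state, proposal and posterior left abstract; the full rjMCMC machinery (trans-dimensional jumps with the accompanying Jacobian terms) is omitted, and a symmetric proposal is assumed.

```python
import math
import random

def metropolis_hastings(initial, propose, log_posterior, n_iters=10000):
    # Generic Metropolis-Hastings loop with a symmetric proposal: a proposed
    # configuration of 3D shapes is accepted with probability
    # min(1, p(candidate) / p(current)).
    current = initial
    current_lp = log_posterior(current)
    best, best_lp = current, current_lp
    for _ in range(n_iters):
        candidate = propose(current)            # e.g. add, remove or modify a shape
        candidate_lp = log_posterior(candidate)
        # max() guards against log(0) for the (rare) zero draw.
        if math.log(max(random.random(), 1e-12)) < candidate_lp - current_lp:
            current, current_lp = candidate, candidate_lp
            if current_lp > best_lp:
                best, best_lp = current, current_lp
    return best
```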


Aerospace, 2018, Vol. 5 (3), pp. 94
Author(s): Hriday Bavle, Jose Sanchez-Lopez, Paloma Puente, Alejandro Rodriguez-Ramos, Carlos Sampedro, ...

This paper presents a fast and robust approach for estimating the flight altitude of multirotor Unmanned Aerial Vehicles (UAVs) using 3D point cloud sensors in cluttered, unstructured, and dynamic indoor environments. The objective is to present a flight altitude estimation algorithm that replaces conventional sensors such as laser altimeters, barometers, or accelerometers, which have several limitations when used individually. Our proposed algorithm includes two stages: in the first stage, a fast clustering of the measured 3D point cloud data is performed, along with segmentation of the clustered data into horizontal planes. In the second stage, these segmented horizontal planes are mapped based on their vertical distance with respect to the point cloud sensor frame of reference, in order to provide a robust flight altitude estimate even in the presence of static as well as dynamic ground obstacles. We validate our approach using the IROS 2011 Kinect dataset available in the literature, estimating the altitude of the RGB-D camera from the provided 3D point clouds. We further validate our approach using a point cloud sensor on board a UAV, by means of several autonomous real flights that close the altitude control loop using the flight altitude estimated by our proposed method, in the presence of various static as well as dynamic ground obstacles. In addition, the implementation of our approach has been integrated into Aerostack, our open-source software framework for aerial robotics.
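
The two-stage idea can be approximated in a few lines. The Python sketch below stands in for the paper's pipeline with a much cruder 1-D binning of vertical distances; it assumes a gravity-aligned sensor frame with z pointing toward the ground, and the bin size and support threshold are illustrative choices.

```python
import numpy as np

def estimate_altitude(points: np.ndarray, bin_size: float = 0.05) -> float:
    # Bin points by vertical distance (a cheap 1-D clustering of horizontal
    # surfaces) and take the most distant well-populated bin as the floor,
    # so that nearer horizontal surfaces (obstacles) are ignored.
    z = points[:, 2]
    bins = np.round(z / bin_size).astype(int)
    ids, counts = np.unique(bins, return_counts=True)
    planes = ids[counts > 50]                  # keep bins with enough support
    return float(planes.max() * bin_size)      # farthest horizontal plane = floor

# Example: a floor 1.5 m below the sensor plus a box-shaped obstacle at 0.8 m.
rng = np.random.default_rng(0)
floor = np.column_stack([rng.uniform(-2, 2, (2000, 2)), np.full(2000, 1.5)])
box = np.column_stack([rng.uniform(-0.3, 0.3, (500, 2)), np.full(500, 0.8)])
print(estimate_altitude(np.vstack([floor, box])))   # ~1.5
```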


Author(s): J. Zhu, Y. Xu, L. Hoegner, U. Stilla

Abstract. In this work, we discuss how to directly combine thermal infrared (TIR) images with a point cloud, without additional assistance from GCPs or 3D models. Specifically, we propose a point-based co-registration process for combining TIR images and point clouds of buildings. Keypoints are extracted from the images and point clouds via primitive segmentation and corner detection, and pairs of corresponding points are then identified manually. After that, the camera pose is estimated with the EPnP algorithm. Finally, a point cloud enriched with thermal information from the IR images is generated, which is helpful in tasks such as energy inspection, leakage detection, and abnormal condition monitoring. This paper provides further insight into the feasibility of combining TIR images and point clouds.
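
The pose estimation step maps directly onto OpenCV's solvePnP with the EPnP flag. The sketch below assumes known TIR camera intrinsics and manually picked 2D-3D correspondences; all numeric values are placeholders.

```python
import numpy as np
import cv2

# Manually identified correspondences: 3D keypoints from the point cloud
# and their 2D locations in the TIR image (values here are placeholders).
object_points = np.array([[0.0, 0.0, 0.0], [4.2, 0.0, 0.0],
                          [4.2, 0.0, 3.1], [0.0, 0.0, 3.1],
                          [2.1, 1.5, 1.0]], dtype=np.float64)
image_points = np.array([[110, 420], [530, 415],
                         [525, 90], [115, 95], [330, 260]], dtype=np.float64)

K = np.array([[800.0, 0.0, 320.0],        # TIR camera intrinsics (assumed known)
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)                         # assume negligible lens distortion

ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, dist,
                              flags=cv2.SOLVEPNP_EPNP)
R, _ = cv2.Rodrigues(rvec)                 # rotation matrix of the estimated pose
```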


Author(s): W. Ostrowski, M. Pilarska, J. Charyton, K. Bakuła

Creating 3D building models at a large scale is becoming more popular and finds many applications. Nowadays, the broad term “3D building models” can be applied to several types of products: the well-known CityGML solid models (available at several Levels of Detail), which are mainly generated from Airborne Laser Scanning (ALS) data, as well as 3D mesh models that can be created from both nadir and oblique aerial images. City authorities and national mapping agencies are interested in obtaining such 3D building models. Apart from the completeness of the models, the accuracy aspect is also important. The final accuracy of a building model depends on various factors (accuracy of the source data, complexity of the roof shapes, etc.). In this paper, a methodology for the inspection of datasets containing 3D models is presented. The proposed approach checks every building in the dataset against ALS point clouds, testing both accuracy and level of detail. Analysis of statistical parameters of normal heights between the reference point cloud and the tested planes, combined with segmentation of the point cloud, provides a tool that indicates which buildings and which roof planes do not fulfil the requirements of model accuracy and detail correctness. The proposed method was tested on two datasets: a solid model and a mesh model.
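
The per-plane accuracy test can be expressed compactly. The following Python sketch computes signed point-to-plane residuals of ALS points against a model roof plane and applies a mean/standard-deviation test; the tolerance and the specific test are illustrative assumptions, not the paper's exact criteria.

```python
import numpy as np

def plane_residuals(points: np.ndarray, plane: np.ndarray) -> np.ndarray:
    # Signed distances from ALS points (n, 3) to a model roof plane given
    # as (a, b, c, d) with a*x + b*y + c*z + d = 0 and unit normal (a, b, c).
    normal, d = plane[:3], plane[3]
    return points @ normal + d

def check_roof_plane(points: np.ndarray, plane: np.ndarray, tol: float = 0.15):
    # Flag a roof plane whose residual statistics exceed a tolerance; both
    # the tolerance and the mean/std test are illustrative choices.
    r = plane_residuals(points, plane)
    return {"mean": float(r.mean()),
            "std": float(r.std()),
            "passes": abs(r.mean()) < tol and r.std() < tol}
```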


Author(s): B. Sirmacek, R. Lindenbergh

Low-cost sensor generated 3D models can be useful for quick 3D urban model updating, yet the quality of the models is questionable. In this article, we evaluate the reliability of an automatic point cloud generation method using multi-view iPhone images or an iPhone video file as input. We register such an automatically generated point cloud to a TLS point cloud of the same object to discuss the accuracy, advantages and limitations of the iPhone-generated point clouds. For the chosen example showcase, we have classified 1.23% of the iPhone point cloud points as outliers, and calculated the mean of the point-to-point distances to the TLS point cloud as 0.11 m. Since a TLS point cloud might also include measurement errors and noise, we computed local noise values for the point clouds from both sources. The mean (μ) and standard deviation (σ) of the roughness histograms are (μ₁ = 0.44 m, σ₁ = 0.071 m) and (μ₂ = 0.025 m, σ₂ = 0.037 m) for the iPhone and TLS point clouds, respectively. Our experimental results indicate the potential of the proposed automatic 3D model generation framework for 3D urban map updating, fusion and detail enhancement, and quick and real-time change detection purposes. However, further insights should first be obtained on the circumstances needed to guarantee a successful point cloud generation from smartphone images.
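
A cloud-to-cloud comparison of this kind can be reproduced with Open3D. The sketch below computes nearest-neighbour distances from the iPhone cloud to the TLS cloud and applies a simple 3-sigma outlier criterion; the file paths and the outlier rule are assumptions, and the two clouds are taken to be already registered.

```python
import numpy as np
import open3d as o3d

# Load the two clouds (paths are placeholders); both are assumed to be
# already registered in the same coordinate frame.
iphone = o3d.io.read_point_cloud("iphone_cloud.ply")
tls = o3d.io.read_point_cloud("tls_cloud.ply")

# Cloud-to-cloud comparison: for each iPhone point, the distance to its
# nearest TLS neighbour.
d = np.asarray(iphone.compute_point_cloud_distance(tls))
print(f"mean point-to-point distance: {d.mean():.3f} m")

# A simple outlier criterion: points farther than 3 sigma from the mean.
outliers = d > d.mean() + 3 * d.std()
print(f"outlier fraction: {100 * outliers.mean():.2f} %")
```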


Author(s): F. Dadras Javan, M. Savadkouhi

Abstract. In the last few years, Unmanned Aerial Vehicles (UAVs) have frequently been used to acquire high resolution photogrammetric images, and consequently to produce Digital Surface Models (DSMs) and orthophotos in a photogrammetric procedure for topography and surface processing applications. Thermal imaging sensors are mostly used for interpretation and monitoring purposes because of their lower geometric resolution. Yet thermal mapping is becoming more important in civil applications, as thermal sensors can be used in conditions where visible sensors cannot, such as foggy weather or at night. However, the low geometric quality and resolution of thermal images is a main drawback that 3D thermal modelling has to contend with. This study aims to offer a solution to this problem by generating a thermal 3D model with higher spatial resolution based on the integration of thermal and visible point clouds. This integration leads to a more accurate thermal point cloud and DEM with greater density and resolution, which is appropriate for 3D thermal modelling. The main steps of this study are: generating thermal and RGB point clouds separately, registering them at both coarse and fine levels, and finally adding thermal information to the high resolution RGB point cloud by interpolation. The experimental results yield a mesh with more faces (by a factor of 23), which leads to a higher resolution textured mesh with thermal information.
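
The final interpolation step can be sketched with a k-d tree. The Python snippet below transfers thermal intensities onto the denser RGB cloud by inverse-distance-weighted nearest-neighbour interpolation; the function name, the choice of k and the weighting scheme are illustrative assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

def colorize_with_thermal(rgb_xyz: np.ndarray,
                          thermal_xyz: np.ndarray,
                          thermal_values: np.ndarray,
                          k: int = 3) -> np.ndarray:
    # Transfer thermal intensities onto a denser RGB point cloud by
    # inverse-distance-weighted interpolation over the k nearest thermal
    # points. Both clouds are assumed to be already co-registered.
    tree = cKDTree(thermal_xyz)
    dist, idx = tree.query(rgb_xyz, k=k)       # (n, k) distances and indices
    w = 1.0 / np.maximum(dist, 1e-9)           # inverse-distance weights
    return (w * thermal_values[idx]).sum(axis=1) / w.sum(axis=1)
```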


Author(s): A. Masiero, F. Fissore, A. Guarnieri, A. Vettore

The subject of photogrammetric surveying with mobile devices, in particular smartphones, is attracting significant interest in the research community. Nowadays, the process of producing 3D point clouds with photogrammetric procedures is well known. However, external information is still typically needed in order to move from the point cloud obtained from images to a 3D metric reconstruction. This paper investigates the integration of information provided by a UWB positioning system with vision-based reconstruction to produce a metric reconstruction. Furthermore, the orientation (with respect to the North-East directions) of the obtained model is assessed thanks to the inertial sensors included in the considered UWB devices. Results of this integration are shown on two case studies in indoor environments.
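
Scaling an image-based reconstruction to metric units from UWB positions amounts to estimating a similarity transform. The sketch below is a standard Umeyama-style estimator in Python, mapping SfM camera positions (arbitrary scale) onto UWB positions (metric); it is a generic formulation, not necessarily the integration used by the authors.

```python
import numpy as np

def similarity_transform(src: np.ndarray, dst: np.ndarray):
    # Umeyama-style estimation of scale s, rotation R and translation t
    # such that dst ~ s * R @ src + t, given matched positions (n, 3)
    # with n >= 3 non-collinear correspondences.
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - mu_s, dst - mu_d
    U, S, Vt = np.linalg.svd(dst_c.T @ src_c)
    D = np.eye(3)
    if np.linalg.det(U @ Vt) < 0:      # guard against a reflection solution
        D[2, 2] = -1.0
    R = U @ D @ Vt
    s = (S * np.diag(D)).sum() / (src_c ** 2).sum()
    t = mu_d - s * R @ mu_s
    return s, R, t
```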


Author(s): T. Shinohara, H. Xiu, M. Matsuoka

Abstract. This study introduces a novel image-to-point-cloud translation method, based on a conditional generative adversarial network, that creates a large-scale 3D point cloud. It can generate, from aerial images, point clouds of the kind observed via airborne LiDAR. The network is composed of an encoder that produces latent features of the input images, a generator that translates the latent features into fake point clouds, and a discriminator that classifies point clouds as real or fake. The encoder is a pre-trained ResNet; to overcome the difficulty of generating 3D point clouds of an outdoor scene, we use a FoldingNet with features from the ResNet. After a fixed number of iterations, our generator can produce fake point clouds that correspond to the input image. Experimental results show that our network can learn and generate such point clouds using data from the 2018 IEEE GRSS Data Fusion Contest.
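
The generator side of such a network can be sketched in PyTorch. The module below is a minimal FoldingNet-style decoder that folds a fixed 2D grid into a 3D point cloud conditioned on an image feature vector; the layer sizes, grid resolution and feature dimension are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class FoldingDecoder(nn.Module):
    # Minimal FoldingNet-style generator: a fixed 2D grid is "folded" into
    # a 3D point cloud, conditioned on an image feature vector (e.g. from a
    # pre-trained ResNet). Layer sizes and grid resolution are illustrative.
    def __init__(self, feat_dim=512, grid_size=45):
        super().__init__()
        u = torch.linspace(-1.0, 1.0, grid_size)
        grid = torch.stack(torch.meshgrid(u, u, indexing="ij"), dim=-1)
        self.register_buffer("grid", grid.reshape(-1, 2))  # (grid_size**2, 2)
        self.fold = nn.Sequential(
            nn.Linear(feat_dim + 2, 512), nn.ReLU(),
            nn.Linear(512, 512), nn.ReLU(),
            nn.Linear(512, 3))                # one (x, y, z) per grid point

    def forward(self, feat):                  # feat: (B, feat_dim)
        b, n = feat.shape[0], self.grid.shape[0]
        f = feat.unsqueeze(1).expand(b, n, -1)     # repeat features per point
        g = self.grid.unsqueeze(0).expand(b, n, 2) # repeat grid per batch item
        return self.fold(torch.cat([f, g], dim=-1))  # (B, n, 3) fake cloud

fake_points = FoldingDecoder()(torch.randn(4, 512))  # (4, 2025, 3)
```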


Author(s): M. Mehranfar, H. Arefi, F. Alidoost

Abstract. This paper presents a projection-based method for 3D bridge modeling using dense point clouds generated from drone-based images. The proposed workflow consists of hierarchical steps including point cloud segmentation, modeling of individual elements, and merging of the individual models to generate the final 3D model. First, a fuzzy clustering algorithm incorporating height values and geometrical-spectral features is employed to segment the input point cloud into the main bridge elements. In the next step, a 2D projection-based reconstruction technique is developed to generate a 2D model for each element. Next, the 3D models are reconstructed by extruding the 2D models orthogonally to the projection plane. Finally, the reconstruction process is completed by merging the individual 3D models and forming an integrated 3D model of the bridge structure in a CAD format. The results demonstrate the effectiveness of the proposed method in generating 3D models automatically, with a median error of about 0.025 m between the elements’ dimensions in the reference and reconstructed models for two different bridge datasets.
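
The extrusion step can be illustrated directly. The Python sketch below extrudes a 2D element outline orthogonally to its projection plane to obtain the vertices of a 3D prism; the choice of projection plane and the function interface are illustrative assumptions.

```python
import numpy as np

def extrude_polygon(poly_2d: np.ndarray, axis_origin: float, length: float):
    # Extrude a 2D element outline (n, 2) orthogonally to its projection
    # plane, producing the 2n vertices of a 3D prism. Here the projection
    # plane is assumed to be y = axis_origin and extrusion runs along +y.
    n = len(poly_2d)
    near = np.column_stack([poly_2d[:, 0],
                            np.full(n, axis_origin),
                            poly_2d[:, 1]])
    far = near + np.array([0.0, length, 0.0])
    return np.vstack([near, far])

# Example: a rectangular pier cross-section extruded 1.2 m.
section = np.array([[0.0, 0.0], [0.8, 0.0], [0.8, 6.0], [0.0, 6.0]])
vertices = extrude_polygon(section, axis_origin=0.0, length=1.2)
```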


2019, Vol. 33 (1), pp. 04018055
Author(s): H. Tran, K. Khoshelham, A. Kealy, L. Díaz-Vilariño
