CityJSON Building Generation from Airborne LiDAR 3D Point Clouds

2020, Vol 9 (9), pp. 521
Author(s): Gilles-Antoine Nys, Florent Poux, Roland Billen

The relevant insights provided by 3D city models greatly improve smart cities and their management policies. In the urban built environment, buildings are frequently the most studied and modeled features. The CityJSON format offers a lightweight and developer-friendly alternative to CityGML. This paper improves the usability of 3D models by providing an automatic generation method targeting CityJSON, ensuring compactness, expressivity, and interoperability. In addition to a compliance rate in excess of 92% for geometry and topology, the generated model allows the handling of contextual information, such as metadata and refined levels of detail (LoD), in a built-in manner. By breaking down the building-generation process, it creates consistent building objects from the single source of Light Detection and Ranging (LiDAR) point clouds.
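For readers unfamiliar with the target format, the sketch below assembles a minimal CityJSON document containing a single LoD 1 Building with a Solid geometry. It is an illustration of the schema (field names follow the public CityJSON 1.1 specification), not the authors' generation pipeline; the vertex coordinates and object identifier are hypothetical.

```python
import json

# Hypothetical footprint: a 10 m x 10 m x 5 m block (8 vertices).
vertices = [
    [0, 0, 0], [10, 0, 0], [10, 10, 0], [0, 10, 0],   # base
    [0, 0, 5], [10, 0, 5], [10, 10, 5], [0, 10, 5],   # roof
]

city_model = {
    "type": "CityJSON",
    "version": "1.1",
    # Integer vertices plus scale/translate keep files compact.
    "transform": {"scale": [1.0, 1.0, 1.0], "translate": [0.0, 0.0, 0.0]},
    "CityObjects": {
        "building-001": {
            "type": "Building",
            "attributes": {"source": "airborne LiDAR"},
            "geometry": [{
                "type": "Solid",
                "lod": "1",
                # One exterior shell: bottom, top, and four walls,
                # each face a list of rings of vertex indices.
                "boundaries": [[
                    [[3, 2, 1, 0]], [[4, 5, 6, 7]],
                    [[0, 1, 5, 4]], [[1, 2, 6, 5]],
                    [[2, 3, 7, 6]], [[3, 0, 4, 7]],
                ]],
            }],
        }
    },
    "vertices": vertices,
}

print(json.dumps(city_model, indent=2))
```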

Sensors, 2021, Vol 21 (6), pp. 2144
Author(s): Stefan Reitmann, Lorenzo Neumann, Bernhard Jung

Common machine-learning (ML) approaches for scene classification require a large amount of training data. However, for the classification of depth sensor data, in contrast to image data, relatively few databases are publicly available, and the manual generation of semantically labeled 3D point clouds is an even more time-consuming task. To simplify the training data generation process for a wide range of domains, we have developed the BLAINDER add-on package for the open-source 3D modeling software Blender, which enables a largely automated generation of semantically annotated point-cloud data in virtual 3D environments. In this paper, we focus on the classical depth-sensing techniques Light Detection and Ranging (LiDAR) and Sound Navigation and Ranging (Sonar). Within the BLAINDER add-on, different depth sensors can be loaded from presets, customized sensors can be implemented, and different environmental conditions (e.g., the influence of rain or dust) can be simulated. The semantically labeled data can be exported to various 2D and 3D formats and are thus optimized for different ML applications and visualizations. In addition, semantically labeled images can be exported using the rendering functionalities of Blender.
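As a rough illustration of what such a depth-sensor simulation involves (independent of the BLAINDER API, whose internals are not shown here), the sketch below casts a fan of rays from a virtual sensor against a triangle soup and records hit distances. The sensor pose, field of view, and scene are all hypothetical simplifications.

```python
import numpy as np

def ray_triangle(origin, direction, v0, v1, v2, eps=1e-9):
    """Moeller-Trumbore ray/triangle intersection; returns hit distance or None."""
    e1, e2 = v1 - v0, v2 - v0
    p = np.cross(direction, e2)
    det = e1.dot(p)
    if abs(det) < eps:                 # ray parallel to triangle plane
        return None
    t_vec = origin - v0
    u = t_vec.dot(p) / det
    if u < 0 or u > 1:
        return None
    q = np.cross(t_vec, e1)
    v = direction.dot(q) / det
    if v < 0 or u + v > 1:
        return None
    t = e2.dot(q) / det
    return t if t > eps else None

def scan(origin, triangles, fov_deg=90.0, n_rays=181):
    """Cast a horizontal fan of rays; return (angle, distance) per hit."""
    hits = []
    for ang in np.linspace(-fov_deg / 2, fov_deg / 2, n_rays):
        d = np.array([np.cos(np.radians(ang)), np.sin(np.radians(ang)), 0.0])
        dists = [ray_triangle(origin, d, *tri) for tri in triangles]
        dists = [t for t in dists if t is not None]
        if dists:
            hits.append((ang, min(dists)))   # keep the nearest surface
    return hits

# Hypothetical scene: a single wall-like triangle 5 m in front of the sensor.
wall = [(np.array([5.0, -3.0, -1.0]), np.array([5.0, 3.0, -1.0]),
         np.array([5.0, 0.0, 4.0]))]
print(scan(np.zeros(3), wall)[:3])
```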


2020, Vol 12 (3), pp. 543
Author(s): Małgorzata Jarząbek-Rychard, Dong Lin, Hans-Gerd Maas

Targeted energy management and control is becoming an increasing concern in the building sector. Automatic analyses of thermal data, which minimize the subjectivity of the assessment and allow for large-scale inspections, are therefore of high interest. In this study, we propose an approach for the supervised extraction of façade openings (windows and doors) from photogrammetric 3D point clouds attributed with RGB and thermal infrared (TIR) information. The novelty of the proposed approach lies in the combination of thermal information with the other available characteristics of the data for a classification performed directly in 3D space. Images acquired in the visible and thermal infrared spectra serve as input data for camera pose estimation and the reconstruction of the 3D scene geometry. To investigate the relevance of different information types to the classification performance, a Random Forest algorithm is applied to various sets of computed features. The best feature combination is then used as input to a Conditional Random Field, which enables us to incorporate contextual information and consider the interaction between points. An evaluation executed on a per-point level shows that fusing all available information types, together with context consideration, allows us to extract objects with 90% completeness and 95% correctness. A corresponding per-object evaluation shows 97% completeness and 88% accuracy.
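A minimal sketch of the per-point classification stage, assuming scikit-learn and a feature matrix that stacks geometric, RGB, and TIR attributes per point; the feature layout, labels, and split are illustrative stand-ins rather than the authors' exact setup, and the CRF refinement stage is omitted.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)

# Hypothetical per-point features: [z, planarity, R, G, B, temperature].
X = rng.random((5000, 6))
y = rng.integers(0, 2, 5000)          # 0 = facade, 1 = opening (toy labels)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_tr, y_tr)

print(classification_report(y_te, clf.predict(X_te)))
# Feature importances hint at which cues (e.g., TIR) drive the classification.
print(clf.feature_importances_)
```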


2019, Vol 11 (10), pp. 1204
Author(s): Yue Pan, Yiqing Dong, Dalei Wang, Airong Chen, Zhen Ye

Three-dimensional (3D) digital technology is essential to the maintenance and monitoring of cultural heritage sites. In the field of bridge engineering, 3D models generated from point clouds of existing bridges are drawing increasing attention. Currently, the widespread use of unmanned aerial vehicles (UAVs) provides a practical solution for generating 3D point clouds as well as models, which can drastically reduce the manual effort and cost involved. In this study, we present a semi-automated framework for generating structural surface models of heritage bridges. Specifically, we propose to tackle this challenge via a novel top-down method for segmenting main bridge components, combined with rule-based classification, to produce labeled 3D models from UAV photogrammetric point clouds. The point clouds of the heritage bridge are generated from the captured UAV images through a structure-from-motion workflow. A segmentation method is developed based on a supervoxel structure and global graph optimization, which can effectively separate bridge components based on geometric features. Then, recognition using a classification tree and bridge geometry is employed to identify the different structural elements among the obtained segments. Finally, surface modeling is conducted to generate surface models of the recognized elements. Experiments on two bridges in China demonstrate the potential of the presented structural model reconstruction method using UAV photogrammetry and point cloud processing for the 3D digital documentation of heritage bridges. Using given markers, the reconstruction error of the point clouds can be as small as 0.4%. Moreover, the precision and recall of the segmentation results on the test data both exceed 0.8, and a recognition accuracy better than 0.8 is achieved.
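The rule-based recognition step can be pictured with a small sketch like the one below, which labels pre-computed segments by coarse geometric cues (dominant surface normal and extent). The thresholds and class names are hypothetical stand-ins for the paper's classification tree.

```python
import numpy as np

def classify_segment(points):
    """Label a bridge segment from coarse geometric cues (illustrative rules)."""
    pts = np.asarray(points, dtype=float)
    centered = pts - pts.mean(axis=0)
    # PCA: the smallest-eigenvalue eigenvector approximates the surface normal.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    normal = vt[-1]
    extent = pts.max(axis=0) - pts.min(axis=0)

    if abs(normal[2]) > 0.9 and extent[2] < 1.0:
        return "deck"          # near-horizontal, thin slab
    if abs(normal[2]) < 0.3 and extent[2] > max(extent[0], extent[1]):
        return "column"        # vertical surface, taller than wide
    return "other"

# Hypothetical flat, thin segment: should be labelled "deck".
deck_like = np.random.default_rng(1).random((200, 3)) * [20.0, 5.0, 0.3]
print(classify_segment(deck_like))
```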


Author(s): I.-C. Lee, F. Tsai

A series of panoramic images is usually used to generate a 720° panorama image. Although panoramic images are typically used for establishing tour-guiding systems, in this research we demonstrate the potential of using panoramic images acquired from multiple sites to create not only 720° panoramas, but also three-dimensional (3D) point clouds and 3D indoor models. Since 3D modeling is one of the goals of this research, the locations of the panoramic sites needed to be carefully planned in order to maintain a robust result for close-range photogrammetry. After the images are acquired, they are processed into 720° panoramas, which can be used directly in panorama guiding systems or other applications.

In addition to these straightforward applications, interior orientation parameters can also be estimated while generating the 720° panorama. These parameters are the focal length, principal point, and lens radial distortion. The panoramic images can then be processed with close-range photogrammetry procedures to extract the exterior orientation parameters and generate 3D point clouds. In this research, VisualSFM, a structure-from-motion software package, is used to estimate the exterior orientation, and the CMVS toolkit is used to generate 3D point clouds. Next, the 3D point clouds are used as references to create building interior models. Trimble SketchUp was used to build the models, and the 3D point cloud supported the determination of the locations of building objects through a plane-finding procedure. In the texturing process, the panorama images are used as the data source for creating model textures. The resulting 3D indoor model was used as an augmented reality model, replacing the guide map or floor plan commonly used in an online tour-guiding system.

The 3D indoor model generation procedure has been utilized in two research projects: a cultural heritage site at Kinmen, and the Taipei Main Station pedestrian zone guidance and navigation system. The results presented in this paper demonstrate the potential of using panoramic images to generate 3D point clouds and 3D models. However, it is currently a manual and labor-intensive process, and research is being carried out to increase the degree of automation of these procedures.
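The interior orientation parameters mentioned above can be made concrete with a short sketch: the widely used Brown-Conrady radial model maps ideal image coordinates to distorted ones. The coefficients below are hypothetical; the paper does not report specific values.

```python
import numpy as np

def apply_radial_distortion(xy, k1, k2, principal_point=(0.0, 0.0)):
    """Brown-Conrady radial distortion: x_d = x * (1 + k1*r^2 + k2*r^4)."""
    xy = np.asarray(xy, dtype=float)
    centered = xy - principal_point
    r2 = np.sum(centered**2, axis=-1, keepdims=True)
    distorted = centered * (1.0 + k1 * r2 + k2 * r2**2)
    return distorted + principal_point

# Hypothetical coefficients: mild barrel distortion (k1 < 0).
pts = np.array([[0.5, 0.0], [0.0, 0.8], [0.6, 0.6]])   # normalized coords
print(apply_radial_distortion(pts, k1=-0.15, k2=0.02))
```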


Author(s): Andreas Kuhn, Hai Huang, Martin Drauschke, Helmut Mayer

High-resolution consumer cameras on unmanned aerial vehicles (UAVs) allow for the cheap acquisition of highly detailed images, e.g., of urban regions. Via image registration by means of Structure from Motion (SfM) and Multi-View Stereo (MVS), the automatic generation of huge amounts of 3D points with a relative accuracy in the centimeter range is possible. Applications such as semantic classification have a need for accurate 3D point clouds but do not benefit from an extremely high resolution/density. In this paper, we therefore propose a fast fusion of high-resolution 3D point clouds based on occupancy grids, the result of which is used for semantic classification. In contrast to state-of-the-art classification methods, we accept a certain percentage of outliers, arguing that they can be considered in the classification process when a per-point belief is determined in the fusion process. To this end, we employ an octree-based fusion which allows for the derivation of outlier probabilities. The probabilities give a belief for every 3D point, which is essential for the semantic classification to take measurement noise into account. For an example point cloud with half a billion 3D points (cf. Figure 1), we show that our method can reduce runtime as well as improve classification accuracy, and that it offers high scalability for large datasets.
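To illustrate the idea of deriving a per-point belief from occupancy, the sketch below hashes points into a uniform voxel grid and treats sparsely supported voxels as likely outliers. The paper uses an octree and a probabilistic model; the voxel size and normalization here are hypothetical simplifications.

```python
import numpy as np
from collections import Counter

def occupancy_belief(points, voxel_size=0.5):
    """Per-point support count from a uniform voxel grid (octree stand-in)."""
    keys = [tuple(k) for k in np.floor(points / voxel_size).astype(int)]
    counts = Counter(keys)
    support = np.array([counts[k] for k in keys], dtype=float)
    # Normalize to (0, 1]: isolated points get low belief, dense voxels high.
    return support / support.max()

rng = np.random.default_rng(42)
surface = rng.normal(0.0, 0.05, (1000, 3)) + [0, 0, 1]   # dense cluster
noise = rng.uniform(-5, 5, (20, 3))                       # scattered outliers
pts = np.vstack([surface, noise])

belief = occupancy_belief(pts)
print("mean belief, surface vs noise:",
      belief[:1000].mean(), belief[1000:].mean())
```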


Author(s): W. Nguatem, M. Drauschke, H. Mayer

We present a workflow for the automatic generation of building models with levels of detail (LOD) 1 to 3 according to the CityGML standard (Gröger et al., 2012). We start by orienting unsorted image sets employing the method of Mayer et al. (2012), we compute depth maps using semi-global matching (SGM) (Hirschmüller, 2008), and we fuse these depth maps to reconstruct dense 3D point clouds (Kuhn et al., 2014). Based on planes segmented from these point clouds, we have developed a stochastic method for roof model selection (Nguatem et al., 2013) and window model selection (Nguatem et al., 2014). We demonstrate our workflow up to the export into CityGML.
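Plane segmentation is the pivot of this workflow. As a rough sketch of how planes can be extracted from a dense cloud, the RANSAC loop below fits one dominant plane; the paper's segmentation and the stochastic model selection of Nguatem et al. are more involved, and the threshold and iteration count here are hypothetical.

```python
import numpy as np

def ransac_plane(points, n_iters=500, threshold=0.05, rng=None):
    """Fit one dominant plane; returns (normal, d, inlier mask)."""
    rng = rng or np.random.default_rng(0)
    best_mask, best, best_model = None, -1, (np.array([0.0, 0.0, 1.0]), 0.0)
    for _ in range(n_iters):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(normal)
        if norm < 1e-12:                       # degenerate sample
            continue
        normal /= norm
        d = -normal.dot(p0)
        mask = np.abs(points @ normal + d) < threshold
        if mask.sum() > best:
            best, best_mask = mask.sum(), mask
            best_model = (normal, d)
    return (*best_model, best_mask)

# Hypothetical roof-like cloud: a tilted plane plus measurement noise.
rng = np.random.default_rng(3)
xy = rng.uniform(0, 10, (2000, 2))
z = 0.3 * xy[:, 0] + rng.normal(0, 0.02, 2000)
cloud = np.column_stack([xy, z])
normal, d, inliers = ransac_plane(cloud)
print(f"{inliers.sum()} inliers; normal = {np.round(normal, 3)}")
```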


Author(s): T. Shinohara, H. Xiu, M. Matsuoka

Abstract. This study introduces a novel image-to-3D-point-cloud translation method based on a conditional generative adversarial network that creates a large-scale 3D point cloud. It can generate, from aerial images, point clouds like those observed via airborne LiDAR. The network is composed of an encoder that produces latent features of the input images, a generator that translates latent features into fake point clouds, and a discriminator that classifies point clouds as real or fake. The encoder is a pre-trained ResNet; to overcome the difficulty of generating 3D point clouds of an outdoor scene, we use a FoldingNet with features from the ResNet. After a fixed number of iterations, our generator can produce fake point clouds that correspond to the input image. Experimental results show that our network can learn and generate certain point clouds using the data from the 2018 IEEE GRSS Data Fusion Contest.
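The folding idea can be sketched compactly: a fixed 2D grid is concatenated with the image codeword and deformed by shared MLPs into a 3D point set. Below is a minimal FoldingNet-style decoder assuming PyTorch; the layer sizes are hypothetical, and the discriminator and adversarial training loop are omitted.

```python
import torch
import torch.nn as nn

class FoldingDecoder(nn.Module):
    """Fold a fixed 2D grid into a 3D point cloud, conditioned on a codeword."""
    def __init__(self, code_dim=512, grid_side=45):
        super().__init__()
        # Regular 2D grid in [-1, 1]^2; grid_side^2 output points.
        lin = torch.linspace(-1.0, 1.0, grid_side)
        u, v = torch.meshgrid(lin, lin, indexing="ij")
        self.register_buffer("grid", torch.stack([u, v], dim=-1).reshape(-1, 2))
        self.fold1 = nn.Sequential(                 # codeword + 2D grid -> 3D
            nn.Linear(code_dim + 2, 512), nn.ReLU(),
            nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 3))
        self.fold2 = nn.Sequential(                 # refine: codeword + 3D -> 3D
            nn.Linear(code_dim + 3, 512), nn.ReLU(),
            nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 3))

    def forward(self, codeword):
        b, n = codeword.shape[0], self.grid.shape[0]
        code = codeword.unsqueeze(1).expand(b, n, -1)
        grid = self.grid.unsqueeze(0).expand(b, n, -1)
        pts = self.fold1(torch.cat([code, grid], dim=-1))   # first folding
        return self.fold2(torch.cat([code, pts], dim=-1))   # second folding

# Hypothetical codeword, e.g., pooled ResNet features of one aerial image.
decoder = FoldingDecoder()
points = decoder(torch.randn(2, 512))
print(points.shape)   # torch.Size([2, 2025, 3])
```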


Author(s): Sara Shirowzhan, John Trinder, Paul Osmond

Monitoring sustainability of urban form as a 3D phenomenon over time is crucial in the era of smart cities for better planning of the future, and for such a monitoring system, appropriate tools, metrics, methodologies and time series 3D data are required. While accurate time series 3D data are becoming available, a lack of 3D sustainable urban form (3D SUF) metrics, appropriate methodologies and technical problems of processing time series 3D data has resulted in few studies on the assessment of 3D SUF over time. In this chapter, we review volumetric building metrics currently under development and demonstrate the technical problems associated with their validation based on time series airborne lidar data. We propose new metrics for application in spatial and temporal 3D SUF assessment. We also suggest a new approach in processing time series airborne lidar to detect three-dimensional changes of urban form. Using this approach and the developed metrics, we detected a decreased volume of vegetation and new areas prepared for the construction of taller buildings. These 3D changes and the proposed metrics can be used to numerically measure and compare urban areas in terms of trends against or in favor of sustainability goals for caring for the environment.
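One simple volumetric metric of the kind discussed can be sketched as the integrated height difference between two DSM rasters from different epochs. The cell size and rasters below are hypothetical, and the chapter's actual metrics and change-detection approach are richer than this.

```python
import numpy as np

def volume_change(dsm_t0, dsm_t1, cell_size=1.0):
    """Net volume change between two co-registered DSM epochs (m^3)."""
    diff = dsm_t1 - dsm_t0                 # per-cell height change (m)
    return diff.sum() * cell_size**2       # integrate over cell area

rng = np.random.default_rng(7)
dsm_2010 = rng.uniform(0.0, 2.0, (100, 100))   # hypothetical epoch 1
dsm_2020 = dsm_2010.copy()
dsm_2020[40:60, 40:60] += 15.0                 # a new tall building
dsm_2020[10:30, 10:30] -= 1.5                  # removed vegetation

print(f"net volume change: {volume_change(dsm_2010, dsm_2020):.0f} m^3")
```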


2021, Vol 7 (2), pp. 57-74
Author(s): Lamyaa Gamal EL-Deen Taha, A. I. Ramzi, A. Syarawi, A. Bekheet

Until recently, the most highly accurate digital surface models were obtained from airborne LiDAR. With the development of a new generation of large-format digital photogrammetric aerial cameras, a fully digital photogrammetric workflow became possible. Digital airborne images are sources for elevation extraction and orthophoto generation. This research is concerned with the generation of digital surface models and orthophotos as applications of high-resolution images. The following steps were performed. Benchmark data from LiDAR and a digital aerial camera were used. Firstly, image orientation and aerial triangulation (AT) were performed. Then, a digital surface model (DSM) was generated automatically from the digital aerial camera imagery. Thirdly, a true digital orthophoto was generated from the digital aerial camera imagery, and a second orthoimage was generated using the LiDAR DSM. The Leica Photogrammetry Suite (LPS) module of Erdas Imagine 2014 was utilized for processing. The resulting orthoimages from both techniques were then mosaicked. The results show that the DSM produced automatically from the digital aerial camera yields much denser photogrammetric 3D point clouds than the LiDAR 3D point clouds, and the true orthoimage produced by the second approach was found to be better than that produced by the first approach. Five approaches were tested for classification of the best orthorectified image mosaic, using subpixel-based (neural network) and pixel-based (minimum distance and maximum likelihood) classifiers. Multiple cues were extracted, such as texture (entropy, mean), the digital elevation model, the digital surface model, the normalized digital surface model (nDSM), and the intensity image. The contributions of the individual cues to the classification were evaluated. The best cue integration was intensity (pan) + nDSM + entropy, followed by intensity (pan) + nDSM + mean, then intensity image + mean + entropy, then the DSM image with two texture measures (mean and entropy), followed by the colour image. Integration with height data increases the accuracy, as does integration with entropy texture. Across the fifteen resulting classification cases, the maximum likelihood classifier performed best, followed by minimum distance and then the neural network classifier. We attribute this to the fine resolution of the digital camera imagery; the subpixel (neural network) classifier is not well suited to classifying digital aerial camera images.
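The nDSM cue used above is simply the surface model with the terrain removed. A minimal sketch, assuming co-registered rasters held as numpy arrays (the DTM here is a hypothetical stand-in for however the terrain was modeled):

```python
import numpy as np

def normalized_dsm(dsm, dtm):
    """nDSM = DSM - DTM: object heights above ground, clipped at zero."""
    return np.clip(dsm - dtm, 0.0, None)

# Hypothetical co-registered 1 m rasters.
dtm = np.full((4, 4), 100.0)      # flat terrain at 100 m elevation
dsm = dtm.copy()
dsm[1:3, 1:3] = 112.0             # a 12 m building
print(normalized_dsm(dsm, dtm))
```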


Author(s): Xinhai Liu, Zhizhong Han, Yu-Shen Liu, Matthias Zwicker

Exploring contextual information in the local region is important for shape understanding and analysis. Existing studies often employ hand-crafted or explicit ways to encode contextual information of local regions. However, it is hard to capture fine-grained contextual information in hand-crafted or explicit manners, such as the correlation between different areas in a local region, which limits the discriminative ability of learned features. To resolve this issue, we propose a novel deep learning model for 3D point clouds, named Point2Sequence, to learn 3D shape features by capturing fine-grained contextual information in a novel implicit way. Point2Sequence employs a novel sequence learning model for point clouds to capture the correlations by aggregating multi-scale areas of each local region with attention. Specifically, Point2Sequence first learns the feature of each area scale in a local region. Then, it captures the correlation between area scales in the process of aggregating all area scales using a recurrent neural network (RNN) based encoder-decoder structure, where an attention mechanism is proposed to highlight the importance of different area scales. Experimental results show that Point2Sequence achieves state-of-the-art performance in shape classification and segmentation tasks.
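The attention-based aggregation of multi-scale area features can be sketched as below: a learned scalar score per area scale weights the features before they are summed. Dimensions are hypothetical, assuming PyTorch, and the full RNN encoder-decoder of Point2Sequence is not reproduced.

```python
import torch
import torch.nn as nn

class ScaleAttention(nn.Module):
    """Aggregate multi-scale area features with learned attention weights."""
    def __init__(self, feat_dim=128):
        super().__init__()
        self.score = nn.Linear(feat_dim, 1)   # scalar score per area scale

    def forward(self, scale_feats):
        # scale_feats: (batch, n_scales, feat_dim), one row per area scale.
        weights = torch.softmax(self.score(scale_feats), dim=1)
        return (weights * scale_feats).sum(dim=1), weights.squeeze(-1)

# Hypothetical features for 4 area scales of one local region (batch of 8).
att = ScaleAttention()
agg, w = att(torch.randn(8, 4, 128))
print(agg.shape, w.shape)   # torch.Size([8, 128]) torch.Size([8, 4])
```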

