AUTOMATIC 3D BUILDING MODEL GENERATION USING DEEP LEARNING METHODS BASED ON CITYJSON AND 2D FLOOR PLANS

Author(s):  
R. G. Kippers
M. Koeva
M. van Keulen
S. J. Oude Elberink

Abstract. In the past decade, a lot of effort has been put into applying digital innovations to building life cycles. 3D models have proven to be efficient for decision making, scenario simulation and 3D data analysis during this life cycle. Creating such a digital representation of a building can be a labour-intensive task, depending on the desired scale and level of detail (LOD). This research aims to create a new automatic deep-learning-based method for building model reconstruction, combining exterior and interior data sources: 1) 3D BAG, 2) archived floor plan images. To reconstruct 3D building models from the two data sources, an innovative combination of methods is proposed. Deep learning techniques are used to obtain the information needed from the floor plan images (walls, openings and labels), and post-processing techniques transform the data into the required format. A data fusion process then merges the extracted 2D data with the 3D exterior. The literature review found no prior research on the automatic integration of CityGML/JSON and floor plan images, so this method is a first approach to this data integration.
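One step of the fusion described above is registering wall geometry extracted from a floor plan image (in pixel coordinates) to the footprint of the 3D BAG exterior model (in metric coordinates). The sketch below is an illustration of that idea only, not the authors' code; the function names and the uniform-scale assumption are hypothetical.

```python
# Hypothetical sketch: map floor-plan wall polylines (pixels) into the
# metric coordinate frame of a building footprint via a scale + translation.

def fit_similarity(src_bbox, dst_bbox):
    """Derive scale and translation mapping one axis-aligned bbox onto another."""
    (sx0, sy0, sx1, sy1) = src_bbox
    (dx0, dy0, dx1, dy1) = dst_bbox
    # Assume the floor plan preserves aspect ratio; use a single scale factor.
    scale = (dx1 - dx0) / (sx1 - sx0)
    return scale, (dx0 - sx0 * scale, dy0 - sy0 * scale)

def register_walls(walls_px, plan_bbox, footprint_bbox):
    """Transform wall polylines from pixel space into metric footprint space."""
    scale, (tx, ty) = fit_similarity(plan_bbox, footprint_bbox)
    return [[(x * scale + tx, y * scale + ty) for (x, y) in wall]
            for wall in walls_px]

# Example: a 1000 x 500 px plan mapped onto a 20 m x 10 m footprint.
walls = register_walls([[(0, 0), (1000, 0)]], (0, 0, 1000, 500), (0, 0, 20, 10))
```

In practice the registration would also need rotation and a robust fit (e.g. against detected exterior walls), but the bounding-box alignment shows the shape of the problem.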

2014
Vol 71 (4)
Author(s):
R. Akmalia
H. Setan
Z. Majid
D. Suwardhi

Nowadays, 3D city models are used by an increasing number of applications. Most applications require not only geometric but also semantic information. As a standard and tool for 3D city models, CityGML provides a method for storing and managing both geometric and semantic information. Moreover, it also provides a multi-scale representation of 3D building models for efficient visualization. In CityGML, building models are represented in five levels of detail (LOD), from LOD0 to LOD4, each with different accuracy and detail requirements for visualization. Usually, several data sources are integrated to obtain a multi-LOD 3D building model. For example, LiDAR data is used for generating LOD0, LOD1 and LOD2, while close-range photogrammetry data is used for generating the more detailed LOD3 and LOD4 models. However, using additional data sources increases cost and is time-consuming. Since the development of terrestrial laser scanning (TLS), data collection for detailed models can be conducted in a relatively short time compared to photogrammetry, and the resulting point clouds can be used for generating multi-LOD building models. This paper gives an overview of the representation of 3D building models in CityGML and of a method for generating multi-LOD buildings from TLS data. An experiment was conducted using TLS. Following the CityGML standard, point clouds from TLS were processed into 3D building models at different levels of detail, which were then converted into an XML schema for use in CityGML. The final result shows that TLS can be used for generating 3D building models in LOD1, LOD2 and LOD3.
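The simplest of the LODs mentioned above, LOD1, represents a building as a prismatic block: a 2D footprint extruded to a single height. A minimal sketch of deriving such a block from a point cloud (not the paper's actual workflow, which reconstructs real footprints and roof shapes) could look like this:

```python
# Minimal LOD1-style sketch: take the axis-aligned 2D bounding footprint of a
# point cloud and extrude it by the point height range. Real LOD1 generation
# would use the actual building footprint, not a bounding box.

def lod1_from_points(points):
    """points: iterable of (x, y, z). Returns a footprint ring and a height."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    zs = [p[2] for p in points]
    ground, roof = min(zs), max(zs)
    footprint = [(min(xs), min(ys)), (max(xs), min(ys)),
                 (max(xs), max(ys)), (min(xs), max(ys))]
    return footprint, roof - ground

footprint, height = lod1_from_points([(0, 0, 0), (10, 0, 0), (10, 8, 6), (0, 8, 6)])
```

Higher LODs (LOD2/LOD3) would additionally segment the cloud into roof and wall planes before fitting surfaces.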


2016
Vol 5 (3)
pp. 47-67
Author(s):
Rafika Hajji
Roland Billen

The need for 3D city models increases day by day. However, 3D modelling still faces impediments to its generalization, so new solutions such as collaboration should be investigated. The paper presents a new vision of collaboration applied to 3D modelling through the definition of the concept of a 3D collaborative model. It highlights the basic questions to be considered for the definition and development of that model, then argues for the importance of reusing 2D data as a promising way to reconstruct 3D data and to upgrade to integrated 3D solutions in the future. This idea is supported by a case study demonstrating how 2D/2.5D data collected from different providers in the Walloon Region in Belgium can be integrated and re-engineered to match the specifications of a 3D building model compatible with the CityGML standard.


Author(s):  
V. A. Knyaz
A. A. Maksimov
M. M. Novikov
A. V. Urmashova

Abstract. Many anthropological studies require the identification and measurement of craniometric and cephalometric landmarks, which provide valuable information about the shape of a head. This information is necessary for morphometric analysis, face approximation, craniofacial identification, etc. Traditional techniques use special anthropological tools to perform the required measurements, with landmark identification usually made by an expert anthropologist. Modern optical 3D measurement techniques such as photogrammetry, computed tomography and laser 3D scanning provide new possibilities for acquiring accurate, high-resolution 2D and 3D data, thus creating new conditions for anthropological data analysis. Traditional manual point measurements can be replaced by analysis of accurate textured 3D models, which allow more information to be retrieved about the studied object and data to be easily shared for independent analysis. The paper presents a deep learning technique for anthropological landmark identification and accurate 3D measurement, and describes photogrammetric methods and their practical implementation in an automatic system for accurate digital 3D reconstruction of anthropological objects.
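A core photogrammetric step implied above is lifting 2D landmark detections into 3D: once a landmark is located in two calibrated views, its 3D position follows by triangulation. The sketch below uses textbook linear (DLT) triangulation with illustrative unit-focal cameras; it is not the authors' system.

```python
import numpy as np

# Linear (DLT) triangulation of a 3D landmark from its pixel coordinates in
# two calibrated views. P1, P2 are 3x4 camera projection matrices.

def triangulate(P1, P2, uv1, uv2):
    """Solve A X = 0 for the homogeneous 3D point seen at uv1 and uv2."""
    A = np.stack([
        uv1[0] * P1[2] - P1[0],
        uv1[1] * P1[2] - P1[1],
        uv2[0] * P2[2] - P2[0],
        uv2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]            # dehomogenize

P1 = np.hstack([np.eye(3), np.zeros((3, 1))])               # camera at origin
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0], [0]])])   # 1 m baseline
point = triangulate(P1, P2, (0.25, 0.25), (-0.25, 0.25))
```

A deep network would supply the 2D detections (uv1, uv2); the geometry then turns them into measurable 3D coordinates.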


2019
Vol 11 (7)
pp. 847
Author(s):
Eleonora Grilli
Fabio Remondino

In recent years, the use of 3D models in cultural and archaeological heritage for documentation and dissemination purposes has been increasing. Associating heterogeneous information with 3D data by means of automated segmentation and classification methods can help to characterize, describe and better interpret the object under study. Indeed, the high complexity of 3D data, along with the large diversity of heritage assets themselves, has made segmentation and classification active research topics. Although machine learning methods have brought great progress in this respect, few advances have been made for cultural heritage 3D data. Starting from the existing literature, this paper aims to develop, explore and validate reliable and efficient automated procedures for the classification of 3D data (point clouds or polygonal mesh models) of heritage scenarios. In more detail, the proposed solution works on 2D data (“texture-based” approach) or directly on the 3D data (“geometry-based” approach) with supervised or unsupervised machine learning strategies. The method was applied and validated on four different archaeological/architectural scenarios. Experimental results demonstrate that the proposed approach is reliable, replicable and effective for restoration and documentation purposes, providing metric information, e.g. of damaged areas to be restored.
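The geometry-based supervised strategy mentioned above amounts to computing per-point features and feeding them to a classifier. As a toy stand-in (the paper's actual features and classifiers are richer), a nearest-centroid rule over two invented features illustrates the shape of the approach:

```python
import numpy as np

# Toy geometry-based supervised classification: each point is described by a
# feature vector (here, height and a roughness measure, both invented) and
# assigned to the class whose training centroid is nearest.

def fit_centroids(features, labels):
    """Mean feature vector per class."""
    classes = sorted(set(labels))
    return {c: features[np.array(labels) == c].mean(axis=0) for c in classes}

def classify(features, centroids):
    """Assign each feature vector to the nearest class centroid."""
    names = list(centroids)
    dists = np.stack([np.linalg.norm(features - centroids[c], axis=1)
                      for c in names])
    return [names[i] for i in dists.argmin(axis=0)]

train = np.array([[0.1, 0.9], [0.2, 0.8], [5.0, 0.1], [5.2, 0.2]])
labels = ["floor", "floor", "wall_top", "wall_top"]
pred = classify(np.array([[0.15, 0.85], [5.1, 0.15]]), fit_centroids(train, labels))
```

Replacing the centroid rule with, say, a random forest over covariance-based features gives the kind of pipeline the paper validates.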


Author(s):  
A. Jamali
A. A. Rahman
P. Boguslawski
C. M. Gold

Indoor navigation is important for various applications such as disaster management and safety analysis. In the last decade, indoor environments have been a focus of wide research, including techniques for acquiring indoor data (e.g. terrestrial laser scanning), 3D indoor modelling and 3D indoor navigation models. In this paper, an automated 3D topological indoor network generated from inaccurate 3D building models is proposed. Normally, deriving a 3D indoor navigation network requires accurate 3D models with no errors (e.g. gaps, intersections), and two cells (e.g. rooms, corridors) must touch each other for their connection to be established. The presented 3D modelling of the indoor navigation network is instead based on surveying control points and is less dependent on the 3D geometrical building model. To reduce the time and cost of the indoor data acquisition process, the Trimble LaserAce 1000 was used as the surveying instrument. The modelling results were validated against an accurate geometry of the indoor environment acquired with a Trimble M3 total station.
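The topological network described above can be represented independently of exact geometry: cells become graph nodes, surveyed connections (e.g. doorways) become edges, and navigation is a graph search. A minimal sketch with invented room names:

```python
from collections import deque

# Topological indoor network: cells (rooms, corridors) as nodes, surveyed
# connections as undirected edges; routing is breadth-first search.

def build_network(connections):
    """connections: iterable of (cell_a, cell_b) pairs -> adjacency dict."""
    graph = {}
    for a, b in connections:
        graph.setdefault(a, set()).add(b)
        graph.setdefault(b, set()).add(a)
    return graph

def shortest_route(graph, start, goal):
    """BFS over cells; returns the shortest list of cells, or None."""
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph.get(path[-1], ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

net = build_network([("room101", "corridor"), ("corridor", "room102"),
                     ("corridor", "stairs")])
route = shortest_route(net, "room101", "room102")
```

Because connectivity comes from surveyed control points rather than from touching 3D cells, small geometric errors (gaps, overlaps) in the building model do not break the network.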


2021
Vol 14 (1)
pp. 50
Author(s):
Haiqing He
Jing Yu
Penggen Cheng
Yuqian Wang
Yufeng Zhu
...

Most 3D CityGML building models in street-view maps (e.g., Google, Baidu) lack texture information, which is generally used to reconstruct real-scene 3D models by photogrammetric techniques such as unmanned aerial vehicle (UAV) mapping. However, due to simplified building geometry and inaccurate location information, the commonly used photogrammetric method with a single data source cannot satisfy the requirements of texture mapping for CityGML building models. Furthermore, a single data source usually suffers from problems such as object occlusion. To alleviate these problems, we propose a novel approach that achieves CityGML building model texture mapping by multiview coplanar extraction from UAV remotely sensed or terrestrial images. We utilize a deep convolutional neural network to filter out occluding objects (e.g., pedestrians, vehicles and trees) and obtain the building-texture distribution. Point-line-based features are extracted to characterize multiview coplanar textures in 2D space under the constraint of a homography matrix, and geometric topology is subsequently applied to optimize texture boundaries using a strategy that combines the Hough transform with iterative least-squares methods. Experimental results show that the proposed approach enables texture mapping of building façades from 2D terrestrial images without exterior orientation information; that is, unlike the photogrammetric method, the collinearity equation is not essential for capturing texture information. In addition, the proposed approach significantly reduces blurred and distorted textures in building models, making it suitable for automatic and rapid texture updates.
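The homography constraint mentioned above exploits the fact that a planar façade maps between two views by a single 3x3 matrix H. A standard DLT estimate of H from four point correspondences (a textbook method, not the paper's full pipeline, which adds normalization and robust matching) looks like this:

```python
import numpy as np

# Estimate the homography H mapping coplanar facade points between two views
# from four point correspondences, via the direct linear transform (DLT).

def estimate_homography(src, dst):
    """src, dst: four (x, y) pairs each. Returns H scaled so H[2,2] == 1."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.array(A, dtype=float))
    H = Vt[-1].reshape(3, 3)     # null-space vector is H up to scale
    return H / H[2, 2]

src = [(0, 0), (1, 0), (1, 1), (0, 1)]
dst = [(0, 0), (2, 0), (2, 2), (0, 2)]   # here: a pure scaling by 2
H = estimate_homography(src, dst)
```

With H known, texture pixels from a terrestrial image can be warped onto the façade plane without exterior orientation parameters, which is the point the abstract makes about the collinearity equation.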


Author(s):  
Rossana Estanqueiro
José António Tenedório
Carla Rebelo
João Pedro Marques

Urbanism has mainly used 2D data both for urban analysis and diagnosis and for presenting proposals for changes to the whole city or parts of it. Even the production of urban indicators, for example quantifying the existing green area relative to the resident population, is regularly based on area and rarely on volume. This situation is mainly explained by the slowness and costs associated with obtaining 3D data. The recent development of data collection by unmanned aerial vehicles (UAVs) has triggered a change in this scenario. This chapter presents the UAV data acquisition and processing chain, analyses the positional accuracy of UAV data processing using ground control point (GCP) measurements obtained from GNSS, demonstrates how positional accuracy assessment and quality control of the UAV workflow ensure the accuracy of derived UAV geospatial products, and demonstrates the usability of 3D models in a theoretical 3D urbanism context.
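The positional-accuracy assessment described above typically reduces to comparing GNSS-measured check points against their coordinates in the processed UAV product and reporting an RMSE. A minimal sketch with invented coordinates:

```python
import math

# Horizontal RMSE between GNSS reference points and the same points measured
# in the processed UAV product. Coordinates below are invented examples.

def horizontal_rmse(measured, reference):
    """Root-mean-square of 2D distances between paired (x, y) points."""
    sq = [(mx - rx) ** 2 + (my - ry) ** 2
          for (mx, my), (rx, ry) in zip(measured, reference)]
    return math.sqrt(sum(sq) / len(sq))

rmse = horizontal_rmse([(100.03, 200.04), (150.00, 250.00)],
                       [(100.00, 200.00), (150.00, 250.00)])
```

A full quality-control report would add the vertical RMSE and per-point residuals, but the horizontal figure is the headline number for most UAV mapping products.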


2003
Vol 42 (05)
pp. 215-219
Author(s):
G. Platsch
A. Schwarz
K. Schmiedehausen
B. Tomandl
W. Huk
...

Summary: Aim: Although the fusion of images from different modalities may improve diagnostic accuracy, it is rarely used in routine clinical work due to logistic problems. We therefore evaluated the performance of, and time needed for, fusing MRI and SPECT images using semiautomated dedicated software. Patients, material and methods: In 32 patients, regional cerebral blood flow was measured using 99mTc ethyl cysteinate dimer (ECD) and the three-headed SPECT camera MultiSPECT 3. MRI scans of the brain were performed using either a 0.2 T Open or a 1.5 T Sonata scanner. Twelve of the MRI data sets were acquired with a 3D T1-weighted MPRAGE sequence, twenty with a 2D acquisition technique and various echo sequences. Image fusion was performed on a Syngo workstation using an entropy-minimizing algorithm, operated by an experienced user of the software, and the fusion results were classified. We measured the time needed for the automated fusion procedure and, where necessary, the time for manual realignment after an automated but insufficient fusion. Results: The mean time of the automated fusion procedure was 123 s; it was significantly shorter for the 2D than for the 3D MRI data sets. For four of the 2D data sets and two of the 3D data sets, an optimal fit was reached with the automated approach; the remaining 26 data sets required manual correction. The sum of the time required for automated fusion and manual correction averaged 320 s (50-886 s). Conclusion: The fusion of 3D MRI data sets took significantly longer than that of the 2D MRI data. The automated fusion tool delivered an optimal fit in 20% of cases; in the remaining 80%, manual correction was necessary. Nevertheless, each of the 32 SPECT data sets could be merged with the corresponding MRI data in less than 15 min, which seems acceptable for routine clinical use.
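The entropy-minimizing registration mentioned above rests on a simple observation: when two images are correctly aligned, their joint intensity histogram is concentrated and therefore has low entropy. The toy sketch below (synthetic 2x2 arrays, not clinical data) computes the quantity such an algorithm would minimize over candidate alignments:

```python
import numpy as np

# Joint-histogram entropy of two images: low when intensities co-occur
# consistently (good alignment), high when the relationship is scrambled.

def joint_entropy(img_a, img_b, bins=8):
    """Shannon entropy (bits) of the 2D joint intensity histogram."""
    hist, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

a = np.array([[0.0, 1.0], [0.0, 1.0]])
aligned = joint_entropy(a, a)              # intensities co-occur consistently
scrambled = joint_entropy(a, a[::-1, ::-1].T)
```

A registration search would apply candidate rigid transforms to the SPECT volume and keep the one minimizing this entropy against the MRI volume.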


Author(s):  
Author(s):
Lijing Wang
Aniruddha Adiga
Srinivasan Venkatramanan
Jiangzhuo Chen
Bryan Lewis
...

Energies
2021
Vol 14 (13)
pp. 3800
Author(s):
Sebastian Krapf
Nils Kemmerzell
Syed Khawaja Haseeb Uddin
Manuel Hack Vázquez
Fabian Netzler
...

Roof-mounted photovoltaic systems play a critical role in the global transition to renewable energy generation. Analysis of roof photovoltaic potential is an important tool for supporting decision-making and accelerating new installations. The state of the art uses 3D data to conduct potential analyses with high spatial resolution, which limits the study area to places where 3D data are available. Recent advances in deep learning allow the required roof information to be extracted from aerial images. Furthermore, most publications consider only the technical photovoltaic potential; few determine the economic potential. This paper therefore extends the state of the art by proposing and applying a methodology for scalable economic photovoltaic potential analysis using aerial images and deep learning. Two convolutional neural networks are trained for semantic segmentation of roof segments and superstructures, achieving Intersection over Union values of 0.84 and 0.64, respectively. We calculated the internal rate of return of each roof segment for 71 buildings in a small study area. A comparison of this methodology with a 3D-based analysis discusses its benefits and disadvantages. The proposed methodology uses only publicly available data and is potentially scalable to the global level; however, this poses a variety of research challenges and opportunities, which are summarized with a focus on the application of deep learning, economic photovoltaic potential analysis, and energy system analysis.
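The economic step described above, computing the internal rate of return (IRR) per roof segment, is the discount rate at which the net present value of the installation's cash flows equals zero. A minimal sketch via bisection, with invented example cash flows (not the paper's data):

```python
# IRR of a roof segment's cash flows: find the discount rate at which the
# net present value (NPV) is zero. Bisection assumes one sign change of NPV.

def npv(rate, cashflows):
    """Net present value; cashflows[0] is the year-0 investment (negative)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def irr(cashflows, lo=-0.99, hi=1.0, tol=1e-9):
    """Bisection on NPV over [lo, hi]."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid, cashflows) > 0:
            lo = mid            # NPV still positive: rate can go higher
        else:
            hi = mid
    return (lo + hi) / 2

# Example: a 10 000 investment returning 4 000 per year for three years.
rate = irr([-10_000, 4_000, 4_000, 4_000])
```

In the paper's pipeline, the cash flows per segment would come from the segmented roof area, simulated yield, and local electricity prices.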

