Geospatial Artificial Intelligence: Potentials of Machine Learning for 3D Point Clouds and Geospatial Digital Twins

Author(s):  
Jürgen Döllner
Author(s):  
E. Grilli ◽  
E. M. Farella ◽  
A. Torresani ◽  
F. Remondino

Abstract. In recent years, the application of artificial intelligence (machine learning and deep learning methods) to the classification of 3D point clouds has become an important task in modern 3D documentation and modelling applications. The identification of proper geometric and radiometric features is fundamental for classifying 2D/3D data correctly. While many studies have been conducted in the geospatial field, the cultural heritage sector is still partly unexplored. In this paper we analyse the efficacy of geometric covariance features as a support for the classification of Cultural Heritage point clouds. To analyse the impact of the different features calculated on spherical neighbourhoods at various radius sizes, we present results obtained on four different heritage case studies using different feature configurations.
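As a rough illustration of how such features can be obtained, the sketch below (a minimal example assuming NumPy and SciPy; the radii and variable names are illustrative, not taken from the paper) derives linearity, planarity and sphericity from the eigenvalues of the covariance matrix of each spherical neighbourhood:

import numpy as np
from scipy.spatial import cKDTree

def covariance_features(points, radius):
    """Compute linearity, planarity and sphericity for every point,
    using all neighbours inside a sphere of the given radius."""
    tree = cKDTree(points)
    features = np.zeros((len(points), 3))
    for i, p in enumerate(points):
        idx = tree.query_ball_point(p, r=radius)
        if len(idx) < 3:            # not enough neighbours for a stable covariance
            continue
        cov = np.cov(points[idx].T)              # 3x3 covariance of the neighbourhood
        ev = np.sort(np.linalg.eigvalsh(cov))[::-1]   # eigenvalues, descending
        if ev[0] <= 0:
            continue
        ev = ev / ev.sum()                       # normalise to sum to 1
        l1, l2, l3 = ev
        features[i] = [(l1 - l2) / l1,           # linearity
                       (l2 - l3) / l1,           # planarity
                       l3 / l1]                  # sphericity
    return features

# Features computed at several radii can simply be concatenated, e.g.:
# X = np.hstack([covariance_features(pts, r) for r in (0.1, 0.25, 0.5)])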


2021 ◽  
Vol 10 (3) ◽  
pp. 187
Author(s):  
Muhammed Enes Atik ◽  
Zaide Duran ◽  
Dursun Zafer Seker

3D scene classification has become an important research field in photogrammetry, remote sensing, computer vision and robotics with the widespread use of 3D point clouds. Point cloud classification, also called semantic labeling, semantic segmentation, or semantic classification of point clouds, is a challenging topic. Machine learning, on the other hand, is a powerful mathematical tool for classifying 3D point clouds whose content can be significantly complex. In this study, the classification performance of different machine learning algorithms at multiple scales was evaluated. The feature spaces of the points in the point cloud were created using geometric features generated from the eigenvalues of the covariance matrix. Eight supervised classification algorithms were tested in four different areas from three datasets (the Dublin City, Vaihingen and Oakland3D datasets). The algorithms were evaluated in terms of overall accuracy, precision, recall, F1 score and processing time. The best overall accuracy was obtained with a different algorithm in each of the four test areas: 93.12% with Random Forest for Dublin City Area 1, 92.78% with a Multilayer Perceptron for Dublin City Area 2, 79.71% with Support Vector Machines for Vaihingen, and 97.30% with Linear Discriminant Analysis for Oakland3D.
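A minimal sketch of such an evaluation loop, assuming scikit-learn and pre-computed feature matrices X_train/X_test with labels y_train/y_test (the selection of algorithms and their parameters here is illustrative, not the exact set used in the study):

import time
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import LinearSVC
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

# X_train, y_train, X_test, y_test are assumed to hold the per-point
# eigenvalue-based features and their reference labels.
classifiers = {
    "Random Forest": RandomForestClassifier(n_estimators=100),
    "Multilayer Perceptron": MLPClassifier(max_iter=500),
    "Linear SVM": LinearSVC(),
    "LDA": LinearDiscriminantAnalysis(),
}

for name, clf in classifiers.items():
    start = time.time()
    clf.fit(X_train, y_train)
    y_pred = clf.predict(X_test)
    elapsed = time.time() - start
    oa = accuracy_score(y_test, y_pred)
    prec, rec, f1, _ = precision_recall_fscore_support(
        y_test, y_pred, average="macro", zero_division=0)
    print(f"{name}: OA={oa:.4f} P={prec:.4f} R={rec:.4f} "
          f"F1={f1:.4f} time={elapsed:.1f}s")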


2021 ◽  
pp. 113-133
Author(s):  
F. Patricia Medina ◽  
Randy Paffenroth

Author(s):  
E. Özdemir ◽  
F. Remondino ◽  
A. Golkar

Abstract. With recent advances in technology, 3D point clouds are requested and used more and more frequently, not only for visualization but also, e.g., by public administrations for urban planning and management. 3D point clouds are also a very frequent source for generating 3D city models, which have recently become more available for many applications, such as urban development plans, energy evaluation, navigation, visibility analysis and numerous other GIS studies. While the main data sources have remained the same (namely aerial photogrammetry and LiDAR), the way these city models are generated has been evolving towards automation with different approaches. As most of these approaches are based on point clouds with proper semantic classes, our aim is to classify aerial point clouds into meaningful semantic classes, e.g. ground-level objects (GLO, including roads and pavements), vegetation, building facades and building roofs. In this study we tested and evaluated various algorithms for classification, including three deep learning algorithms and one conventional machine learning algorithm. In the experiments, several hand-crafted geometric features depending on the dataset are used and, unconventionally, these geometric features are also used for the deep learning algorithms.
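As an illustration of feeding hand-crafted geometric features to a deep network, the following is a minimal per-point classifier sketch in PyTorch; the feature dimension, network size and class set are assumptions, not the architectures evaluated in the paper:

import torch
import torch.nn as nn

NUM_FEATURES = 12          # assumed size of the hand-crafted feature vector
NUM_CLASSES = 4            # GLO, vegetation, building facades, building roofs

# A small per-point network that consumes the hand-crafted geometric
# features instead of raw coordinates.
model = nn.Sequential(
    nn.Linear(NUM_FEATURES, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, NUM_CLASSES),
)
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

def train_epoch(features, labels, batch_size=4096):
    """features: (N, NUM_FEATURES) float tensor, labels: (N,) long tensor."""
    model.train()
    for i in range(0, len(features), batch_size):
        xb, yb = features[i:i + batch_size], labels[i:i + batch_size]
        optimizer.zero_grad()
        loss = loss_fn(model(xb), yb)
        loss.backward()
        optimizer.step()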


Author(s):  
M. Mohamed ◽  
S. Morsy ◽  
A. El-Shazly

Abstract. 3D road mapping is essential for intelligent transportation systems in smart cities. Road features can be utilized for road maintenance, autonomous driving vehicles, and providing regulations to drivers. Currently, the 3D road environment receives its data from Mobile Laser Scanning (MLS) systems. MLS systems are capable of rapidly acquiring dense and accurate 3D point clouds, which allows for effective surveying of long road corridors. They produce huge amounts of points, which requires automatic feature classification algorithms with acceptable processing time. Road features have varying regular or irregular geometric shapes. Therefore, most research focuses on the classification of a single road feature such as the road surface, curbs, or building facades. Machine learning (ML) algorithms are widely used for predicting the future or classifying information to help policymakers make necessary decisions. This prediction comes from a model pre-trained on given data consisting of inputs and their corresponding outputs of the same characteristics. This research uses ML algorithms for mobile LiDAR data classification. First, a cylindrical neighbourhood selection method was used to define each point's surroundings. Second, point features including geometric, moment and height features were derived. Finally, three ML algorithms, Random Forest (RF), Gaussian Naïve Bayes (GNB), and Quadratic Discriminant Analysis (QDA), were applied. The ML algorithms were used to classify a part of the Paris-Lille-3D benchmark, an approximately 1.5 km long road in Lille with more than 98 million points, into nine classes. The results demonstrated an overall accuracy of 92.39%, 78.5%, and 78.1% for RF, GNB, and QDA, respectively.
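A minimal sketch of the cylindrical neighbourhood idea, assuming NumPy and SciPy (the radius, feature set and variable names are illustrative): the search simply ignores the z coordinate, so every point within a horizontal radius belongs to the neighbourhood regardless of its height.

import numpy as np
from scipy.spatial import cKDTree

def cylindrical_neighbours(points, radius):
    """Return, for every point, the indices of all points inside a vertical
    cylinder of the given radius centred on that point (a radius search
    on the x/y coordinates only)."""
    tree2d = cKDTree(points[:, :2])          # drop z -> cylinder instead of sphere
    return tree2d.query_ball_point(points[:, :2], r=radius)

def height_features(points, neighbour_lists):
    """Simple per-point height features from the cylindrical neighbourhood:
    height range and height above the lowest neighbour."""
    feats = np.zeros((len(points), 2))
    for i, idx in enumerate(neighbour_lists):
        z = points[idx, 2]
        feats[i] = [z.max() - z.min(), points[i, 2] - z.min()]
    return feats

# neighbours = cylindrical_neighbours(pts, radius=1.0)
# h = height_features(pts, neighbours)

Such height features, stacked with geometric and moment features, can then be passed to scikit-learn's RandomForestClassifier, GaussianNB and QuadraticDiscriminantAnalysis, mirroring the three classifiers compared above.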


Author(s):  
A-M. Loghin ◽  
N. Pfeifer ◽  
J. Otepka-Schremmer

Abstract. Image matching of aerial or satellite images and Airborne Laser Scanning (ALS) are the two main techniques for the acquisition of geospatial information (3D point clouds), used for mapping and 3D modelling of large surface areas. While ALS point cloud classification is a widely investigated topic, there are fewer studies related to image-derived point clouds, and even fewer for point clouds derived from stereo satellite imagery. Therefore, the main focus of this contribution is a comparative analysis and evaluation of a supervised machine learning classification method that exploits the full 3D content of point clouds generated by dense image matching of tri-stereo Very High Resolution (VHR) satellite imagery. The images were collected with two different sensors (Pléiades and WorldView-3) at different timestamps for a study area covering a surface of 24 km², located in Waldviertel, Lower Austria. In particular, we evaluate the performance and precision of the classifier by analysing the variation of the results obtained in multiple scenarios using different training and test data sets. The temporal difference between the two Pléiades acquisitions (7 days) allowed us to assess the repeatability of the adopted machine learning algorithm for the classification. Additionally, we investigate how the different acquisition geometries (ground sample distance, viewing and convergence angles) influence the performance of classifying the satellite image-derived point clouds into five object classes: ground, trees, roads, buildings, and vehicles. Our experimental results indicate that, overall, the classifier performs very similarly in all situations, with values for the F1-score between 0.63 and 0.65 and overall accuracies beyond 93%. As a measure of repeatability, stable classes such as buildings and roads show a variation of the F1-score below 3% between the two Pléiades acquisitions, proving the stability of the model.
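The repeatability measure can be expressed as a simple per-class comparison; the sketch below (scikit-learn, with assumed label/prediction arrays for the two Pléiades acquisitions; variable names are hypothetical) computes the per-class F1-scores and their absolute variation:

import numpy as np
from sklearn.metrics import f1_score

# Assumed arrays: reference labels and predictions for the two Pléiades
# acquisitions of the same area (classes 0..4 = ground, trees, roads,
# buildings, vehicles).
def per_class_f1(y_true, y_pred, labels=(0, 1, 2, 3, 4)):
    return f1_score(y_true, y_pred, labels=list(labels), average=None)

f1_epoch1 = per_class_f1(y_true_1, y_pred_1)
f1_epoch2 = per_class_f1(y_true_2, y_pred_2)

# Repeatability: absolute change of the per-class F1-score between the
# two acquisitions (the abstract reports values below 3% for the stable
# classes buildings and roads).
variation = np.abs(f1_epoch1 - f1_epoch2) * 100
for cls, delta in zip(("ground", "trees", "roads", "buildings", "vehicles"), variation):
    print(f"{cls}: F1 variation = {delta:.1f} %")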


Author(s):  
J. Höhle

Facades of buildings contain various types of objects which have to be recorded for information systems. The article describes a solution for this task, focussing on automated classification by means of machine learning techniques. Stereo pairs of oblique images are used to derive 3D point clouds of buildings. The planes of the buildings are automatically detected. The derived planes are supplemented with a regular grid of points for which the colour values are found in the images. For each grid point of the façade, additional attributes are derived from image and object data. This "intelligent" point cloud is analysed by a decision tree, which is derived from a small training set. The derived decision tree is then used to classify the complete point cloud. To each point of the regular façade grid a class is assigned, and a façade plan is mapped with a colour palette representing the different objects. Some image processing methods are applied to improve the appearance of the interpreted façade plot and to extract additional information. The proposed method is tested on the facades of a church. Accuracy measures were derived from 140 independent, randomly selected checkpoints. When selecting four classes ("window", "stone work", "painted wall", and "vegetation"), the overall accuracy was assessed at 80% (95% confidence interval: 71%–88%). The user's accuracy of the class "stone work" was assessed at 90% (95% CI: 80%–97%). The proposed methodology has a high potential for automation and fast processing.
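A minimal sketch of this workflow, assuming scikit-learn and SciPy with hypothetical array names: a decision tree is trained on the small labelled set, and a binomial (Clopper-Pearson) confidence interval (one common choice; the abstract does not state which interval was used) is computed for the overall accuracy on the 140 check points.

from scipy import stats
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

# X_train/y_train: small manually labelled training set of grid points;
# X_check/y_check: the independent check points (names are assumed).
tree = DecisionTreeClassifier(max_depth=6)
tree.fit(X_train, y_train)

y_pred = tree.predict(X_check)
oa = accuracy_score(y_check, y_pred)

# 95% Clopper-Pearson confidence interval for the overall accuracy
n = len(y_check)                      # e.g. 140 check points
k = int(round(oa * n))                # number of correctly classified points
lower = stats.beta.ppf(0.025, k, n - k + 1) if k > 0 else 0.0
upper = stats.beta.ppf(0.975, k + 1, n - k) if k < n else 1.0
print(f"OA = {oa:.2%}, 95% CI: {lower:.2%} to {upper:.2%}")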

