TREE SPECIES CLASSIFICATION BASED ON 3D SPECTRAL POINT CLOUDS AND ORTHOMOSAICS ACQUIRED BY SNAPSHOT HYPERSPECTRAL UAS SENSOR

Author(s):  
C. Iseli ◽  
A. Lucieer

Abstract. In recent years, there has been a growing number of small hyperspectral sensors suitable for deployment on unmanned aerial systems (UAS). The introduction of the hyperspectral snapshot sensor provides interesting opportunities for the acquisition of three-dimensional (3D) hyperspectral point clouds based on the structure-from-motion (SfM) workflow. In this study, we describe the integration of a 25-band hyperspectral snapshot sensor (PhotonFocus camera with IMEC 600–875 nm 5x5 mosaic chip) on a multi-rotor UAS. The sensor was integrated with a dual-frequency GNSS receiver for accurate time synchronisation and geolocation. We describe the sensor calibration workflow, including dark current and flat field characterisation. An SfM workflow was implemented to derive hyperspectral 3D point clouds and orthomosaics from overlapping frames. On-board GNSS coordinates for each hyperspectral frame assisted the SfM process and allowed for accurate direct georeferencing (<10 cm absolute accuracy). We present the processing workflow to generate seamless hyperspectral orthomosaics from hundreds of raw images. Spectral reference panels and in-field spectral measurements were used to calibrate and validate the spectral signatures. This process provides a novel data type which contains both 3D geometric structure and detailed spectral information in a single format. To determine the potential improvements that such a format could provide, the core aim of this study was to compare the use of 3D hyperspectral point clouds with conventional hyperspectral imagery in the classification of two Eucalyptus tree species found in Tasmania, Australia. The IMEC SM5x5 hyperspectral snapshot sensor was flown over a small native plantation plot consisting of a mix of the Eucalyptus pauciflora and E. tenuiramis species. High-overlap hyperspectral imagery was captured and then processed using SfM algorithms to generate both a hyperspectral orthomosaic and a dense hyperspectral point cloud. Additionally, to ensure the optimum spectral quality of the data, the characteristics of the hyperspectral snapshot imaging sensor were analysed using measurements captured in a laboratory environment. To support the generated hyperspectral point cloud data, both a file format and additional processing and visualisation software were developed to provide the necessary tools for a complete classification workflow. Results based on the classification of the E. pauciflora and E. tenuiramis species revealed that the hyperspectral point cloud produced higher classification accuracy than conventional hyperspectral imagery under random forest classification, with accuracy increasing from 67.2% to 73.8%. Even when applied separately, the geometric and spectral feature sets from the point cloud each provided higher classification accuracy than the hyperspectral imagery.
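A minimal sketch of the comparison the abstract describes: random forest classification on spectral features alone (the orthomosaic case) versus spectral plus geometric point features (the point cloud case). The feature arrays and label encoding below are synthetic stand-ins, not the authors' data or implementation.

```python
# Hypothetical comparison: spectral-only vs. spectral + geometric features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n_points = 1000
spectral = rng.random((n_points, 25))   # 25 hyperspectral bands per point
geometric = rng.random((n_points, 8))   # e.g. height, local shape descriptors
labels = rng.integers(0, 2, n_points)   # 0 = E. pauciflora, 1 = E. tenuiramis

for name, X in [("spectral only", spectral),
                ("spectral + geometric", np.hstack([spectral, geometric]))]:
    X_tr, X_te, y_tr, y_te = train_test_split(X, labels, random_state=0)
    clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
    print(name, accuracy_score(y_te, clf.predict(X_te)))
```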

Sensors ◽  
2021 ◽  
Vol 21 (21) ◽  
pp. 7392
Author(s):  
Danish Nazir ◽  
Muhammad Zeshan Afzal ◽  
Alain Pagani ◽  
Marcus Liwicki ◽  
Didier Stricker

In this paper, we present the idea of self-supervised learning for the shape completion and classification of point clouds. Most 3D shape completion pipelines utilize AutoEncoders to extract features from point clouds for downstream tasks such as classification, segmentation, detection, and other related applications. Our idea is to add contrastive learning to AutoEncoders to encourage global feature learning of the point cloud classes; this is performed by optimizing a triplet loss. Furthermore, learning of local feature representations of the point cloud is performed by adding the Chamfer distance function. To evaluate the performance of our approach, we utilize the PointNet classifier. We also extend the number of classes for evaluation from 4 to 10 to show the generalization ability of the learned features. Based on our results, embeddings generated by the contrastive AutoEncoder improve shape completion and classification performance of point clouds from 84.2% to 84.9%, achieving state-of-the-art results with 10 classes.
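A hedged sketch of the combined objective: a triplet loss on global embeddings plus a Chamfer distance on reconstructions. The encoder/decoder outputs are placeholder tensors; only the loss composition illustrates the idea.

```python
# Sketch of the combined loss, assuming a PyTorch AutoEncoder (mocked here).
import torch
import torch.nn as nn

def chamfer_distance(a, b):
    # a, b: (B, N, 3) point sets; symmetric nearest-neighbour distance
    d = torch.cdist(a, b)                      # (B, N, M) pairwise distances
    return d.min(dim=2).values.mean() + d.min(dim=1).values.mean()

triplet = nn.TripletMarginLoss(margin=1.0)

# placeholder embeddings and reconstructions (normally AutoEncoder outputs)
anchor, positive, negative = (torch.randn(4, 128, requires_grad=True)
                              for _ in range(3))
reconstruction = torch.randn(4, 1024, 3, requires_grad=True)
target = torch.randn(4, 1024, 3)

# triplet term drives global class structure; Chamfer term drives local detail
loss = triplet(anchor, positive, negative) + chamfer_distance(reconstruction, target)
loss.backward()
```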


Author(s):  
M. Weinmann ◽  
A. Schmidt ◽  
C. Mallet ◽  
S. Hinz ◽  
F. Rottensteiner ◽  
...  

The fully automated analysis of 3D point clouds is of great importance in photogrammetry, remote sensing and computer vision. For reliably extracting objects such as buildings, road inventory or vegetation, many approaches rely on the results of a point cloud classification, where each 3D point is assigned a respective semantic class label. Such an assignment, in turn, typically involves statistical methods for feature extraction and machine learning. Whereas the different components of the processing workflow have been investigated extensively, but separately, in recent years, their connection by sharing the results of crucial tasks across all components has not yet been addressed. This connection encapsulates not only the interrelated issues of neighborhood selection and feature extraction, but also the issue of how to involve spatial context in the classification step. In this paper, we present a novel and generic approach for 3D scene analysis which relies on (i) individually optimized 3D neighborhoods for (ii) the extraction of distinctive geometric features and (iii) the contextual classification of point cloud data. For a labeled benchmark dataset, we demonstrate the beneficial impact of involving contextual information in the classification process and show that using individual 3D neighborhoods of optimal size significantly increases the quality of the results for both pointwise and contextual classification.
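One common way to optimize a 3D neighborhood per point, in the spirit of (i) above, is to pick the neighborhood size that minimizes the eigenentropy of the local structure tensor. The sketch below assumes this criterion; the candidate sizes and data are illustrative.

```python
# Sketch: per-point optimal neighbourhood via minimal eigenentropy.
import numpy as np
from scipy.spatial import cKDTree

def eigenentropy(points):
    evals = np.linalg.eigvalsh(np.cov(points.T))  # eigenvalues of 3x3 covariance
    evals = np.clip(evals, 1e-12, None)           # guard against log(0)
    evals = evals / evals.sum()                   # normalise to sum to 1
    return -(evals * np.log(evals)).sum()

cloud = np.random.rand(5000, 3)
tree = cKDTree(cloud)

def optimal_neighbourhood(p, candidate_ks=(10, 25, 50, 100)):
    # choose the k whose neighbourhood is most "ordered" (minimal eigenentropy)
    scores = []
    for k in candidate_ks:
        _, idx = tree.query(p, k=k)
        scores.append((eigenentropy(cloud[idx]), k))
    return min(scores)[1]

print(optimal_neighbourhood(cloud[0]))
```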


2021 ◽  
Vol 10 (3) ◽  
pp. 187
Author(s):  
Muhammed Enes Atik ◽  
Zaide Duran ◽  
Dursun Zafer Seker

3D scene classification has become an important research field in photogrammetry, remote sensing, computer vision and robotics with the widespread usage of 3D point clouds. Point cloud classification, also called semantic labeling, semantic segmentation, or semantic classification, is a challenging topic. Machine learning, on the other hand, is a powerful mathematical tool for classifying 3D point clouds whose content can be significantly complex. In this study, the classification performance of different machine learning algorithms at multiple scales was evaluated. The feature spaces of the points in the point cloud were created using geometric features generated from the eigenvalues of the covariance matrix. Eight supervised classification algorithms were tested in four different areas from three datasets (the Dublin City, Vaihingen and Oakland3D datasets). The algorithms were evaluated in terms of overall accuracy, precision, recall, F1 score and processing time. The best overall accuracy was obtained with a different algorithm in each of the four test areas: 93.12% with Random Forest in Dublin City Area 1, 92.78% with a Multilayer Perceptron in Dublin City Area 2, 79.71% with Support Vector Machines in Vaihingen, and 97.30% with Linear Discriminant Analysis in Oakland3D.
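The standard eigenvalue-based geometric features (linearity, planarity, sphericity, with eigenvalues λ1 ≥ λ2 ≥ λ3 of the local covariance matrix) can be computed and fed to several scikit-learn classifiers as in the comparison above. Data and labels below are synthetic placeholders.

```python
# Illustrative sketch: eigenvalue features + a multi-classifier comparison.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def eigen_features(neighbourhood):
    l1, l2, l3 = sorted(np.linalg.eigvalsh(np.cov(neighbourhood.T)), reverse=True)
    # linearity, planarity, sphericity
    return [(l1 - l2) / l1, (l2 - l3) / l1, l3 / l1]

rng = np.random.default_rng(1)
X = np.array([eigen_features(rng.random((30, 3))) for _ in range(500)])
y = rng.integers(0, 3, 500)  # placeholder classes, e.g. ground/vegetation/building

for clf in (RandomForestClassifier(), MLPClassifier(max_iter=500),
            SVC(), LinearDiscriminantAnalysis()):
    print(type(clf).__name__, clf.fit(X, y).score(X, y))
```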


Author(s):  
M. R. Hess ◽  
V. Petrovic ◽  
F. Kuester

Digital documentation of cultural heritage structures is increasingly common through the application of different imaging techniques. Many works have focused on the application of laser scanning and photogrammetry techniques for the acquisition of three-dimensional (3D) geometry detailing cultural heritage sites and structures. With an abundance of these 3D data assets, there must be a digital environment where the data can be visualized and analyzed. Presented here is a feedback-driven visualization framework that seamlessly enables interactive exploration and manipulation of massive point cloud data. The focus of this work is on the classification of different building materials, with the goal of building more accurate as-built information models of historical structures. User-defined functions have been tested within the interactive point cloud visualization framework to evaluate automated and semi-automated classification of 3D point data. These functions include decisions based on observed color, laser intensity, normal vector or local surface geometry. Multiple case studies are presented to demonstrate the flexibility and utility of the framework in achieving classification objectives.
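A hypothetical user-defined function in the spirit of the per-point rules described: a material class decided from observed color, laser intensity and normal vector. The thresholds and class names are invented for illustration, not taken from the paper.

```python
# Hypothetical per-point material rule (all thresholds illustrative).
import numpy as np

def classify_point(rgb, intensity, normal):
    r, g, b = rgb
    if intensity > 0.8 and abs(normal[2]) < 0.3:
        return "polished stone"      # bright return on a near-vertical surface
    if r > 0.5 and g < 0.4 and b < 0.4:
        return "brick"               # reddish colour dominates
    return "mortar"                  # fallback class

print(classify_point(rgb=(0.6, 0.3, 0.25), intensity=0.4, normal=(0.1, 0.0, 0.99)))
```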


2021 ◽  
Vol 12 ◽  
Author(s):  
Dominik Seidel ◽  
Peter Annighöfer ◽  
Anton Thielman ◽  
Quentin Edward Seifert ◽  
Jan-Henrik Thauer ◽  
...  

Automated species classification from 3D point clouds is still a challenge. It is, however, an important task for laser scanning-based forest inventory and ecosystem models, and for supporting forest management. Here, we tested the performance of an image classification approach based on convolutional neural networks (CNNs), with the aim of classifying 3D point clouds of seven tree species from their 2D representations in a computationally efficient way. We were particularly interested in how the approach would perform with training data size artificially increased by image augmentation techniques. Our approach yielded a high classification accuracy (86%), and the confusion matrix revealed that despite rather small training sample sizes for some tree species, classification accuracy was high. We could partly relate this to the successful application of the image augmentation technique, which improved our result by 6% in total and by 13%, 14%, and 24% for ash, oak and pine, respectively. The introduced approach is hence not only applicable to small datasets; it is also computationally efficient, since it relies on 2D instead of 3D data being processed in the CNN. Our approach was faster and more accurate than the point cloud-based “PointNet” approach.
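A minimal sketch of the image-augmentation idea: enlarging a training set of 2D tree renderings with random flips, rotations and crops before CNN training. The torchvision transform choices and the synthetic placeholder image are assumptions, not the paper's exact pipeline.

```python
# Sketch: augmenting 2D tree images prior to CNN training (torchvision).
from PIL import Image
import numpy as np
from torchvision import transforms

augment = transforms.Compose([
    transforms.RandomHorizontalFlip(),
    transforms.RandomRotation(degrees=15),
    transforms.RandomResizedCrop(size=224, scale=(0.8, 1.0)),
])

# placeholder grayscale "tree silhouette" image
silhouette = Image.fromarray((np.random.rand(256, 256) * 255).astype("uint8"))
augmented = [augment(silhouette) for _ in range(10)]  # 10 variants per image
print(len(augmented), augmented[0].size)
```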


Author(s):  
E. Özdemir ◽  
F. Remondino

Abstract. 3D city modeling has become important over the last decades as these models are used in many studies, including energy evaluation, visibility analysis, 3D cadastre, urban planning, change detection, disaster management, etc. Segmentation and classification of photogrammetric or LiDAR data, the main data sources for 3D city models, are challenging tasks due to the data's complexity. This study presents research in progress which focuses on the segmentation and classification of 3D point clouds and orthoimages to generate 3D urban models. The aim is to classify photogrammetry-based point clouds (>30 pts/sqm) in combination with aerial RGB orthoimages (~10 cm, RGB) in order to label buildings, ground level objects (GLOs), trees, grass areas, and other regions. While the classification of aerial orthoimages is expected to be a fast approach for obtaining classes and transferring them from image space to the point cloud, segmenting the point cloud is expected to be much more time-consuming but to provide meaningful segments of the analyzed scene. For this reason, the proposed method combines segmentation methods on the two kinds of geoinformation in order to achieve better results.
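The image-to-point-cloud label transfer mentioned above can be sketched as a raster lookup: each point's XY coordinate is mapped into the classified orthoimage grid and the class of that cell is copied to the point. Georeferencing is simplified here and all names are illustrative.

```python
# Sketch: transfer of per-pixel classes from an orthoimage to 3D points.
import numpy as np

def transfer_labels(points_xy, class_raster, origin, cell_size):
    # origin = (x_min, y_max) of the raster; row index grows downward
    rows = np.clip(((origin[1] - points_xy[:, 1]) / cell_size).astype(int),
                   0, class_raster.shape[0] - 1)
    cols = np.clip(((points_xy[:, 0] - origin[0]) / cell_size).astype(int),
                   0, class_raster.shape[1] - 1)
    return class_raster[rows, cols]

raster = np.random.randint(0, 5, (100, 100))   # 5 classes, e.g. building..grass
pts = np.random.rand(1000, 2) * 10             # points inside a 10 m tile
print(transfer_labels(pts, raster, origin=(0.0, 10.0), cell_size=0.1)[:10])
```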


Author(s):  
A. Tabkha ◽  
R. Hajji ◽  
R. Billen ◽  
F. Poux

Abstract. The raw nature of point clouds is an important challenge for their direct exploitation in architecture, engineering and construction applications. In particular, their lack of semantics hinders their utility in automatic workflows (Poux, 2019). In addition, the volume and the irregular structure of point clouds make it difficult to classify datasets directly, automatically and efficiently, especially when compared to state-of-the-art 2D raster classification. Recently, with advances in deep learning models such as convolutional neural networks (CNNs), the performance of image-based classification of remote sensing scenes has improved considerably (Chen et al., 2018; Cheng et al., 2017). In this research, we examine a simple and innovative approach that represents large 3D point clouds through multiple 2D projections in order to leverage learning approaches based on 2D images. In other words, the approach proposes an automatic process for extracting 360° panoramas and enhancing them so that raster data can be leveraged for domain-based semantic enrichment. A rigorous characterization is essential for point cloud classification, especially because 3D point clouds serve a very wide variety of application domains. In order to test the adequacy of the method and its potential for generalization, several tests were performed on different datasets. The developed semantic augmentation algorithm uses only the attributes X, Y, Z and the camera positions as inputs.
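A hedged sketch of the panorama extraction: points are expressed relative to a camera position and mapped to equirectangular pixel coordinates, producing a 2D rendering on which image-based classifiers can operate. Resolution and camera placement are assumptions.

```python
# Sketch: equirectangular (360° panorama) projection of XYZ points.
import numpy as np

def equirectangular_project(xyz, camera, width=2048, height=1024):
    v = xyz - camera                                   # camera-centred coordinates
    azimuth = np.arctan2(v[:, 1], v[:, 0])             # [-pi, pi] around the camera
    elevation = np.arctan2(v[:, 2], np.linalg.norm(v[:, :2], axis=1))
    u = ((azimuth + np.pi) / (2 * np.pi) * (width - 1)).astype(int)
    w = ((np.pi / 2 - elevation) / np.pi * (height - 1)).astype(int)
    return u, w                                        # panorama pixel coordinates

pts = np.random.rand(10000, 3) * 20
u, w = equirectangular_project(pts, camera=np.array([10.0, 10.0, 1.5]))
print(u.min(), u.max(), w.min(), w.max())
```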


2021 ◽  
Vol 13 (23) ◽  
pp. 4750
Author(s):  
Jianchang Chen ◽  
Yiming Chen ◽  
Zhengjun Liu

We propose the Point Cloud Tree Species Classification Network (PCTSCN) to overcome challenges in classifying tree species from laser data with deep learning methods. The network is mainly composed of two parts: a sampling component in the early stage and a feature extraction component in the later stage. We used geometric sampling to extract regions with local features from the tree contours, since these tend to be species-specific. Then we used an improved Farthest Point Sampling method to extract features from a global perspective. We input the intensity of the tree point cloud as an additional feature dimension, together with the spatial information, into the neural network and mapped it to higher dimensions for feature extraction. We used data obtained by Terrestrial Laser Scanning (TLS) and Unmanned Aerial Vehicle Laser Scanning (UAVLS) to conduct tree species classification experiments on white birch and larch. The experimental results showed that in both the TLS and UAVLS datasets, the input tree point cloud density and the highest feature dimensionality of the mapping had an impact on the classification accuracy. When a single tree sample obtained by TLS consisted of 1024 points and the highest dimension of the network mapping was 512, the classification accuracy of the trained model reached 96%. For individual tree samples obtained by UAVLS, consisting of 2048 points with a highest network mapping dimension of 1024, the classification accuracy of the trained model reached 92%. On TLS data, the tree species classification accuracy of PCTSCN was 2–9% higher than that of other models using the same point density, amount of data and highest feature dimension; on UAVLS data it was up to 8% higher. With PCTSCN we provide a new strategy for the intelligent classification of forest tree species.
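For reference, a plain Farthest Point Sampling implementation (the paper uses an improved variant; this standard version only illustrates the principle of iteratively picking the point farthest from the already-selected set).

```python
# Plain Farthest Point Sampling over an (N, 3) point array.
import numpy as np

def farthest_point_sampling(points, n_samples):
    selected = [0]                                   # start from an arbitrary point
    dist = np.linalg.norm(points - points[0], axis=1)
    for _ in range(n_samples - 1):
        nxt = int(dist.argmax())                     # point farthest from the set
        selected.append(nxt)
        # keep, per point, its distance to the closest selected point
        dist = np.minimum(dist, np.linalg.norm(points - points[nxt], axis=1))
    return points[selected]

tree_points = np.random.rand(4096, 3)
print(farthest_point_sampling(tree_points, 1024).shape)  # (1024, 3)
```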


Author(s):  
J. Wolf ◽  
R. Richter ◽  
S. Discher ◽  
J. Döllner

Abstract. In this work, we present an approach that uses an established image recognition convolutional neural network for the semantic classification of two-dimensional objects found in mobile mapping 3D point cloud scans of road environments, namely manhole covers and road markings. We show that the approach is capable of classifying these objects and that it can efficiently be applied to large datasets. Top-down view images are rendered from the point cloud and classified by a U-Net implementation. The results are integrated into the point cloud by setting an additional semantic attribute. Shape files can be computed from the classified points.
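The round trip described above can be sketched as: rasterize points into a top-down grid, classify the raster (the U-Net output is mocked below), then copy each pixel's class back to the points in that cell as an extra attribute. Cell size and class ids are illustrative.

```python
# Sketch: top-down rasterization and back-projection of per-pixel classes.
import numpy as np

points = np.random.rand(50000, 3) * [100, 100, 2]      # x, y, z in metres
cell = 0.25
cols = (points[:, 0] / cell).astype(int)
rows = (points[:, 1] / cell).astype(int)

# mocked U-Net output: one class id per raster cell (0 = road, 1 = marking, ...)
pred = np.random.randint(0, 3, (rows.max() + 1, cols.max() + 1))

semantic = pred[rows, cols]                            # back-projection to points
labeled_cloud = np.column_stack([points, semantic])    # x, y, z, class attribute
print(labeled_cloud.shape)
```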


Sensors ◽  
2021 ◽  
Vol 21 (4) ◽  
pp. 1228
Author(s):  
Ting On Chan ◽  
Linyuan Xia ◽  
Yimin Chen ◽  
Wei Lang ◽  
Tingting Chen ◽  
...  

Ancient pagodas are often major tourist attractions in many oriental countries due to their unique historical backgrounds. They are usually polygonal structures composed of multiple floors separated by eaves. In this paper, we propose a new method to investigate both the rotational and reflectional symmetry of such polygonal pagodas by developing novel geometric models fitted to 3D point clouds obtained from photogrammetric reconstruction. The geometric model consists of multiple polygonal pyramid/prism models sharing a common central axis. The method was verified on four datasets collected by an unmanned aerial vehicle (UAV) and a hand-held digital camera. The results indicate that the models fit accurately to the pagodas’ point clouds. The symmetry was assessed by rotating and reflecting the pagodas’ point clouds after a complete leveling of the point cloud was achieved using the estimated central axes. The results show RMSEs of 5.04 cm and 5.20 cm from perfect (theoretical) rotational and reflectional symmetry, respectively. This indicates that the examined pagodas are highly symmetric, both rotationally and reflectionally. The concept presented in the paper not only works for polygonal pagodas, but can also be readily transformed and implemented for other pagoda-like objects such as transmission towers.
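An illustrative check of n-fold rotational symmetry along the lines described: after leveling, rotate the cloud by 2π/n about the estimated central axis (assumed vertical here) and take the RMSE of nearest-neighbour residuals against the original points. This is a simplification of the paper's model-fitting approach.

```python
# Sketch: RMSE deviation from n-fold rotational symmetry about a vertical axis.
import numpy as np
from scipy.spatial import cKDTree

def rotational_symmetry_rmse(points, n_fold, axis_xy=(0.0, 0.0)):
    p = points.copy()
    p[:, :2] -= axis_xy                               # centre on the axis
    a = 2 * np.pi / n_fold
    rot = np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]])
    q = p.copy()
    q[:, :2] = p[:, :2] @ rot.T                       # rotate about the axis
    d, _ = cKDTree(p).query(q)                        # nearest-neighbour residuals
    return np.sqrt((d ** 2).mean())

octagonal = np.random.rand(2000, 3)                   # placeholder pagoda cloud
print(rotational_symmetry_rmse(octagonal, n_fold=8))
```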

