Semantic Segmentation of Natural Materials on a Point Cloud Using Spatial and Multispectral Features

Sensors ◽  
2020 ◽  
Vol 20 (8) ◽  
pp. 2244
Author(s):  
J. M. Jurado ◽  
J. L. Cárdenas ◽  
C. J. Ogayar ◽  
L. Ortega ◽  
F. R. Feito

The characterization of natural spaces through the precise observation of their material properties is in high demand in remote sensing and computer vision. Novel sensors enable the collection of heterogeneous data for gaining comprehensive knowledge of the living and non-living entities in the ecosystem. The high resolution of consumer-grade RGB cameras is frequently used for the geometric reconstruction of many types of environments. Nevertheless, the understanding of natural spaces is still challenging. The automatic segmentation of homogeneous materials in nature is a complex task because overlapping structures and indirect illumination make object recognition difficult. In this paper, we propose a method based on fusing spatial and multispectral characteristics for the unsupervised classification of natural materials in a point cloud. A high-resolution camera and a multispectral sensor are mounted on a custom camera rig in order to capture RGB and multispectral images simultaneously. Our method is tested in a controlled scenario where different natural objects coexist. Initially, the input RGB images are processed to generate a point cloud by applying the structure-from-motion (SfM) algorithm. Then, the multispectral images are mapped onto the three-dimensional model to characterize the geometry with the reflectance captured in four narrow bands (green, red, red-edge and near-infrared). The reflectance, the visible colour and the spatial component are combined to extract key differences among all existing materials. For this purpose, a hierarchical cluster analysis is applied to pool the point cloud and identify the feature pattern of every material. As a result, the tree trunk, the leaves, different species of low plants, the ground and rocks can be clearly recognized in the scene. These results demonstrate the feasibility of performing semantic segmentation by considering multispectral and spatial features with an unknown number of clusters to be detected on the point cloud. Moreover, our solution is compared to another method based on supervised learning in order to assess the improvement of the proposed approach.
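As a rough illustration (not the authors' implementation), the sketch below clusters points whose features combine XYZ position, RGB colour and the four narrow-band reflectances, using hierarchical clustering with a distance cut so the number of clusters is not fixed in advance. The array layout, the feature scaling and the threshold value are assumptions for illustration; a real point cloud would typically be subsampled first, since pairwise linkage does not scale to millions of points.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.preprocessing import StandardScaler

def cluster_materials(points_xyz, rgb, reflectance, distance_threshold=5.0):
    """points_xyz: (N,3), rgb: (N,3), reflectance: (N,4) -> one cluster label per point."""
    # Combine spatial, visible-colour and multispectral components and standardise
    # them so no single feature dominates the linkage distances.
    features = np.hstack([points_xyz, rgb, reflectance])
    features = StandardScaler().fit_transform(features)

    # Agglomerative (hierarchical) clustering with Ward linkage; cutting the
    # dendrogram at a distance threshold leaves the number of clusters open.
    tree = linkage(features, method="ward")
    labels = fcluster(tree, t=distance_threshold, criterion="distance")
    return labels
```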

Author(s):  
Naga Madhavi lavanya Gandi

Land cover classification information plays a very important role in various applications. Airborne Light Detection and Ranging (LiDAR) data are widely used in remote sensing applications for the classification of land cover. The present study presents a spatial classification method using Terrasolid macros. The data used in this study are a multi-wavelength LiDAR point cloud (green: 532 nm, near-infrared: 1064 nm and mid-infrared: 1550 nm) and high-resolution RGB data. The classification is carried out in the TerraScan module with twelve land cover classes, and the classification accuracies are assessed using the high-resolution RGB data. From the results it is concluded that the LiDAR data classification achieved an overall accuracy of 85.2% and a kappa coefficient of 0.7562.
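The overall accuracy and kappa coefficient reported above can be derived from a confusion matrix of reference versus predicted class labels. A minimal sketch follows; the per-point label arrays are hypothetical placeholders, not the study's data.

```python
import numpy as np

def accuracy_and_kappa(reference, predicted, n_classes):
    """reference, predicted: integer class labels per point -> (overall accuracy, Cohen's kappa)."""
    cm = np.zeros((n_classes, n_classes), dtype=np.int64)
    for r, p in zip(reference, predicted):
        cm[r, p] += 1                                   # build the confusion matrix

    total = cm.sum()
    observed = np.trace(cm) / total                     # overall accuracy (diagonal share)
    # Chance agreement: product of row and column marginals, summed over classes.
    expected = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / total**2
    kappa = (observed - expected) / (1.0 - expected)
    return observed, kappa
```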


2020 ◽  
Vol 12 (7) ◽  
pp. 1106 ◽  
Author(s):  
J. M. Jurado ◽  
L. Ortega ◽  
J. J. Cubillas ◽  
F. R. Feito

3D plant structure observation and characterization to gain comprehensive knowledge about the plant status still poses a challenge in Precision Agriculture (PA). The complex branching and self-hidden geometry in the plant canopy are some of the existing problems for the 3D reconstruction of vegetation. In this paper, we propose a novel application for the fusion of multispectral images and high-resolution point clouds of an olive orchard. Our methodology is based on a multi-temporal approach to study the evolution of olive trees. This process is fully automated and no human intervention is required to characterize the point cloud with the reflectance captured by multiple multispectral images. The main objective of this work is twofold: (1) the multispectral image mapping on a high-resolution point cloud and (2) the multi-temporal analysis of morphological and spectral traits in two flight campaigns. Initially, the study area is modeled by taking multiple overlapping RGB images with a high-resolution camera from an unmanned aerial vehicle (UAV). In addition, a UAV-based multispectral sensor is used to capture the reflectance for several narrow bands (green, near-infrared, red, and red-edge). Then, the RGB point cloud with a highly detailed geometry of olive trees is enriched by mapping the reflectance maps, which are generated for every multispectral image. Therefore, each 3D point is related to its corresponding pixel in the multispectral image in which it is visible. As a result, the 3D models of olive trees are characterized by the observed reflectance in the plant canopy. These reflectance values are also combined to calculate several vegetation indices (NDVI, RVI, GRVI, and NDRE). According to the spectral and spatial relationships in the olive plantation, segmentation of individual olive trees is performed. On the one hand, plant morphology is studied by a voxel-based decomposition of its 3D structure to estimate the height and volume. On the other hand, plant health is studied by the detection of meaningful spectral traits of olive trees. Moreover, the proposed methodology also allows the processing of multi-temporal data to study the variability of these features. Consequently, some relevant changes are detected and the development of each olive tree is analyzed by visual and statistical approaches. The interactive visualization and analysis of the enriched 3D plant structure with different spectral layers is an innovative method to inspect plant health and ensure adequate plantation sustainability.
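As a minimal sketch of the per-point vegetation indices mentioned above, the snippet below computes them from the mapped narrow-band reflectances. The input arrays are assumptions (one reflectance value per 3D point and band), the index formulas are the standard definitions, and GRVI is taken here in its green-red difference form, which may differ from the exact definition used in the paper.

```python
import numpy as np

def vegetation_indices(green, red, red_edge, nir, eps=1e-9):
    """Each argument is an (N,) reflectance array, one value per 3D point."""
    ndvi = (nir - red) / (nir + red + eps)             # Normalized Difference Vegetation Index
    rvi  = nir / (red + eps)                           # Ratio Vegetation Index
    grvi = (green - red) / (green + red + eps)         # Green-Red Vegetation Index (assumed form)
    ndre = (nir - red_edge) / (nir + red_edge + eps)   # Normalized Difference Red Edge
    return ndvi, rvi, grvi, ndre
```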


2020 ◽  
Vol 961 (7) ◽  
pp. 47-55
Author(s):  
A.G. Yunusov ◽  
A.J. Jdeed ◽  
N.S. Begliarov ◽  
M.A. Elshewy

Laser scanning is considered one of the most useful and fastest technologies for modelling. On the other hand, scan results can range from hundreds to several million points, and the large volume of the obtained clouds complicates processing of the results and increases time costs. One way to reduce the volume of a point cloud is segmentation, which reduces the amount of data from several million points to a limited number of segments. In this article, we evaluated the performance and accuracy of various segmentation methods, as well as the geometric accuracy of the obtained models, under density changes and taking processing time into account. The results of our experiment were compared with reference data in the form of a comparative analysis. In conclusion, some recommendations for choosing the best segmentation method are proposed.
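A rough sketch of this kind of experiment, under assumed interfaces: the cloud is decimated to several densities by voxel downsampling, and each candidate segmentation method is timed on every variant. The segmentation callables are placeholders; any method (region growing, plane fitting, clustering, ...) could be plugged in, and accuracy against reference data would be assessed separately.

```python
import time
import numpy as np

def voxel_downsample(points, voxel_size):
    """Keep one representative point per voxel to reduce cloud density."""
    keys = np.floor(points / voxel_size).astype(np.int64)
    _, idx = np.unique(keys, axis=0, return_index=True)
    return points[np.sort(idx)]

def benchmark(points, methods, voxel_sizes):
    """methods: dict of {name: callable(cloud) -> per-point segment ids}."""
    results = []
    for vs in voxel_sizes:
        cloud = voxel_downsample(points, vs)
        for name, segment in methods.items():
            start = time.perf_counter()
            labels = segment(cloud)
            elapsed = time.perf_counter() - start
            results.append((name, vs, len(cloud), len(set(labels)), elapsed))
    return results
```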


2021 ◽  
Vol 7 (2) ◽  
pp. 187-199
Author(s):  
Meng-Hao Guo ◽  
Jun-Xiong Cai ◽  
Zheng-Ning Liu ◽  
Tai-Jiang Mu ◽  
Ralph R. Martin ◽  
...  

The irregular domain and lack of ordering make it challenging to design deep neural networks for point cloud processing. This paper presents a novel framework named Point Cloud Transformer (PCT) for point cloud learning. PCT is based on the Transformer, which has achieved huge success in natural language processing and displays great potential in image processing. It is inherently permutation invariant when processing a sequence of points, making it well suited to point cloud learning. To better capture local context within the point cloud, we enhance input embedding with the support of farthest point sampling and nearest neighbor search. Extensive experiments demonstrate that PCT achieves state-of-the-art performance on shape classification, part segmentation, semantic segmentation, and normal estimation tasks.
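A minimal NumPy sketch of the sampling-and-grouping step mentioned above: farthest point sampling selects a well-spread subset of points, and a k-nearest-neighbour search gathers a local group around each sampled centre. This illustrates the neighbour-based input embedding idea only, not the full PCT network; shapes and the choice of the first seed point are assumptions.

```python
import numpy as np

def farthest_point_sampling(points, n_samples):
    """points: (N,3) -> indices of n_samples points spread over the cloud."""
    n = points.shape[0]
    chosen = np.zeros(n_samples, dtype=np.int64)
    dist = np.full(n, np.inf)
    chosen[0] = 0                                  # start from an arbitrary point
    for i in range(1, n_samples):
        d = np.linalg.norm(points - points[chosen[i - 1]], axis=1)
        dist = np.minimum(dist, d)                 # distance to the selected set so far
        chosen[i] = np.argmax(dist)                # pick the point farthest from that set
    return chosen

def knn_groups(points, centers_idx, k):
    """Return, for each sampled centre, the indices of its k nearest neighbours."""
    centers = points[centers_idx]                                          # (M,3)
    d = np.linalg.norm(points[None, :, :] - centers[:, None, :], axis=2)   # (M,N)
    return np.argsort(d, axis=1)[:, :k]                                    # (M,k)
```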

