Multi-Dimensional Underwater Point Cloud Detection Based on Deep Learning

Sensors, 2021, Vol. 21 (3), pp. 884
Author(s): Chia-Ming Tsai, Yi-Horng Lai, Yung-Da Sun, Yu-Jen Chung, Jau-Woei Perng

Numerous sensors can obtain images or point cloud data on land; underwater, however, the rapid attenuation of electromagnetic signals and the lack of light restrict sensing functions. This study extends two- and three-dimensional detection technologies to underwater applications in order to detect abandoned tires. A three-dimensional acoustic sensor, the BV5000, is used to collect underwater point cloud data. Several pre-processing steps are proposed to remove noise and the seabed from the raw data. The point clouds are then processed into two data types: a 2D image and a 3D point cloud. Deep learning methods of different dimensionality are used to train the models. In the two-dimensional method, the point cloud is transformed into a bird's-eye-view image, and the Faster R-CNN and YOLOv3 network architectures are used to detect tires. In the three-dimensional method, the point cloud associated with a tire is cut out from the raw data and used as training data, and the PointNet and PointConv network architectures are used for tire classification. The results show that both approaches provide good accuracy.
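The bird's-eye-view conversion described above can be illustrated with a minimal sketch; the grid extents, the 0.1 m cell size, and the simple occupancy encoding are assumptions for illustration, not the authors' exact parameters:

```python
import numpy as np

def point_cloud_to_bev(points, x_range=(0.0, 40.0), y_range=(-20.0, 20.0), res=0.1):
    """points: (N, 3) array of XYZ coordinates; returns a 2D occupancy image."""
    x, y = points[:, 0], points[:, 1]
    # Keep only points inside the chosen horizontal extent.
    mask = (x >= x_range[0]) & (x < x_range[1]) & (y >= y_range[0]) & (y < y_range[1])
    x, y = x[mask], y[mask]
    # Convert metric coordinates to pixel indices.
    cols = ((x - x_range[0]) / res).astype(np.int32)
    rows = ((y - y_range[0]) / res).astype(np.int32)
    h = int((y_range[1] - y_range[0]) / res)
    w = int((x_range[1] - x_range[0]) / res)
    bev = np.zeros((h, w), dtype=np.float32)
    bev[rows, cols] = 1.0  # mark occupied cells; height or density could be encoded instead
    return bev
```

The resulting image can then be fed to any 2D detector, while the raw 3D crops go to the point-based classifiers.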

2021, Vol. 11 (19), pp. 8996
Author(s): Yuwei Cao, Marco Scaioni

In current research, fully supervised Deep Learning (DL) techniques are employed to train segmentation networks applied to point clouds of buildings. However, training such networks requires large amounts of finely labeled building point-cloud data, which is a major challenge in practice because such data are difficult to obtain. Consequently, the application of fully supervised DL to semantic segmentation of building point clouds at the LoD3 level is severely limited. In order to reduce the number of required annotated labels, we propose a novel label-efficient DL network, named 3DLEB-Net, that obtains per-point semantic labels of LoD3 building point clouds with limited supervision. It consists of two steps. The first step (Autoencoder, AE) is composed of a Dynamic Graph Convolutional Neural Network (DGCNN) encoder and a folding-based decoder; it is designed to extract discriminative global and local features from input point clouds by faithfully reconstructing them without any labels. The second step is the semantic segmentation network. Supplied with a small amount of task-specific supervision, it semantically segments the encoded features acquired from the pre-trained AE. We evaluated our approach on the Architectural Cultural Heritage (ArCH) dataset. Compared to fully supervised DL methods, our model achieves state-of-the-art results on unseen scenes while using only 10% of the labeled training data required by those methods. Moreover, we conducted a series of ablation studies to demonstrate the effectiveness of our design choices.
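A minimal PyTorch-style sketch of this two-step, label-efficient idea is given below; the generic linear modules stand in for the DGCNN encoder, folding decoder, and segmentation head, and the losses, class count, and data shapes are assumptions rather than the 3DLEB-Net implementation:

```python
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, 128))   # per-point features
decoder = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 3))   # reconstruct XYZ
seg_head = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 8))  # 8 example classes

def pretrain_autoencoder(unlabeled_clouds, epochs=10):
    """Step 1: learn features by reconstructing point clouds, with no labels."""
    opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)
    for _ in range(epochs):
        for pts in unlabeled_clouds:                   # pts: (N, 3) float tensor
            recon = decoder(encoder(pts))
            loss = nn.functional.mse_loss(recon, pts)  # stand-in for a reconstruction loss
            opt.zero_grad(); loss.backward(); opt.step()

def train_segmentation(labeled_clouds, epochs=10):
    """Step 2: train only the segmentation head on a small labeled subset."""
    opt = torch.optim.Adam(seg_head.parameters(), lr=1e-3)
    for _ in range(epochs):
        for pts, labels in labeled_clouds:             # labels: (N,) per-point class indices
            with torch.no_grad():
                feats = encoder(pts)                   # pre-trained encoder, kept frozen here
            logits = seg_head(feats)
            loss = nn.functional.cross_entropy(logits, labels)
            opt.zero_grad(); loss.backward(); opt.step()
```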


Author(s): Y. Hori, T. Ogawa

The implementation of laser scanning in the field of archaeology provides an entirely new dimension in research and surveying. It allows us to digitally recreate individual objects, or entire cities, using millions of three-dimensional points grouped together into what are referred to as "point clouds". The visualization of point cloud data, which can be used in the final report by archaeologists and architects, is usually produced as a JPG or TIFF file. Beyond visualization, the re-examination of older data and new remote-sensing surveys of Roman construction provide precise, detailed measurements that may lead to revising drawings of ancient buildings previously adduced as evidence without any consideration of their degree of accuracy, and thus open new lines of research on these buildings. We used laser scanners in the field because of their speed, comprehensive coverage, accuracy, and flexibility of data manipulation. We therefore skipped much of the usual post-processing and focused on images created from the metadata, aligned simply with a tool that extends an automatic feature-matching algorithm and rendered with a popular renderer.


2019, Vol. 8 (5), pp. 213
Author(s): Florent Poux, Roland Billen

Automation in point cloud data processing is central to knowledge discovery within decision-making systems. The definition of relevant features is often key for segmentation and classification, and automated workflows present the main challenge. In this paper, we propose voxel-based feature engineering that better characterizes point clusters and provides strong support for supervised or unsupervised classification. We provide different feature generalization levels to permit interoperable frameworks. First, we recommend a shape-based feature set (SF1) that only leverages the raw X, Y, Z attributes of any point cloud. Afterwards, we derive relationships and topology between voxel entities to obtain a three-dimensional (3D) structural connectivity feature set (SF2). Finally, we provide a knowledge-based decision tree to permit infrastructure-related classification. We study the SF1/SF2 synergy within a new semantic segmentation framework for building a higher-level semantic representation of point clouds as relevant clusters. We then benchmark the approach against novel, best-performing deep-learning methods on the full S3DIS dataset. We highlight good performance, easy integration, and high F1-scores (>85%) for planar-dominant classes, comparable to state-of-the-art deep learning.
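As an illustration of the kind of shape-based, voxel-level descriptors a set like SF1 could contain, the sketch below computes standard eigenvalue-based features (linearity, planarity, sphericity) per voxel from raw XYZ only; the voxel size and the specific feature choices are assumptions, not the authors' exact definitions:

```python
import numpy as np

def voxel_shape_features(points, voxel_size=0.5):
    """points: (N, 3) XYZ array; returns {voxel_key: (linearity, planarity, sphericity)}."""
    keys = np.floor(points / voxel_size).astype(np.int64)
    features = {}
    for key in set(map(tuple, keys)):
        pts = points[np.all(keys == key, axis=1)]
        if len(pts) < 3:
            continue  # not enough points for a stable covariance estimate
        cov = np.cov(pts.T)
        evals = np.sort(np.linalg.eigvalsh(cov))[::-1]  # eigenvalues, descending
        l1, l2, l3 = evals
        if l1 <= 0:
            continue
        features[key] = ((l1 - l2) / l1,   # linearity
                         (l2 - l3) / l1,   # planarity
                         l3 / l1)          # sphericity
    return features
```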


Author(s): A. Nurunnabi, F. N. Teferle, J. Li, R. C. Lindenbergh, A. Hunegnaw

Abstract. Ground surface extraction is one of the classic tasks in airborne laser scanning (ALS) point cloud processing and is used for three-dimensional (3D) city modelling, infrastructure health monitoring, and disaster management. Many methods have been developed over the last three decades. Recently, Deep Learning (DL) has become the dominant technique for 3D point cloud classification. DL methods used for classification can be categorized into end-to-end and non-end-to-end approaches. One of the main challenges of using supervised DL approaches is obtaining a sufficient amount of training data; the main advantage of a supervised non-end-to-end approach is that it requires less training data. This paper introduces a novel local-feature-based, non-end-to-end DL algorithm that generates a binary classifier for ground point filtering. It studies feature relevance and investigates three models built from different combinations of features. The method is free from the limitations of the point cloud's irregular data structure and varying data density, which pose the biggest challenge for applying convolutional neural networks, and it does not require transforming the data into regular 3D voxel grids or any rasterization. The performance of the new method has been demonstrated on two ALS datasets covering urban environments. The method successfully labels ground and non-ground points in the presence of steep slopes and height discontinuities in the terrain. Experiments in this paper show that the algorithm achieves around 97% in both F1-score and model accuracy for ground point labelling.
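The non-end-to-end structure of such a filter can be sketched as hand-crafted local features feeding a small feed-forward classifier; the neighbourhood size, the three features computed, and the network architecture below are assumptions for illustration, not the paper's actual design:

```python
import numpy as np
from scipy.spatial import cKDTree
from sklearn.neural_network import MLPClassifier

def local_features(points, k=20):
    """points: (N, 3) XYZ; returns per-point features from k nearest neighbours."""
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k)
    feats = []
    for i in range(len(points)):
        nbrs = points[idx[i]]
        dz = points[i, 2] - nbrs[:, 2].min()   # height above the lowest neighbour
        evals = np.sort(np.linalg.eigvalsh(np.cov(nbrs.T)))[::-1]
        planarity = (evals[1] - evals[2]) / max(evals[0], 1e-9)
        roughness = nbrs[:, 2].std()           # local height variation
        feats.append([dz, planarity, roughness])
    return np.array(feats)

# Hypothetical usage: train_points is (N, 3), train_labels is (N,) with 1 = ground, 0 = non-ground.
# clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=300)
# clf.fit(local_features(train_points), train_labels)
# ground_mask = clf.predict(local_features(test_points)) == 1
```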


Author(s): Gülhan Benli

Since the 2000s, terrestrial laser scanning, as one of the methods used to document historical edifices in protected areas, has taken on greater importance because it mitigates the difficulties associated with working on large areas and saves time, while also making it possible to better understand all the particularities of the area. Through this technology, comprehensive point data (point clouds) describing the surface of an object can be generated in a highly accurate, three-dimensional manner. Furthermore, with the proper software, these three-dimensional point cloud data can be transformed into three-dimensional renderings, maps, and models, as well as quantitative orthophotographs. This chapter presents the results of terrestrial laser scanning and surveying used to obtain three-dimensional point clouds, through survey measurements and scans of street silhouettes in Fatih on the Historic Peninsula of Istanbul, which were then transposed into survey images and drawings. The chapter also cites examples of facade mapping using terrestrial laser scanning data from the Istanbul Historic Peninsula Project.


2020, Vol. 12 (11), pp. 1729
Author(s): Saifullahi Aminu Bello, Shangshu Yu, Cheng Wang, Jibril Muhmmad Adam, Jonathan Li

A point cloud is a set of points defined in a 3D metric space. Point clouds have become one of the most significant data formats for 3D representation and are gaining popularity as acquisition devices become more widely available and as applications grow in areas such as robotics, autonomous driving, and augmented and virtual reality. Deep learning is now the most powerful tool for data processing in computer vision and is becoming the preferred technique for tasks such as classification, segmentation, and detection. While deep learning techniques are mainly applied to data with a structured grid, the point cloud is unstructured, and this lack of structure makes the direct processing of point clouds with deep learning very challenging. This paper reviews recent state-of-the-art deep learning techniques, focusing mainly on raw point cloud data. Initial work on deep learning directly with raw point cloud data did not model local regions; subsequent approaches therefore model local regions through sampling and grouping. More recently, several approaches have been proposed that not only model the local regions but also explore the correlation between points within them. From the survey, we conclude that approaches that model local regions and take into account the correlation between points in those regions perform better. In contrast to existing reviews, this paper provides a general structure for learning with raw point clouds and compares the various methods against that structure. This work also introduces the popular 3D point cloud benchmark datasets and discusses the application of deep learning in popular 3D vision tasks, including classification, segmentation, and detection.
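The sampling-and-grouping step that underlies this general structure can be sketched as a simplified farthest-point sampling followed by a ball query; this is a generic illustration of the idea, not the code of any specific method surveyed:

```python
import numpy as np

def farthest_point_sampling(points, n_samples):
    """Pick n_samples points that are spread out across the cloud."""
    selected = [0]
    dists = np.linalg.norm(points - points[0], axis=1)
    for _ in range(n_samples - 1):
        nxt = int(np.argmax(dists))             # point farthest from all selected so far
        selected.append(nxt)
        dists = np.minimum(dists, np.linalg.norm(points - points[nxt], axis=1))
    return points[selected]

def ball_query(points, centers, radius=0.2, max_pts=32):
    """Group, for each sampled centre, the points falling inside a fixed radius."""
    groups = []
    for c in centers:
        idx = np.where(np.linalg.norm(points - c, axis=1) < radius)[0][:max_pts]
        groups.append(points[idx] - c)          # express neighbours relative to their centre
    return groups
```

Each local group is then passed through a shared point-wise network, which is where the correlation between points in the region can be modelled.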


Author(s): E. S. Malinverni, R. Pierdicca, M. Paolanti, M. Martini, C. Morbidoni, ...

Abstract. Cultural Heritage is a testimony of past human activity, and, as such, its objects exhibit great variety in their nature, size and complexity: from small artefacts and museum items to cultural landscapes, from historical buildings and ancient monuments to city centres and archaeological sites. Cultural Heritage around the globe suffers from wars, natural disasters and human negligence. The importance of digital documentation is well recognized, and there is increasing pressure to document our heritage both nationally and internationally. For this reason, three-dimensional scanning and modelling of cultural heritage sites and artefacts have increased remarkably in recent years. The semantic segmentation of point clouds is an essential step in the pipeline, since it allows complex architectures to be decomposed into single elements, which are then enriched with meaningful information within Building Information Modelling software. Nevertheless, this step is very time consuming and is entirely entrusted to the manual work of domain experts, far from being automated. This work describes a method to automatically label and cluster a point cloud based on a supervised Deep Learning approach, using a state-of-the-art neural network called PointNet++. Although other methods are known, we chose PointNet++ because it has achieved significant results in classifying and segmenting 3D point clouds. PointNet++ has been tested and improved by training the network with annotated point clouds coming from a real survey and evaluating how performance changes according to the input training data. This work is of great interest for the research community dealing with point cloud semantic segmentation, since it makes public a labelled dataset of CH elements for further tests.
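When evaluating how segmentation performance changes with the input training data, a standard per-class measure such as intersection-over-union can be computed from the per-point predictions; the sketch below is an assumed, generic evaluation helper, not the metric reported in the paper:

```python
import numpy as np

def mean_iou(predicted, ground_truth, num_classes):
    """predicted, ground_truth: (N,) arrays of per-point class indices."""
    ious = []
    for c in range(num_classes):
        pred_c = predicted == c
        true_c = ground_truth == c
        union = np.logical_or(pred_c, true_c).sum()
        if union == 0:
            continue  # class absent from both prediction and annotation
        ious.append(np.logical_and(pred_c, true_c).sum() / union)
    return float(np.mean(ious))  # mean IoU over the classes present
```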


Author(s): M. Nakagawa, R. Nozaki

Abstract. Three-dimensional indoor navigation requires various functions, such as shortest-path retrieval, obstacle avoidance, and secure-path retrieval, for optimal path finding using a geometrical network model. Although the geometrical network model can be prepared manually, it should be generated automatically from images and point clouds in order to represent changing indoor environments. Thus, we propose a methodology for generating a geometrical network model for indoor navigation from point clouds through object classification, navigable area estimation, and navigable path estimation. Our proposed methodology was evaluated through experiments using the International Society for Photogrammetry and Remote Sensing benchmark for indoor modelling. In our experiments, we confirmed that our methodology can generate a geometrical network model automatically.
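Once a geometrical network model exists, shortest-path retrieval reduces to a graph search; the sketch below uses networkx with invented node names and edge weights purely to illustrate the query, since the actual graph would come from the point-cloud-based estimation steps described above:

```python
import networkx as nx

# Nodes are navigable positions; edge weights are walkable distances in metres (illustrative values).
G = nx.Graph()
G.add_edge("corridor_A", "corridor_B", weight=12.5)
G.add_edge("corridor_B", "room_101", weight=4.0)
G.add_edge("corridor_A", "stairwell", weight=6.0)
G.add_edge("stairwell", "room_101", weight=15.0)

path = nx.shortest_path(G, source="corridor_A", target="room_101", weight="weight")
print(path)  # ['corridor_A', 'corridor_B', 'room_101']
```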


2021, Vol. 7 (1), pp. 1-24
Author(s): Piotr Tompalski, Nicholas C. Coops, Joanne C. White, Tristan R.H. Goodbody, Chris R. Hennigar, ...

Abstract. Purpose of Review: The increasing availability of three-dimensional point clouds, including both airborne laser scanning and digital aerial photogrammetry, allows for the derivation of forest inventory information with a high level of attribute accuracy and spatial detail. When available at two points in time, point cloud datasets offer a rich source of information for detailed analysis of change in forest structure. Recent Findings: Existing research across a broad range of forest types has demonstrated that these analyses can be performed using different approaches, levels of detail, and source data. By reviewing the relevant findings, we highlight the potential that bi- and multi-temporal point clouds have for enhanced analysis of forest growth. We divide the existing approaches into two broad categories: approaches that estimate change from predictions of two or more forest inventory attributes over time, and approaches that forecast forest inventory attributes. We describe how point clouds acquired at two or more points in time can be used for both categories of analysis by comparing input airborne datasets, before discussing the methods used and the resulting accuracies. Summary: To conclude, we outline outstanding research gaps that require further investigation, including the need for an improved understanding of which three-dimensional datasets can be applied using certain methods. We also discuss the likely implications of these datasets for the expected outcomes, improvements in tree-to-tree matching and analysis, integration with growth simulators, and, ultimately, the development of growth models driven entirely by point cloud data.
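The first category of analysis, estimating change from attribute predictions at two points in time, can be illustrated with a minimal sketch; the percentile-based canopy metric and the example height values are assumptions chosen for illustration, not a specific study's model:

```python
import numpy as np

def canopy_height_metric(heights_above_ground, percentile=95):
    """A simple area-based metric: an upper percentile of normalized point heights."""
    return np.percentile(heights_above_ground, percentile)

# Normalized (above-ground) point heights for the same plot at two acquisition epochs.
heights_t1 = np.array([18.2, 19.5, 17.8, 20.1, 16.9])
heights_t2 = np.array([19.0, 20.4, 18.6, 21.3, 17.5])

growth = canopy_height_metric(heights_t2) - canopy_height_metric(heights_t1)
print(f"Change in 95th-percentile height between epochs: {growth:.2f} m")
```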


Author(s): Thomas Blanc, Mohamed El Beheiry, Jean-Baptiste Masson, Bassam Hajj

Abstract. The quantity of experimentally recorded point cloud data, such as that generated in single-molecule experiments, is continuously increasing in both size and dimension. Gaining an intuitive understanding of the data and facilitating multi-dimensional data analysis remain challenging, especially when static distribution properties are not predictive of dynamical properties. Here, we report a new open-source software platform, Genuage, that enables the easy perception, interaction and analysis of complex multidimensional point cloud datasets by leveraging virtual reality. We illustrate the benefits of Genuage with examples of three-dimensional static and dynamic localization microscopy datasets, as well as synthetic datasets. Genuage supports a wide range of usage modes, owing to its compatibility with arbitrary multidimensional data types, and its applicability extends beyond the single-molecule research community.

