Optimizing Wireless Sensor Network Installations by Visibility Analysis on 3D Point Clouds

2019 ◽  
Vol 8 (10) ◽  
pp. 460
Author(s):  
Gracchi ◽  
Gigli ◽  
Noël ◽  
Jaboyedoff ◽  
Madiai ◽  
...  

In this paper, a MATLAB tool for the automatic detection of the best locations to install a wireless sensor network (WSN) is presented. The implemented code works directly on high-resolution 3D point clouds and aims to help in positioning sensors that are part of a network requiring inter-visibility, namely, a clear line of sight (LOS). Indeed, with the development of LiDAR and Structure from Motion technologies, there is an opportunity to use 3D point cloud data directly to perform visibility analyses. By doing so, many disadvantages of traditional modelling and analysis methods can be bypassed. The algorithm identifies the optimal deployment of devices based mainly on two criteria: inter-visibility (using a modified version of the Hidden Point Removal operator) and inter-distance. Furthermore, an option to prioritize significant areas is provided. The proposed method was first validated on an artificial 3D model, and then on a landslide 3D point cloud acquired from terrestrial laser scanning for the real positioning of an ultrawide-band WSN already installed in 2016. The comparison between the collected data and data acquired by the WSN installed following traditional patterns demonstrates the tool's ability to optimally deploy a WSN requiring inter-visibility.
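The inter-visibility criterion builds on the Hidden Point Removal (HPR) operator. As a rough illustration only, the Python sketch below shows the standard HPR operator (spherical flipping followed by a convex hull test), not the modified version used by the authors; the radius factor is a hypothetical parameter.

```python
import numpy as np
from scipy.spatial import ConvexHull

def hidden_point_removal(points, viewpoint, radius_factor=100.0):
    """Standard HPR sketch: points visible from `viewpoint` are found by
    spherical flipping followed by a convex hull test. Illustrative only,
    not the authors' modified MATLAB implementation."""
    p = points - viewpoint                        # move the viewpoint to the origin
    norms = np.linalg.norm(p, axis=1, keepdims=True)
    R = norms.max() * radius_factor               # radius of the flipping sphere
    flipped = p + 2.0 * (R - norms) * p / norms   # spherical flipping
    hull = ConvexHull(np.vstack([flipped, np.zeros(3)]))  # add the viewpoint itself
    return hull.vertices[hull.vertices < len(points)]     # indices of visible points
```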

Author(s):  
R. Boerner ◽  
M. Kröhnert

3D point clouds, acquired by state-of-the-art terrestrial laser scanning (TLS) techniques, provide spatial information with accuracies of up to several millimetres. Unfortunately, common TLS data contains no spectral information about the covered scene. However, the matching of TLS data with images is important for monoplotting purposes and point cloud colouration. Well-established methods solve this issue by matching close-range images with point cloud data, either by mounting optical camera systems on top of laser scanners or by using ground control points. The approach addressed in this paper aims at matching 2D image and 3D point cloud data from a freely moving camera within an environment covered by a large 3D point cloud, e.g. a 3D city model. The key advantage of this free movement is its benefit for augmented reality applications and real-time measurements. To this end, a so-called real image, captured by a smartphone camera, has to be matched with a so-called synthetic image, which is generated by projecting the 3D point cloud data to a synthetic projection centre whose exterior orientation parameters match those of the real image, assuming an ideal distortion-free camera.
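To illustrate the idea of a synthetic image, the following Python sketch projects a point cloud into an image plane at a given exterior orientation (R, t) and camera matrix K, assuming an ideal distortion-free camera as in the abstract. It is a minimal z-buffer rendering; the function and parameter names are ours, not the authors' implementation.

```python
import numpy as np

def render_synthetic_image(points, intensities, K, R, t, width, height):
    """Project 3D point cloud data into a 'synthetic image' for a camera with
    matrix K and exterior orientation (R, t). A simple z-buffer keeps the
    nearest point per pixel. Illustrative sketch only."""
    cam = (R @ points.T + t.reshape(3, 1)).T       # world -> camera coordinates
    keep = cam[:, 2] > 0                           # points in front of the camera
    cam, vals = cam[keep], intensities[keep]
    uvw = (K @ cam.T).T
    u = np.round(uvw[:, 0] / uvw[:, 2]).astype(int)
    v = np.round(uvw[:, 1] / uvw[:, 2]).astype(int)
    inside = (u >= 0) & (u < width) & (v >= 0) & (v < height)
    depth = np.full((height, width), np.inf)
    image = np.zeros((height, width))
    for ui, vi, zi, ci in zip(u[inside], v[inside], cam[inside, 2], vals[inside]):
        if zi < depth[vi, ui]:                     # z-buffer test
            depth[vi, ui] = zi
            image[vi, ui] = ci
    return image
```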


Aerospace ◽  
2018 ◽  
Vol 5 (3) ◽  
pp. 94 ◽  
Author(s):  
Hriday Bavle ◽  
Jose Sanchez-Lopez ◽  
Paloma Puente ◽  
Alejandro Rodriguez-Ramos ◽  
Carlos Sampedro ◽  
...  

This paper presents a fast and robust approach for estimating the flight altitude of multirotor Unmanned Aerial Vehicles (UAVs) using 3D point cloud sensors in cluttered, unstructured, and dynamic indoor environments. The objective is to present a flight altitude estimation algorithm that replaces conventional sensors such as laser altimeters, barometers, or accelerometers, which have several limitations when used individually. Our proposed algorithm consists of two stages: in the first stage, a fast clustering of the measured 3D point cloud data is performed, along with the segmentation of the clustered data into horizontal planes. In the second stage, these segmented horizontal planes are mapped based on the vertical distance with respect to the point cloud sensor frame of reference, in order to provide a robust flight altitude estimate even in the presence of several static as well as dynamic ground obstacles. We validate our approach using the IROS 2011 Kinect dataset available in the literature, estimating the altitude of the RGB-D camera using the provided 3D point clouds. We further validate our approach using a point cloud sensor on board a UAV, by means of several autonomous real flights, closing its altitude control loop with the flight altitude estimated by our proposed method in the presence of several different static as well as dynamic ground obstacles. In addition, the implementation of our approach has been integrated into our open-source software framework for aerial robotics called Aerostack.
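A minimal sketch of the two-stage idea described above: the vertical distances of the point cloud are binned to find dominant horizontal planes, and a reference plane below the sensor provides the altitude estimate. The bin size, threshold, and frame convention are assumptions for illustration, not the authors' values.

```python
import numpy as np

def estimate_altitude(points, bin_size=0.05, plane_fraction=0.05):
    """Illustrative sketch: cluster vertical distances into horizontal planes,
    then take the farthest dominant plane below the sensor as ground.
    Assumes points are in the sensor frame with z pointing downwards."""
    z = points[:, 2]
    z = z[z > 0]                                    # returns below the sensor
    if z.size == 0:
        return None
    n_bins = max(int(np.ceil((z.max() - z.min()) / bin_size)), 1)
    hist, edges = np.histogram(z, bins=n_bins)
    plane_bins = np.where(hist > plane_fraction * z.size)[0]  # dominant planes
    if plane_bins.size == 0:
        return float(np.median(z))
    floor_bin = plane_bins.max()                    # farthest dominant plane = ground
    return float(0.5 * (edges[floor_bin] + edges[floor_bin + 1]))
```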


2018 ◽  
Vol 9 (2) ◽  
pp. 37-53
Author(s):  
Sinh Van Nguyen ◽  
Ha Manh Tran ◽  
Minh Khai Tran

Building 3D objects or reconstructing their surfaces from 3D point cloud data are active research topics in the fields of geometric modeling and computer graphics. In recent years, they have also been studied and applied in areas such as graph models and simulation, image processing, and the restoration of digital heritage. This article presents an improved method for restoring the shape of 3D point cloud surfaces. The method combines the creation of Bezier surface patches with the computation of tangent planes of 3D points to fill holes in the surface of 3D point clouds. The method proceeds as follows: first, a boundary for each hole on the surface is identified. The holes are then filled by computing Bezier curves of surface patches to find the missing points. After that, the holes are refined in two steps (rough and elaborate) to adjust the inserted points and preserve the local curvature of the holes. The contribution of the proposed method is shown in its processing time, and its novel combined computation preserves the initial shape of the surface.
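As a small illustration of the Bezier-based interpolation step, the sketch below bridges two opposite boundary points of a hole with a cubic Bezier curve and samples new points along it. The placement of the inner control points is a simplification of the surface-patch and tangent-plane computation described in the abstract.

```python
import numpy as np

def bezier_point(p0, p1, p2, p3, t):
    """Evaluate a cubic Bezier curve at parameter t."""
    return ((1 - t) ** 3 * p0 + 3 * (1 - t) ** 2 * t * p1
            + 3 * (1 - t) * t ** 2 * p2 + t ** 3 * p3)

def fill_hole_segment(boundary_a, boundary_b, n_new=5):
    """Minimal sketch: connect two opposite hole-boundary points with a cubic
    Bezier curve whose inner control points lie on the chord, then sample new
    points along the curve. Illustrative simplification, not the full method."""
    p0, p3 = np.asarray(boundary_a, float), np.asarray(boundary_b, float)
    p1 = p0 + (p3 - p0) / 3.0
    p2 = p0 + 2.0 * (p3 - p0) / 3.0
    ts = np.linspace(0.0, 1.0, n_new + 2)[1:-1]     # interior parameters only
    return np.array([bezier_point(p0, p1, p2, p3, t) for t in ts])
```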


2010 ◽  
Vol 22 (2) ◽  
pp. 158-166 ◽  
Author(s):  
Taro Suzuki ◽  
◽  
Yoshiharu Amano ◽  
Takumi Hashizume

This paper describes outdoor localization for a mobile robot using a laser scanner and three-dimensional (3D) point cloud data. A Mobile Mapping System (MMS) measures outdoor 3D point clouds easily and precisely. The full six-dimensional state of the mobile robot is estimated by combining dead reckoning and 3D point cloud data. Two-dimensional (2D) position and orientation are extended to 3D using 3D point clouds, assuming that the mobile robot remains in continuous contact with the road surface. Our approach applies a particle filter to correct position errors, using a laser measurement model in 3D point cloud space. Field experiments were conducted to evaluate the accuracy of the proposed method. The experiments confirmed that a localization precision of 0.2 m (RMS) is achievable with our approach.
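The following Python sketch illustrates one possible particle filter measurement update against a 3D point cloud map in the spirit of the abstract: each particle pose transforms the current scan into the map frame and is weighted by how well it matches the map. The reduced pose parametrization and the noise value are assumptions for illustration only.

```python
import numpy as np
from scipy.spatial import cKDTree

def measurement_update(particles, weights, scan_points, map_tree, sigma=0.2):
    """Sketch of a laser measurement model in 3D point cloud space: each
    particle pose (x, y, z, yaw; roll/pitch omitted for brevity) transforms the
    scan into the map frame and is weighted by the nearest-neighbour residuals.
    `sigma` is an assumed measurement noise, not a value from the paper."""
    for i, (x, y, z, yaw) in enumerate(particles):
        c, s = np.cos(yaw), np.sin(yaw)
        Rz = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
        pts = scan_points @ Rz.T + np.array([x, y, z])
        d, _ = map_tree.query(pts)                 # distances to nearest map points
        weights[i] *= np.exp(-0.5 * np.mean(d ** 2) / sigma ** 2)
    return weights / weights.sum()

# Usage sketch: map_tree = cKDTree(map_points); weights = measurement_update(...)
```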


2020 ◽  
Vol 12 (18) ◽  
pp. 3043 ◽  
Author(s):  
Juan M. Jurado ◽  
Luís Pádua ◽  
Francisco R. Feito ◽  
Joaquim J. Sousa

The optimisation of vineyard management requires efficient and automated methods able to identify individual plants. In the last few years, Unmanned Aerial Vehicles (UAVs) have become one of the main sources of remote sensing information for Precision Viticulture (PV) applications. In fact, high-resolution UAV-based imagery offers a unique capability for modelling plant structure, making possible the recognition of significant geometrical features in photogrammetric point clouds. Despite the proliferation of innovative technologies in viticulture, the identification of individual grapevines relies on image-based segmentation techniques. In that way, grapevine and non-grapevine features are separated, and individual plants are usually estimated by assuming a fixed distance between them. In this study, an automatic method for grapevine trunk detection using 3D point cloud data is presented. The proposed method focuses on the recognition of key geometrical parameters to ensure the existence of every plant in the 3D model. The method was tested in different commercial vineyards; to push it to its limits, a vineyard characterised by several missing plants along the vine rows, irregular distances between plants, and trunks occluded by dense vegetation in some areas was also used. The proposed method represents a disruption in relation to the state of the art and is able to identify individual trunks, posts, and missing plants based on the interpretation and analysis of a 3D point cloud. Moreover, a validation process was carried out, allowing us to conclude that the method performs well, especially when it is applied to 3D point clouds generated in phases in which the leaves are not yet very dense (January to May). However, if correct flight parametrizations are set, the method remains effective throughout the entire vegetative cycle.
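As a rough illustration of trunk detection in a 3D point cloud (not the authors' method), the sketch below keeps points in a height band above the ground where trunks and posts are expected and clusters them in the horizontal plane; all thresholds are hypothetical.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def trunk_candidates(points, ground_z, band=(0.1, 0.6), eps=0.05,
                     min_pts=30, max_radius=0.15):
    """Illustrative sketch only: keep points in a height band above the ground,
    cluster them in the horizontal plane, and accept compact (slender) clusters
    as trunk/post candidates. All parameters are hypothetical."""
    mask = (points[:, 2] > ground_z + band[0]) & (points[:, 2] < ground_z + band[1])
    slab = points[mask]
    labels = DBSCAN(eps=eps, min_samples=min_pts).fit_predict(slab[:, :2])
    candidates = []
    for lbl in set(labels) - {-1}:                 # -1 marks noise points
        cluster = slab[labels == lbl]
        centre = cluster[:, :2].mean(axis=0)
        if np.linalg.norm(cluster[:, :2] - centre, axis=1).max() < max_radius:
            candidates.append(centre)              # compact cluster -> trunk or post
    return np.array(candidates)
```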


Author(s):  
H. Houshiar ◽  
S. Winkler

With advances in technology, access to data, especially 3D point cloud data, is becoming more and more an everyday task. 3D point clouds are usually captured with very expensive tools, such as 3D laser scanners, or with very time-consuming methods, such as photogrammetry. Most of the available software for 3D point cloud processing is designed for experts and specialists in this field and usually comes as very large packages containing a variety of methods and tools. This results in software that is usually very expensive to acquire and also very difficult to use. The difficulty of use is caused by the complicated user interfaces required to accommodate a long list of features. The aim of these complex packages is to provide a powerful tool for a specific group of specialists. However, they are not necessarily required by the majority of upcoming average users of point clouds. In addition to their complexity and high cost, these packages generally rely on expensive, modern hardware and are often compatible with only one specific operating system. Many point cloud customers are not point cloud processing experts and are not willing to pay the high acquisition costs of this software and hardware. In this paper, we introduce a solution for low-cost point cloud processing. Our approach is designed to accommodate the needs of the average point cloud user. To reduce cost and complexity, our approach focuses on one functionality at a time, in contrast with most available software and tools that aim to solve as many problems as possible at once. Our simple, user-oriented design improves the user experience and allows us to optimize our methods to create efficient software. We introduce the Pointo family as a series of connected applications that provide easy-to-use tools with a simple design for different point cloud processing requirements. PointoVIEWER and PointoCAD are introduced as the first components of the Pointo family, providing fast and efficient visualization with the ability to add annotations and documentation to the point clouds.


Author(s):  
A. Kumar ◽  
K. Anders ◽  
L. Winiwarter ◽  
B. Höfle

3D point clouds acquired by laser scanning and other techniques are difficult to interpret because of their irregular structure. To make sense of this data and to allow for the derivation of useful information, a segmentation of the points into groups, units, or classes fit for the specific use case is required. In this paper, we present a non-end-to-end deep learning classifier for 3D point clouds using multiple sets of input features and compare it with an implementation of the state-of-the-art deep learning framework PointNet++. We start by extracting features derived from the local normal vector (normal vectors, eigenvalues, and eigenvectors) from the point cloud and study the classification results for different local search radii. We extract additional features related to spatial point distribution and use them together with the normal vector-based features. We find that the classification accuracy improves by up to 33% when we include normal vector features with multiple search radii and features related to spatial point distribution. Our method achieves a mean Intersection over Union (mIoU) of 94%, outperforming PointNet++’s Multi Scale Grouping by up to 12%. The study demonstrates the importance of multiple search radii for different point cloud features for classification in an urban 3D point cloud scene acquired by terrestrial laser scanning.
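A simplified sketch of multi-radius feature extraction as described above: for each point and each search radius, the eigenvalues and eigenvectors of the local covariance matrix yield normal-vector and shape features. The radii and the selected features are illustrative assumptions, not the study's exact setup.

```python
import numpy as np
from scipy.spatial import cKDTree

def multi_radius_features(points, radii=(0.5, 1.0, 2.0)):
    """For each point and each search radius, compute eigenvalue features and
    the verticality of the local normal. Radii and features are assumptions."""
    tree = cKDTree(points)
    blocks = []
    for r in radii:
        feats = np.zeros((len(points), 4))
        for i, p in enumerate(points):
            idx = tree.query_ball_point(p, r)
            if len(idx) < 3:
                continue
            nb = points[idx] - points[idx].mean(axis=0)
            evals, evecs = np.linalg.eigh(nb.T @ nb / len(idx))
            feats[i, :3] = evals[::-1]             # eigenvalues, descending
            feats[i, 3] = abs(evecs[2, 0])         # verticality of the normal
        blocks.append(feats)
    return np.hstack(blocks)                       # one feature vector per point
```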


Sensors ◽  
2020 ◽  
Vol 20 (16) ◽  
pp. 4594
Author(s):  
Ting On Chan ◽  
Linyuan Xia ◽  
Derek D. Lichti ◽  
Yeran Sun ◽  
Jun Wang ◽  
...  

Pipe elbow joints exist in almost every piping system, supporting many important applications such as clean water supply. However, spatial information about the elbow joints is rarely extracted and analyzed from observations such as point cloud data obtained from laser scanning, due to the lack of a complete geometric model that can be applied to different types of joints. In this paper, we propose a novel geometric model and several model adaptations for typical elbow joints, including the 90° and 45° types, which facilitates the use of 3D point clouds of the elbow joints collected by laser scanning. The model comprises translational, rotational, and dimensional parameters, which can be used not only for monitoring the joints’ geometry but also for other applications such as point cloud registration. Both simulated and real datasets were used to verify the model, and two applications derived from the proposed model (point cloud registration and mounting bracket detection) were demonstrated. The results of the geometric fitting of the simulated datasets suggest that the model can accurately recover the geometry of the joint, with very low translational (0.3 mm) and rotational (0.064°) errors when ±0.02 m random errors were introduced to the coordinates of a simulated 90° joint (with a diameter of 0.2 m). The fitting of the real datasets suggests that the accuracy of the diameter estimate reaches 97.2%. The joint-based registration accuracy reaches sub-decimeter and sub-degree levels for the translational and rotational parameters, respectively.
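For illustration, a 90° elbow can be approximated as a quarter torus, and its translational, rotational, and dimensional parameters recovered by least-squares fitting. The sketch below gives one possible residual function; the torus stand-in and its parametrization are ours, not the paper's model.

```python
import numpy as np
from scipy.optimize import least_squares

def elbow_residuals(params, points):
    """Residuals for fitting an elbow approximated as a torus segment
    (a simplified stand-in for the paper's geometric model):
    params = (tx, ty, tz, rx, ry, rz, R_major, r_minor)."""
    t = params[:3]
    rx, ry, rz = params[3:6]
    R_major, r_minor = params[6], params[7]
    cx, sx, cy, sy = np.cos(rx), np.sin(rx), np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    Rm = (np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]]) @
          np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]]) @
          np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]]))
    p = (points - t) @ Rm                          # transform into the model frame
    rho = np.hypot(p[:, 0], p[:, 1])               # distance from the torus axis
    return np.sqrt((rho - R_major) ** 2 + p[:, 2] ** 2) - r_minor

# Usage sketch: fit = least_squares(elbow_residuals, x0, args=(cloud_points,))
```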


Author(s):  
A. Leichter ◽  
U. Feuerhake ◽  
M. Sester

Public space is a scarce good in cities. There are many concurrent usages, which makes an adequate allocation of space both difficult and highly attractive. A lot of space is allocated to parking cars, even if the parking spaces are not occupied by cars all the time. In this work, we analyze the space demand and usage of parking cars in order to evaluate when this space could be used for other purposes. The analysis is based on 3D point clouds acquired at several times during a day. We propose a processing pipeline to extract car bounding boxes from a given 3D point cloud. For the car extraction, we utilize a label transfer technique that transfers labels from semantically segmented 2D RGB images to 3D point cloud data. This semantically segmented 3D data allows us to identify car instances. Subsequently, we aggregate and analyze information about parking cars. We present an exemplary analysis of an urban area in which we extracted 15,000 cars at five different points in time. Based on this aggregated information, we present analytical results for time-dependent parking behavior, parking space availability, and utilization.
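A minimal sketch of the instance-extraction step described above: after the 2D-to-3D label transfer, points labelled as cars are clustered into instances, and each cluster yields an axis-aligned bounding box. The class label and clustering parameters are hypothetical, not values from the paper.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def car_bounding_boxes(points, labels, car_label=1, eps=0.5, min_pts=50):
    """Cluster points labelled as 'car' into instances and return an
    axis-aligned bounding box per instance. `car_label`, `eps`, and
    `min_pts` are hypothetical parameters."""
    car_pts = points[labels == car_label]
    instance_ids = DBSCAN(eps=eps, min_samples=min_pts).fit_predict(car_pts)
    boxes = []
    for inst in set(instance_ids) - {-1}:          # -1 marks noise points
        cluster = car_pts[instance_ids == inst]
        boxes.append((cluster.min(axis=0), cluster.max(axis=0)))  # (min_xyz, max_xyz)
    return boxes
```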

