DISTRIBUTED DIMENSIONALITY-BASED RENDERING OF LIDAR POINT CLOUDS

Author(s):  
M. Brédif ◽  
B. Vallet ◽  
B. Ferrand

Mobile Mapping Systems (MMS) now commonly acquire lidar scans of urban environments for an increasing number of applications such as 3D reconstruction and mapping, urban planning, urban furniture monitoring, and practicability assessment for persons with reduced mobility (PRM). MMS acquisitions are usually large enough to create a usability bottleneck for the growing number of non-expert users who are not trained to process and visualize these huge datasets with specialized software. The vast majority of their current needs call for a simple 2D visualization that is both legible on screen and printable on a static 2D medium, while still conveying an understanding of the 3D scene and minimizing the artifacts of the lidar acquisition geometry (such as lidar shadows). The users that motivated this research are bound by law to precisely georeference underground networks for which they currently have schematics with no or poor absolute georeferencing. A solution that may fit their needs is thus a 2D visualization of the MMS dataset that they can easily interpret and on which they can accurately match features with the user datasets they would like to georeference. Our main contribution is two-fold. First, we propose a 3D point cloud stylization for 2D static visualization that leverages a Principal Component Analysis (PCA)-like local geometry analysis. By skipping the usual and error-prone estimation of a ground elevation, this rendering is robust to non-flat areas and has no hard-to-tune parameters such as height thresholds. Second, we implemented the corresponding rendering pipeline so that it scales to arbitrarily large datasets by leveraging the Spark framework and its Resilient Distributed Dataset (RDD) and DataFrame abstractions.
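For readers unfamiliar with this family of methods, the sketch below shows how such PCA-like local dimensionality features (linearity, planarity, scatter) are typically derived from the eigenvalues of each point's neighbourhood covariance. It is a minimal illustration under assumed parameters, not the authors' distributed pipeline:

```python
# Minimal sketch of eigenvalue-based dimensionality features; the
# neighbourhood size k and the feature definitions are common choices
# in the literature, not taken from the paper.
import numpy as np
from scipy.spatial import cKDTree

def dimensionality_features(points, k=20):
    """Per-point (linearity, planarity, scatter), each in [0, 1]."""
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k)
    feats = np.empty((len(points), 3))
    for i, neigh in enumerate(idx):
        cov = np.cov(points[neigh].T)                # 3x3 covariance
        w = np.sort(np.linalg.eigvalsh(cov))[::-1]   # l1 >= l2 >= l3
        l1, l2, l3 = np.maximum(w, 1e-12)
        feats[i] = [(l1 - l2) / l1,   # linearity: poles, wires, curbs
                    (l2 - l3) / l1,   # planarity: ground, facades
                    l3 / l1]          # scatter:   vegetation, clutter
    return feats

# e.g. map (linearity, planarity, scatter) to RGB channels
pts = np.random.rand(10000, 3)
rgb = dimensionality_features(pts)
```

Mapping each of the three features to a colour channel is one simple way to obtain the kind of legible, ground-elevation-free 2D stylization the abstract describes.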

Author(s):  
G. López-Pazos ◽  
J. Balado ◽  
L. Díaz-Vilariño ◽  
P. Arias ◽  
M. Scaioni

With the rise of urban population, many initiatives focus on the smart city concept, in which mobility of citizens arises as one of the main components. Updated and detailed spatial information on outdoor environments is needed for accurate path planning for pedestrians, especially for people with reduced mobility, for whom physical barriers must be considered. This work presents a methodology for using point clouds for direct path planning. The starting point is a classified point cloud in which ground elements have been previously classified as roads, sidewalks, crosswalks, curbs, and stairs. The remaining points compose the obstacle class. The methodology starts by individualizing ground elements and simplifying them into representative points, which are used as nodes in the graph creation. The region of influence of obstacles is used to refine the graph. Edges of the graph are weighted according to the distance between nodes and to their accessibility for wheelchairs. As a result, we obtain a very accurate graph representing the as-built environment. The methodology has been tested on two real case studies, and the Dijkstra algorithm was used for pathfinding. The resulting paths are optimal with respect to motor skills and safety.
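As a concrete illustration of the graph stage, here is a minimal sketch (toy nodes and an assumed accessibility penalty, not the authors' implementation) that weights edges by distance times a wheelchair-accessibility factor and runs Dijkstra, as the abstract describes:

```python
# Minimal sketch: weighted accessibility graph + Dijkstra shortest path.
# Node coordinates, edges, and penalty factors below are illustrative.
import networkx as nx
import numpy as np

def build_graph(nodes, edges, penalties):
    """nodes: {id: (x, y, z)}; edges: [(u, v)]; penalties: {(u, v): factor}."""
    G = nx.Graph()
    for u, v in edges:
        dist = np.linalg.norm(np.subtract(nodes[u], nodes[v]))
        # a factor > 1 penalises barriers such as curbs or stairs
        G.add_edge(u, v, weight=dist * penalties.get((u, v), 1.0))
    return G

nodes = {0: (0, 0, 0), 1: (5, 0, 0), 2: (5, 4, 0.15)}   # toy example
G = build_graph(nodes, [(0, 1), (1, 2)], {(1, 2): 3.0})  # edge 1-2 crosses a curb
path = nx.dijkstra_path(G, 0, 2, weight="weight")
```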


Author(s):  
A. Nurunnabi ◽  
F. N. Teferle ◽  
J. Li ◽  
R. C. Lindenbergh ◽  
S. Parvaz

Abstract. Semantic segmentation of point clouds is indispensable for 3D scene understanding. Point clouds reliably capture the geometry of objects, including shape, size, and orientation. Deep learning (DL) has been recognized as the most successful approach for image semantic segmentation, but when applied to point clouds the performance of many DL algorithms degrades, because point clouds are often sparse and have an irregular data format. As a result, point clouds are usually first transformed into voxel grids or image collections. PointNet was the first promising algorithm to feed point clouds directly into a DL architecture. Although PointNet achieved remarkable performance on indoor point clouds, its performance has not been extensively studied on large-scale outdoor point clouds. To the best of our knowledge, no study on large-scale aerial point clouds has investigated the sensitivity of the hyper-parameters used in PointNet. This paper evaluates PointNet's performance for semantic segmentation on three large-scale Airborne Laser Scanning (ALS) point clouds of urban environments. The reported results show that PointNet has potential for large-scale outdoor scene semantic segmentation. A notable limitation of PointNet is that it does not consider the local structure induced by the metric space of a point's local neighbors. The experiments show that PointNet is highly sensitive to hyper-parameters such as batch size, block partition, and the number of points in a block. For one ALS dataset, we observed a significant difference between overall accuracies of 67.5% and 72.8% for block sizes of 5 m × 5 m and 10 m × 10 m, respectively. The results also show that the performance of PointNet depends on the selection of input vectors.
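The block partition the paper identifies as a sensitive hyper-parameter is, in common PointNet preprocessing, a simple XY gridding with per-block resampling to a fixed point count. The sketch below is an assumed illustration of that step, not the authors' code:

```python
# Minimal sketch of block partitioning for PointNet-style training;
# block_size and num_points are the hyper-parameters discussed above.
import numpy as np

def partition_blocks(points, block_size=10.0, num_points=4096):
    """Split the XY extent into blocks and sample a fixed count per block."""
    ij = np.floor(points[:, :2] / block_size).astype(int)
    blocks = []
    for key in {tuple(r) for r in ij}:
        sel = np.flatnonzero(np.all(ij == key, axis=1))
        # resample (with replacement if needed) to a constant block size
        choice = np.random.choice(sel, num_points,
                                  replace=len(sel) < num_points)
        blocks.append(points[choice])
    return np.stack(blocks)   # (n_blocks, num_points, n_features)
```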


Sensors ◽  
2021 ◽  
Vol 21 (4) ◽  
pp. 1304
Author(s):  
Wenchao Wu ◽  
Yongguang Hu ◽  
Yongzong Lu

Plant leaf 3D architecture changes during growth and responds sensitively to environmental stresses. In recent years, acquisition and segmentation methods for leaf point clouds have developed rapidly, but 3D modelling of leaf point clouds has not gained much attention. In this study, a parametric surface modelling method was proposed for accurately fitting tea leaf point clouds. Firstly, principal component analysis was used to adjust the posture and position of the point cloud. Then, the point cloud was sliced into multiple sections, and some sections were selected to generate a point set to be fitted (PSF). Finally, the PSF was fitted as a non-uniform rational B-spline (NURBS) surface. Two methods were developed to generate the ordered and the unordered PSF, respectively. The PSF was first fitted as a B-spline surface and then transformed to NURBS form by minimizing the fitting error, which was solved by particle swarm optimization (PSO). The fitting error was defined as a weighted sum of the root-mean-square error (RMSE) and the maximum value (MV) of the Euclidean distances between the fitted surface and a subset of the point cloud. The results showed that the proposed modelling method can be used even if the point cloud is heavily simplified (RMSE < 1 mm, MV < 2 mm, without performing PSO). Future studies will model a wider range of leaves as well as incomplete point clouds.
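The B-spline fitting stage and the paper's error measure can be sketched as follows, on synthetic stand-in data; the NURBS weight optimization by PSO is omitted, the error weights are assumed, and vertical residuals stand in for Euclidean point-to-surface distances:

```python
# Minimal sketch of B-spline surface fitting plus the weighted
# RMSE + MV error measure described in the abstract.
import numpy as np
from scipy.interpolate import bisplrep, bisplev

# stand-in leaf points after PCA pose normalisation: z = f(x, y)
x, y = np.meshgrid(np.linspace(0, 1, 20), np.linspace(0, 1, 20))
x, y = x.ravel(), y.ravel()
z = 0.2 * np.sin(3 * x) * np.cos(2 * y)

tck = bisplrep(x, y, z, kx=3, ky=3, s=0.001)     # bicubic B-spline fit
z_fit = np.array([bisplev(xi, yi, tck) for xi, yi in zip(x, y)])

residual = np.abs(z_fit - z)                     # vertical distances
rmse, mv = np.sqrt(np.mean(residual**2)), residual.max()
error = 0.5 * rmse + 0.5 * mv                    # weights assumed, not the paper's
```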


2021 ◽  
Vol 13 (2) ◽  
pp. 223
Author(s):  
Zhenyang Hui ◽  
Shuanggen Jin ◽  
Dajun Li ◽  
Yao Yevenyo Ziggah ◽  
Bo Liu

Individual tree extraction is an important process for forest resource surveying and monitoring. To obtain more accurate individual tree extraction results, this paper proposes an individual tree extraction method based on transfer learning and Gaussian mixture model separation. In this study, transfer learning is first adopted to classify trunk points, which serve as clustering centers for the initial tree segmentation. Subsequently, principal component analysis (PCA) transformation and kernel density estimation are used to determine the number of mixed components in the initial segmentation. Based on this number, Gaussian mixture model separation is applied to separate the canopy of each individual tree. Finally, the trunk stems corresponding to each canopy are extracted based on the vertical continuity principle. Six tree plots with different forest environments were used to test the performance of the proposed method. Experimental results show that the proposed method achieves 87.68% average correctness, much higher than that of the other two classical methods. In terms of completeness and mean accuracy, the proposed method also outperforms the other two methods.
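Once the number of mixed components is known, the canopy separation step reduces to fitting a Gaussian mixture. A minimal sketch with scikit-learn on toy data, not the authors' code:

```python
# Minimal sketch: separating overlapping canopies with a Gaussian
# mixture once the component count has been estimated upstream.
import numpy as np
from sklearn.mixture import GaussianMixture

def separate_canopies(canopy_xy, n_trees):
    """Assign each canopy point to one of n_trees Gaussian components."""
    gmm = GaussianMixture(n_components=n_trees, covariance_type="full",
                          random_state=0).fit(canopy_xy)
    return gmm.predict(canopy_xy)   # per-point tree label

# two overlapping toy canopies in plan view
pts = np.vstack([np.random.randn(300, 2) * 0.8,
                 np.random.randn(300, 2) * 0.8 + [2.5, 0.0]])
labels = separate_canopies(pts, n_trees=2)
```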


2021 ◽  
Vol 11 (5) ◽  
pp. 2268
Author(s):  
Erika Straková ◽  
Dalibor Lukáš ◽  
Zdenko Bobovský ◽  
Tomáš Kot ◽  
Milan Mihola ◽  
...  

When repairing industrial machines or vehicles, recognizing components is a critical and time-consuming task for a human. In this paper, we propose to automate this task. We start with a Principal Component Analysis (PCA), which fits the scanned point cloud with an ellipsoid by computing the eigenvalues and eigenvectors of a 3-by-3 covariance matrix. If there is a dominant eigenvalue, the point cloud is decomposed into two clusters, to which the PCA is applied recursively. If the matching is not unique, we continue to distinguish among several candidates: we decompose the point cloud into planar and cylindrical primitives and assign mutual features such as distance or angle to them. Finally, we refine the matching by comparing the matrices of mutual features of the primitives. This is a more computationally demanding but very robust method. We demonstrate the efficiency and robustness of the proposed methodology on a collection of 29 real scans and a database of 389 STL (Standard Triangle Language) models. As many as 27 scans are uniquely matched to their counterparts from the database, while in the remaining two cases there is only one additional candidate besides the correct model. The overall computational time is about 10 minutes in MATLAB.
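The recursive PCA decomposition can be sketched as follows; the dominance ratio and minimum cluster size are assumed values for illustration, not the paper's:

```python
# Minimal sketch: fit an ellipsoid via the covariance eigensystem and,
# when one eigenvalue dominates, split the cloud along that axis and
# recurse, as the abstract describes.
import numpy as np

def pca_decompose(points, ratio=4.0, min_points=50):
    """Return clusters whose covariance has no dominant eigenvalue."""
    c = points.mean(axis=0)
    w, v = np.linalg.eigh(np.cov((points - c).T))   # ascending eigenvalues
    if len(points) < 2 * min_points or w[2] < ratio * w[1]:
        return [points]                             # compact enough: keep
    t = (points - c) @ v[:, 2]                      # project on major axis
    return (pca_decompose(points[t <= 0], ratio, min_points) +
            pca_decompose(points[t > 0], ratio, min_points))
```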


2021 ◽  
Vol 10 (8) ◽  
pp. 525
Author(s):  
Wenmin Yao ◽  
Tong Chu ◽  
Wenlong Tang ◽  
Jingyu Wang ◽  
Xin Cao ◽  
...  

As some of China's most precious cultural relics, the Terracotta Warriors pose significant challenges to archaeologists during excavation and protection. A fairly common situation in excavation is that the Terracotta Warriors are mostly found as fragments, and manual reassembly of the numerous fragments is laborious and time-consuming. This work presents a fracture-surface-based reassembly method, named SPPD, which is composed of SiamesePointNet, principal component analysis (PCA), and deep closest point (DCP). Firstly, SiamesePointNet is proposed to determine whether a pair of point clouds of 3D Terracotta Warrior fragments can be reassembled. Then, a coarse-to-fine registration method based on PCA and DCP is proposed to register the two fragments into a reassembled one. These two steps iterate until the termination condition is met. A series of experiments on real-world examples demonstrates that the proposed method performs better than conventional reassembly methods. We hope this work can provide a valuable tool for the virtual restoration of three-dimensional cultural heritage artifacts.
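The PCA half of the coarse-to-fine registration aligns the principal axes of the two fragments before DCP refines the pose. A minimal sketch of that coarse step (not the SPPD code):

```python
# Minimal sketch: PCA-based coarse alignment of two fragment clouds.
# Remaining 180-degree axis-sign ambiguities are left to the fine stage.
import numpy as np

def pca_coarse_align(src, dst):
    """Rotate/translate src so its principal axes match dst's."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    _, vs = np.linalg.eigh(np.cov((src - cs).T))
    _, vd = np.linalg.eigh(np.cov((dst - cd).T))
    R = vd @ vs.T                     # maps src axes onto dst axes
    if np.linalg.det(R) < 0:          # enforce a proper rotation
        vd[:, 0] *= -1
        R = vd @ vs.T
    return (src - cs) @ R.T + cd
```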


Author(s):  
Suyong Yeon ◽  
ChangHyun Jun ◽  
Hyunga Choi ◽  
Jaehyeon Kang ◽  
Youngmok Yun ◽  
...  

Purpose – The authors propose a novel plane extraction algorithm for geometric 3D indoor mapping with range scan data.

Design/methodology/approach – The proposed method uses a divide-and-conquer step to efficiently handle huge amounts of point-cloud data, not as a whole but as separate sub-groups with similar plane parameters. It adopts robust principal component analysis to enhance estimation accuracy.

Findings – Experimental results verify that the method not only improves plane extraction performance but also broadens the domain of interest of plane registration to information-poor environments (such as simple indoor corridors), whereas the previous method works adequately only in information-rich environments (such as spaces with many features).

Originality/value – The proposed algorithm has three advantages over the current state-of-the-art method: it is fast, it uses more inlier sensor data by not becoming contaminated by severe sensor noise, and it extracts more accurate plane parameters.
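The building block that robust PCA hardens here is the classical PCA plane fit, in which the plane normal is the eigenvector of the smallest covariance eigenvalue. A minimal sketch for one sub-group of points (illustrative, not the authors' algorithm):

```python
# Minimal sketch: least-squares plane fit via PCA of the covariance.
import numpy as np

def fit_plane(points):
    """Return plane centroid, unit normal, and RMS out-of-plane residual."""
    c = points.mean(axis=0)
    w, v = np.linalg.eigh(np.cov((points - c).T))  # ascending eigenvalues
    n = v[:, 0]                 # eigenvector of the smallest eigenvalue
    rms = np.sqrt(w[0])         # spread perpendicular to the plane
    return c, n, rms
```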


2021 ◽  
Vol 20 (1) ◽  
Author(s):  
Tiwadayo Braimoh ◽  
Isaac Danat ◽  
Mohammed Abubakar ◽  
Obinna Ajeroh ◽  
Melinda Stanley ◽  
...  

Abstract

Background: Nearly 90,000 under-five children die from diarrhoea annually in Nigeria. Over 90% of these deaths can be prevented with oral rehydration salt (ORS) and zinc treatment, but national coverage was less than 34% for ORS and 3% for zinc, with wide inequities. A program was implemented in eight states to address critical barriers to the optimal functioning of the health care market in delivering these treatments. In this study, we examine changes in the inequities of ORS and zinc coverage over the intervention period.

Methods: Baseline and endline household surveys were used to measure ORS and zinc coverage and household assets. Principal component analysis was used to construct wealth quintiles. We used multi-level logistic regression models to estimate predicted coverage of ORS and zinc by wealth and urbanicity at each survey period. Simple measures of disparity and concentration indices and curves were used to evaluate changes in ORS and zinc coverage inequities.

Results: At baseline, 28% (95% CI: 22–35%) of children with diarrhoea from the poorest wealth quintile received ORS compared to 50% (95% CI: 52–58%) from the richest. This inequality narrowed at endline as ORS coverage increased by 21 percentage points (P < 0.001) for the poorest and 17 percentage points (P < 0.001) for the richest. Zinc coverage increased significantly for both quintiles at endline from an equally low baseline level. Consistent with this pairwise comparison of the poorest and richest quintiles, the summary measure of disparity across all wealth quintiles showed a narrowing of inequities from baseline to endline. Concentration curves shifted towards equality for both treatments, and concentration indices declined from 0.1012 to 0.0480 for ORS and from 0.2640 to 0.0567 for zinc. Disparities in ORS and zinc coverage between rural and urban areas at both time points were insignificant, except that zinc use in rural areas at endline was significantly higher at 38% (95% CI: 35–41%) compared to 29% (95% CI: 25–33%) in urban areas.

Conclusion: The results show a pro-rural improvement in coverage and a reduction in coverage inequities across wealth quintiles from baseline to endline. This indicates that initiatives focused on shaping healthcare market systems may be effective in reducing health coverage gaps without detracting from equity as a health policy objective.
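For context, the standard PCA wealth index takes the first principal component of household asset indicators and cuts it into quintiles, and a concentration index is commonly computed as twice the covariance of the outcome with the fractional wealth rank, divided by the mean outcome. A minimal sketch with toy data (not the study's survey or code):

```python
# Minimal sketch: PCA wealth quintiles and a simple concentration index.
import numpy as np
from sklearn.decomposition import PCA

assets = np.random.binomial(1, 0.4, size=(500, 10))    # 0/1 asset ownership
score = PCA(n_components=1).fit_transform(assets).ravel()
quintile = np.searchsorted(np.quantile(score, [0.2, 0.4, 0.6, 0.8]), score)

def concentration_index(outcome, rank_score):
    """2 * cov(outcome, fractional wealth rank) / mean(outcome)."""
    r = rank_score.argsort().argsort() / (len(rank_score) - 1)
    return 2 * np.cov(outcome, r)[0, 1] / outcome.mean()
```

A positive index indicates coverage concentrated among the richer households; a decline towards zero, as reported above, indicates narrowing inequity.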


Proceedings ◽  
2018 ◽  
Vol 2 (18) ◽  
pp. 1193
Author(s):  
Roi Santos ◽  
Xose Pardo ◽  
Xose Fdez-Vidal

The increasing use of autonomous UAVs inside buildings and around human-made structures demands new, accurate, and comprehensive representations of their operating environments. Most 3D scene abstraction methods rely on invariant feature point matching; nevertheless, some sparse 3D point clouds do not concisely represent the structure of the environment. Likewise, line clouds built from short, redundant segments with inaccurate directions limit the understanding of scenes such as environments with poor texture, or whose texture resembles a repetitive pattern. The presented approach is based on observation and representation models using straight line segments, which resemble the boundaries of an urban indoor or outdoor environment. The goal of this work is a complete line-matching method that complements state-of-the-art methods for 3D scene representation of poorly textured environments for future autonomous UAVs.


Sensors ◽  
2019 ◽  
Vol 19 (20) ◽  
pp. 4523
Author(s):  
Carlos Cabo ◽  
Celestino Ordóñez ◽  
Fernando Sánchez-Lasheras ◽  
Javier Roca-Pardiñas ◽  
Javier de Cos-Juez

We analyze the utility of multiscale supervised classification algorithms for object detection and extraction from laser scanning or photogrammetric point clouds. Only geometric information (the point coordinates) was considered, making the method independent of the system used to collect the data. A maximum of five features (input variables) was used, four of them related to the eigenvalues obtained from a principal component analysis (PCA). PCA was carried out at six scales, defined by the diameter of a sphere around each observation. Four multiclass supervised classification models were tested (linear discriminant analysis, logistic regression, support vector machines, and random forest) in two different scenarios, urban and forest, formed by artificial and natural objects, respectively. The results obtained were accurate (overall accuracy over 80% for the urban dataset and over 93% for the forest dataset), in the range of the best results found in the literature, regardless of the classification method. For both datasets, the random forest algorithm provided the best results when discrimination capacity, computing time, and the ability to estimate the relative importance of each variable are considered together.
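The feature construction can be sketched as eigenvalue triplets computed inside spheres of several radii and stacked as classifier input; the radii below are assumptions for illustration, not the paper's six scales:

```python
# Minimal sketch: multiscale eigenvalue features + random forest,
# mirroring the geometry-only setup described above.
import numpy as np
from scipy.spatial import cKDTree
from sklearn.ensemble import RandomForestClassifier

def multiscale_features(points, radii=(0.25, 0.5, 1.0)):
    tree = cKDTree(points)
    cols = []
    for r in radii:
        f = np.zeros((len(points), 3))
        for i, neigh in enumerate(tree.query_ball_point(points, r)):
            if len(neigh) >= 4:                       # enough for a 3x3 cov
                w = np.linalg.eigvalsh(np.cov(points[neigh].T))
                f[i] = w / max(w.sum(), 1e-12)        # normalised eigenvalues
        cols.append(f)
    return np.hstack(cols)                            # (n, 3 * len(radii))

# usage: X = multiscale_features(pts); RandomForestClassifier().fit(X, y)
```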

