Clustering-Based Plane Segmentation Neural Network for Urban Scene Modeling

Sensors ◽  
2021 ◽  
Vol 21 (24) ◽  
pp. 8382
Author(s):  
Hongjae Lee ◽  
Jiyoung Jung

Urban scene modeling is a challenging but essential task for applications such as 3D map generation, city digitization, and AR/VR/metaverse services. To model man-made structures, such as roads and buildings, which are the major components of general urban scenes, we present a clustering-based plane segmentation neural network for 3D point clouds, called hybrid K-means plane segmentation (HKPS). The proposed method segments unorganized 3D point clouds into planes by training the neural network to estimate the appropriate number of planes in the point cloud based on hybrid K-means clustering. We consider both Euclidean distance and cosine distance, so that nearby points oriented in the same direction are clustered together for better plane segmentation. Our network does not require any labeled information for training. We evaluated the proposed method on the Virtual KITTI dataset and showed that it outperforms conventional methods in plane segmentation. Our code is publicly available.
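The hybrid distance described in this abstract can be sketched as a K-means variant whose assignment step mixes Euclidean distance between point positions with cosine distance between normals. This is a minimal illustration, not the authors' method: the weight `w`, the normal-averaging centroid update, and all function names are assumptions, and the paper's learned estimate of the plane count is not modeled here.

```python
import numpy as np

def hybrid_distance(points, normals, centers, center_normals, w=0.5):
    """Mix Euclidean position distance with cosine distance between normals."""
    # Euclidean distances, shape (N, K)
    d_euc = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
    # Cosine distances between unit normals: 1 - cos(theta)
    d_cos = 1.0 - normals @ center_normals.T
    return w * d_euc + (1.0 - w) * d_cos

def hybrid_kmeans(points, normals, k, iters=20, w=0.5, seed=0):
    """Lloyd-style K-means using the hybrid distance for assignment."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(points), size=k, replace=False)
    centers, center_normals = points[idx], normals[idx]
    for _ in range(iters):
        labels = hybrid_distance(points, normals,
                                 centers, center_normals, w).argmin(axis=1)
        for j in range(k):
            mask = labels == j
            if mask.any():
                centers[j] = points[mask].mean(axis=0)
                n = normals[mask].mean(axis=0)
                center_normals[j] = n / (np.linalg.norm(n) + 1e-12)
    return labels, centers
```

Weighting the two terms lets spatially close points with opposing normals (e.g., two sides of a wall corner) fall into different plane clusters.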

2021 ◽  
Vol 13 (15) ◽  
pp. 3021
Author(s):  
Bufan Zhao ◽  
Xianghong Hua ◽  
Kegen Yu ◽  
Xiaoxing He ◽  
Weixing Xue ◽  
...  

Urban object segmentation and classification are critical data processing steps in scene understanding, intelligent vehicles, and 3D high-precision maps, and semantic segmentation of 3D point clouds is the foundational step in object recognition. To identify intersecting objects and improve classification accuracy, this paper proposes a segment-based classification method for 3D point clouds. The method first divides points into multi-scale supervoxels and groups them using the proposed inverse node graph (IN-Graph) construction, which does not require prior information about the nodes; instead, it divides supervoxels by judging the connection state of the edges between them. The method reaches a global energy minimum by graph cutting, obtaining structural segments as completely as possible while retaining boundaries. A random forest classifier is then used for supervised classification. To deal with the mislabeling of scattered fragments, a higher-order CRF with small-label-cluster optimization is proposed to refine the classification results. Experiments were carried out on a mobile laser scanning (MLS) point dataset and a terrestrial laser scanning (TLS) point dataset, and the results show overall accuracies of 97.57% and 96.39% on the two datasets. Object boundaries were retained well, and the method achieved good results in the classification of cars and motorcycles. Further experimental analyses verified the advantages of the proposed method and demonstrated its practicability and versatility.
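The IN-Graph construction is not specified in this abstract, so the sketch below substitutes a simple stand-in for the edge-connection test: supervoxels as graph nodes, edges cut when adjacent supervoxel normals disagree beyond a threshold, and connected components of the surviving edges taken as segments. The threshold, the union-find grouping, and all names are assumptions.

```python
import numpy as np

class UnionFind:
    """Disjoint sets for collecting connected components of kept edges."""
    def __init__(self, n):
        self.parent = list(range(n))
    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x
    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra != rb:
            self.parent[rb] = ra

def segment_supervoxels(normals, edges, cos_thresh=0.9):
    """Keep an edge only when adjacent supervoxel normals agree;
    connected components of the surviving graph become the segments."""
    uf = UnionFind(len(normals))
    for i, j in edges:
        if abs(float(np.dot(normals[i], normals[j]))) >= cos_thresh:
            uf.union(i, j)
    roots = [uf.find(i) for i in range(len(normals))]
    ids = {r: k for k, r in enumerate(dict.fromkeys(roots))}  # relabel 0..S-1
    return [ids[r] for r in roots]
```

Cutting edges by a local agreement test, rather than labeling nodes directly, is what lets segment boundaries fall exactly on supervoxel borders.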


IEEE Access ◽  
2020 ◽  
Vol 8 ◽  
pp. 87857-87869
Author(s):  
Jue Hou ◽  
Wenbin Ouyang ◽  
Bugao Xu ◽  
Rongwu Wang

Author(s):  
X.-F. Xing ◽  
M. A. Mostafavi ◽  
G. Edwards ◽  
N. Sabo

Abstract. Automatic semantic segmentation of point clouds observed in complex 3D urban scenes is a challenging problem. Machine-learning-based semantic segmentation of urban scenes requires appropriate features to distinguish objects in mobile terrestrial and airborne LiDAR point clouds at the point level. In this paper, we propose a pointwise semantic segmentation method based on features derived from the Difference of Normals, together with "directional height above" features that compare the height difference between a given point and its neighbors in eight directions, in addition to features based on normal estimation. A random forest classifier is chosen to classify points in mobile terrestrial and airborne LiDAR point clouds. Our experimental results show that the proposed features are effective for semantic segmentation of both types of point clouds, especially for the vegetation, building, and ground classes in airborne LiDAR point clouds of urban areas.
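One plausible reading of the "directional height above" feature is sketched below: for each point, the eight horizontal sectors around it are scanned within a radius, and the largest height difference to a neighbor in each sector is recorded. The sector binning, the radius, and the use of the maximum are assumptions; the paper may define the comparison differently.

```python
import numpy as np

def directional_height_above(points, radius=2.0):
    """Per point: height difference to the highest neighbor in each of
    eight horizontal sectors within `radius` (assumed variant of the
    'directional height above' feature). Returns an (N, 8) array."""
    n = len(points)
    feats = np.zeros((n, 8))
    xy, z = points[:, :2], points[:, 2]
    for i in range(n):
        d = xy - xy[i]                          # horizontal offsets
        dist = np.hypot(d[:, 0], d[:, 1])
        near = (dist > 0) & (dist <= radius)    # exclude the point itself
        if not near.any():
            continue
        # bin neighbor bearing into 8 equal sectors
        sector = ((np.arctan2(d[near, 1], d[near, 0]) + np.pi)
                  / (2 * np.pi) * 8).astype(int) % 8
        dz = z[near] - z[i]
        for s in range(8):
            in_s = sector == s
            if in_s.any():
                feats[i, s] = dz[in_s].max()
    return feats
```

The brute-force O(N^2) neighbor search is only for clarity; a k-d tree would replace it on real LiDAR tiles.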


Author(s):  
Zulfian Azmi

Utilization of wind energy is environmentally friendly and can provide a reliable energy source. The analysis of windmill control using a neural network model for uncertain variables (abbreviated as the VTP model) is expected to provide a solution to the windmill control problem. The neural network model for uncertain variables uses probability techniques, degrees of membership, the logical OR function, linear programming, and Euclidean distance to reduce the learning process. In this research, windmill control uses air pressure and duration of sunshine as variables to determine whether the windmill is moving or not. Finally, this research analyzes windmill control with the aim of eventually producing a smart windmill control system; the neural network model for uncertain variables can be used to control windmills with different input data.


2022 ◽  
Vol 41 (1) ◽  
pp. 1-21
Author(s):  
Chems-Eddine Himeur ◽  
Thibault Lejemble ◽  
Thomas Pellegrini ◽  
Mathias Paulin ◽  
Loic Barthe ◽  
...  

In recent years, Convolutional Neural Networks (CNN) have proven to be efficient analysis tools for processing point clouds, e.g., for reconstruction, segmentation, and classification. In this article, we focus on the classification of edges in point clouds, where both the edges and their surroundings are described. We propose a new parameterization adding to each point a set of differential information on its surrounding shape, reconstructed at different scales. These parameters, stored in a Scale-Space Matrix (SSM), provide well-suited information from which an adequate neural network can learn the description of edges and use it to efficiently detect them in acquired point clouds. After successfully applying a multi-scale CNN on SSMs for the efficient classification of edges and their neighborhoods, we propose a new lightweight neural network architecture outperforming the CNN in learning time, processing time, and classification capability. Our architecture is compact, requires small learning sets, is very fast to train, and classifies millions of points in seconds.
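The exact differential parameters stored in the SSM are not given in this abstract, so the sketch below uses common covariance-eigenvalue features (linearity, planarity, sphericity) computed at several neighborhood scales as a stand-in for the per-point multi-scale description. All feature definitions and names here are assumptions.

```python
import numpy as np

def scale_space_matrix(points, scales=(0.5, 1.0, 2.0)):
    """Per-point matrix of neighborhood-shape features, one row per scale.
    Returns an (N, S, 3) array of [linearity, planarity, sphericity];
    a simplified stand-in for the paper's Scale-Space Matrix."""
    n = len(points)
    ssm = np.zeros((n, len(scales), 3))
    for si, r in enumerate(scales):
        for i in range(n):
            d = np.linalg.norm(points - points[i], axis=1)
            nbrs = points[d <= r]
            if len(nbrs) < 3:
                continue  # not enough support at this scale
            ev = np.linalg.eigvalsh(np.cov(nbrs.T))[::-1]  # l1 >= l2 >= l3
            ev = np.maximum(ev, 0.0)
            l1 = ev[0] + 1e-12
            ssm[i, si] = [(ev[0] - ev[1]) / l1,   # linearity
                          (ev[1] - ev[2]) / l1,   # planarity
                          ev[2] / l1]             # sphericity
    return ssm
```

Stacking the same descriptors across scales is what allows a network to distinguish a sharp edge (feature varies strongly with scale) from noise or a smooth surface.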


2021 ◽  
Vol 7 (2) ◽  
pp. 57-74
Author(s):  
Lamyaa Gamal EL-Deen Taha ◽  
A. I. Ramzi ◽  
A. Syarawi ◽  
A. Bekheet

Until recently, the most accurate digital surface models were obtained from airborne LiDAR. With the development of a new generation of large-format digital photogrammetric aerial cameras, a fully digital photogrammetric workflow became possible. Digital airborne images are sources for elevation extraction and orthophoto generation. This research is concerned with the generation of digital surface models and orthophotos from high-resolution images. The following steps were performed. Benchmark LiDAR and digital aerial camera data were used. First, image orientation and aerial triangulation (AT) were performed. Then, a digital surface model (DSM) was generated automatically from the digital aerial camera imagery. Third, a true digital orthophoto was generated from the digital aerial camera imagery; an orthoimage was also generated using the LiDAR digital surface model (DSM). The Leica Photogrammetry Suite (LPS) module of Erdas Imagine 2014 software was used for processing. The resulting orthoimages from both techniques were then mosaicked. The results show that the DSM produced automatically from the digital aerial camera yields much denser photogrammetric 3D point clouds than the LiDAR 3D point clouds, and that the true orthoimage produced by the second approach is better than that produced by the first. Five approaches were tested for classification of the best orthorectified image mosaic, using subpixel-based (neural network) and pixel-based (minimum distance and maximum likelihood) classifiers. Multiple cues were extracted, such as texture (entropy, mean), the digital elevation model, the digital surface model, the normalized digital surface model (nDSM), and the intensity image. The contributions of the individual cues to the classification were evaluated.
It was found that the best cue integration is intensity (pan) + nDSM + entropy, followed by intensity (pan) + nDSM + mean, then intensity image + mean + entropy, and after that the DSM image with two texture measures (mean and entropy), followed by the colour image. Integration with height data increases the accuracy, as does integration with entropy texture. Across the resulting fifteen classification cases, the maximum likelihood classifier performed best, followed by minimum distance and then the neural network classifier. We attribute this to the fine resolution of the digital camera image; the subpixel (neural network) classifier is not suitable for classifying aerial digital camera images.
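Two of the cues above are easy to illustrate: the nDSM, conventionally computed as the DSM minus a terrain model so that it encodes object height above ground, and an entropy texture measure over a sliding window. The window size, bin count, and function names below are assumptions, not the paper's implementation.

```python
import numpy as np

def normalized_dsm(dsm, dtm):
    """Object heights above ground: nDSM = DSM - DTM, clipped at zero."""
    return np.clip(dsm - dtm, 0, None)

def local_entropy(img, win=3, bins=8):
    """Shannon entropy of quantized gray levels in a sliding window,
    a common texture cue for per-pixel classification."""
    h, w = img.shape
    edges = np.linspace(img.min(), img.max(), bins + 1)[1:-1]
    q = np.digitize(img, edges)          # quantize to `bins` levels
    pad = win // 2
    qp = np.pad(q, pad, mode='edge')
    out = np.zeros_like(img, dtype=float)
    for i in range(h):
        for j in range(w):
            patch = qp[i:i + win, j:j + win].ravel()
            counts = np.bincount(patch, minlength=bins)
            p = counts[counts > 0] / patch.size
            out[i, j] = -(p * np.log2(p)).sum()
    return out
```

Homogeneous regions (roofs, roads) get entropy near zero while cluttered regions (vegetation) score high, which is consistent with the finding that adding the entropy cue improves accuracy.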


Author(s):  
Y. Xu ◽  
Z. Sun ◽  
R. Boerner ◽  
T. Koch ◽  
L. Hoegner ◽  
...  

In this work, we report a novel way of generating ground-truth datasets for analyzing point clouds from different sensors and for validating algorithms. Instead of directly labeling a large number of 3D points, which requires time-consuming manual work, a multi-resolution 3D voxel grid of the testing site is generated. Then, with the help of a set of labeled points from the reference dataset, we can generate a 3D labeled space of the entire testing site at different resolutions. Specifically, an octree-based voxel structure is applied to voxelize the annotated reference point cloud, so that all points are organized in 3D grids of multiple resolutions. When automatically annotating new testing point clouds, a voting-based approach is applied to the labeled points within the multi-resolution voxels in order to assign a semantic label to the 3D space represented by each voxel. Lastly, robust line- and plane-based fast registration methods are developed for aligning point clouds obtained from various sensors. Benefiting from the labeled 3D spatial information, we can easily create new annotated 3D point clouds from different sensors of the same scene simply by looking up the labels of the 3D space in which the points are located, which is convenient for the validation and evaluation of algorithms for point cloud interpretation and semantic segmentation.
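The voting step can be sketched as follows: reference points are binned into voxels, each voxel takes the majority label of its points, and a new cloud of the same scene is annotated by looking up the voxel each point falls in. A single fixed voxel size is used here instead of the paper's octree multi-resolution structure, and all names are assumptions.

```python
import numpy as np
from collections import Counter

def voxel_majority_labels(points, labels, voxel_size=1.0):
    """Majority vote of reference labels inside each voxel.
    Returns {voxel_index_tuple: label}."""
    keys = np.floor(points / voxel_size).astype(int)
    votes = {}
    for k, lab in zip(map(tuple, keys), labels):
        votes.setdefault(k, Counter())[lab] += 1
    return {k: c.most_common(1)[0][0] for k, c in votes.items()}

def annotate(points, voxel_labels, voxel_size=1.0, unknown=-1):
    """Transfer voxel labels to a new (registered) point cloud of the
    same scene; points outside any labeled voxel get `unknown`."""
    keys = map(tuple, np.floor(points / voxel_size).astype(int))
    return [voxel_labels.get(k, unknown) for k in keys]
```

Because the lookup is purely spatial, the same labeled voxel grid annotates clouds from any sensor once they are registered into the common frame.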


Author(s):  
Sergei Voronin ◽  
Artyom Makovetskii ◽  
Aleksei Voronin ◽  
Dmitrii Zhernov

2020 ◽  
Vol 413 ◽  
pp. 487-498
Author(s):  
Wenjing Zhang ◽  
Songzhi Su ◽  
Beizhan Wang ◽  
Qingqi Hong ◽  
Li Sun
