Moving Object Classification Using 3D Point Cloud in Urban Traffic Environment

2020 ◽  
Vol 2020 ◽  
pp. 1-12
Author(s):  
MingFang Zhang ◽  
Rui Fu ◽  
YingShi Guo ◽  
Li Wang

Moving object classification is essential for autonomous vehicles to complete high-level tasks like scene understanding and motion planning. In this paper, we propose a novel approach for classifying moving objects into four classes of interest using 3D point clouds in urban traffic environments. Unlike most existing work on object recognition, which involves dense point clouds, our approach combines extensive feature extraction with multiframe classification optimization to solve the classification task when partial occlusion occurs. First, the point cloud of each moving object is segmented by a data preprocessing procedure. Then, effective features are selected via the Gini index criterion applied to the extended feature set. Next, Bayes Decision Theory (BDT) is employed to combine the preliminary results from a posterior-probability Support Vector Machine (SVM) classifier over consecutive frames. Point cloud data acquired from our own LiDAR as well as the public KITTI dataset are used to validate the proposed moving object classification method in the experiments. The results show that the proposed SVM-BDT classifier, based on 18 selected features, can effectively recognize the moving objects.
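The multiframe fusion step described above (combining per-frame SVM posteriors through Bayes Decision Theory) can be illustrated with a simplified product-of-posteriors rule. This is a naive-Bayes sketch with hypothetical class probabilities, not the authors' exact BDT formulation:

```python
import numpy as np

def fuse_frame_posteriors(posteriors):
    """Fuse per-frame class posteriors with a naive-Bayes product rule
    (assuming conditionally independent frames and uniform priors),
    then pick the maximum a posteriori class."""
    posteriors = np.asarray(posteriors, dtype=float)   # shape: (frames, classes)
    log_joint = np.sum(np.log(posteriors + 1e-12), axis=0)  # sum of log-posteriors
    fused = np.exp(log_joint - log_joint.max())
    fused /= fused.sum()                               # renormalize to probabilities
    return fused, int(np.argmax(fused))

# Three frames of posteriors over four hypothetical classes (e.g., car,
# pedestrian, cyclist, other); one noisy frame does not flip the decision.
frames = [[0.6, 0.2, 0.1, 0.1],
          [0.5, 0.3, 0.1, 0.1],
          [0.2, 0.5, 0.2, 0.1]]
probs, label = fuse_frame_posteriors(frames)
```

In practice the per-frame posteriors would come from a probability-calibrated SVM (e.g., Platt scaling); accumulating log-probabilities keeps the product numerically stable across many frames.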

Author(s):  
N. Munir ◽  
M. Awrangjeb ◽  
B. Stantic ◽  
G. Lu ◽  
S. Islam

Abstract. Extraction of individual pylons and wires is important for modelling 3D objects in a power line corridor (PLC) map. However, existing methods mostly classify points into distinct classes like pylons and wires, but hardly into individual pylons or wires. The proposed method extracts standalone pylons, vegetation, and wires from LiDAR data; the extraction of individual objects is needed for detailed PLC mapping. The approach starts with the separation of ground and non-ground points. The non-ground points are then classified into vertical (e.g., pylons and vegetation) and non-vertical (e.g., wires) object points using the vertical profile feature (VPF) through a binary support vector machine (SVM) classifier. Individual pylons and vegetation are then separated using their shape and area properties. The locations of pylons are further used to extract the span points between two successive pylons. Finally, span points are voxelised, and the alignment properties of wires in the voxel grid are used to extract individual wire points. The results are evaluated on a dataset that has multiple spans with bundled wires in each span. The evaluation shows that the proposed method and features are very effective for the extraction of individual wires, pylons, and vegetation, with 99% correctness and 98% completeness.
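The vertical/non-vertical split in this pipeline rests on a height-profile idea: pylons and vegetation occupy a large vertical extent above their footprint, while wires are thin and flat. A toy version of such a feature (the grid cell size and threshold are illustrative assumptions, not the paper's VPF definition) might look like:

```python
import numpy as np

def vertical_extent_feature(points, cell=1.0):
    """Toy vertical-profile feature: bin points into an (x, y) grid and
    measure each cell's height range. Tall cells suggest vertical objects
    (pylons, vegetation); flat, elevated cells suggest wires."""
    pts = np.asarray(points, dtype=float)
    keys = np.floor(pts[:, :2] / cell).astype(int)
    extents = {}
    for (kx, ky), z in zip(map(tuple, keys), pts[:, 2]):
        lo, hi = extents.get((kx, ky), (z, z))
        extents[(kx, ky)] = (min(lo, z), max(hi, z))
    return {k: hi - lo for k, (lo, hi) in extents.items()}

# A pylon-like column (large z spread) vs. a wire-like strand (nearly flat).
pylon = [(0.1, 0.1, z) for z in np.linspace(0.0, 20.0, 50)]
wire = [(5.2, 0.3, 14.0 + 0.01 * i) for i in range(50)]
ext = vertical_extent_feature(pylon + wire)
```

In the actual pipeline this per-cell feature would feed the binary SVM rather than a hard threshold, which lets the margin absorb borderline cells.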


GigaScience ◽  
2021 ◽  
Vol 10 (5) ◽  
Author(s):  
Teng Miao ◽  
Weiliang Wen ◽  
Yinglun Li ◽  
Sheng Wu ◽  
Chao Zhu ◽  
...  

Abstract
Background: The 3D point cloud is the most direct and effective data form for studying plant structure and morphology. In point cloud studies, the segmentation of individual plants into organs directly determines the accuracy of organ-level phenotype estimation and the reliability of 3D plant reconstruction. However, highly accurate, automatic, and robust point cloud segmentation approaches for plants are unavailable, so the high-throughput segmentation of many shoots is challenging. Although deep learning can feasibly solve this issue, software tools for 3D point cloud annotation to construct the training dataset are lacking.
Results: We propose a top-to-down point cloud segmentation algorithm using optimal transportation distance for maize shoots. We apply our point cloud annotation toolkit for maize shoots, Label3DMaize, to achieve semi-automatic point cloud segmentation and annotation of maize shoots at different growth stages, through a series of operations, including stem segmentation, coarse segmentation, fine segmentation, and sample-based segmentation. The toolkit takes ∼4–10 minutes to segment a maize shoot and consumes 10–20% of the total time if only coarse segmentation is required. Fine segmentation is more detailed than coarse segmentation, especially at the organ connection regions. The accuracy of coarse segmentation can reach 97.2% that of fine segmentation.
Conclusion: Label3DMaize integrates point cloud segmentation algorithms and manual interactive operations, realizing semi-automatic point cloud segmentation of maize shoots at different growth stages. The toolkit provides a practical data annotation tool for further online segmentation research based on deep learning and is expected to promote automatic point cloud processing of various plants.
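The optimal transportation distance at the core of this segmentation algorithm compares two point distributions by the minimum total "mass movement" between them. In one dimension with equally weighted points, exact optimal transport reduces to matching sorted samples, which gives a compact illustration (a simplified stand-in, not the paper's full 3D formulation):

```python
import numpy as np

def ot_distance_1d(a, b):
    """Exact 1-D optimal transportation (Wasserstein-1) distance between
    two equally weighted point sets: sort both and average the pairwise
    gaps. A toy stand-in for comparing point clusters by distribution."""
    a = np.sort(np.asarray(a, dtype=float))
    b = np.sort(np.asarray(b, dtype=float))
    assert a.shape == b.shape, "this sketch assumes equal-size point sets"
    return float(np.mean(np.abs(a - b)))

# Heights (z) sampled from a hypothetical "stem" cluster vs. a "leaf" cluster:
stem = [0.0, 0.2, 0.4, 0.6]
leaf = [1.0, 1.2, 1.4, 1.6]
d = ot_distance_1d(stem, leaf)   # every point moves by 1.0
```

Unlike a nearest-point (Chamfer-style) distance, optimal transport penalizes mismatched point densities, which helps when deciding whether a cluster belongs to a stem or an adjoining organ.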


2021 ◽  
Author(s):  
Khaled Saleh ◽  
Ahmed Abobakr ◽  
Mohammed Hossny ◽  
Darius Nahavandi ◽  
Julie Iskander ◽  
...  

Author(s):  
Romina Dastoorian ◽  
Ahmad E. Elhabashy ◽  
Wenmeng Tian ◽  
Lee J. Wells ◽  
Jaime A. Camelio

With the latest advancements in three-dimensional (3D) measurement technologies, obtaining 3D point cloud data for inspection purposes in manufacturing is becoming more common. While 3D point cloud data allows for better inspection capabilities, their analysis is typically challenging. Especially with unstructured 3D point cloud data, containing coordinates at random locations, the challenges increase with higher levels of noise and larger volumes of data. Hence, the objective of this paper is to extend the previously developed Adaptive Generalized Likelihood Ratio (AGLR) approach to handle unstructured 3D point cloud data used for automated surface defect inspection in manufacturing. More specifically, the AGLR approach was implemented in a practical case study to inspect twenty-seven samples, each with a unique fault. These faults were designed to cover an array of possible faults having three different sizes, three different magnitudes, and located in three different locations. The results show that the AGLR approach can indeed differentiate between non-faulty and a varying range of faulty surfaces while being able to pinpoint the fault location. This work also serves as a validation for the previously developed AGLR approach in a practical scenario.
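The core of a likelihood-ratio surface test can be sketched for the simplest case: deviations of measured points from the nominal surface, nominal mean zero with known noise level, tested for a local mean shift. The window, sigma, and threshold below are illustrative assumptions, not the tuned AGLR settings from the paper:

```python
import numpy as np

def glr_mean_shift(deviations, sigma, threshold=10.0):
    """Toy generalized likelihood ratio test for a mean shift in surface
    deviations (nominal mean 0, known noise sigma). Returns the GLR
    statistic (2 * log-likelihood ratio) and a fault flag."""
    d = np.asarray(deviations, dtype=float)
    stat = len(d) * d.mean() ** 2 / sigma ** 2
    return stat, bool(stat > threshold)

# In-spec region: zero-mean noise pattern; faulty region: a 0.30 mm bump.
nominal = [0.05, -0.05] * 100
faulty = [0.30] * 200
s0, flag0 = glr_mean_shift(nominal, sigma=0.05)
s1, flag1 = glr_mean_shift(faulty, sigma=0.05)
```

The adaptive part of the AGLR approach lies in how the test window is sized and swept over the (unstructured) point set to localize the fault, which this fixed-window sketch leaves out.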


Aerospace ◽  
2018 ◽  
Vol 5 (3) ◽  
pp. 94 ◽  
Author(s):  
Hriday Bavle ◽  
Jose Sanchez-Lopez ◽  
Paloma Puente ◽  
Alejandro Rodriguez-Ramos ◽  
Carlos Sampedro ◽  
...  

This paper presents a fast and robust approach for estimating the flight altitude of multirotor Unmanned Aerial Vehicles (UAVs) using 3D point cloud sensors in cluttered, unstructured, and dynamic indoor environments. The objective is to present a flight altitude estimation algorithm, replacing conventional sensors such as laser altimeters, barometers, or accelerometers, which have several limitations when used individually. Our proposed algorithm includes two stages: in the first stage, a fast clustering of the measured 3D point cloud data is performed, along with the segmentation of the clustered data into horizontal planes. In the second stage, these segmented horizontal planes are mapped based on the vertical distance with respect to the point cloud sensor frame of reference, in order to provide a robust flight altitude estimate even in the presence of several static as well as dynamic ground obstacles. We validate our approach using the IROS 2011 Kinect dataset available in the literature, estimating the altitude of the RGB-D camera using the provided 3D point clouds. We further validate our approach using a point cloud sensor on board a UAV, by means of several autonomous real flights, closing its altitude control loop using the flight altitude estimated by our proposed method, in the presence of several static as well as dynamic ground obstacles. In addition, the implementation of our approach has been integrated into our open-source software framework for aerial robotics called Aerostack.
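The two-stage idea (cluster points into horizontal planes, then rank planes by vertical distance from the sensor) can be sketched with a simple height histogram. The bin size and support threshold here are illustrative assumptions, not the paper's clustering parameters:

```python
import numpy as np

def estimate_altitude(points, bin_size=0.1, min_points=30):
    """Toy altitude estimator: bin point heights (sensor frame, z up,
    ground below at negative z), treat well-populated bins as horizontal
    planes, and report the distance to the lowest such plane so that
    obstacle surfaces above the floor do not bias the estimate."""
    z = np.asarray(points, dtype=float)[:, 2]
    edges = np.arange(z.min(), z.max() + bin_size, bin_size)
    counts, edges = np.histogram(z, bins=edges)
    planes = [0.5 * (edges[i] + edges[i + 1])
              for i, c in enumerate(counts) if c >= min_points]
    if not planes:
        return None                      # no supported plane found
    return float(-min(planes))           # altitude above the lowest plane

# Floor at z = -1.5 m plus a box (obstacle) surface at z = -0.8 m.
floor = [(x, y, -1.5) for x in range(10) for y in range(10)]
box = [(x, y, -0.8) for x in range(7) for y in range(7)]
alt = estimate_altitude(floor + box)
```

Keeping the lowest supported plane (rather than the nearest one) is what makes this kind of estimator robust when a dynamic obstacle slides directly beneath the vehicle.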

