Shape Recognition of Plant Equipment from 3-D Scanned Point Cloud Data Using a Convolutional Neural Network

2018 · Vol 42 (9) · pp. 863-869
Author(s): Byung Chul Kim

2020 · Vol 10 (2) · pp. 617
Author(s): Jo; Moon

In this paper, a Collision Grid Map (CGM) built from 3D point cloud data is proposed to predict collisions between cattle and the end effector of a manipulator in a barn environment. The CGM, generated from the x-y plane locations and the depth (z) data of the 3D point cloud, is applied to a convolutional neural network (CNN) to predict collision situations. Raw 3D point cloud data poses a permutation-invariance problem: the points of a scene can arrive in any order, and a CNN cannot learn efficiently from such unordered input. The CGM avoids this by converting the point cloud into a fixed grid using a probability-based method. The map consists of two channels: the first encodes point locations in the x-y plane, and the second encodes depth in the z-direction. A 3D point cloud is measured in a barn environment, a CGM is generated from it, and the CGM is fed to the CNN to predict collisions with cattle. The experimental results show that the proposed scheme is reliable and robust in the barn environment.
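As a rough illustration of how such a two-channel map can be built, the sketch below rasterizes an (N, 3) point cloud into an x-y occupancy channel and a depth channel. The grid extents, resolution, occupancy update, and depth normalization are all illustrative assumptions; the abstract only states that the map is probability-based with one x-y channel and one z channel.

```python
import numpy as np

def build_collision_grid_map(points, x_range=(-2.0, 2.0), y_range=(-2.0, 2.0),
                             z_max=3.0, cells=64):
    """Convert an (N, 3) point cloud into a 2-channel grid map.

    Channel 0: occupancy probability in the x-y plane.
    Channel 1: normalized depth (z) per occupied cell.
    Extents, resolution, and the occupancy formula are assumptions,
    not the paper's specification.
    """
    cgm = np.zeros((2, cells, cells), dtype=np.float32)
    counts = np.zeros((cells, cells), dtype=np.int32)

    # Map each point's (x, y) coordinates to a grid cell index.
    xi = ((points[:, 0] - x_range[0]) / (x_range[1] - x_range[0]) * cells).astype(int)
    yi = ((points[:, 1] - y_range[0]) / (y_range[1] - y_range[0]) * cells).astype(int)
    valid = (xi >= 0) & (xi < cells) & (yi >= 0) & (yi < cells)

    for x, y, z in zip(xi[valid], yi[valid], points[valid, 2]):
        counts[y, x] += 1
        # Channel 1: keep the nearest (smallest) normalized depth in the cell.
        d = np.clip(z / z_max, 0.0, 1.0)
        cgm[1, y, x] = d if counts[y, x] == 1 else min(cgm[1, y, x], d)

    # Channel 0: a simple hit-count occupancy estimate (assumed update rule).
    cgm[0] = 1.0 - np.exp(-0.5 * counts)
    return cgm

# Example: random points in front of the sensor.
pts = np.random.rand(5000, 3) * [4.0, 4.0, 3.0] - [2.0, 2.0, 0.0]
grid = build_collision_grid_map(pts)   # shape (2, 64, 64), ready for a CNN
```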


Sensors · 2018 · Vol 18 (11) · pp. 3681
Author(s): Le Zhang, Jian Sun, Qiang Zheng

The recognition of three-dimensional (3D) lidar (light detection and ranging) point clouds remains a significant issue in point cloud processing. Traditional point cloud recognition employs 3D point clouds of the whole object. Lidar data, however, is a collection of two-and-a-half-dimensional (2.5D) point clouds, each obtained from a single view by scanning the object within a certain field angle. To deal with this, we first propose a novel representation that expresses a 3D object as 2.5D point clouds from multiple views, and we generate the multi-view 2.5D point cloud data with the Point Cloud Library (PCL). We then design an effective recognition model based on a multi-view convolutional neural network. The model acts directly on the raw 2.5D point clouds from all views and learns a global feature descriptor by fusing the per-view features in a view fusion network. The experiments show that our approach achieves excellent recognition performance without requiring three-dimensional reconstruction or preprocessing of the point clouds. This paper thus effectively addresses the recognition problem for lidar point clouds and offers practical value.
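A minimal PyTorch sketch of the multi-view idea is given below: a shared encoder produces one feature vector per 2.5D view, and the view features are fused into a single global descriptor before classification. The PointNet-style point-wise encoder, the max-pool view fusion, and the class count (num_classes=40) are assumptions; the paper does not specify its exact layers here, and the PCL-based generation of the 2.5D views is not shown.

```python
import torch
import torch.nn as nn

class MultiViewPointNet(nn.Module):
    """Sketch: shared per-view encoder on raw 2.5D point clouds,
    followed by view fusion into a global descriptor."""

    def __init__(self, num_classes=40):
        super().__init__()
        # Point-wise MLP (1x1 convolutions) applied independently to each point.
        self.encoder = nn.Sequential(
            nn.Conv1d(3, 64, 1), nn.ReLU(),
            nn.Conv1d(64, 128, 1), nn.ReLU(),
            nn.Conv1d(128, 256, 1),
        )
        self.classifier = nn.Sequential(
            nn.Linear(256, 128), nn.ReLU(),
            nn.Linear(128, num_classes),
        )

    def forward(self, views):
        # views: (batch, num_views, num_points, 3)
        b, v, n, _ = views.shape
        x = views.view(b * v, n, 3).transpose(1, 2)   # (b*v, 3, n)
        feat = self.encoder(x).max(dim=2).values      # per-view descriptor
        feat = feat.view(b, v, -1).max(dim=1).values  # fuse views (max pool, assumed)
        return self.classifier(feat)

model = MultiViewPointNet(num_classes=40)
clouds = torch.randn(2, 6, 1024, 3)   # 2 objects, 6 views, 1024 points each
logits = model(clouds)                # (2, 40)
```

Max pooling over views is one simple choice of fusion; an attention-weighted combination of view features would be a drop-in alternative.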

