Dual-view 3D object recognition and detection via Lidar point cloud and camera image

2022, pp. 103999
Author(s): Jing Li, Rui Li, Jiehao Li, Junzheng Wang, Qingbin Wu, ...

IEEE Access, 2020, Vol 8, pp. 44335-44345
Author(s): Deping Li, Hanyun Wang, Ning Liu, Xiaoming Wang, Jin Xu

2020, Vol 10 (10), pp. 3409
Author(s): Francisco Gomez-Donoso, Felix Escalona, Miguel Cazorla

Deep learning-based methods have proven to be the best performers for object recognition in both images and three-dimensional data. Nonetheless, for 3D object recognition, most authors convert the 3D data to images and then perform the classification. Despite its accuracy, this approach has some issues. In this work, we present a deep learning pipeline for object recognition that takes a point cloud as input and provides the classification probabilities as output. Our proposal is trained on synthetic CAD objects and performs accurately when fed with real data provided by commercial sensors. Unlike most approaches, our method is specifically trained to work on partial views of the objects rather than on full representations, which are not how commercial sensors capture objects. We trained our proposal on the ModelNet10 dataset and achieved 78.39% accuracy. We also tested it with added noise, on a number of other datasets, and on real data, with high success.
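A minimal sketch, in PyTorch, of the kind of pipeline this abstract describes: a network that consumes a raw point cloud directly and outputs per-class probabilities. It is not the authors' architecture; the layer widths, the 10-class output (matching ModelNet10), and the max-pooling aggregation are illustrative assumptions.

```python
import torch
import torch.nn as nn

class PointCloudClassifier(nn.Module):
    """Hypothetical point-cloud classifier: raw points in, class probabilities out."""

    def __init__(self, num_classes: int = 10):
        super().__init__()
        # Shared per-point MLP implemented with 1x1 convolutions.
        self.point_mlp = nn.Sequential(
            nn.Conv1d(3, 64, 1), nn.ReLU(),
            nn.Conv1d(64, 128, 1), nn.ReLU(),
            nn.Conv1d(128, 256, 1), nn.ReLU(),
        )
        # Classification head applied to the pooled global feature.
        self.head = nn.Sequential(
            nn.Linear(256, 128), nn.ReLU(),
            nn.Linear(128, num_classes),
        )

    def forward(self, points: torch.Tensor) -> torch.Tensor:
        # points: (batch, num_points, 3), e.g. a partial view from a depth sensor.
        x = self.point_mlp(points.transpose(1, 2))  # (batch, 256, num_points)
        x = x.max(dim=2).values                     # order-invariant global feature
        return self.head(x).softmax(dim=-1)         # per-class probabilities

# Usage: classify one synthetic "partial view" of 1024 random points.
probs = PointCloudClassifier()(torch.rand(1, 1024, 3))
print(probs.shape)  # torch.Size([1, 10])
```

The shared per-point MLP followed by symmetric max pooling keeps the output invariant to point ordering, which is what lets this style of network be fed partial views as captured by commercial depth sensors.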


2020, Vol 13 (1), pp. 66
Author(s): Yifei Tian, Long Chen, Wei Song, Yunsick Sung, Sangchul Woo

3D (three-dimensional) object recognition is a hot research topic that benefits environment perception, disease diagnosis, and the mobile robot industry. Point clouds collected by range sensors are a popular data structure for representing a 3D object model. This paper proposes a 3D object recognition method named Dynamic Graph Convolutional Broad Network (DGCB-Net) to perform feature extraction and 3D object recognition on point clouds. DGCB-Net adopts edge convolutional layers, constructed from weight-shared multilayer perceptrons (MLPs), to automatically extract local features from the point cloud graph structure. The features obtained from all edge convolutional layers are concatenated to form a feature aggregation. Rather than stacking many layers in depth, DGCB-Net employs a broad architecture that extends the point cloud feature aggregation in width. The broad architecture combines multiple feature layers and enhancement layers in a flat structure, and the outputs of both are concatenated to further enrich the point cloud feature information. All of these features contribute to the recognition result, so DGCB-Net shows better recognition performance than other 3D object recognition algorithms on ModelNet10/40 and on our scanned point cloud dataset.
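A hedged sketch of the edge-convolution mechanism the abstract describes: a weight-shared MLP applied to point-pair edge features on a k-nearest-neighbour graph, with the outputs of successive layers concatenated into one feature aggregation. The neighbourhood size k, channel widths, and aggregation details are assumptions for illustration, not the paper's configuration, and the broad (feature/enhancement layer) part of DGCB-Net is not reproduced here.

```python
import torch
import torch.nn as nn

def knn_graph(x: torch.Tensor, k: int) -> torch.Tensor:
    # x: (batch, num_points, channels); returns indices of the k nearest neighbours.
    dist = torch.cdist(x, x)                                  # pairwise distances (b, N, N)
    return dist.topk(k + 1, largest=False).indices[..., 1:]   # drop the self-neighbour

class EdgeConv(nn.Module):
    """Edge convolution: a weight-shared MLP over [x_i, x_j - x_i] edge features."""

    def __init__(self, in_ch: int, out_ch: int, k: int = 20):
        super().__init__()
        self.k = k
        self.mlp = nn.Sequential(nn.Linear(2 * in_ch, out_ch), nn.ReLU())

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, n, c = x.shape
        idx = knn_graph(x, self.k)                            # (b, n, k)
        neighbours = torch.gather(
            x.unsqueeze(1).expand(b, n, n, c), 2,
            idx.unsqueeze(-1).expand(b, n, self.k, c))        # (b, n, k, c)
        centre = x.unsqueeze(2).expand_as(neighbours)
        edge = torch.cat([centre, neighbours - centre], -1)   # edge features (b, n, k, 2c)
        return self.mlp(edge).max(dim=2).values               # aggregate over neighbours

# Features from successive edge-convolution layers are concatenated into one aggregation.
layer1, layer2 = EdgeConv(3, 64), EdgeConv(64, 64)
pts = torch.rand(2, 512, 3)
f1 = layer1(pts)                            # graph built on coordinates
f2 = layer2(f1)                             # graph rebuilt on learned features
aggregated = torch.cat([f1, f2], dim=-1)    # (2, 512, 128) concatenated feature
```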


Author(s): Yifei Tian, Wei Song, Long Chen, Simon Fong, Yunsick Sung, ...
