General-Purpose Deep Point Cloud Feature Extractor

Author(s): Miguel Dominguez, Rohan Dhamdhere, Atir Petkar, Saloni Jain, Shagan Sah, ...
2021
Author(s): Yihuan Zhang, Liang Wang, Chen Fu, Yifan Dai, John M. Dolan
Author(s): Xifeng Guo, En Zhu, Xinwang Liu, Jianping Yin

Existing deep neural networks mainly focus on learning transformation-invariant features. However, equivariant features are better suited for general-purpose tasks. Unfortunately, little work has been devoted to learning equivariant features. To fill this gap, in this paper we propose an affine equivariant autoencoder that learns features equivariant to affine transformations in an unsupervised manner. The objective combines the self-reconstruction of the original example and of its affine-transformed counterpart with an approximation of the affine transformation function: the reconstruction terms make the encoder a valid feature extractor, while the approximation term encourages equivariance. Extensive experiments validate the equivariance and discriminative ability of the features learned by our affine equivariant autoencoder.
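
As an illustration only, and not the authors' implementation, the sketch below shows one way such an objective could be assembled in PyTorch: two self-reconstruction terms plus a term that asks a small network to map the feature of the original example to the feature of its affine-transformed counterpart. The module names, network sizes, and the 28x28 single-channel input shape are assumptions made for the sketch.

# Hypothetical sketch of an affine-equivariant autoencoder objective.
# Module names, sizes, and loss weighting are illustrative assumptions,
# not taken from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    def __init__(self, dim=32):
        super().__init__()
        self.net = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 256),
                                 nn.ReLU(), nn.Linear(256, dim))
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self, dim=32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, 256), nn.ReLU(),
                                 nn.Linear(256, 28 * 28))
    def forward(self, z):
        return self.net(z).view(-1, 1, 28, 28)

class TransformApprox(nn.Module):
    """Maps (feature of x, affine parameters) to the feature of T(x)."""
    def __init__(self, dim=32, n_params=6):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim + n_params, 128), nn.ReLU(),
                                 nn.Linear(128, dim))
    def forward(self, z, theta):
        return self.net(torch.cat([z, theta.view(theta.size(0), -1)], dim=1))

def equivariant_ae_loss(enc, dec, approx, x, theta):
    # Apply the affine transform T (parameterised by a 2x3 matrix theta).
    grid = F.affine_grid(theta, x.size(), align_corners=False)
    x_t = F.grid_sample(x, grid, align_corners=False)

    z, z_t = enc(x), enc(x_t)
    recon = F.mse_loss(dec(z), x) + F.mse_loss(dec(z_t), x_t)  # self-reconstruction
    equiv = F.mse_loss(approx(z, theta), z_t)                  # equivariance term
    return recon + equiv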


Author(s): Xianzhi Li, Ruihui Li, Guangyong Chen, Chi-Wing Fu, Daniel Cohen-Or, ...

Author(s): Michael Person, Mathew Jensen, Anthony O. Smith, Hector Gutierrez

For autonomous vehicles to navigate roadways safely, accurate object detection must take place before safe path planning can occur. Currently, general-purpose object detection convolutional neural network (CNN) models achieve the highest detection accuracies of any method. However, existing detection frameworks fall into two groups: those that provide the high detection accuracy necessary for deployment but cannot perform inference in real time, and those that run in real time but with low detection accuracy. We propose the multimodal fusion detection system (MFDS), a sensor fusion system that combines the speed of a fast image detection CNN model with the accuracy of light detection and ranging (LiDAR) point cloud data through a decision tree approach. The primary objective is to bridge the tradeoff between speed and accuracy. The motivation for MFDS is to reduce the computational complexity associated with using a CNN model to extract features from an image; to improve efficiency, MFDS extracts complementary features from the LiDAR point cloud to obtain comparable detection accuracy. MFDS is novel in that it not only uses the image detections to aid three-dimensional (3D) LiDAR detection but also uses the LiDAR data to bolster the image detections and provide 3D detections. MFDS achieves 3.7% higher accuracy than the base CNN detection model and operates at 10 Hz. Additionally, its memory requirement is small enough to fit on the NVIDIA TX1 when deployed on an embedded device.
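
As an illustration only, and not the authors' implementation, the sketch below shows one way a decision-tree-style camera/LiDAR fusion step could look: project each LiDAR cluster into the image, accept a 2D detection as a 3D detection when enough cluster points fall inside its box, and otherwise keep the cluster as a LiDAR-only candidate. All thresholds, field names, and the camera-matrix convention are assumptions made for the sketch.

# Illustrative sketch of a decision-tree-style camera/LiDAR fusion step.
# Thresholds, data layouts, and helper logic are assumptions, not taken
# from the MFDS paper.
import numpy as np

def project_to_image(points_xyz, P):
    """Project Nx3 LiDAR points into the image plane with a 3x4 camera matrix P."""
    pts_h = np.hstack([points_xyz, np.ones((points_xyz.shape[0], 1))])
    uvw = pts_h @ P.T
    return uvw[:, :2] / uvw[:, 2:3]

def fuse(image_boxes, lidar_clusters, P, min_points=20, min_overlap=0.3):
    """Promote an image detection to a 3D detection when enough points of
    some LiDAR cluster fall inside its 2D box; otherwise keep the cluster
    as a LiDAR-only candidate."""
    fused, lidar_only = [], []
    for cluster in lidar_clusters:
        uv = project_to_image(cluster, P)
        matched = False
        for (x1, y1, x2, y2, label, score) in image_boxes:
            inside = ((uv[:, 0] >= x1) & (uv[:, 0] <= x2) &
                      (uv[:, 1] >= y1) & (uv[:, 1] <= y2))
            if inside.sum() >= min_points and inside.mean() >= min_overlap:
                fused.append({"label": label, "score": score,
                              "centroid": cluster.mean(axis=0)})
                matched = True
                break
        if not matched and len(cluster) >= min_points:
            lidar_only.append({"centroid": cluster.mean(axis=0)})
    return fused, lidar_only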

