Object detection via receptive field co-occurrence and spatial cloud-point data

Author(s): Luis A. Contreras ◽ Abel Pacheco-Ortega ◽ Jose I. Figueroa ◽ Walterio W. Mayol-Cuevas ◽ Jesus Savage
2008 ◽ Vol 25 (3) ◽ pp. 563-570
Author(s): J. P. Bender ◽ A. Junges ◽ E. Franceschi ◽ F. C. Corazza ◽ C. Dariva ◽ ...

2020 ◽ Vol 40 (4) ◽ pp. 0415001
Author(s): Xie Xueli (谢学立) ◽ Li Chuanxiang (李传祥) ◽ Yang Xiaogang (杨小冈) ◽ Xi Jianxiang (席建祥) ◽ Chen Tong (陈彤)

Sensors ◽ 2021 ◽ Vol 21 (4) ◽ pp. 1066
Author(s): Peng Jia ◽ Fuxiang Liu

At present, one-stage detectors built on lightweight models can run at real-time speed, but their detection performance remains limited. To enhance the discriminability and robustness of the features the model extracts, and to improve detection of small objects, we propose two modules in this work. First, we propose a receptive field enhancement method, referred to as adaptive receptive field fusion (ARFF). It strengthens the model's feature representation by adaptively learning the fusion weights of the different receptive-field branches in the receptive field module. Second, we propose an enhanced up-sampling (EU) module that reduces the information loss caused by up-sampling the feature map. Finally, we assemble the ARFF and EU modules on top of YOLO v3 to build a real-time, high-precision, lightweight object detection system, referred to as the ARFF-EU network. We achieve a state-of-the-art speed-accuracy trade-off on both the Pascal VOC and MS COCO data sets, reporting 83.6% AP at 37.5 FPS and 42.5% AP at 33.7 FPS, respectively. The experimental results show that the proposed ARFF and EU modules improve the detector's performance, matching the accuracy of advanced, very deep detectors while maintaining real-time speed.
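The adaptive fusion idea behind ARFF can be sketched in a few lines. This is a minimal NumPy illustration, not the authors' implementation: it assumes each receptive-field branch produces a feature map of the same shape and that one learnable logit per branch (a hypothetical parameterization) is normalized by softmax to give the fusion weights.

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the branch dimension.
    e = np.exp(logits - np.max(logits))
    return e / e.sum()

def adaptive_fusion(branch_feats, fusion_logits):
    """Weighted sum of same-shaped receptive-field branch outputs.

    branch_feats : list of (H, W, C) arrays, one per branch
                   (e.g. outputs of convolutions with different dilation rates)
    fusion_logits: one learnable scalar per branch (hypothetical name)
    """
    weights = softmax(fusion_logits)
    fused = sum(w * f for w, f in zip(weights, branch_feats))
    return fused, weights

# Three branches with constant 4x4x1 feature maps, equal logits:
feats = [np.full((4, 4, 1), v) for v in (1.0, 2.0, 3.0)]
fused, w = adaptive_fusion(feats, np.array([0.0, 0.0, 0.0]))
# Equal logits give weights of 1/3 each, so every fused entry is 2.0.
```

In a trained network the logits would be updated by backpropagation, letting the model emphasize whichever receptive field suits the object scale.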


2020 ◽ Vol 405 ◽ pp. 138-148
Author(s): Lin Jiao ◽ Shengyu Zhang ◽ Shifeng Dong ◽ Hongqiang Wang

Sensors ◽ 2020 ◽ Vol 20 (3) ◽ pp. 704
Author(s): Hongwu Kuang ◽ Bei Wang ◽ Jianping An ◽ Ming Zhang ◽ Zehan Zhang

Object detection in point cloud data is a key component of computer vision systems, especially for autonomous driving applications. In this work, we present Voxel-Feature Pyramid Network, a novel one-stage 3D object detector that uses raw data from LIDAR sensors only. The core framework consists of an encoder network and a corresponding decoder, followed by a region proposal network. The encoder extracts and fuses multi-scale voxel information in a bottom-up manner, while the decoder fuses the feature maps from the various scales in a top-down way via a feature pyramid network. Extensive experiments show that the proposed method extracts features from point data effectively and outperforms several baselines on the challenging KITTI-3D benchmark, achieving good speed and accuracy in real-world scenarios.
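The voxelization step that feeds such an encoder can be illustrated with a small sketch. This is an assumption-laden stand-in, not the paper's code: points are binned into a regular cubic grid, and each occupied voxel is summarized by the mean of its points (a real voxel feature encoder would learn this summary instead).

```python
import numpy as np

def voxelize(points, voxel_size, grid_min):
    """Bin 3D points into voxels and summarize each voxel by its point mean.

    points    : (N, 3) array of LIDAR x, y, z coordinates
    voxel_size: edge length of a cubic voxel
    grid_min  : lower corner of the voxel grid
    """
    idx = np.floor((points - grid_min) / voxel_size).astype(np.int64)
    voxels = {}
    for key, p in zip(map(tuple, idx), points):
        voxels.setdefault(key, []).append(p)
    # Per-voxel mean is a stand-in for the learned per-voxel feature.
    return {k: np.mean(v, axis=0) for k, v in voxels.items()}

pts = np.array([[0.1, 0.1, 0.1],
                [0.2, 0.2, 0.2],   # falls in the same voxel as the first point
                [1.1, 0.1, 0.1]])  # falls in a neighboring voxel
vox = voxelize(pts, voxel_size=1.0, grid_min=np.zeros(3))
# Two occupied voxels; the first holds the mean [0.15, 0.15, 0.15].
```

Running the same binning at several voxel sizes yields the multi-scale voxel features that the bottom-up encoder fuses.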

