Real-Time Object Detection and Classification for Autonomous Driving

Author(s):  
Seyyed Hamed Naghavi ◽  
Hamidreza Pourreza
2020 ◽  
Vol 20 (20) ◽  
pp. 11959-11966
Author(s):  
Jiachen Yang ◽  
Chenguang Wang ◽  
Huihui Wang ◽  
Qiang Li

Author(s):  
B. Ravi Kiran ◽  
Luis Roldão ◽  
Beñat Irastorza ◽  
Renzo Verastegui ◽  
Sebastian Süss ◽  
...  

Electronics ◽  
2018 ◽  
Vol 7 (11) ◽  
pp. 301 ◽  
Author(s):  
Alex Dominguez-Sanchez ◽  
Miguel Cazorla ◽  
Sergio Orts-Escolano

In recent years, we have seen large growth in the number of applications that use deep learning-based object detectors. Advanced driver-assistance systems (ADAS) are one of the areas where they have the most impact. This work presents a novel study evaluating a state-of-the-art technique for urban object detection and localization. In particular, we investigate the performance of the Faster R-CNN method in detecting and localizing urban objects in a variety of outdoor urban videos involving pedestrians, cars, bicycles and other objects moving in the scene (urban driving). We propose a new dataset that is used for benchmarking the accuracy of a real-time object detector (Faster R-CNN). Part of the data was collected using an HD camera mounted on a vehicle. Furthermore, some of the data is weakly annotated so it can be used for testing weakly supervised learning techniques. Urban object datasets already exist, but none of them include all the essential urban objects. We carried out extensive experiments demonstrating the effectiveness of the baseline approach. Additionally, we propose an R-CNN plus tracking technique to accelerate real-time urban object detection.
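The abstract does not detail how tracking is combined with the detector, but a common way to accelerate per-frame R-CNN detection is to run the detector intermittently and associate new detections with existing tracks by bounding-box overlap. A minimal sketch of greedy IoU-based association follows; the function names and the 0.5 threshold are illustrative assumptions, not taken from the paper:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def associate(tracks, detections, thresh=0.5):
    """Greedily match each track box to its best-overlapping detection.

    Returns (matches, unmatched_detection_indices). Unmatched detections
    would spawn new tracks; unmatched tracks coast until the next
    detector pass or are dropped.
    """
    matches, used = [], set()
    for ti, t in enumerate(tracks):
        best, best_j = thresh, None
        for j, d in enumerate(detections):
            if j in used:
                continue
            s = iou(t, d)
            if s > best:
                best, best_j = s, j
        if best_j is not None:
            matches.append((ti, best_j))
            used.add(best_j)
    unmatched = [j for j in range(len(detections)) if j not in used]
    return matches, unmatched
```

Between detector passes, a lightweight tracker updates the matched boxes, which is where the speed-up over running Faster R-CNN on every frame comes from.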


Sensors ◽  
2021 ◽  
Vol 21 (24) ◽  
pp. 8381
Author(s):  
Duarte Fernandes ◽  
Tiago Afonso ◽  
Pedro Girão ◽  
Dibet Gonzalez ◽  
António Silva ◽  
...  

Recently released research on deep learning for autonomous-driving perception focuses heavily on LiDAR point cloud data as input to the neural networks, highlighting the importance of LiDAR technology in the field of Autonomous Driving (AD). Accordingly, a large share of the vehicle platforms used to create the datasets behind these networks, as well as some commercial AD solutions on the market, invest heavily in sensor arrays spanning many sensors and several sensor modalities. These costs create a barrier to entry for low-cost solutions performing critical perception tasks such as Object Detection and SLAM. This paper explores current vehicle platforms and proposes a low-cost, LiDAR-based test vehicle platform capable of running critical perception tasks (Object Detection and SLAM) in real time. Additionally, we propose a deep learning-based inference model for Object Detection deployed on a resource-constrained device, as well as a graph-based SLAM implementation. We discuss the design considerations imposed by the real-time processing requirement and present results demonstrating the usability of the developed work on the proposed low-cost platform.
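The paper does not spell out its preprocessing pipeline here, but a standard step for making LiDAR perception feasible on a resource-constrained device is voxel-grid downsampling, which thins the point cloud to one averaged point per occupied voxel before it reaches the detector or SLAM front end. A minimal pure-Python sketch, with the function name and voxel size chosen for illustration:

```python
import math
from collections import defaultdict

def voxel_downsample(points, voxel_size):
    """Reduce a point cloud to one centroid per occupied voxel.

    points: iterable of (x, y, z) tuples; voxel_size: cube edge in metres.
    Cuts the number of points the downstream network must process while
    roughly preserving scene geometry.
    """
    bins = defaultdict(list)
    for x, y, z in points:
        key = (math.floor(x / voxel_size),
               math.floor(y / voxel_size),
               math.floor(z / voxel_size))
        bins[key].append((x, y, z))
    # Replace each voxel's points by their centroid.
    return [tuple(sum(c) / len(pts) for c in zip(*pts))
            for pts in bins.values()]
```

Production systems typically do this with optimized libraries (e.g. a voxel filter in PCL or Open3D); the sketch only shows the idea behind the memory/compute trade-off.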


2019 ◽  
Vol 277 ◽  
pp. 02005
Author(s):  
Ning Feng ◽  
Le Dong ◽  
Qianni Zhang ◽  
Ning Zhang ◽  
Xi Wu ◽  
...  

Real-time semantic segmentation has become crucial in many applications such as medical image analysis and autonomous driving. In this paper, we introduce a single semantic segmentation network, called DNS, for the joint object detection and segmentation task. We take advantage of a multi-scale deconvolution mechanism to perform real-time computation. To this end, down-scale and up-scale streams are utilized to combine multi-scale features for the final detection and segmentation tasks. The proposed DNS settles not only the trade-off between accuracy and cost but also the balance between detection and segmentation performance. Experimental results on the PASCAL VOC datasets show competitive performance on the joint object detection and segmentation task.
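The abstract's down-scale/up-scale fusion can be illustrated with the usual building block: a coarse feature map is upsampled (deconvolution or, as sketched here for simplicity, nearest-neighbour interpolation) and merged element-wise with a same-resolution fine map from the down-scale stream. The function names below are illustrative, not the paper's:

```python
def upsample2x(fmap):
    """Nearest-neighbour 2x upsampling of a 2-D feature map (list of lists)."""
    out = []
    for row in fmap:
        wide = [v for v in row for _ in (0, 1)]  # repeat each column
        out.append(wide)
        out.append(list(wide))                   # repeat each row
    return out

def fuse(up_stream, down_stream):
    """Element-wise sum of the upsampled coarse map and the fine skip map."""
    return [[a + b for a, b in zip(r1, r2)]
            for r1, r2 in zip(up_stream, down_stream)]
```

In the actual network the upsampling would be a learned deconvolution over multi-channel tensors; this scalar sketch only shows how the two streams align spatially before the shared detection/segmentation heads.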


IEEE Network ◽  
2021 ◽  
Vol 35 (1) ◽  
pp. 177-183
Author(s):  
Seung-Wook Kim ◽  
Keunsoo Ko ◽  
Haneul Ko ◽  
Victor C. M. Leung

2021 ◽  
Vol 2021 ◽  
pp. 1-11
Author(s):  
Rui Wang ◽  
Ziyue Wang ◽  
Zhengwei Xu ◽  
Chi Wang ◽  
Qiang Li ◽  
...  

Object detection is an important part of autonomous driving technology. To ensure that vehicles run safely at high speed, all objects on the road must be detected accurately and in real time; how to balance detection speed and accuracy has been a hot research topic in recent years. This paper puts forward a one-stage object detection algorithm based on YOLOv4 that improves detection accuracy while supporting real-time operation. The backbone doubles the stacking count of the last residual block of CSPDarkNet53. The neck replaces SPP with the RFB structure, improves the PAN structure of the feature-fusion module, and adds the CBAM and CA attention mechanisms to the backbone and neck; finally, the overall width of the network is reduced to 3/4 of the original to cut model parameters and improve inference speed. Compared with YOLOv4, the proposed algorithm improves average accuracy by 2.06% on the KITTI dataset and by 2.95% on the BDD dataset. With detection accuracy almost unchanged, inference speed increases by 9.14%, and the algorithm detects in real time at more than 58.47 FPS.
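The parameter saving from the 3/4 width reduction follows from simple arithmetic: a convolution whose input and output channels both scale by a width multiplier w keeps only w² of its weights. A small sketch of that calculation (layer sizes here are illustrative, not taken from the paper's network):

```python
def conv_params(c_in, c_out, k=3):
    """Weight count of a k x k convolution, ignoring bias/BatchNorm terms."""
    return c_in * c_out * k * k

def scaled_ratio(width_mult):
    """Parameter ratio for an internal conv when both channel counts scale."""
    return width_mult * width_mult

# Example: shrinking a hypothetical 256 -> 512 conv to 3/4 width (192 -> 384)
full = conv_params(256, 512)    # 1,179,648 weights
slim = conv_params(192, 384)    #   663,552 weights
print(slim / full)              # 0.5625, i.e. scaled_ratio(0.75)
```

So a uniform 3/4 width cut removes roughly 44% of the weights in the internal convolutions, which is consistent with the reported inference-speed gain at near-unchanged accuracy.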

