Real-time Pedestrian and Vehicle Detection for Autonomous Driving

Author(s):  
Zhiheng Yang ◽  
Jun Li ◽  
Huiyun Li
2020 ◽  
Vol 2020 ◽  
pp. 1-14 ◽  
Author(s):  
Jun Liu ◽  
Rui Zhang

Vehicle detection is a crucial task for autonomous driving and demands both high accuracy and real-time speed. Because current deep learning object detection models are too large to deploy on a vehicle, this paper introduces a lightweight network that modifies the feature extraction layer of YOLOv3 and improves the remaining convolution structure; the resulting Lightweight YOLO network reduces the number of network parameters to a quarter. Then, the license plate is detected to calibrate the actual vehicle width, and the distance between vehicles is estimated from this width. To address the difficult detection and low accuracy caused by the small apparent size of a distant license plate, the paper proposes a detection-and-ranging fusion method based on two cameras with different focal lengths. Experimental results show that the average precision and recall of Lightweight YOLO trained on the self-built dataset are 4.43% and 3.54% lower than YOLOv3, respectively, but the per-frame inference time drops by 49 ms. Road experiments in different scenes also show that the long- and short-focal-length camera fusion ranging method dramatically improves the accuracy and stability of ranging: the mean ranging error is less than 4%, and stable ranging extends to 100 m. The proposed method achieves real-time vehicle detection and ranging on the on-board embedded platform Jetson Xavier, satisfying the requirements of autonomous driving environment perception.
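The plate-based ranging step follows the pinhole camera model, Z = f·W/w: a known physical plate width W and its measured pixel width w give the range Z for a camera with focal length f expressed in pixels. A minimal sketch follows; the 0.44 m plate width, the 30-pixel switching threshold, and the function names are illustrative assumptions, not values from the paper.

```python
def estimate_distance(focal_px: float, plate_width_m: float, plate_px: float) -> float:
    """Pinhole-model range: Z = f * W / w (W and Z in meters, f and w in pixels)."""
    if plate_px <= 0:
        raise ValueError("plate width in pixels must be positive")
    return focal_px * plate_width_m / plate_px


def choose_camera(plate_px_short: float, min_px: float = 30.0) -> str:
    """One plausible fusion rule: when the plate appears too small in the
    short-focal-length image, switch to the long-focal-length camera."""
    return "long" if plate_px_short < min_px else "short"


# A 0.44 m plate spanning 22 px under a 1000 px focal length sits at
# 1000 * 0.44 / 22 = 20 m, small enough here to trigger the long-focus camera.
print(estimate_distance(1000.0, 0.44, 22.0), choose_camera(22.0))
```

The same formula explains why a second, longer focal length helps: doubling f doubles the plate's pixel width at a given range, pushing the distance at which the plate becomes too small to detect correspondingly farther out.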


2018 ◽  
Vol 3 (4) ◽  
pp. 3434-3440 ◽  
Author(s):  
Yiming Zeng ◽  
Yu Hu ◽  
Shice Liu ◽  
Jing Ye ◽  
Yinhe Han ◽  
...  

Sensors ◽  
2021 ◽  
Vol 21 (2) ◽  
pp. 657
Author(s):  
Aoki Takanose ◽  
Yoshiki Atsumi ◽  
Kanamu Takikawa ◽  
Junichi Meguro

Autonomous driving support systems and self-driving cars require reliable vehicle positions with high accuracy. The real-time kinematic (RTK) algorithm with a global navigation satellite system (GNSS) is generally employed to obtain highly accurate position information. Because RTK can estimate the fix solution, a centimeter-level positioning solution, it is also used as an indicator of position reliability. However, in urban areas the degraded GNSS signal environment poses a challenge: multipath noise caused by surrounding tall buildings degrades positioning accuracy, leading to large errors in the fix solution that is used as the reliability measure. We propose a novel position reliability estimation method based on two observations: first, GNSS errors are more likely to occur in the height direction than in the plane; second, the height variation of an actual vehicle's travel path is small compared with its horizontal movement. Based on these considerations, the method detects a reliable fix solution by estimating the height variation during driving. To verify its effectiveness, an evaluation test was conducted in an urban area of Tokyo. In this test, a reliability judgment rate of 99% was achieved, along with a plane accuracy of less than 0.3 m RMS. The results indicate that the proposed method is more accurate than the conventional fix solution, demonstrating its effectiveness.
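The height-variation idea can be sketched as a simple window test: accumulate vertical and horizontal displacement over a short track segment and flag the fix as unreliable when the vertical share is implausibly large for a road vehicle. The 0.05 vertical-to-horizontal ratio threshold and the track representation below are illustrative assumptions, not the paper's actual criterion.

```python
import math


def fix_is_reliable(track, max_height_rate=0.05):
    """track: list of (x, y, z) positions in meters over a short window.
    Flags the window unreliable when accumulated height variation is large
    relative to horizontal travel, since GNSS errors concentrate in height."""
    horiz = 0.0
    vert = 0.0
    for (x0, y0, z0), (x1, y1, z1) in zip(track, track[1:]):
        horiz += math.hypot(x1 - x0, y1 - y0)
        vert += abs(z1 - z0)
    if horiz == 0.0:
        return True  # stationary window: no evidence of a multipath jump
    return vert / horiz <= max_height_rate

flat = [(i * 1.0, 0.0, 0.0) for i in range(10)]             # level drive
jumpy = [(i * 1.0, 0.0, (i % 2) * 0.5) for i in range(10)]  # oscillating height
print(fix_is_reliable(flat), fix_is_reliable(jumpy))
```

A multipath-corrupted fix typically shows exactly the second pattern: meter-scale height jumps while the car travels on an essentially level road.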


Author(s):  
Andres Bell ◽  
Tomas Mantecon ◽  
Cesar Diaz ◽  
Carlos R. del-Blanco ◽  
Fernando Jaureguizar ◽  
...  

Sensors ◽  
2020 ◽  
Vol 21 (1) ◽  
pp. 15
Author(s):  
Filippo Aleotti ◽  
Giulio Zaccaroni ◽  
Luca Bartolomei ◽  
Matteo Poggi ◽  
Fabio Tosi ◽  
...  

Depth perception is paramount for tackling real-world problems, ranging from autonomous driving to consumer applications. For the latter, depth estimation from a single image would represent the most versatile solution, since a standard camera is available on almost any handheld device. Nonetheless, two main issues limit the practical deployment of monocular depth estimation methods on such devices: (i) low reliability when deployed in the wild and (ii) the resources needed to achieve real-time performance, often not compatible with low-power embedded systems. Therefore, in this paper, we investigate both issues in depth, showing how they are addressable by adopting appropriate network design and training strategies. Moreover, we also outline how to map the resulting networks onto handheld devices to achieve real-time performance. Our thorough evaluation highlights the ability of such fast networks to generalize well to new environments, a crucial feature required to tackle the extremely varied contexts faced in real applications. To further support this evidence, we report experimental results concerning real-time, depth-aware augmented reality and image blurring with smartphones in the wild.
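As a toy illustration of the depth-aware blurring application mentioned above, the sketch below blurs only pixels whose estimated depth lies far from a chosen focus plane, keeping in-focus pixels sharp. The box filter, the 0.5 m tolerance, and the grayscale NumPy representation are assumptions for illustration, not the paper's pipeline.

```python
import numpy as np


def depth_aware_blur(image, depth, focus_depth, tolerance=0.5, k=3):
    """Blur pixels far from the focus plane; keep in-focus pixels sharp.
    image: (H, W) float grayscale array; depth: (H, W) metric depth map."""
    pad = k // 2
    padded = np.pad(image, pad, mode="edge")
    # naive k x k box blur of the whole image
    blurred = np.zeros_like(image)
    for dy in range(k):
        for dx in range(k):
            blurred += padded[dy:dy + image.shape[0], dx:dx + image.shape[1]]
    blurred /= k * k
    # composite: blurred where the pixel is out of focus, original elsewhere
    mask = np.abs(depth - focus_depth) > tolerance
    return np.where(mask, blurred, image)
```

A real mobile pipeline would replace the box filter with a depth-scaled kernel and run the depth network itself on the device, but the compositing step is the same: the depth map gates which pixels receive the blur.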


2020 ◽  
Vol 13 (1) ◽  
pp. 89
Author(s):  
Manuel Carranza-García ◽  
Jesús Torres-Mateo ◽  
Pedro Lara-Benítez ◽  
Jorge García-Gutiérrez

Object detection using remote sensing data is a key task of the perception systems of self-driving vehicles. While many generic deep learning architectures have been proposed for this problem, there is little guidance on their suitability when using them in a particular scenario such as autonomous driving. In this work, we aim to assess the performance of existing 2D detection systems on a multi-class problem (vehicles, pedestrians, and cyclists) with images obtained from the on-board camera sensors of a car. We evaluate several one-stage (RetinaNet, FCOS, and YOLOv3) and two-stage (Faster R-CNN) deep learning meta-architectures under different image resolutions and feature extractors (ResNet, ResNeXt, Res2Net, DarkNet, and MobileNet). These models are trained using transfer learning and compared in terms of both precision and efficiency, with special attention to the real-time requirements of this context. For the experimental study, we use the Waymo Open Dataset, which is the largest existing benchmark. Despite the rising popularity of one-stage detectors, our findings show that two-stage detectors still provide the most robust performance. Faster R-CNN models outperform one-stage detectors in accuracy, being also more reliable in the detection of minority classes. Faster R-CNN Res2Net-101 achieves the best speed/accuracy tradeoff but needs lower resolution images to reach real-time speed. Furthermore, the anchor-free FCOS detector is a slightly faster alternative to RetinaNet, with similar precision and lower memory usage.
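The speed/accuracy tradeoff discussed above reduces, for deployment, to picking the most accurate detector whose latency fits the real-time budget. The sketch below shows that selection rule; the model names echo the study, but the mAP and latency numbers are illustrative placeholders, not measurements from it.

```python
def best_realtime_model(results, budget_ms=50.0):
    """results: list of (name, mAP, latency_ms) tuples. Returns the name of
    the most accurate model meeting the latency budget, or None."""
    feasible = [r for r in results if r[2] <= budget_ms]
    return max(feasible, key=lambda r: r[1])[0] if feasible else None

# Illustrative numbers only: a slow but accurate two-stage model versus
# two faster one-stage models.
models = [("faster_rcnn_res2net101", 0.62, 80.0),
          ("retinanet_r50", 0.55, 45.0),
          ("fcos_r50", 0.54, 40.0)]
print(best_realtime_model(models))  # retinanet_r50 at a 50 ms budget
```

This mirrors the paper's conclusion in miniature: the most accurate model is not the right choice when it misses the frame-time budget, and lowering the input resolution is one way to move a two-stage model back inside it.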


Electronics ◽  
2019 ◽  
Vol 8 (9) ◽  
pp. 943 ◽  
Author(s):  
Il Bae ◽  
Jaeyoung Moon ◽  
Jeongseok Seo

The convergence of mechanical, electrical, and advanced ICT technologies, driven by artificial intelligence and 5G vehicle-to-everything (5G-V2X) connectivity, will help to develop high-performance autonomous driving vehicles and services that are usable and convenient for self-driving passengers. Despite widespread research on self-driving, user acceptance remains an essential part of successful market penetration; this forms the motivation behind studies on human factors associated with autonomous shuttle services. We address this by providing a comfortable driving experience without compromising safety. We focus on the accelerations and jerks of vehicles to reduce the risk of motion sickness and to improve the driving experience for passengers. Furthermore, this study proposes a time-optimal velocity planning method for guaranteeing comfort criteria when an explicit reference path is given. The overall controller and planning method were verified in a real-time software-in-the-loop (SIL) vehicle dynamics simulation environment, and the performance was compared with a typical planning approach. The proposed optimized planning shows relatively better performance and enables a comfortable passenger experience in a self-driving shuttle bus according to the recommended criteria.
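Comfort-constrained velocity planning of this kind typically bounds both acceleration and jerk, producing an S-curve speed profile. A minimal sketch of a jerk-limited ramp from rest follows; the 1.5 m/s² acceleration and 0.9 m/s³ jerk bounds are assumed comfort limits for illustration, not the paper's tuned values.

```python
def jerk_limited_profile(v_target, a_max=1.5, j_max=0.9, dt=0.1):
    """Speed profile from rest to v_target (m/s) with bounded acceleration
    (m/s^2) and jerk (m/s^3). Returns the list of speeds per time step."""
    v, a = 0.0, 0.0
    speeds = [v]
    while v < v_target:
        # ramp acceleration back down early enough that v does not overshoot:
        # from acceleration a, bleeding it off at j_max adds a^2 / (2 j_max) to v
        if v + a * a / (2 * j_max) >= v_target:
            a = max(a - j_max * dt, 0.0)
        else:
            a = min(a + j_max * dt, a_max)
        v = min(v + a * dt, v_target)
        speeds.append(v)
    return speeds
```

The jerk bound is what distinguishes this from a plain trapezoidal profile: acceleration changes gradually at the ends of the ramp, which is precisely the quantity tied to motion-sickness risk in the study's comfort criteria.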

