Real-time 3D Traffic Cone Detection for Autonomous Driving

Author(s): Ankit Dhall, Dengxin Dai, Luc Van Gool


Sensors, 2021, Vol 21 (2), pp. 657
Author(s): Aoki Takanose, Yoshiki Atsumi, Kanamu Takikawa, Junichi Meguro

Autonomous driving support systems and self-driving cars require reliable vehicle positions with high accuracy. The real-time kinematic (RTK) algorithm with a global navigation satellite system (GNSS) is generally employed to obtain highly accurate position information. Because RTK can estimate the fix solution, a centimeter-level positioning solution, it is also used as an indicator of position reliability. In urban areas, however, degradation of the GNSS signal environment poses a challenge: multipath noise caused by surrounding tall buildings degrades positioning accuracy, leading to large errors in the fix solution that serves as the reliability measure. We propose a novel position reliability estimation method based on two observations: first, GNSS errors are more likely to occur in the height direction than in the horizontal plane; second, the height variation of the actual vehicle travel path is small compared to the amount of horizontal movement. Based on these observations, the method detects reliable fix solutions by estimating the height variation during driving. To verify its effectiveness, an evaluation test was conducted in an urban area of Tokyo. The test achieved a reliability judgment rate of 99% and a horizontal accuracy of less than 0.3 m RMS, indicating that the proposed method is more accurate than the conventional fix solution and demonstrating its effectiveness.
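The height-based reliability test at the heart of the method lends itself to a short sketch. The Python fragment below is a minimal illustration under stated assumptions, not the authors' implementation: positions are assumed to be east/north/up coordinates in meters, and the window length and thresholds are invented for the example.

import numpy as np

def reliable_fix_mask(positions, fix_flags, window=10, height_tol=0.1):
    """Flag RTK fix solutions as reliable when the estimated height
    variation over a sliding window stays small relative to the
    horizontal movement (the vehicle's true path is nearly flat)."""
    n = len(positions)
    reliable = np.zeros(n, dtype=bool)
    for i in range(window, n):
        seg = positions[i - window : i + 1]          # recent trajectory segment
        horiz = np.linalg.norm(np.diff(seg[:, :2], axis=0), axis=1).sum()
        vert = np.abs(np.diff(seg[:, 2])).sum()
        # Large height variation while moving horizontally suggests
        # multipath-corrupted output, so the fix is rejected.
        reliable[i] = fix_flags[i] and vert < max(height_tol, 0.02 * horiz)
    return reliable

# positions: (N, 3) array of [east, north, up] in meters (assumed input);
# fix_flags: (N,) booleans where the RTK solver reported a fix solution.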


Sensors, 2020, Vol 21 (1), pp. 15
Author(s): Filippo Aleotti, Giulio Zaccaroni, Luca Bartolomei, Matteo Poggi, Fabio Tosi, ...

Depth perception is paramount for tackling real-world problems, ranging from autonomous driving to consumer applications. For the latter, depth estimation from a single image would represent the most versatile solution, since a standard camera is available on almost any handheld device. Nonetheless, two main issues limit the practical deployment of monocular depth estimation methods on such devices: (i) low reliability when deployed in the wild and (ii) resource demands for real-time performance that are often incompatible with low-power embedded systems. In this paper, we investigate both issues, showing how each can be addressed by adopting appropriate network design and training strategies. We also outline how to map the resulting networks onto handheld devices to achieve real-time performance. Our thorough evaluation highlights the ability of such fast networks to generalize well to new environments, a crucial feature for tackling the extremely varied contexts faced in real applications. To further support this evidence, we report experimental results concerning real-time, depth-aware augmented reality and image blurring with smartphones in the wild.
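As a rough illustration of single-image depth inference of the kind these applications build on, the sketch below runs a publicly available lightweight model (MiDaS small via torch.hub, used here purely as a stand-in; the paper's own networks are not implied) on one image. The file name and output handling are assumptions.

import cv2
import torch

# Lightweight monocular depth model and its matching input transform.
model = torch.hub.load("intel-isl/MiDaS", "MiDaS_small")
model.eval()
transform = torch.hub.load("intel-isl/MiDaS", "transforms").small_transform

img = cv2.cvtColor(cv2.imread("frame.jpg"), cv2.COLOR_BGR2RGB)  # assumed file
batch = transform(img)  # resize + normalize to the network's input size

with torch.no_grad():
    pred = model(batch)                        # (1, H', W') inverse-depth map
    depth = torch.nn.functional.interpolate(
        pred.unsqueeze(1), size=img.shape[:2],
        mode="bicubic", align_corners=False,
    ).squeeze()                                # back to the input resolution

# `depth` is a relative (up-to-scale) map, enough for depth-aware AR overlays
# or background blurring of the sort reported in the paper.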


2020, Vol 13 (1), pp. 89
Author(s): Manuel Carranza-García, Jesús Torres-Mateo, Pedro Lara-Benítez, Jorge García-Gutiérrez

Object detection using remote sensing data is a key task of the perception systems of self-driving vehicles. While many generic deep learning architectures have been proposed for this problem, there is little guidance on their suitability when using them in a particular scenario such as autonomous driving. In this work, we aim to assess the performance of existing 2D detection systems on a multi-class problem (vehicles, pedestrians, and cyclists) with images obtained from the on-board camera sensors of a car. We evaluate several one-stage (RetinaNet, FCOS, and YOLOv3) and two-stage (Faster R-CNN) deep learning meta-architectures under different image resolutions and feature extractors (ResNet, ResNeXt, Res2Net, DarkNet, and MobileNet). These models are trained using transfer learning and compared in terms of both precision and efficiency, with special attention to the real-time requirements of this context. For the experimental study, we use the Waymo Open Dataset, which is the largest existing benchmark. Despite the rising popularity of one-stage detectors, our findings show that two-stage detectors still provide the most robust performance. Faster R-CNN models outperform one-stage detectors in accuracy, being also more reliable in the detection of minority classes. Faster R-CNN Res2Net-101 achieves the best speed/accuracy tradeoff but needs lower resolution images to reach real-time speed. Furthermore, the anchor-free FCOS detector is a slightly faster alternative to RetinaNet, with similar precision and lower memory usage.
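A minimal sketch of the kind of latency measurement behind such speed/accuracy comparisons, assuming a torchvision Faster R-CNN with a ResNet-50 backbone as a stand-in for the Res2Net-101 variant (which torchvision does not ship) and random tensors in place of Waymo camera frames:

import time
import torch
from torchvision.models.detection import (
    FasterRCNN_ResNet50_FPN_Weights, fasterrcnn_resnet50_fpn,
)

model = fasterrcnn_resnet50_fpn(weights=FasterRCNN_ResNet50_FPN_Weights.DEFAULT)
model.eval()

for h, w in [(960, 1280), (480, 640)]:        # full vs. reduced resolution
    frame = [torch.rand(3, h, w)]             # dummy camera frame in [0, 1]
    with torch.no_grad():
        model(frame)                          # warm-up pass
        start = time.perf_counter()
        det = model(frame)[0]                 # dict with boxes, labels, scores
        ms = (time.perf_counter() - start) * 1e3
    print(f"{w}x{h}: {ms:.1f} ms, {len(det['boxes'])} boxes")

Lowering the input resolution trades localization accuracy for latency, which is exactly the lever the paper uses to push a two-stage detector toward real-time speed.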


Electronics, 2019, Vol 8 (9), pp. 943
Author(s): Il Bae, Jaeyoung Moon, Jeongseok Seo

The convergence of mechanical, electrical, and advanced ICT technologies, driven by artificial intelligence and 5G vehicle-to-everything (5G-V2X) connectivity, will help to develop high-performance autonomous driving vehicles and services that are usable and convenient for their passengers. Despite widespread research on self-driving, user acceptance remains an essential part of successful market penetration; this motivates studies on the human factors associated with autonomous shuttle services. We address this by providing a comfortable driving experience without compromising safety. We focus on the accelerations and jerks of vehicles to reduce the risk of motion sickness and to improve the driving experience for passengers. This study proposes a time-optimal velocity planning method for guaranteeing comfort criteria when an explicit reference path is given. The overall controller and planning method were verified in a real-time software-in-the-loop (SIL) environment for vehicle dynamics simulation, and the performance was compared with a typical planning approach. The proposed optimized planning shows relatively better performance and enables a comfortable passenger experience in a self-driving shuttle bus according to the recommended criteria.
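Once a velocity profile is sampled, comfort criteria of this kind can be checked mechanically. The sketch below is illustrative only: the acceleration and jerk limits are assumed values in the range often cited for passenger comfort, not the paper's exact criteria.

import numpy as np

A_MAX = 0.9   # m/s^2, assumed comfort limit on acceleration
J_MAX = 0.3   # m/s^3, assumed comfort limit on jerk

def comfort_ok(velocity, dt):
    """Check a sampled velocity profile against acceleration/jerk limits."""
    accel = np.gradient(velocity, dt)
    jerk = np.gradient(accel, dt)
    return bool(np.all(np.abs(accel) <= A_MAX) and np.all(np.abs(jerk) <= J_MAX))

# Smooth cosine ramp from 0 to 4 m/s over 10 s, then cruise: passes the check.
t = np.arange(0.0, 12.0, 0.1)
v = 4.0 * (1.0 - np.cos(np.pi * np.clip(t / 10.0, 0.0, 1.0))) / 2.0
print(comfort_ok(v, dt=0.1))  # True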


Author(s): Hrishikesh Dey, Rithika Ranadive, Abhishek Chaudhari

A path planning algorithm integrated with a velocity-profile-based navigation system is one of the most important aspects of an autonomous driving system. In this paper, a real-time path planning solution that obtains a feasible and collision-free trajectory is proposed for navigating an autonomous car on a virtual highway. This is achieved by designing the navigation algorithm to incorporate a path planner for finding the optimal path and a velocity planning algorithm for ensuring safe and comfortable motion along the obtained path. The navigation algorithm was validated in the Unity 3D Highway-Simulated Environment for practical driving while maintaining velocity and acceleration constraints. The autonomous vehicle drives at the maximum specified velocity until interrupted by vehicular traffic; the path planner then, based on the various constraints provided by the simulator over µWebSockets, decides either to decelerate the vehicle or to shift to a more secure lane. A spline-based trajectory generation step for this path then yields continuous and smooth trajectories. The velocity planner employs an analytical method based on a trapezoidal velocity profile to generate velocities for the vehicle traveling along the precomputed path. To provide smooth control, an S-like trapezoidal profile is used, with cubic splines generating the velocities for the ramp-up and ramp-down portions of the curve. The acceleration and velocity constraints, which derive from road limitations and physical systems, are explicitly considered. Depending on these constraints and higher-level module requirements (e.g., maintaining velocity, stopping), an appropriate segment of the velocity profile is deployed. The motion profiles for all the use cases are generated and verified graphically.
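A minimal sketch of such an S-like trapezoidal profile, using a cubic smoothstep polynomial (zero end acceleration) for the ramp portions; the cruise speed and ramp times are invented for the example, and the parameterization is one plausible reading of the cubic-spline ramps the abstract describes.

import numpy as np

def cubic_ramp(v0, v1, T, dt=0.01):
    """Cubic (smoothstep) ramp from v0 to v1 over T seconds; zero slope at
    both ends means zero acceleration at the joints, giving the 'S' shape."""
    s = np.arange(0.0, T, dt) / T
    return v0 + (v1 - v0) * (3.0 * s**2 - 2.0 * s**3)

def s_trapezoid(v_cruise, t_up, t_cruise, t_down, dt=0.01):
    """S-like trapezoidal profile: cubic ramp-up, cruise, cubic ramp-down."""
    return np.concatenate([
        cubic_ramp(0.0, v_cruise, t_up, dt),
        np.full(int(t_cruise / dt), v_cruise),
        cubic_ramp(v_cruise, 0.0, t_down, dt),
    ])

profile = s_trapezoid(v_cruise=20.0, t_up=6.0, t_cruise=10.0, t_down=6.0)
# Peak acceleration of a cubic ramp is 1.5 * v_cruise / t_up (5 m/s^2 here);
# the ramp times are chosen so this stays under the road/vehicle limits.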


2021, Vol 11 (22), pp. 10713
Author(s): Dong-Gyu Lee

Autonomous driving is a safety-critical application that requires a high-level understanding of computer vision with real-time inference. In this study, we focus on computational efficiency, an important factor for practical applications, by improving the running time and performing multiple tasks simultaneously. We propose a fast and accurate multi-task learning-based architecture for joint segmentation of the drivable area and lane lines and classification of the scene. An encoder-decoder architecture efficiently handles input frames through a shared representation, and the comprehensive understanding of the driving environment is improved by the generalization and regularization the different tasks provide to one another. The proposed method is trained end-to-end through multi-task learning on the very challenging Berkeley Deep Drive dataset and shows its robustness across the three autonomous driving tasks. Experimental results show that the proposed method outperforms other multi-task learning approaches in both speed and accuracy, reaching 93.81 fps at inference and thus enabling real-time execution.
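As a structural illustration of the shared-encoder, multi-head idea (not the paper's architecture), the sketch below feeds one tiny encoder into a drivable-area head, a lane-line head, and a scene-classification head; all layer sizes are invented.

import torch
import torch.nn as nn

class MultiTaskNet(nn.Module):
    """Toy shared-representation network with three task heads."""
    def __init__(self, num_scene_classes=4):
        super().__init__()
        self.encoder = nn.Sequential(              # shared representation
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.drivable_head = nn.Conv2d(64, 2, 1)   # per-pixel drivable / not
        self.lane_head = nn.Conv2d(64, 2, 1)       # per-pixel lane line / not
        self.scene_head = nn.Sequential(           # whole-frame scene class
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, num_scene_classes),
        )

    def forward(self, x):
        feats = self.encoder(x)
        return self.drivable_head(feats), self.lane_head(feats), self.scene_head(feats)

net = MultiTaskNet()
drivable, lane, scene = net(torch.rand(1, 3, 256, 512))
# Joint training sums the three task losses, so each task regularizes the
# shared encoder, which is the mechanism the abstract appeals to.
print(drivable.shape, lane.shape, scene.shape)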

