Considering measurement uncertainty in dynamic object tracking for autonomous driving applications

2018 ◽  
Vol 85 (12) ◽  
pp. 764-778
Author(s):  
Benjamin Naujoks ◽  
Torsten Engler ◽  
Martin Michaelis ◽  
Thorsten Luettel ◽  
Hans-Joachim Wuensche

Abstract Measurement uncertainty plays an important role in every real-world perception task. This paper describes the influence of measurement uncertainty on state estimation, which is the main part of Dynamic Object Tracking and is based on the probabilistic Bayesian Filtering approach. Practical examples and tools for choosing the correct filter implementation, including measurement models and their conversion for different kinds of sensors, are presented.
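The role of measurement uncertainty in Bayesian state estimation can be illustrated with a minimal scalar Kalman filter update (an illustrative sketch, not the paper's implementation; the function and variable names are assumptions):

```python
import numpy as np

# Minimal 1-D Kalman filter measurement update: the measurement noise
# variance R encodes the sensor's uncertainty and controls how strongly
# a new measurement corrects the prior state estimate.
def kf_update(x, P, z, R, H=1.0):
    """One Bayesian measurement update for a scalar state.
    x, P: prior state mean and variance
    z, R: measurement and its (sensor-dependent) noise variance
    H:    measurement model mapping state space to measurement space
    """
    y = z - H * x                 # innovation (measurement residual)
    S = H * P * H + R             # innovation variance
    K = P * H / S                 # Kalman gain
    x_new = x + K * y             # posterior mean
    P_new = (1.0 - K * H) * P     # posterior variance
    return x_new, P_new

# A precise sensor (small R) pulls the estimate toward the measurement;
# a noisy sensor (large R) leaves the prior almost unchanged.
x_precise, _ = kf_update(x=0.0, P=1.0, z=1.0, R=0.01)
x_noisy, _ = kf_update(x=0.0, P=1.0, z=1.0, R=100.0)
```

The same gain structure carries over to the vector case, where different sensors (radar, lidar, camera) contribute different measurement models H and noise covariances R.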

Sensors ◽  
2021 ◽  
Vol 21 (9) ◽  
pp. 2894
Author(s):  
Minh-Quan Dao ◽  
Vincent Frémont

Multi-Object Tracking (MOT) is an integral part of any autonomous driving pipeline because it produces trajectories of other moving objects in the scene and predicts their future motion. Thanks to recent advances in 3D object detection enabled by deep learning, track-by-detection has become the dominant paradigm in 3D MOT. In this paradigm, a MOT system is essentially made of an object detector and a data association algorithm that establishes track-to-detection correspondence. While 3D object detection has been actively researched, association algorithms for 3D MOT have settled on bipartite matching formulated as a Linear Assignment Problem (LAP) and solved by the Hungarian algorithm. In this paper, we adapt a two-stage data association method, which was successfully applied to image-based tracking, to the 3D setting, thus providing an alternative data association approach for 3D MOT. Our method outperforms the baseline using one-stage bipartite matching for data association, achieving 0.587 Average Multi-Object Tracking Accuracy (AMOTA) on the NuScenes validation set and 0.365 AMOTA (at level 2) on the Waymo test set.
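The one-stage bipartite-matching baseline the abstract refers to can be sketched as follows (an illustrative example; the center-distance cost and gating threshold are assumptions standing in for whatever affinity a given tracker uses):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Track-to-detection association as a Linear Assignment Problem (LAP):
# build a cost matrix between predicted track states and new detections,
# then solve it with a Hungarian-style optimal assignment solver.
tracks = np.array([[0.0, 0.0], [5.0, 5.0]])       # predicted track centers
detections = np.array([[4.8, 5.1], [0.2, -0.1]])  # detected object centers

# Cost: Euclidean distance between centers (one common choice of affinity).
cost = np.linalg.norm(tracks[:, None, :] - detections[None, :, :], axis=-1)
row_ind, col_ind = linear_sum_assignment(cost)

# Gate the matches: reject correspondences whose cost exceeds a threshold,
# so distant detections spawn new tracks instead of corrupting old ones.
GATE = 2.0
matches = [(int(t), int(d)) for t, d in zip(row_ind, col_ind)
           if cost[t, d] < GATE]
```

A two-stage method such as the one adapted in the paper splits this single matching step into multiple rounds (e.g. matching confident detections first), rather than solving one global LAP.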


Author(s):  
Walter Morales Alvarez ◽  
Francisco Miguel Moreno ◽  
Oscar Sipele ◽  
Nikita Smirnov ◽  
Cristina Olaverri-Monreal

2020 ◽  
Author(s):  
Marvin Chancán

Visual navigation tasks in real-world environments often require both self-motion and place recognition feedback. While deep reinforcement learning has shown success in solving these perception and decision-making problems in an end-to-end manner, these algorithms require large amounts of experience to learn navigation policies from high-dimensional data, which is generally impractical for real robots due to sample complexity. In this paper, we address these problems with two main contributions. We first leverage place recognition and deep learning techniques combined with goal destination feedback to generate compact, bimodal image representations that can then be used to effectively learn control policies from a small amount of experience. Second, we present an interactive framework, CityLearn, that enables for the first time training and deployment of navigation algorithms across city-sized, realistic environments with extreme visual appearance changes. CityLearn features more than 10 benchmark datasets, often used in visual place recognition and autonomous driving research, including over 100 recorded traversals across 60 cities around the world. We evaluate our approach on two CityLearn environments, training our navigation policy on a single traversal. Results show our method can be over 2 orders of magnitude faster than when using raw images, and can also generalize across extreme visual changes including day to night and summer to winter transitions.
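The "compact, bimodal representation" idea can be sketched as fusing a low-dimensional place-recognition embedding with goal-destination feedback, so the policy consumes a small vector instead of raw pixels (a hypothetical sketch; the function name, embedding size, and one-hot goal encoding are all assumptions, not the paper's architecture):

```python
import numpy as np

# Fuse two modalities into one compact policy input: a visual place
# embedding (instead of raw images) and a goal-destination signal.
def bimodal_observation(place_embedding, goal_onehot):
    """Concatenate visual and goal modalities into one observation vector."""
    return np.concatenate([place_embedding, goal_onehot])

rng = np.random.default_rng(0)
place_embedding = rng.random(64)        # e.g. output of a place-recognition encoder
goal = np.zeros(10)                     # one-hot goal destination id
goal[3] = 1.0
obs = bimodal_observation(place_embedding, goal)  # 74-D input for the policy
```

Because the observation is only tens of dimensions rather than a full image, a reinforcement-learning policy needs far less experience to train, which is the sample-efficiency argument the abstract makes.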


2020 ◽  
Vol 35 (4) ◽  
pp. 2670-2682
Author(s):  
Samson Shenglong Yu ◽  
Junhao Guo ◽  
Tat Kei Chau ◽  
Tyrone Fernando ◽  
Herbert Ho-Ching Iu ◽  
...  
