Environment representations for automated on-road vehicles

2018 ◽  
Vol 66 (2) ◽  
pp. 107-118 ◽  
Author(s):  
Matthias Schreier

One of the key challenges of any Automated Driving (AD) system lies in the perception and representation of the driving environment. Data from a multitude of different information sources, such as various vehicle environment sensors, external communication interfaces, and digital maps, must be adequately combined into one consistent Comprehensive Environment Model (CEM) that acts as a generic abstraction layer for the driving functions. This overview article summarizes and discusses different approaches in this area, with a focus on metric representations of static and dynamic driving environments for on-road AD systems. Feature maps, parametric free space maps, interval maps, occupancy grid maps, elevation maps, the stixel world, multi-level surface maps, voxel grids, meshes, and raw sensor data models are presented and compared in this regard.
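As an illustration of one of the representations compared above, the following is a minimal sketch of a 2D occupancy grid map with a log-odds cell update. The inverse-sensor-model increments, grid dimensions, ray traversal, and all function names are assumptions made for the sketch, not taken from the article.

```python
import numpy as np

class OccupancyGrid:
    """Minimal 2D occupancy grid with log-odds cell updates (illustrative sketch)."""

    def __init__(self, size_m=100.0, resolution_m=0.5):
        n = int(size_m / resolution_m)
        self.resolution = resolution_m
        self.log_odds = np.zeros((n, n))        # 0.0 corresponds to p = 0.5 (unknown)
        self.l_occ, self.l_free = 0.85, -0.4    # assumed inverse-sensor-model increments

    def _to_cell(self, x, y):
        return int(x / self.resolution), int(y / self.resolution)

    def update_ray(self, origin, hit):
        """Mark cells along a sensor ray as free and the end cell as occupied."""
        (ox, oy), (hx, hy) = self._to_cell(*origin), self._to_cell(*hit)
        n_steps = max(abs(hx - ox), abs(hy - oy), 1)
        for i in range(n_steps):                # coarse ray traversal (Bresenham would be tighter)
            cx = ox + round(i * (hx - ox) / n_steps)
            cy = oy + round(i * (hy - oy) / n_steps)
            self.log_odds[cy, cx] += self.l_free
        self.log_odds[hy, hx] += self.l_occ

    def probabilities(self):
        """Convert log-odds back to occupancy probabilities."""
        return 1.0 - 1.0 / (1.0 + np.exp(self.log_odds))

grid = OccupancyGrid()
grid.update_ray(origin=(10.0, 10.0), hit=(25.0, 18.0))   # one simulated range return
print(grid.probabilities().max())
```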

Sensors ◽  
2018 ◽  
Vol 18 (8) ◽  
pp. 2452 ◽  
Author(s):  
Lakshay Narula ◽  
Michael Wooten ◽  
Matthew Murrian ◽  
Daniel LaChapelle ◽  
Todd Humphreys

Exchange of location and sensor data among connected and automated vehicles will demand accurate global referencing of the digital maps currently being developed to aid positioning for automated driving. This paper explores the limit of such maps’ globally-referenced position accuracy when the mapping agents are equipped with low-cost Global Navigation Satellite System (GNSS) receivers performing standard code-phase-based navigation, and presents a globally-referenced electro-optical simultaneous localization and mapping pipeline, called GEOSLAM, designed to achieve this limit. The key accuracy-limiting factor is shown to be the asymptotic average of the error sources that impair standard GNSS positioning. Asymptotic statistics of each GNSS error source are analyzed through both simulation and empirical data to show that sub-50-cm accurate digital mapping is feasible in the horizontal plane after multiple mapping sessions with standard GNSS, but larger biases persist in the vertical direction. GEOSLAM achieves this accuracy by (i) incorporating standard GNSS position estimates in the visual SLAM framework, (ii) merging digital maps from multiple mapping sessions, and (iii) jointly optimizing structure and motion with respect to time-separated GNSS measurements.
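As a rough illustration of the accuracy-limiting mechanism described above, the sketch below simulates how averaging standard code-phase GNSS fixes across time-separated mapping sessions drives the per-session errors toward their asymptotic average. The error model and its magnitudes are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed error model for one mapping session's standard code-phase GNSS fixes:
# a per-session bias (multipath, residual atmospheric delay) plus measurement noise.
# The sigmas below are illustrative placeholders, not figures from the paper.
def session_horizontal_error(n_points=200, bias_sigma=1.0, noise_sigma=0.5):
    bias = rng.normal(0.0, bias_sigma, size=2)               # constant within one session
    noise = rng.normal(0.0, noise_sigma, size=(n_points, 2))
    return bias + noise                                       # per-point horizontal error (m)

# Merging maps from time-separated sessions lets the per-session biases average out,
# which is the accuracy-limiting factor the paper analyzes.
for n_sessions in (1, 5, 20, 50):
    merged = np.mean([session_horizontal_error().mean(axis=0)
                      for _ in range(n_sessions)], axis=0)
    print(f"{n_sessions:2d} sessions -> residual horizontal error ~ {np.linalg.norm(merged):.2f} m")
```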


Robotica ◽  
1996 ◽  
Vol 14 (5) ◽  
pp. 553-560
Author(s):  
Yuefeng Zhang ◽  
Robert E. Webber

A grid-based method for detecting moving objects is presented. This method extends and combines two techniques: (1) the Hough Transform and (2) the Occupancy Grid method. The Occupancy Grid method forms the basis for a probabilistic estimation of the location and velocity of objects in the scene from the sensor data, while the Hough Transform enables the new method to handle non-integer velocity values. A model for simulating a sonar ring is also presented. Experimental results show that this method can handle objects moving at non-integer velocities.
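The following is a minimal sketch of the Hough-style voting idea: occupied cells from two occupancy snapshots vote for the velocity that would explain their displacement, and sub-cell velocity bins allow non-integer velocities. The binning, toy data, and function name are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

# Illustrative Hough-style velocity voting over two occupancy snapshots (cell coordinates).
# Each pair of occupied cells (one per snapshot) votes for the velocity that would explain
# the displacement; a bin size below one cell per frame admits non-integer velocities.
def hough_velocity(occ_t0, occ_t1, dt=1.0, v_max=5.0, bin_size=0.5):
    n_bins = int(2 * v_max / bin_size)
    votes = np.zeros((n_bins, n_bins))
    for p0 in occ_t0:
        for p1 in occ_t1:
            vx, vy = (np.asarray(p1) - np.asarray(p0)) / dt
            if abs(vx) < v_max and abs(vy) < v_max:
                i = int((vx + v_max) / bin_size)
                j = int((vy + v_max) / bin_size)
                votes[i, j] += 1
    i, j = np.unravel_index(votes.argmax(), votes.shape)
    return i * bin_size - v_max, j * bin_size - v_max   # winning velocity bin (lower edges)

# A small object of three cells translating by (1.5, -0.5) cells per frame.
cells_t0 = [(10, 10), (10, 11), (11, 10)]
cells_t1 = [(11.5, 9.5), (11.5, 10.5), (12.5, 9.5)]
print(hough_velocity(cells_t0, cells_t1))   # -> (1.5, -0.5) at the chosen bin resolution
```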


2021 ◽  
Vol 2 ◽  
Author(s):  
Mysore Narasimhamurthy Sharath ◽  
Babak Mehran

The article presents a review of recent literature on the performance metrics of Automated Driving Systems (ADS). More specifically, performance indicators of the environment perception and motion planning modules are reviewed, as these are the most complex ADS modules. The need to incorporate the level of threat an obstacle poses into the performance metrics is described, and a methodology to quantify an obstacle's threat level is presented in this regard. The approach considers multiple stimulus parameters (those that elicit responses from drivers) simultaneously, thereby accounting for multivariate interactions. Human-likeness of ADS is a desirable characteristic, as ADS share road infrastructure with human drivers; the described method can be used to develop human-like perception and motion planning modules, and performance metrics capable of quantifying the human-likeness of ADS are also presented. A comparison of the different performance metrics is then summarized. ADS operators are obliged to report any incident (crash/disengagement) to safety regulating authorities; however, precrash events/states are not being reported. The need to collect precrash scenario data is described, and a modification to the data reporting/collection process is suggested as a framework. The framework specifies the precrash sequences to be reported, along with possible ways for safety regulating authorities to utilize such a valuable dataset to comprehensively assess (and consequently improve) the safety of ADS. The framework proposes to collect and maintain a repository of precrash sequences. Such a repository can be used to 1) comprehensively learn and model precrash scenarios, 2) learn the characteristics of precrash scenarios and eventually anticipate them, 3) assess the appropriateness of different performance metrics in precrash scenarios, 4) synthesize a diverse dataset of precrash scenarios, 5) identify the ideal configuration of sensors and algorithms to enhance safety, and 6) monitor the performance of the perception and motion planning modules.
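As a hedged illustration of combining multiple stimulus parameters into a single threat score, the sketch below maps three assumed stimuli (time-to-collision, time headway, lateral offset) through a logistic weighting. The chosen stimuli, weights, and bias are placeholders for the sketch, not the metric proposed in the article.

```python
import numpy as np

# Illustrative combination of several stimulus parameters into one obstacle threat score.
# The stimuli (time-to-collision, time headway, lateral offset) and the logistic weighting
# are assumptions for this sketch, not the indicators or weights from the article.
def threat_score(ttc_s, headway_s, lateral_offset_m,
                 w=(2.0, 2.0, 1.0), bias=3.0):
    # Lower TTC/headway and smaller lateral offset should all raise the threat,
    # so each stimulus enters as an inverse (saturating) term.
    stimuli = np.array([1.0 / max(ttc_s, 0.1),
                        1.0 / max(headway_s, 0.1),
                        1.0 / (1.0 + abs(lateral_offset_m))])
    z = np.dot(w, stimuli) - bias
    return 1.0 / (1.0 + np.exp(-z))          # threat score in (0, 1)

# A lead vehicle braking hard (low TTC, short headway, same lane) versus a distant one.
print(threat_score(ttc_s=1.2, headway_s=0.8, lateral_offset_m=0.0))   # high threat
print(threat_score(ttc_s=8.0, headway_s=3.0, lateral_offset_m=3.5))   # low threat
```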


Sensors ◽  
2020 ◽  
Vol 20 (9) ◽  
pp. 2457 ◽  
Author(s):  
Jinhan Jeong ◽  
Yook Hyun Yoon ◽  
Jahng Hyon Park

Lane detection and tracking in complex road environments is one of the most important research areas in highly automated driving systems. Studies on lane detection address a variety of difficulties, such as shadowed scenes, faded lane markings, and obstacles that prevent lane feature detection. There are several hard cases in which lane candidate features are not easily extracted from image frames captured by a driving vehicle. We have carefully selected typical scenarios in which the extraction of lane candidate features can easily be corrupted by surrounding vehicles and road markers, degrading the understanding of road scenes and leading to difficult decision making. We introduce two main contributions to the interpretation of road scenes in dense traffic environments. First, to obtain robust road scene understanding, we design a novel framework that combines a camera-based lane tracker with a radar forward vehicle tracker, which is especially useful in dense traffic situations. We introduce an image template occupancy matching method, integrated with the vehicle tracker, that avoids extracting irrelevant lane features caused by forward target vehicles and road markers. Second, we present a robust multi-lane detection and tracking algorithm that includes adjacent lanes as well as the ego lanes. We conducted a comprehensive experimental evaluation on a real dataset composed of problematic road scenarios. Experimental results show that the proposed method is highly reliable for multi-lane detection in the presented difficult situations.
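The masking idea can be sketched as follows: image regions covered by radar-tracked forward vehicles are excluded before lane candidate features are extracted, so that vehicle edges are not mistaken for lane markings. The box projection, the gradient-based feature stand-in, and the synthetic image are assumptions for illustration, not the paper's pipeline.

```python
import numpy as np

# Illustrative masking of tracked-vehicle regions before lane feature extraction.
# Vehicle boxes are assumed to already be projected into image coordinates
# (x, y, width, height); the simple intensity-gradient feature is a stand-in
# for the paper's lane candidate extraction.
def lane_feature_mask(image, vehicle_boxes):
    mask = np.ones_like(image, dtype=bool)
    for x, y, w, h in vehicle_boxes:
        # Suppress pixels covered by a tracked vehicle, with a one-pixel margin
        # so the box's own edges do not leak into the gradient image.
        mask[y:y + h, max(x - 1, 0):x + w + 1] = False
    return mask

def extract_lane_candidates(image, mask, grad_thresh=40.0):
    grad_x = np.abs(np.diff(image.astype(float), axis=1))   # horizontal intensity gradient
    grad_x = np.pad(grad_x, ((0, 0), (0, 1)))                # restore original width
    candidates = (grad_x > grad_thresh) & mask               # keep strong edges outside vehicles
    return np.argwhere(candidates)                           # (row, col) lane feature pixels

# Synthetic 100x200 road image: a bright lane marking column plus a bright "vehicle" patch.
img = np.zeros((100, 200), dtype=np.uint8)
img[:, 120:123] = 200                     # lane marking
img[40:70, 60:100] = 180                  # forward vehicle that would corrupt the features
mask = lane_feature_mask(img, vehicle_boxes=[(60, 40, 40, 30)])
features = extract_lane_candidates(img, mask)
print(len(features), "lane feature pixels remain after masking out the tracked vehicle")
```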

