Autonomous Driving: Framework for Pedestrian Intention Estimation in a Real World Scenario

Author(s):  
Walter Morales Alvarez ◽  
Francisco Miguel Moreno ◽  
Oscar Sipele ◽  
Nikita Smirnov ◽  
Cristina Olaverri-Monreal
2020 ◽  
Author(s):  
Marvin Chancán

Visual navigation tasks in real-world environments often require both self-motion and place recognition feedback. While deep reinforcement learning has shown success in solving these perception and decision-making problems in an end-to-end manner, these algorithms require large amounts of experience to learn navigation policies from high-dimensional data, which is generally impractical for real robots due to sample complexity. In this paper, we address these problems with two main contributions. We first leverage place recognition and deep learning techniques combined with goal destination feedback to generate compact, bimodal image representations that can then be used to effectively learn control policies from a small amount of experience. Second, we present an interactive framework, CityLearn, that enables, for the first time, training and deployment of navigation algorithms across city-sized, realistic environments with extreme visual appearance changes. CityLearn features more than 10 benchmark datasets, often used in visual place recognition and autonomous driving research, including over 100 recorded traversals across 60 cities around the world. We evaluate our approach on two CityLearn environments, training our navigation policy on a single traversal. Results show our method can be over 2 orders of magnitude faster than when using raw images, and can also generalize across extreme visual changes including day to night and summer to winter transitions.
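The bimodal-representation idea above can be sketched minimally: a compact place-recognition embedding is concatenated with a goal-destination feedback vector to form a low-dimensional policy input. The dimensions and values below are illustrative assumptions, not taken from the paper.

```python
# Sketch: build a compact, bimodal observation for a navigation policy by
# combining a place-recognition embedding with goal-destination feedback.
# Embedding values and dimensions are illustrative, not the paper's.

def bimodal_observation(place_embedding, goal_index, num_goals):
    """Concatenate a compact visual embedding with a one-hot goal vector."""
    goal_one_hot = [0.0] * num_goals
    goal_one_hot[goal_index] = 1.0
    return place_embedding + goal_one_hot

# A 3-dim visual embedding plus a 4-way goal yields a 7-dim policy input,
# far smaller than a raw image, which is where the sample-efficiency gain
# described in the abstract comes from.
obs = bimodal_observation([0.12, -0.53, 0.88], goal_index=2, num_goals=4)
```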


2018 ◽  
Vol 3 (3) ◽  
pp. 276-286 ◽  
Author(s):  
Yihuan Zhang ◽  
Qin Lin ◽  
Jun Wang ◽  
Sicco Verwer ◽  
John M. Dolan

Author(s):  
Joseph K. Muguro ◽  
Pringgo Widyo Laksono ◽  
Yuta Sasatake ◽  
Kojiro Matsushita ◽  
Minoru Sasaki

As Automated Driving Systems (ADS) technology is assimilated into the market, the driver's role will shift to a supervisory one. A key consideration is the driver's engagement in a secondary task that keeps the driver/user in the control loop. The paper's objective is to monitor driver engagement with a game and identify any impact the task has on hazard recognition. We designed a driving simulation using Unity3D and incorporated three conditions: No-task, AR-Video, and AR-Game. The driver engaged in an AR object interception game while monitoring the road for threatening road scenarios. The results showed less than a 1 s difference between the mean response time of the gaming task (mean = 2.55 s, std = 0.1002 s) and the no-task condition (mean = 2.55 s, std = 0.1002 s). Game scoring followed three profiles/phases: learning, saturation, and decline. From these profiles, it is possible to quantify and infer drivers' engagement with the game task. The paper proposes an alternative form of monitoring that also has utility for the user, i.e., entertainment. Further AR-Game experiments focusing on a real-world car environment will be performed to confirm the performance, following the recommendations derived from the current test.
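The per-condition comparison of hazard-response times can be sketched as follows; the sample values are made-up examples, not the study's data.

```python
import math

# Sketch: summarizing hazard-response times per task condition, as one
# might for the No-task vs. AR-Game comparison. Times are invented
# examples, not measurements from the study.
def mean_std(samples):
    """Sample mean and (Bessel-corrected) standard deviation."""
    m = sum(samples) / len(samples)
    var = sum((x - m) ** 2 for x in samples) / (len(samples) - 1)
    return m, math.sqrt(var)

no_task = [2.4, 2.6, 2.5, 2.7]   # hypothetical response times in seconds
ar_game = [2.5, 2.6, 2.4, 2.8]

m1, s1 = mean_std(no_task)
m2, s2 = mean_std(ar_game)
gap = abs(m1 - m2)  # the abstract reports a gap under 1 second
```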


2012 ◽  
Vol 24 (1) ◽  
pp. 219-225 ◽  
Author(s):  
Bo Sun ◽  
Michitaka Kameyama

Highly safe intelligent vehicles can significantly reduce vehicle accidents by warning drivers of dangerous situations. Trajectory estimation of target vehicles is expected to be used in such vehicles, but it requires estimating driver intent, which is not directly detectable by sensors. The Bayesian Network (BN) construction we propose for trajectory estimation defines driver intent hierarchically to simplify the BN as much as possible. Causal driver-intent relationships are discussed in a way that reflects real-world motion, which raises the quality of driver-intent estimation and increases inference performance. Experimental learning based on 2D image processing is presented to acquire the probabilistic BN parameters.
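The core inference step of such a BN can be illustrated with a toy two-node network, intent → observed lateral motion; the conditional probability values below are illustrative assumptions, not learned parameters from the paper.

```python
# Toy Bayesian-network sketch for driver-intent inference:
# intent -> observed lateral motion. CPT values are illustrative only.
P_intent = {"keep_lane": 0.7, "change_lane": 0.3}
P_motion_given_intent = {
    ("drift_left", "keep_lane"): 0.1,
    ("drift_left", "change_lane"): 0.8,
}

def posterior_intent(motion):
    """P(intent | motion) via Bayes' rule over the two intents."""
    joint = {i: P_motion_given_intent[(motion, i)] * P_intent[i]
             for i in P_intent}
    z = sum(joint.values())  # normalizing constant P(motion)
    return {i: p / z for i, p in joint.items()}

# Observing a leftward drift shifts belief toward a lane change.
post = posterior_intent("drift_left")
```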


2018 ◽  
Vol 85 (12) ◽  
pp. 764-778
Author(s):  
Benjamin Naujoks ◽  
Torsten Engler ◽  
Martin Michaelis ◽  
Thorsten Luettel ◽  
Hans-Joachim Wuensche

Measurement uncertainty plays an important role in every real-world perception task. This paper describes the influence of measurement uncertainty on state estimation, the main part of Dynamic Object Tracking, whose basis is the probabilistic Bayesian filtering approach. Practical examples and tools are presented for choosing the correct filter implementation, including measurement models and their conversion, for different kinds of sensors.
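How measurement uncertainty enters Bayesian state estimation can be seen in a minimal one-dimensional Kalman filter, the simplest instance of the Bayesian filtering approach mentioned above; the noise values are illustrative.

```python
# Minimal 1-D Kalman filter sketch: the measurement noise r directly
# controls how strongly a new measurement corrects the state estimate.
# All numeric values are illustrative.
def kalman_step(x, p, z, q, r):
    # Predict: constant-state model; process noise q inflates variance.
    x_pred, p_pred = x, p + q
    # Update: the Kalman gain trades predicted variance against r.
    k = p_pred / (p_pred + r)
    x_new = x_pred + k * (z - x_pred)
    p_new = (1.0 - k) * p_pred
    return x_new, p_new

x, p = 0.0, 1.0                      # initial state and variance
x, p = kalman_step(x, p, z=1.0, q=0.01, r=0.5)
# After the update the estimate moves toward the measurement and the
# variance shrinks below its predicted value.
```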


2021 ◽  
Vol 10 (5) ◽  
pp. 336
Author(s):  
Jian Yu ◽  
Meng Zhou ◽  
Xin Wang ◽  
Guoliang Pu ◽  
Chengqi Cheng ◽  
...  

Forecasting the motion of surrounding vehicles is necessary for an autonomous driving system operating in complex traffic. Trajectory prediction gives vehicles foresight and helps them make more sensible decisions. However, traditional models treat trajectory prediction as a simple sequence prediction task, and ignoring inter-vehicle interaction and environmental influence degrades these models on real-world datasets. To address this issue, we propose a novel Dynamic and Static Context-aware Attention Network, named DSCAN. DSCAN utilizes an attention mechanism to dynamically decide which surrounding vehicles are more important at the moment. We also equip DSCAN with a constraint network to incorporate static environment information. We conducted a series of experiments on a real-world dataset, and the results demonstrate the effectiveness of our model. Moreover, the present study suggests that the attention mechanism and static constraints enhance the prediction results.
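The dynamic weighting of surrounding vehicles can be sketched as softmax attention over neighbor encodings; the scores and encodings below are toy values, not the paper's learned features.

```python
import math

# Sketch: softmax attention over surrounding-vehicle encodings, the kind
# of dynamic weighting an attention-based predictor uses to decide which
# neighbors matter most right now. All values are toy examples.
def attention_pool(scores, encodings):
    """Softmax the relevance scores, then weight-sum the encodings."""
    exps = [math.exp(s) for s in scores]
    z = sum(exps)
    weights = [e / z for e in exps]
    dim = len(encodings[0])
    pooled = [sum(w * enc[d] for w, enc in zip(weights, encodings))
              for d in range(dim)]
    return weights, pooled

# Three neighbors; the first has the highest relevance score, so it
# dominates the pooled context vector.
weights, ctx = attention_pool(
    [2.0, 0.5, -1.0],
    [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]],
)
```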


2021 ◽  
Vol 70 ◽  
pp. 1517-1555
Author(s):  
Anirban Santara ◽  
Sohan Rudra ◽  
Sree Aditya Buridi ◽  
Meha Kaushik ◽  
Abhishek Naik ◽  
...  

Autonomous driving has emerged as one of the most active areas of research, as it promises to make transportation safer and more efficient than ever before. Most real-world autonomous driving pipelines perform perception, motion planning and action in a loop. In this work we present MADRaS, an open-source multi-agent driving simulator for use in the design and evaluation of motion planning algorithms for autonomous driving. Given a start and a goal state, the task of motion planning is to solve for a sequence of position, orientation and speed values that navigates between the states while adhering to safety constraints. These constraints often involve the behaviors of other agents in the environment. MADRaS provides a platform for constructing a wide variety of highway and track driving scenarios where multiple driving agents can be trained for motion planning tasks using reinforcement learning and other machine learning algorithms. MADRaS is built on TORCS, an open-source car-racing simulator. TORCS offers a variety of cars with different dynamic properties and driving tracks with different geometries and surfaces. MADRaS inherits these functionalities from TORCS and introduces support for multi-agent training, inter-vehicular communication, noisy observations, stochastic actions, and custom traffic cars whose behaviors can be programmed to simulate challenging traffic conditions encountered in the real world. MADRaS can be used to create driving tasks whose complexities can be tuned along eight axes in well-defined steps, which makes it particularly suited for curriculum and continual learning. MADRaS is lightweight and provides a convenient OpenAI Gym interface for independent control of each car. Apart from the primitive steering-acceleration-brake control mode of TORCS, MADRaS offers a hierarchical track-position and speed control mode that can potentially be used to achieve better generalization. MADRaS uses a UDP-based client-server model in which the simulation engine is the server and each client is a driving agent. MADRaS runs each agent as a parallel process via multiprocessing for efficiency and integrates well with popular reinforcement learning libraries such as RLlib. We show experiments on single- and multi-agent reinforcement learning with and without curriculum learning.
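The Gym-style per-agent control loop mentioned above follows the standard reset/step pattern. The stand-in environment class below is purely illustrative (MADRaS's actual environment names, observation layout, and action semantics are not specified here); only the loop structure reflects the OpenAI Gym convention.

```python
# Sketch of a Gym-style control loop for one driving agent. FakeDrivingEnv
# is a stand-in, NOT the MADRaS API: its observation layout (track
# position, speed) and action layout (steering, acceleration) are assumed
# for illustration only.
class FakeDrivingEnv:
    """Minimal reset/step environment following the Gym convention."""
    def reset(self):
        return [0.0, 0.0]  # e.g. track position, speed

    def step(self, action):
        obs = [action[0] * 0.1, action[1]]
        reward, done, info = 1.0, False, {}
        return obs, reward, done, info

env = FakeDrivingEnv()
obs = env.reset()
total_reward = 0.0
for _ in range(5):
    action = [0.0, 0.2]  # keep straight, gentle acceleration
    obs, reward, done, info = env.step(action)
    total_reward += reward
    if done:
        break
```

In a multi-agent setting, each such loop would run in its own client process talking to the simulation server, which is what the UDP-based client-server split enables.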


2021 ◽  
Vol 12 (1) ◽  
pp. 281
Author(s):  
Jaesung Jang ◽  
Hyeongyu Lee ◽  
Jong-Chan Kim

For safe autonomous driving, deep neural network (DNN)-based perception systems play essential roles, and training and validating them requires a vast amount of driving images manually collected and labeled with ground truth (GT). After observing the high cost and unavoidable human errors of manual GT generation, this study presents an open-source automatic GT generation tool, CarFree, based on the Carla autonomous driving simulator. With it, we aim to democratize the daunting task of object detection dataset generation in particular, which has been feasible only for big companies or institutes due to its high cost. CarFree comprises (i) a data extraction client that automatically collects relevant information from the Carla simulator's server and (ii) post-processing software that produces precise 2D bounding boxes of vehicles and pedestrians on the gathered driving images. Our evaluation results show that CarFree can generate a considerable amount of realistic driving images along with their GTs in a reasonable time. Moreover, using synthesized training images with artificially made unusual weather and lighting conditions, which are difficult to obtain in real-world driving scenarios, CarFree significantly improves object detection accuracy in the real world, particularly in harsh environments. With CarFree, we expect its users to generate a variety of object detection datasets in hassle-free ways.
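The post-processing step of deriving 2D boxes from simulator state can be sketched generically: project an object's 3D corner points through a pinhole camera and take the min/max of the image coordinates. The camera intrinsics and corner coordinates below are illustrative assumptions, not CarFree's actual parameters.

```python
# Sketch of the generic idea behind automatic 2-D GT generation: project
# a vehicle's 3-D corners with a pinhole camera, then box the extremes.
# Intrinsics (fx, fy, cx, cy) and corner coordinates are illustrative.
def project(point, fx=800.0, fy=800.0, cx=400.0, cy=300.0):
    """Pinhole projection of a camera-frame 3-D point to pixel (u, v)."""
    x, y, z = point
    return (fx * x / z + cx, fy * y / z + cy)

def bbox_2d(corners_3d):
    """Axis-aligned 2-D box (u_min, v_min, u_max, v_max) over projections."""
    pts = [project(p) for p in corners_3d]
    us = [u for u, _ in pts]
    vs = [v for _, v in pts]
    return (min(us), min(vs), max(us), max(vs))

# Four corners of a toy vehicle 10-12 m in front of the camera.
box = bbox_2d([(-1.0, -0.5, 10.0), (1.0, 0.5, 10.0),
               (-1.0, 0.5, 12.0), (1.0, -0.5, 12.0)])
```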

