Virtual Scenario Simulation and Modeling Framework in Autonomous Driving Simulators

Electronics ◽  
2021 ◽  
Vol 10 (6) ◽  
pp. 694
Author(s):  
Mingyun Wen ◽  
Jisun Park ◽  
Yunsick Sung ◽  
Yong Woon Park ◽  
Kyungeun Cho

Recently, virtual environment-based techniques to train sensor-based autonomous driving models have been widely employed due to their efficiency. However, a simulated virtual environment must be highly similar to its real-world counterpart to ensure the applicability of such models to actual autonomous vehicles. Although advances in hardware and three-dimensional graphics engine technology have enabled the creation of realistic virtual driving environments, the myriad of scenarios occurring in the real world can be simulated only to a limited extent. In this study, a scenario simulation and modeling framework that simulates the behavior of objects that may be encountered while driving is proposed to address this problem. This framework maximizes the number of scenarios, their types, and the driving experience in a virtual environment. Furthermore, a simulator was implemented and employed to evaluate the performance of the proposed framework.

2006 ◽  
Vol 532-533 ◽  
pp. 1128-1131
Author(s):  
Yan Fei Liang ◽  
Han Wu He ◽  
De Tao Zheng ◽  
Xin Chen

This paper establishes a framework for a decision-making model system for autonomous vehicles. A virtual scene was obtained using virtual reality environment modeling technology. The driving performance of autonomous vehicles in the real environment was simulated by that of virtual vehicles in the virtual environment. The influence of driver aggressiveness on lane-changing performance was studied by taking human factors into account, and several longitudinal driving modes were classified and discussed. A cubic B-spline function was used to plan paths by interpolating characteristic points. The driving framework and the driving models described in this paper address the problem of building more realistic traffic at the microscopic level in driving simulators. Autonomous vehicles based on this system can serve as the vehicles in simulators and help to design traffic or to verify vehicle performance.
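The abstract above mentions path planning with a cubic B-spline over characteristic points. As a minimal sketch of that idea (the segment-matrix form of a uniform cubic B-spline; the paper's exact formulation and knot handling are not given), the function names and sampling scheme below are illustrative assumptions:

```python
def cubic_bspline_segment(p0, p1, p2, p3, t):
    """Evaluate one uniform cubic B-spline segment at t in [0, 1].

    Uses the standard basis-matrix form; the four points are consecutive
    control (characteristic) points of the planned path.
    """
    b0 = (1 - t) ** 3 / 6.0
    b1 = (3 * t**3 - 6 * t**2 + 4) / 6.0
    b2 = (-3 * t**3 + 3 * t**2 + 3 * t + 1) / 6.0
    b3 = t**3 / 6.0
    return tuple(b0 * a + b1 * b + b2 * c + b3 * d
                 for a, b, c, d in zip(p0, p1, p2, p3))

def plan_path(waypoints, samples_per_segment=10):
    """Sample a smooth path from a list of characteristic 2D points."""
    path = []
    for i in range(len(waypoints) - 3):
        for s in range(samples_per_segment):
            t = s / samples_per_segment
            path.append(cubic_bspline_segment(*waypoints[i:i + 4], t))
    return path
```

Because the cubic B-spline basis functions sum to one, collinear control points produce a path that stays on the line, and the curve is C2-continuous across segments, which suits vehicle trajectories.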


2015 ◽  
Vol 27 (6) ◽  
pp. 660-670 ◽  
Author(s):  
Udara Eshan Manawadu ◽  
Masaaki Ishikawa ◽  
Mitsuhiro Kamezaki ◽  
Shigeki Sugano ◽  
...  

[Figure: Driving simulator]
Intelligent passenger vehicles with autonomous capabilities will be commonplace on our roads in the near future. These vehicles will reshape the existing relationship between the driver and vehicle. Therefore, to create a new type of rewarding relationship, it is important to analyze when drivers prefer autonomous vehicles to manually-driven (conventional) vehicles. This paper documents a driving simulator-based study conducted to identify the preferences and individual driving experiences of novice and experienced drivers of autonomous and conventional vehicles under different traffic and road conditions. We first developed a simplified driving simulator that could connect to different driver-vehicle interfaces (DVI). We then created virtual environments consisting of scenarios and events that drivers encounter in real-world driving, and we implemented fully autonomous driving. We then conducted experiments to clarify how the autonomous driving experience differed for the two groups. The results showed that experienced drivers opt for conventional driving overall, mainly due to the flexibility and driving pleasure it offers, while novices tend to prefer autonomous driving due to its inherent ease and safety. A further analysis indicated that drivers preferred to use both autonomous and conventional driving methods interchangeably, depending on the road and traffic conditions.


2005 ◽  
Vol 32 (5) ◽  
pp. 777-785 ◽  
Author(s):  
Ebru Cubukcu ◽  
Jack L Nasar

Discrepancies between perceived and actual distance may affect people's spatial behavior. In a previous study, Nasar, using self-reports of behavior, found that segmentation (measured through the number of buildings) along the route affected the choice of parking garage and the path from the parking garage to a destination. We recreated that same environment in a three-dimensional virtual environment and conducted a test to see whether the same factors emerged under these more controlled conditions and whether spatial behavior in the virtual environment accurately reflected behavior in the real environment. The results confirmed similar patterns of response in the virtual and real environments. This supports the use of virtual reality as a tool for predicting behavior in the real world and confirms that increases in segmentation are related to increases in perceived distance.


Micromachines ◽  
2020 ◽  
Vol 11 (5) ◽  
pp. 456 ◽  
Author(s):  
Dingkang Wang ◽  
Connor Watkins ◽  
Huikai Xie

In recent years, Light Detection and Ranging (LiDAR) has been drawing extensive attention both in academia and industry because of the increasing demand for autonomous vehicles. LiDAR is believed to be the crucial sensor for autonomous driving and flying, as it can provide high-density point clouds with accurate three-dimensional information. This review presents an extensive overview of Microelectromechanical Systems (MEMS) scanning mirrors specifically for applications in LiDAR systems. MEMS mirror-based laser scanners have unrivalled advantages in terms of size, speed and cost over other types of laser scanners, making them ideal for LiDAR in a wide range of applications. A figure of merit (FoM) is defined for MEMS mirrors in LiDAR scanners in terms of aperture size, field of view (FoV) and resonant frequency. Various MEMS mirrors based on different actuation mechanisms are compared using the FoM. Finally, a preliminary assessment of off-the-shelf MEMS scanned LiDAR systems is given.
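The abstract defines an FoM over aperture size, FoV and resonant frequency. A minimal sketch of how such a metric could be used to rank mirrors, assuming the simple product form (the review's exact weighting is not reproduced here, and the example mirror figures are hypothetical):

```python
def lidar_mems_fom(aperture_mm, fov_deg, resonant_freq_hz):
    """Figure of merit for a MEMS scanning mirror in a LiDAR scanner.

    Assumed here as the plain product of aperture size, field of view,
    and resonant frequency; larger is better for ranging, coverage, and
    point-cloud density respectively.
    """
    return aperture_mm * fov_deg * resonant_freq_hz

# Hypothetical mirrors with different actuation mechanisms.
mirrors = {
    "electrostatic_A": lidar_mems_fom(1.0, 20.0, 20000.0),
    "electromagnetic_B": lidar_mems_fom(5.0, 40.0, 1400.0),
}
best = max(mirrors, key=mirrors.get)
```

A product-form FoM captures the trade-off the abstract implies: a large aperture and wide FoV usually come at the cost of resonant frequency, so no single parameter should dominate the comparison.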


Author(s):  
Heungseok Chae ◽  
Yonghwan Jeong ◽  
Hojun Lee ◽  
Jongcherl Park ◽  
Kyongsu Yi

This article describes the design, implementation, and evaluation of an active lane change control algorithm for autonomous vehicles with human factor considerations. Lane changes need to be performed considering both driver acceptance and safety with respect to surrounding vehicles. Therefore, autonomous driving systems need to be designed based on an analysis of human driving behavior. In this article, manual driving characteristics are investigated using real-world driving test data. In lane change situations, interactions with surrounding vehicles were mainly investigated, and safety indices were developed through kinematic analysis. A safety indices-based lane change decision and control algorithm has been developed. In order to improve safety, stochastic predictions of both the ego vehicle and surrounding vehicles have been conducted with consideration of sensor noise and model uncertainties. The desired driving mode is determined to cope with all lane change situations on the highway. To obtain the desired reference and constraints, motion planning for lane changes has been designed taking stochastic prediction-based safety indices into account. A stochastic model predictive control with constraints has been adopted to determine the vehicle control inputs: the steering angle and the longitudinal acceleration. The proposed active lane change algorithm has been successfully implemented on an autonomous vehicle and evaluated via real-world driving tests. Safe and comfortable lane changes in high-speed driving on highways have been demonstrated using our autonomous test vehicle.
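To illustrate the idea of a stochastic prediction-based safety index for lane change decisions, here is a minimal sketch. The index, the growth model for prediction uncertainty, and all thresholds are hypothetical stand-ins, not the paper's formulation:

```python
import math

def gap_safety_index(rel_pos_m, rel_vel_mps, horizon_s,
                     pos_sigma_m, dt=0.1, k_sigma=2.0):
    """Hypothetical safety index for one surrounding vehicle.

    Predicts the longitudinal gap over the horizon with a constant-velocity
    model, shrinks it by k_sigma standard deviations of a position error
    that grows with prediction time, and returns the worst-case value.
    """
    worst = float("inf")
    steps = int(horizon_s / dt)
    for i in range(steps + 1):
        t = i * dt
        gap = abs(rel_pos_m + rel_vel_mps * t)
        margin = k_sigma * pos_sigma_m * math.sqrt(1.0 + t)
        worst = min(worst, gap - margin)
    return worst

def lane_change_safe(indices, min_gap_m=5.0):
    """Permit the lane change only if every vehicle's index clears the threshold."""
    return all(idx >= min_gap_m for idx in indices)
```

In the paper's setting this decision layer would feed the motion planner, whose reference and constraints are then tracked by the stochastic model predictive controller.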


Author(s):  
Matthew Anderson ◽  
Damian Schofield ◽  
Lisa Dethridge

As computer-driven display technology becomes more powerful and accessible, the online, virtual art gallery may provide a new platform for artists to exhibit their work. Virtual exhibits may afford opportunities for both the artist and the patron to display, view and perhaps purchase various digital art forms. The aim of this paper is to examine user interaction with digital artworks inside a virtual gallery space. We use a range of criteria to describe conditions for both the designer and the user of such a virtual display system. The paper describes a number of experiments where users interacted with a virtual art gallery and were then extensively interviewed and surveyed. Measures of what Manovich (2002) describes as ‘immersion' and what Slater et al (1994) would term ‘presence' are observed in relation to the user experience. The gallery is a three-dimensional graphic digital construction built in Second Life. The experiment aimed to describe and delineate the user's perception and navigation of space and compares their perception of art objects in the virtual environment to digital objects in a ‘real world' gallery. The data collected in this study provide the basis for a discussion of how users may perceive and navigate virtual objects and spaces in an online environment such as a game or art gallery. The results may be of use to those designing interactive three-dimensional environments.


Sensors ◽  
2021 ◽  
Vol 21 (2) ◽  
pp. 371
Author(s):  
Shiwu Li ◽  
Mengyuan Huang ◽  
Mengzhu Guo ◽  
Miao Yu

Speed judgment is a vital component of autonomous driving perception systems. Human drivers are able to evaluate their speed on the basis of driving experience. However, driverless automobiles cannot autonomously evaluate the suitability of their speed from external environmental factors such as the surrounding conditions and traffic flow. To solve this problem, this study introduces the parameter of overtaking frequency (OTF), based on the state of the traffic flow on both sides of the lane, to reflect the difference between the speed of a driverless automobile and that of its surrounding traffic. In addition, a speed evaluation algorithm based on the long short-term memory (LSTM) model is proposed. To train the LSTM model, OTF was extracted as the first observation variable, and the characteristic parameters of the vehicle's longitudinal motion, together with comparison parameters against the leading vehicle, were used as the second group of observation variables. The algorithm judges velocity using a hierarchical method. We conducted road tests using real vehicles, and the algorithm was verified on the collected data, achieving a model accuracy of 93%. These results show that introducing OTF as one of the observed variables supports the accuracy of the speed judgment algorithm.
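A minimal sketch of the feature side of this pipeline: computing an overtaking frequency from adjacent-lane passing events and assembling it with the longitudinal-motion and lead-vehicle features into an observation vector for the LSTM. The event encoding and feature names are assumptions for illustration, not the paper's definitions:

```python
def overtaking_frequency(events, window_s):
    """OTF as a signed rate of adjacent-lane passing events.

    events: +1 when a neighbouring vehicle overtakes the ego vehicle,
            -1 when the ego vehicle overtakes a neighbour
    (hypothetical encoding; positive OTF suggests the ego vehicle is slow
    relative to the surrounding traffic).
    """
    return sum(events) / window_s

def build_observation(events, window_s, ego_speed, ego_accel,
                      lead_gap, lead_rel_speed):
    """Assemble the two groups of observed variables for the LSTM:
    OTF first, then longitudinal-motion and lead-vehicle comparison features."""
    return [overtaking_frequency(events, window_s),
            ego_speed, ego_accel, lead_gap, lead_rel_speed]
```

A sequence of such observation vectors, one per time window, would form the input sequence to the LSTM classifier that judges speed suitability hierarchically.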


2019 ◽  
Vol 4 (28) ◽  
pp. eaaw0863 ◽  
Author(s):  
W. Li ◽  
C. W. Pan ◽  
R. Zhang ◽  
J. P. Ren ◽  
Y. X. Ma ◽  
...  

Simulation systems have become essential to the development and validation of autonomous driving (AD) technologies. The prevailing state-of-the-art approach for simulation uses game engines or high-fidelity computer graphics (CG) models to create driving scenarios. However, creating CG models and vehicle movements (the assets for simulation) remain manual tasks that can be costly and time consuming. In addition, CG images still lack the richness and authenticity of real-world images, and using CG images for training leads to degraded performance. Here, we present our augmented autonomous driving simulation (AADS). Our formulation augmented real-world pictures with a simulated traffic flow to create photorealistic simulation images and renderings. More specifically, we used LiDAR and cameras to scan street scenes. From the acquired trajectory data, we generated plausible traffic flows for cars and pedestrians and composed them into the background. The composite images could be resynthesized with different viewpoints and sensor models (camera or LiDAR). The resulting images are photorealistic, fully annotated, and ready for training and testing of AD systems from perception to planning. We explain our system design and validate our algorithms with a number of AD tasks from detection to segmentation and predictions. Compared with traditional approaches, our method offers scalability and realism. Scalability is particularly important for AD simulations, and we believe that real-world complexity and diversity cannot be realistically captured in a virtual environment. Our augmented approach combines the flexibility of a virtual environment (e.g., vehicle movements) with the richness of the real world to allow effective simulation.


Author(s):  
Rupak Majumdar ◽  
Aman Mathur ◽  
Marcus Pirron ◽  
Laura Stegner ◽  
Damien Zufferey

Abstract Systematic testing of autonomous vehicles operating in complex real-world scenarios is a difficult and expensive problem. We present Paracosm, a framework for writing systematic test scenarios for autonomous driving simulations. Paracosm allows users to programmatically describe complex driving situations with specific features, e.g., road layouts and environmental conditions, as well as reactive temporal behaviors of other cars and pedestrians. A systematic exploration of the state space, both for visual features and for reactive interactions with the environment, is made possible. We define a notion of test coverage for parameter configurations based on combinatorial testing and low dispersion sequences. Using fuzzing on parameter configurations, our automatic test generator can maximize coverage of various behaviors and find problematic cases. Through empirical evaluations, we demonstrate the capabilities of Paracosm in programmatically modeling parameterized test environments and in finding problematic scenarios.
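The coverage notion above rests on low-dispersion sequences over the parameter space. As a minimal sketch of that ingredient (the Halton sequence, a standard low-dispersion construction; Paracosm's actual sequence and coverage metric may differ), the helper names below are illustrative:

```python
def halton(index, base):
    """One coordinate of the Halton low-dispersion sequence (index >= 1)."""
    result, f = 0.0, 1.0
    while index > 0:
        f /= base
        result += f * (index % base)
        index //= base
    return result

def sample_configs(n, param_ranges):
    """Generate n parameter configurations that fill the space evenly.

    Each dimension uses a distinct prime base and is scaled from [0, 1)
    to its (lo, hi) range, e.g. fog density, number of pedestrians.
    """
    primes = [2, 3, 5, 7, 11, 13]
    return [[lo + halton(i, primes[d]) * (hi - lo)
             for d, (lo, hi) in enumerate(param_ranges)]
            for i in range(1, n + 1)]
```

Unlike uniform random sampling, such sequences avoid clustering, so a fixed test budget covers the parameterized scenario space with bounded gaps, which is what makes a coverage guarantee possible.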


2018 ◽  
Vol 66 (9) ◽  
pp. 745-751
Author(s):  
Lukas Schneider ◽  
Michael Hafner ◽  
Uwe Franke

Abstract Autonomous vehicles as well as sophisticated driver assistance systems use stereo vision to perceive their environment in 3D. At least two million 3D points will be delivered by next-generation automotive stereo vision systems. In order to cope with this huge amount of data in real time, we developed a medium-level representation, named the Stixel world. This representation condenses the relevant scene information by three orders of magnitude. Since traffic scenes are dominated by planar horizontal and vertical surfaces, our representation approximates the three-dimensional scene by means of thin planar rectangles called Stixels. This survey paper summarizes the progress of the Stixel world. The evolution started with a rather simple representation based on a flat-world assumption. A major breakthrough was achieved by introducing deep learning, which allows the incorporation of rich semantic information. In its most recent form, the Stixel world encodes geometric, semantic and motion cues and is capable of handling even the steepest roads in San Francisco.
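The data-reduction idea behind the Stixel world can be sketched in miniature: collapse an image column of per-pixel depths into a few vertical runs of near-constant depth. This toy run-grouping stands in for the actual probabilistic segmentation, and the tolerance parameter is an assumption:

```python
def column_to_stixels(depths, tol=0.5):
    """Compress one image column of per-pixel depths into Stixel-like runs.

    Returns (top_row, bottom_row, mean_depth) tuples: thin vertical
    segments of near-constant depth, replacing many 3D points with a few
    rectangles (a minimal sketch, not the full Stixel inference).
    """
    stixels = []
    start = 0
    for row in range(1, len(depths) + 1):
        if row == len(depths) or abs(depths[row] - depths[start]) > tol:
            run = depths[start:row]
            stixels.append((start, row - 1, sum(run) / len(run)))
            start = row
    return stixels
```

Applied per column across the image, this is how millions of stereo points condense into thousands of Stixels, the "three orders of magnitude" reduction the abstract describes.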

