Construction and Verification of a High-Precision Base Map for an Autonomous Vehicle Monitoring System

2019 ◽  
Vol 8 (11) ◽  
pp. 501
Author(s):  
Sungil Ham ◽  
Junhyuck Im ◽  
Minjun Kim ◽  
Kuk Cho

For autonomous driving, a control system that supports precise road maps is required to monitor the operation status of autonomous vehicles in the research stage. Such a system is also required for research related to automobile engineering, sensors, and artificial intelligence. Google Maps and other map services are limited to providing map support at 20 levels of high-resolution precision. An ideal map should include information on roads, autonomous vehicles, and Internet of Things (IoT) facilities that support autonomous driving. The aim of this study was to design a map suitable for the control of autonomous vehicles in Gyeonggi Province in Korea. This work was part of the project “Building a Testbed for Pilot Operations of Autonomous Vehicles”. The map design scheme was redesigned for an autonomous vehicle control system based on the “Easy Map” developed by the National Geography Center, which provides a free design schema. In addition, a vector-based precision map, including roads, sidewalks, and road markings, was produced to provide content suitable for the 20 precision levels. A hybrid map that combines the vector layer of the road and an unmanned aerial vehicle (UAV) orthographic map was designed to facilitate vehicle identification. A control system that can display vehicle and sensor information based on the designed map was developed, and an environment to monitor the operation of autonomous vehicles was established. Finally, the high-precision map was verified through an accuracy test and driving data from autonomous vehicles.
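
The hybrid map combines a vector road layer with a UAV orthophoto at the high-precision zoom level. As a rough illustration of that idea only (not the project's actual schema or control system), the sketch below overlays an assumed UAV orthomosaic and a toy vector road layer using the folium library; the file path, coordinates, and layer contents are placeholders.

```python
# Minimal sketch of a hybrid map view: a UAV orthophoto raster overlaid with a
# vector road layer at zoom level 20. Path, bounds, and geometry are assumptions.
import folium

ortho_path = "uav_orthophoto.png"                      # hypothetical UAV orthomosaic (must exist)
bounds = [[37.2390, 127.1770], [37.2410, 127.1800]]    # [[south, west], [north, east]]

road_layer = {  # tiny inline vector layer standing in for the precision road map
    "type": "FeatureCollection",
    "features": [{
        "type": "Feature",
        "properties": {"kind": "lane_centerline"},
        "geometry": {"type": "LineString",
                     "coordinates": [[127.1775, 37.2392], [127.1795, 37.2408]]},
    }],
}

m = folium.Map(location=[37.2400, 127.1785], zoom_start=20, max_zoom=22)
folium.raster_layers.ImageOverlay(image=ortho_path, bounds=bounds, opacity=0.8,
                                  name="UAV orthophoto").add_to(m)
folium.GeoJson(road_layer, name="vector road layer").add_to(m)
folium.LayerControl().add_to(m)
m.save("hybrid_map.html")   # open in a browser to inspect the combined layers
```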

2021 ◽  
Vol 34 (1) ◽  
Author(s):  
Ze Liu ◽  
Yingfeng Cai ◽  
Hai Wang ◽  
Long Chen

Abstract: Radar and LiDAR are two environmental sensors commonly used in autonomous vehicles. LiDAR is accurate in determining objects’ positions but significantly less accurate than radar at measuring their velocities; conversely, radar is more accurate at measuring object velocities but, owing to its lower spatial resolution, less accurate at determining their positions. To compensate for the low detection accuracy, incomplete target attributes, and poor environmental adaptability of a single sensor such as radar or LiDAR, this paper proposes an effective method for high-precision detection and tracking of the targets surrounding an autonomous vehicle. By employing the Unscented Kalman Filter, radar and LiDAR information is effectively fused to achieve high-precision detection of the position and speed of targets around the autonomous vehicle. Finally, real-vehicle tests were carried out under various driving environment scenarios. The experimental results show that the proposed sensor fusion method can effectively detect and track the vehicle’s peripheral targets with high accuracy. Compared with a single sensor, it has obvious advantages and can improve the intelligence level of autonomous cars.
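
As a minimal sketch of the fusion scheme described above (not the authors' implementation), the snippet below uses the filterpy library's Unscented Kalman Filter with a constant-velocity state and switches between a LiDAR position measurement model and a radar range/bearing/range-rate model. The noise covariances and measurement values are illustrative assumptions.

```python
# Sketch: UKF fusion of LiDAR (position) and radar (range, bearing, range rate)
# on a constant-velocity state [px, py, vx, vy]. Noise values are assumptions.
import numpy as np
from filterpy.kalman import UnscentedKalmanFilter, MerweScaledSigmaPoints

def fx(x, dt):                       # constant-velocity process model
    px, py, vx, vy = x
    return np.array([px + vx * dt, py + vy * dt, vx, vy])

def hx_lidar(x):                     # LiDAR observes position only
    return x[:2]

def hx_radar(x):                     # radar observes range, bearing, range rate
    px, py, vx, vy = x
    rng = np.hypot(px, py)
    return np.array([rng, np.arctan2(py, px), (px * vx + py * vy) / max(rng, 1e-6)])

dt = 0.05
points = MerweScaledSigmaPoints(n=4, alpha=1e-3, beta=2.0, kappa=0.0)
ukf = UnscentedKalmanFilter(dim_x=4, dim_z=2, dt=dt, fx=fx, hx=hx_lidar, points=points)
ukf.x = np.array([10.0, 5.0, -2.0, 0.5])            # initial state guess
ukf.Q = np.eye(4) * 0.1                             # assumed process noise
R_lidar = np.diag([0.15**2, 0.15**2])               # LiDAR: accurate position
R_radar = np.diag([0.5**2, 0.02**2, 0.1**2])        # radar: accurate range rate

# One predict/update cycle per measurement, passing R and hx for the sensor at hand:
ukf.predict()
ukf.update(np.array([9.9, 5.1]), R=R_lidar, hx=hx_lidar)
ukf.predict()
ukf.update(np.array([11.1, 0.47, -1.6]), R=R_radar, hx=hx_radar)
print("fused state [px, py, vx, vy]:", ukf.x)
```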


Electronics ◽  
2021 ◽  
Vol 10 (15) ◽  
pp. 1753
Author(s):  
Pablo Marin-Plaza ◽  
David Yagüe ◽  
Francisco Royo ◽  
Miguel Ángel de Miguel ◽  
Francisco Miguel Moreno ◽  
...  

The expansion of electric vehicles in urban areas has paved the way toward the era of autonomous vehicles, improving performance in smart cities and addressing related driving problems. This field of research opens up immediate applications in tourism areas, airports, or business centres by greatly improving transport efficiency and reducing repetitive human tasks. This project addresses the problems arising from autonomous driving, such as vehicle localization, low coverage of 4G/5G and GPS, detection of the road and navigable zones including intersections, detection of static and dynamic obstacles, longitudinal and lateral control, and cybersecurity aspects. The approaches proposed in this article are sufficient to solve the operational design problems related to deploying autonomous vehicles in special locations such as rough environments, steep slopes, and unstructured terrain without traffic rules.


Author(s):  
László Orgován ◽  
Tamás Bécsi ◽  
Szilárd Aradi

Autonomous vehicles, or self-driving cars, are prevalent nowadays; many vehicle manufacturers and other tech companies are trying to develop them. One major goal of self-driving algorithms is to perform manoeuvres safely, even when some anomaly arises. To solve these kinds of complex issues, artificial intelligence and machine learning methods are used. One such motion-planning problem arises when the tires lose their grip on the road, a situation an autonomous vehicle should be able to handle. Thus, this paper presents an autonomous drifting algorithm using reinforcement learning. The algorithm is based on a model-free learning algorithm, Twin Delayed Deep Deterministic Policy Gradients (TD3). The model is trained on six different tracks in CARLA, a simulator developed specifically for autonomous driving systems.
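
The training setup can be sketched with an off-the-shelf TD3 implementation. The snippet below uses stable-baselines3; since the paper's actual CARLA environment wrapper, reward shaping, and hyperparameters are not given here, a runnable gymnasium placeholder (Pendulum-v1) stands in for the drifting tracks.

```python
# Sketch of TD3 training with stable-baselines3. The CARLA drifting environment
# is replaced by a placeholder env; swap in a gymnasium-compatible CARLA wrapper
# exposing steering/throttle actions and track observations to reproduce the setup.
import gymnasium as gym
import numpy as np
from stable_baselines3 import TD3
from stable_baselines3.common.noise import NormalActionNoise

env = gym.make("Pendulum-v1")        # placeholder for the CARLA track environment

n_actions = env.action_space.shape[0]
action_noise = NormalActionNoise(mean=np.zeros(n_actions),
                                 sigma=0.1 * np.ones(n_actions))  # exploration noise

model = TD3("MlpPolicy", env, action_noise=action_noise,
            learning_rate=1e-3, buffer_size=200_000, verbose=1)
model.learn(total_timesteps=50_000)  # train the drifting policy
model.save("td3_drift_policy")
```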


2020 ◽  
Author(s):  
Ze Liu ◽  
Feng Ying Cai

Abstract: Radar and LiDAR are two environmental sensors commonly used in autonomous vehicles. LiDAR is accurate in determining objects’ positions but significantly less accurate at measuring their velocities. However, radar is more accurate at measuring object velocities but less accurate at determining their positions, as it has a lower spatial resolution. To compensate for the low detection accuracy, incomplete target attributes, and poor environmental adaptability of single sensors such as radar and LiDAR, we propose an effective method for high-precision detection and tracking of the surrounding targets of autonomous vehicles. By employing the Unscented Kalman Filter, radar and LiDAR information is effectively fused to achieve high-precision detection of the position and speed of targets around the autonomous vehicle. Finally, we verified the algorithm with real-vehicle tests in a variety of driving environments. The experimental results show that the proposed sensor fusion method can effectively detect and track the vehicle’s peripheral targets with high accuracy. Compared with a single sensor, it has obvious advantages and can improve the intelligence level of driverless cars.


2020 ◽  
Vol 2020 ◽  
pp. 1-10
Author(s):  
Derek Hungness ◽  
Raj Bridgelall

The adoption of connected and autonomous vehicles (CAVs) is in its infancy. Therefore, very little is known about their potential impacts on traffic. Meanwhile, researchers and market analysts predict a wide range of possibilities about their potential benefits and the timing of their deployments. Planners traditionally use various types of travel demand models to forecast future traffic conditions. However, such models do not yet integrate any expected impacts from CAV deployments. Consequently, many long-range transportation plans do not yet account for their eventual deployment. To address some of these uncertainties, this work modified an existing model for Madison, Wisconsin. To compare outcomes, the authors used identical parameter changes and simulation scenarios for a model of Gainesville, Florida. Both models show that with increasing levels of CAV deployment, both the vehicle miles traveled and the average congestion speed will increase. However, there are some important exceptions due to differences in the road network layout, geospatial features, sociodemographic factors, land-use, and access to transit.


Sensors ◽  
2021 ◽  
Vol 21 (20) ◽  
pp. 6733
Author(s):  
Min-Joong Kim ◽  
Sung-Hun Yu ◽  
Tong-Hyun Kim ◽  
Joo-Uk Kim ◽  
Young-Min Kim

Today, a lot of research on autonomous driving technology is being conducted, and various vehicles with autonomous driving functions, such as ACC (adaptive cruise control), are being released. The autonomous vehicle recognizes obstacles ahead by fusing data from various sensors, such as lidar, radar, and camera sensors. As the number of vehicles equipped with such autonomous driving functions increases, securing safety and reliability is a big issue. Recently, Mobileye proposed the RSS (responsibility-sensitive safety) model, a white-box mathematical model, to secure the safety of autonomous vehicles and clarify responsibility in the case of an accident. In this paper, a method of applying the RSS model to a variable focus function camera that can cover the recognition range of a lidar sensor and a radar sensor with a single camera sensor is considered. The variables of the RSS model suitable for the variable focus function camera were defined, their values were determined, and the safe distances for each velocity were derived by applying the determined values. In addition, after accounting for the time required to obtain the data and the time required to change the focal length of the camera, it was confirmed that the response time obtained using the derived safe distance was valid.
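
For reference, the RSS longitudinal safe distance follows the published formula of Shalev-Shwartz et al.; the sketch below computes it per ego velocity. The response time and acceleration bounds used here are illustrative assumptions, not the variable values derived in this paper for the variable focus function camera.

```python
# Sketch of the RSS longitudinal safe-distance formula. Parameter values
# (response time rho and acceleration bounds) are illustrative assumptions.
def rss_safe_distance(v_rear, v_front, rho=0.5,
                      a_accel_max=3.0, a_brake_min=4.0, a_brake_max=8.0):
    """Minimum longitudinal gap (m) so the rear (ego) vehicle avoids a collision
    if the front vehicle brakes at a_brake_max while the ego reacts after rho s
    (accelerating at most a_accel_max) and then brakes at least a_brake_min."""
    v_reacted = v_rear + rho * a_accel_max
    d = (v_rear * rho
         + 0.5 * a_accel_max * rho ** 2
         + v_reacted ** 2 / (2.0 * a_brake_min)
         - v_front ** 2 / (2.0 * a_brake_max))
    return max(0.0, d)

# Safe distance per ego velocity (m/s) against a stopped obstacle:
for v in (10.0, 20.0, 30.0):
    print(f"v = {v:4.1f} m/s -> safe distance = {rss_safe_distance(v, 0.0):6.1f} m")
```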


Sensors ◽  
2020 ◽  
Vol 20 (17) ◽  
pp. 4703
Author(s):  
Yookhyun Yoon ◽  
Taeyeon Kim ◽  
Ho Lee ◽  
Jahnghyon Park

For driving safely and comfortably, the long-term trajectory prediction of surrounding vehicles is essential for autonomous vehicles. To handle the uncertain nature of trajectory prediction, deep-learning-based approaches have been proposed previously. An on-road vehicle must obey road geometry, i.e., it should run within the constraints of the road shape. Herein, we present a novel road-aware trajectory prediction method which leverages high-definition maps with a deep learning network. We developed a data-efficient learning framework for the trajectory prediction network in the curvilinear coordinate system of the road, together with a lane assignment for the surrounding vehicles. We then proposed a novel output-constrained sequence-to-sequence trajectory prediction network to incorporate the structural constraints of the road. Our method uses these structural constraints as prior knowledge for the prediction network: they are not only used as an input to the trajectory prediction network, but are also included in the constrained loss function of the maneuver recognition network. Accordingly, the proposed method can predict a feasible and realistic driver intention and trajectory. Our method has been evaluated using a real traffic dataset, and the results show that it is data-efficient and can predict reasonable trajectories at merging sections.
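
A key ingredient above is working in the road-aligned curvilinear coordinate system. The following minimal sketch (an illustration, not the authors' framework) projects a vehicle position onto a polyline lane centerline to obtain the arc length s and signed lateral offset d.

```python
# Sketch: convert a Cartesian position into curvilinear coordinates (s, d)
# relative to a polyline lane centerline. The centerline below is a toy example.
import numpy as np

def to_curvilinear(point, centerline):
    """Project `point` (x, y) onto polyline `centerline` (N x 2); return (s, d)."""
    p = np.asarray(point, dtype=float)
    c = np.asarray(centerline, dtype=float)
    seg = c[1:] - c[:-1]                                  # segment vectors
    seg_len = np.linalg.norm(seg, axis=1)
    t = np.clip(np.einsum("ij,ij->i", p - c[:-1], seg) / seg_len**2, 0.0, 1.0)
    proj = c[:-1] + t[:, None] * seg                      # closest point per segment
    dist = np.linalg.norm(p - proj, axis=1)
    i = int(np.argmin(dist))                              # nearest segment
    s = seg_len[:i].sum() + t[i] * seg_len[i]             # arc length to projection
    normal = np.array([-seg[i, 1], seg[i, 0]]) / seg_len[i]
    d = float(np.dot(p - proj[i], normal))                # signed lateral offset
    return s, d

centerline = np.array([[0.0, 0.0], [50.0, 0.0], [100.0, 10.0]])   # toy lane centerline
print(to_curvilinear((60.0, 3.0), centerline))                    # -> (s, d)
```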


2015 ◽  
Vol 27 (6) ◽  
pp. 660-670 ◽  
Author(s):  
Udara Eshan Manawadu ◽  
Masaaki Ishikawa ◽  
Mitsuhiro Kamezaki ◽  
Shigeki Sugano ◽  
...  

[Figure: Driving simulator]
Intelligent passenger vehicles with autonomous capabilities will be commonplace on our roads in the near future. These vehicles will reshape the existing relationship between the driver and vehicle. Therefore, to create a new type of rewarding relationship, it is important to analyze when drivers prefer autonomous vehicles to manually-driven (conventional) vehicles. This paper documents a driving simulator-based study conducted to identify the preferences and individual driving experiences of novice and experienced drivers of autonomous and conventional vehicles under different traffic and road conditions. We first developed a simplified driving simulator that could connect to different driver-vehicle interfaces (DVI). We then created virtual environments consisting of scenarios and events that drivers encounter in real-world driving, and we implemented fully autonomous driving. We then conducted experiments to clarify how the autonomous driving experience differed for the two groups. The results showed that experienced drivers opt for conventional driving overall, mainly due to the flexibility and driving pleasure it offers, while novices tend to prefer autonomous driving due to its inherent ease and safety. A further analysis indicated that drivers preferred to use both autonomous and conventional driving methods interchangeably, depending on the road and traffic conditions.


2017 ◽  
Vol 29 (4) ◽  
pp. 660-667 ◽  
Author(s):  
Yoshihiro Takita ◽  

This paper discusses the generated trajectory of an extended lateral guided sensor steering mechanism (SSM) method for a steered autonomous vehicle moving in a real world environment. In a previous study, an extended SSM was applied to the Smart Dump 9 and AR Chair robots for following preset waypoints on a map. These studies showed only the schematic idea of the method; the precise performance of the generated trajectory was not shown. This paper compares the Smart Dump 9 robot with a newly developed AR Skipper robot; these robots participated in the Tsukuba Challenge in 2015 and 2016, respectively. Finally, experimental data from the Tsukuba Challenge 2016 demonstrates the advantages of the extended SSM and developed control system.
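
The SSM trajectory generation itself is not reproduced here; purely as a generic illustration of steering toward preset waypoints, the sketch below implements standard pure-pursuit steering (a different, textbook method, not the extended SSM) with an assumed wheelbase and steering limit.

```python
# Generic waypoint-following illustration: pure-pursuit steering toward the next
# waypoint. Wheelbase and steering limit are assumed values, not the robots'.
import math

def pure_pursuit_steering(pose, waypoint, wheelbase=1.2, max_steer=0.6):
    """pose = (x, y, heading in rad); waypoint = (x, y); returns steering angle (rad)."""
    x, y, yaw = pose
    dx, dy = waypoint[0] - x, waypoint[1] - y
    # Rotate the target into the vehicle frame and compute the pursuit curvature.
    local_x = math.cos(-yaw) * dx - math.sin(-yaw) * dy
    local_y = math.sin(-yaw) * dx + math.cos(-yaw) * dy
    ld2 = local_x ** 2 + local_y ** 2
    curvature = 2.0 * local_y / ld2 if ld2 > 1e-6 else 0.0
    steer = math.atan(wheelbase * curvature)
    return max(-max_steer, min(max_steer, steer))

print(pure_pursuit_steering((0.0, 0.0, 0.0), (5.0, 1.0)))   # small left steer
```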


In this paper, we propose a method to automatically segment the road area from input road images to support the safe driving of autonomous vehicles. In the proposed method, a semantic segmentation network (SSN) is trained by using a deep learning method, and the road area is segmented by utilizing the SSN. The SSN uses weights initialized from the VGG-16 network to create the SegNet network. To shorten the learning time and obtain results quickly, the classes are simplified so that the trained SegNet CNN network divides the image into two classes: the road area and the non-road area. To improve the accuracy of the road segmentation result, the boundary line of the road region with a straight-line component is detected through the Hough transform, and this is combined with the segmentation result of the SSN to delineate the accurate road region. The proposed method can support safe driving by automatically classifying the road area while the autonomous vehicle is in operation and applying the result to a road-area departure warning system.
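
The Hough-transform refinement step can be sketched as a post-processing pass over the segmentation output. The snippet below assumes OpenCV and a binary road mask produced by the SSN; the file names and thresholds are placeholders, not the paper's settings.

```python
# Sketch: detect straight boundary-line candidates on the edges of a binary road
# mask and overlay them for inspection. Mask/output file names are assumptions.
import cv2
import numpy as np

mask = cv2.imread("road_mask.png", cv2.IMREAD_GRAYSCALE)   # SSN output: road=255, non-road=0
if mask is None:
    raise FileNotFoundError("road_mask.png not found; provide a binary road mask")

edges = cv2.Canny(mask, 50, 150)                           # boundary of the segmented road
lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=80,
                        minLineLength=60, maxLineGap=20)

refined = cv2.cvtColor(mask, cv2.COLOR_GRAY2BGR)
if lines is not None:
    for x1, y1, x2, y2 in lines[:, 0]:
        cv2.line(refined, (x1, y1), (x2, y2), (0, 0, 255), 2)  # straight boundary candidates
cv2.imwrite("road_boundary_overlay.png", refined)
```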

