A Connected Autonomous Vehicle Testbed: Capabilities, Experimental Processes and Lessons Learned

Automation ◽  
2020 ◽  
Vol 1 (1) ◽  
pp. 17-32
Author(s):  
Thomas Kent ◽  
Anthony Pipe ◽  
Arthur Richards ◽  
Jim Hutchinson ◽  
Wolfgang Schuster

VENTURER was one of the first three UK government-funded research and innovation projects on Connected Autonomous Vehicles (CAVs) and was conducted predominantly in the South West region of the country. A series of increasingly complex scenarios conducted in an urban setting were used to: (i) evaluate the technology created as part of the project; (ii) systematically assess participant responses to CAVs; and (iii) inform the development of potential insurance models and legal frameworks. Developing this understanding contributed key steps towards facilitating the deployment of CAVs on UK roads. This paper describes the VENTURER Project trials and their objectives, and details some of the key technologies used. Importantly, we introduce some of the informative challenges that were overcome and the subsequent project and technological lessons learned, in the hope of helping others plan and execute future CAV research. The project successfully integrated several technologies crucial to CAV development. These included: a decision-making system using behaviour trees to make high-level decisions; a pilot-control system to smoothly and comfortably turn plans into throttle and steering actuation; sensing and perception systems to make sense of raw sensor data; and inter-CAV wireless communication capable of demonstrating vehicle-to-vehicle communication of potential hazards. The closely coupled technology integration, testing and participant-focused trial schedule led to a greatly improved understanding of the engineering and societal barriers that CAV development faces. From a behavioural standpoint, the importance of reliability and repeatability far outweighs the need for novel trajectories; and while sensor-to-perception capabilities are critical, the process of verification and validation is extremely time-consuming. Additionally, the added capabilities that can be leveraged from inter-CAV communications show the potential for improved road safety. Importantly, to effectively conduct human factors experiments in the CAV sector under consistent and repeatable conditions, one needs to define a scripted and stable set of scenarios that uses reliable equipment and a controllable environmental setting. This requirement can often be at odds with making significant technology developments, and if both are part of a project's goals then they may need to be separated from each other.
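As an aside on the decision-making layer: a behaviour tree composes simple condition and action nodes with sequence and selector logic, which is what makes high-level decisions both readable and repeatable. The following minimal Python sketch illustrates the general pattern only; the node names and the hazard-stop example are hypothetical and not taken from the VENTURER system.

```python
# Minimal behaviour-tree sketch for high-level driving decisions.
# Node names and conditions are illustrative only; the VENTURER
# project's actual tree structure is not described in the abstract.

class Node:
    def tick(self, blackboard):
        raise NotImplementedError

class Condition(Node):
    def __init__(self, predicate):
        self.predicate = predicate
    def tick(self, bb):
        return "SUCCESS" if self.predicate(bb) else "FAILURE"

class Action(Node):
    def __init__(self, effect):
        self.effect = effect
    def tick(self, bb):
        self.effect(bb)
        return "SUCCESS"

class Sequence(Node):
    """Ticks children in order; fails on the first failure."""
    def __init__(self, *children):
        self.children = children
    def tick(self, bb):
        for child in self.children:
            status = child.tick(bb)
            if status != "SUCCESS":
                return status
        return "SUCCESS"

class Selector(Node):
    """Ticks children in order; succeeds on the first success."""
    def __init__(self, *children):
        self.children = children
    def tick(self, bb):
        for child in self.children:
            status = child.tick(bb)
            if status == "SUCCESS":
                return status
        return "FAILURE"

# Hypothetical tree: stop for a hazard, otherwise keep the lane.
tree = Selector(
    Sequence(Condition(lambda bb: bb["hazard_ahead"]),
             Action(lambda bb: bb.update(command="STOP"))),
    Action(lambda bb: bb.update(command="FOLLOW_LANE")),
)

blackboard = {"hazard_ahead": True}
tree.tick(blackboard)
print(blackboard["command"])  # STOP
```

The selector tries the hazard branch first and falls back to lane following, mirroring the priority ordering typical of behaviour-tree controllers.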

Sensors ◽  
2021 ◽  
Vol 21 (11) ◽  
pp. 3850
Author(s):  
Bastien Vincke ◽  
Sergio Rodriguez Florez ◽  
Pascal Aubert

Emerging technologies in the context of Autonomous Vehicles (AV) have drastically changed the industry's qualification requirements. AVs incorporate complex perception and control systems, and teaching the skills needed to analyze such systems is a very difficult process that existing solutions do not facilitate. In this study, our efforts are devoted to proposing an open-source scale-model vehicle platform designed for teaching the fundamental concepts of autonomous vehicle technologies, adapted to undergraduate and technical students. The proposed platform is as realistic as possible in order to present and address all of the fundamental concepts associated with AVs. It includes all on-board components of a stand-alone system, covering both low- and high-level functions. These functionalities are detailed and a proof-of-concept prototype is presented. A set of experiments was carried out, and the results obtained with this prototype validate the usability of the model for the analysis of time- and energy-constrained systems, as well as distributed embedded perception systems.


2016 ◽  
Author(s):  
Georg Tanzmeister

This dissertation is focused on the environment model for automated vehicles. A reliable model of the local environment, available in real time, is a prerequisite for almost any useful activity performed by a robot, such as planning motions to fulfill tasks. It is particularly important in safety-critical applications, such as autonomous vehicles in regular traffic. In this thesis, novel concepts for local mapping, tracking, the detection of principal moving directions, cost evaluations in motion planning, and road course estimation have been developed. An object- and sensor-independent grid representation forms the basis of all presented methods, enabling a generic and robust estimation of the environment. All approaches have been evaluated with sensor data from real road scenarios, and their performance has been experimentally demonstrated with a test vehicle. ...
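For readers unfamiliar with object- and sensor-independent grid representations, the following minimal log-odds occupancy grid sketch illustrates the general idea; the increments, grid size, and example ray are assumed values for illustration, not details of the dissertation's methods.

```python
import numpy as np

# Minimal log-odds occupancy grid: a generic, sensor-independent
# representation, not the dissertation's specific algorithms.
L_OCC, L_FREE = 0.85, -0.4   # log-odds increments (assumed values)

grid = np.zeros((100, 100))  # log-odds, 0 = unknown (p = 0.5)

def update_cell(grid, i, j, hit):
    """Fuse one range observation into cell (i, j)."""
    grid[i, j] += L_OCC if hit else L_FREE

def probability(grid):
    """Convert log-odds back to occupancy probabilities."""
    return 1.0 - 1.0 / (1.0 + np.exp(grid))

# Example: a lidar return marks cell (50, 60) occupied, and the
# cells along the ray before it as free.
update_cell(grid, 50, 60, hit=True)
for j in range(60):
    update_cell(grid, 50, j, hit=False)
print(probability(grid)[50, 58:61])  # free, free, occupied
```

Because any range sensor can be reduced to hit/free evidence per cell, the same grid serves lidar, radar, or stereo input alike, which is what makes the representation sensor-independent.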


2020 ◽  
Vol 14 (1) ◽  
pp. 164-173
Author(s):  
Yair Wiseman

Background: An autonomous vehicle will go unaccompanied to park itself in a remote parking lot, without a driver or a passenger inside. Unlike traditional vehicles, an autonomous vehicle can drop passengers off near any location. Afterward, instead of cruising for a nearby free parking space, the vehicle can be automatically parked in a remote parking lot, which can be in a rural fringe of the city where inexpensive land is more readily available. Objective: The study aimed to avoid mistakes in vehicle identification by the automatic identification device. Methods: It is proposed to back up the license plate identification procedure with three distinct identification techniques, RFID, Bluetooth and OCR, with the aim of considerably reducing identification mistakes. Results: RFID is the most reliable identification device, but Bluetooth and OCR can improve its reliability further. Conclusion: A very high level of vehicle identification reliability is achievable. Parking lots for autonomous vehicles can be very efficient and low-priced; the critical difficulty is to automatically ensure that the autonomous vehicle is correctly identified at the gate.
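A minimal sketch of the kind of redundancy the paper describes: three independent readers vote on the vehicle's identity at the gate. The 2-of-3 agreement rule and the function below are illustrative assumptions, not the paper's exact fusion logic.

```python
from collections import Counter

# Sketch of redundant vehicle identification at a parking-lot gate,
# fusing RFID, Bluetooth, and OCR reads. The 2-of-3 agreement rule
# is an assumption for illustration.
def identify_vehicle(rfid_id, bluetooth_id, ocr_plate):
    """Return an ID only when at least two independent readers agree."""
    votes = Counter(x for x in (rfid_id, bluetooth_id, ocr_plate)
                    if x is not None)
    if not votes:
        return None
    candidate, count = votes.most_common(1)[0]
    return candidate if count >= 2 else None  # reject, flag for review

# RFID and OCR agree; a flaky Bluetooth read is outvoted.
print(identify_vehicle("AB12CDE", "XY99ZZZ", "AB12CDE"))  # AB12CDE
# Only one usable read and a mismatch: send to manual check.
print(identify_vehicle("AB12CDE", None, "AB12CDF"))       # None
```

Requiring agreement between independent modalities is what lets a fallible reader like OCR still raise overall reliability rather than lower it.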


2021 ◽  
Vol 1 (2) ◽  
Author(s):  
Asher Elmquist ◽  
Radu Serban ◽  
Dan Negrut

Computer simulation can be a useful tool when designing robots expected to operate independently in unstructured environments. In this context, one needs to simulate the dynamics of the robot’s mechanical system, the environment in which the robot operates, and the sensors which facilitate the robot’s perception of the environment. Herein, we focus on the sensing simulation task by presenting a virtual sensing framework built alongside an open-source, multi-physics simulation platform called Chrono. This framework supports camera, lidar, GPS, and IMU simulation. We discuss their modeling as well as the noise and distortion implemented to increase the realism of the synthetic sensor data. We close with two examples that show the sensing simulation framework at work: one pertains to a reduced-scale autonomous vehicle and the second is related to a vehicle driven in a digital replica of a Madison neighborhood.
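To give a flavor of what noise and distortion modeling for synthetic sensor data can look like, here is a minimal Python sketch with additive Gaussian noise on a GPS fix and a bias-drifting IMU; the model forms and parameters are assumptions for illustration and do not reflect Chrono's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative noise models for synthetic sensor data; parameters
# and model forms are assumptions, not Chrono's implementation.
def noisy_gps(true_pos, sigma=0.8):
    """Additive white Gaussian noise on a 3D GPS fix (metres)."""
    return true_pos + rng.normal(0.0, sigma, size=3)

class NoisyIMU:
    """Accelerometer with white noise plus a slowly drifting bias."""
    def __init__(self, sigma=0.02, bias_walk=1e-4):
        self.sigma, self.bias_walk = sigma, bias_walk
        self.bias = np.zeros(3)
    def read(self, true_accel):
        self.bias += rng.normal(0.0, self.bias_walk, size=3)
        return true_accel + self.bias + rng.normal(0.0, self.sigma, size=3)

imu = NoisyIMU()
print(noisy_gps(np.array([10.0, 5.0, 0.0])))
print(imu.read(np.array([0.0, 0.0, 9.81])))
```

Corrupting ideal simulator outputs with such models is what keeps perception algorithms trained or tested on synthetic data from overfitting to unrealistically clean inputs.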


Author(s):  
Sai Rajeev Devaragudi ◽  
Bo Chen

This paper presents a Model Predictive Control (MPC) approach for longitudinal and lateral control of autonomous vehicles with a real-time local path planning algorithm. A heuristic graph search method (the A* algorithm) combined with piecewise Bezier curve generation is implemented for obstacle avoidance in autonomous driving applications. Constant time headway control is implemented for longitudinal motion to track lead vehicles and maintain a constant time gap. MPC is used to control the steering angle and the tractive force of the autonomous vehicle. Furthermore, a new method of developing Advanced Driver Assistance Systems (ADAS) algorithms and vehicle controllers using Model-In-the-Loop (MIL) testing is explored with the use of PreScan®. With PreScan®, various traffic scenarios are modeled and the sensor data are simulated by physics-based sensor models, which are fed to the controller for data processing and motion planning. Obstacle detection and collision avoidance are demonstrated using the presented MPC controller.
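Constant time headway control keeps the desired gap to the lead vehicle proportional to the ego vehicle's speed, d_des = d_0 + T·v. A minimal sketch of such a spacing policy follows; the gains and parameters are assumed for illustration and are not the values used in the paper.

```python
# Constant time headway spacing policy: the desired gap grows
# linearly with ego speed. All constants below are illustrative.
T_HEADWAY = 1.5      # desired time gap (s)
D_STANDSTILL = 5.0   # minimum standstill distance (m)
K_GAP, K_SPEED = 0.3, 0.5  # feedback gains (assumed)

def headway_accel(gap, ego_speed, lead_speed):
    """Acceleration command tracking the constant-time-gap policy."""
    desired_gap = D_STANDSTILL + T_HEADWAY * ego_speed
    gap_error = gap - desired_gap
    speed_error = lead_speed - ego_speed
    return K_GAP * gap_error + K_SPEED * speed_error

# Ego at 20 m/s, 30 m behind a lead vehicle doing 18 m/s:
print(headway_accel(gap=30.0, ego_speed=20.0, lead_speed=18.0))
# -2.5 m/s^2: brake gently to open the gap toward 35 m
```

In the paper's architecture this spacing target would feed the MPC, which resolves it into steering and tractive-force commands subject to vehicle constraints.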


Sensors ◽  
2019 ◽  
Vol 19 (22) ◽  
pp. 5035 ◽  
Author(s):  
Son ◽  
Jeong ◽  
Lee

When blind and deaf people are passengers in fully autonomous vehicles, an intuitive and accurate visualization screen should be provided for the deaf, and an audification system with speech-to-text (STT) and text-to-speech (TTS) functions should be provided for the blind. However, such systems do not convey the fault self-diagnosis information or the instrument cluster information that indicates the current state of the vehicle while driving. This paper proposes an audification and visualization system (AVS) of an autonomous vehicle for blind and deaf people based on deep learning to solve this problem. The AVS consists of three modules. The data collection and management module (DCMM) stores and manages the data collected from the vehicle. The audification conversion module (ACM) has a speech-to-text submodule (STS) that recognizes a user’s speech and converts it to text data, and a text-to-wave submodule (TWS) that converts text data to voice. The data visualization module (DVM) visualizes the collected sensor data, fault self-diagnosis data, etc., and arranges the visualized data according to the size of the vehicle’s display. The experiment shows that adjusting visualization graphic components in on-board diagnostics (OBD) was approximately 2.5 times faster than doing so on a cloud server. In addition, the overall computational time of the AVS was approximately 2 ms less than that of the existing instrument cluster. Therefore, because the AVS proposed in this paper enables blind and deaf people to select only what they want to hear and see, it reduces transmission overload and greatly increases the safety of the vehicle. If the AVS is introduced in a real vehicle, it can prevent accidents involving disabled and other passengers in advance.


Sensors ◽  
2019 ◽  
Vol 19 (20) ◽  
pp. 4357 ◽  
Author(s):  
Babak Shahian Jahromi ◽  
Theja Tulabandhula ◽  
Sabri Cetin

Many sensor fusion frameworks have been proposed in the literature, using different combinations and configurations of sensors and fusion methods. Most of this work focuses on improving accuracy; the feasibility of implementing these frameworks in an autonomous vehicle is less explored. Some fusion architectures perform very well in lab conditions using powerful computational resources; however, in real-world applications they cannot be deployed on an embedded edge computer due to their high cost and computational demands. We propose a new hybrid multi-sensor fusion pipeline configuration that performs environment perception for autonomous vehicles, including road segmentation, obstacle detection, and tracking. This fusion framework uses a proposed encoder-decoder based Fully Convolutional Neural Network (FCNx) and a traditional Extended Kalman Filter (EKF) nonlinear state estimator, together with a configuration of camera, LiDAR, and radar sensors that is best suited to each fusion method. The goal of this hybrid framework is to provide a cost-effective, lightweight, modular, and robust (in case of a sensor failure) fusion system. The FCNx algorithm improves road detection accuracy over benchmark models while maintaining real-time efficiency suitable for an autonomous vehicle’s embedded computer. Tested on over 3K road scenes, our fusion algorithm outperforms baseline benchmark networks across a variety of environment scenarios. Moreover, the algorithm was implemented in a vehicle and tested with actual sensor data collected from it, performing real-time environment perception.
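As an illustration of the traditional state-estimation half of such a hybrid pipeline, the sketch below runs a minimal Kalman predict/update cycle on a constant-velocity obstacle track (with a linear model, the EKF reduces to this form). All matrices and noise levels are assumptions, not the authors' formulation.

```python
import numpy as np

# Minimal Kalman tracker for a constant-velocity obstacle; an
# illustration of EKF-style state estimation, not the paper's model.
dt = 0.1
F = np.array([[1, 0, dt, 0], [0, 1, 0, dt],
              [0, 0, 1, 0], [0, 0, 0, 1]], dtype=float)  # transition
H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], dtype=float)  # position measured
Q = 0.01 * np.eye(4)   # process noise (assumed)
R = 0.25 * np.eye(2)   # measurement noise, e.g. fused lidar/radar (assumed)

x = np.zeros(4)        # state: [px, py, vx, vy]
P = np.eye(4)          # state covariance

def step(x, P, z):
    # Predict forward one time step.
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with a position measurement z.
    y = z - H @ x
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ y
    P = (np.eye(4) - K @ H) @ P
    return x, P

for z in [np.array([1.0, 0.5]), np.array([1.2, 0.6])]:
    x, P = step(x, P, z)
print(x)  # estimated position and velocity
```

Keeping the tracking stage this cheap is one way a fusion pipeline stays within an embedded computer's budget while the neural network handles the heavier segmentation work.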


2020 ◽  
Vol 10 (16) ◽  
pp. 5655
Author(s):  
Miguel Ángel de Miguel ◽  
Francisco Miguel Moreno ◽  
Pablo Marín-Plaza ◽  
Abdulla Al-Kaff ◽  
Martín Palos ◽  
...  

This work presents a novel platform for autonomous vehicle technology research for the insurance sector. The platform has been collaboratively developed by the insurance company MAPFRE-CESVIMAP, Universidad Carlos III de Madrid and INSIA of the Universidad Politécnica de Madrid. The high-level architecture and several autonomous vehicle technologies developed within this collaboration are introduced and described. Computer vision technologies for environment perception, V2X communication capabilities, enhanced localization, human–machine interaction and self-awareness are among the technologies that have been developed and tested. Several use cases that validate these technologies are also presented, including public demonstrations, technology tests and international self-driving competitions.


2021 ◽  
Vol 102 (4) ◽  
Author(s):  
Ranulfo Plutarco Bezerra Neto ◽  
Kazunori Ohno ◽  
Thomas Westfechtel ◽  
Shotaro Kojima ◽  
Kento Yamada ◽  
...  

Autonomous vehicles require high-level semantic maps, which capture the activities of pedestrians and cars, to ensure safe navigation. High-level semantics can be obtained from mobile probe sensor data. Analyzing pedestrian trajectories obtained from mobile probe data is an effective approach to avoiding collisions between autonomous vehicles and pedestrians, and such analyses can reveal new information such as pedestrian behaviors that violate traffic regulations. However, pedestrian trajectories obtained from mobile probe data are significantly sparse and noisy, making it challenging to analyze pedestrian activity. To address this issue, we propose an approach that fuses multiple days of data and uses a graph-based representation to estimate the flow of pedestrians from sparse, noisy mobile probe data. Fusing multiple daily data sets mitigates the sparseness; a pedestrian graph is then created to enhance regional coverage by connecting the sparse data points that indicate pedestrian flow. The proposed approach successfully recovered pedestrian trajectory data from the sparse and noisy input, and made it possible to identify locations where pedestrians tend to cross the street by analyzing the pedestrian flow. The results indicate that 83% of well-known street-crossing locations corresponded with those extracted using the proposed approach. Furthermore, a high-level semantic map of street-crossing regions along a 1-km road is presented. The trajectory information obtained using the proposed approach is expected to be essential for understanding different scenarios of interaction between individuals and autonomous vehicles.
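A rough sketch of the fuse-then-link idea: observations from several days are pooled, then nearby points are connected into a graph so that sparse per-day data still yields connected flow structure. The linking radius, data layout, and brute-force neighbor search below are illustrative assumptions, not the paper's method.

```python
import math
from collections import defaultdict

LINK_RADIUS = 5.0  # metres (assumed)

def build_pedestrian_graph(daily_points):
    """daily_points: list (one entry per day) of (x, y) observations."""
    # Fuse all days into one point set to mitigate per-day sparsity.
    points = [p for day in daily_points for p in day]
    graph = defaultdict(set)
    # Link every pair of points closer than LINK_RADIUS (O(n^2),
    # fine for a sketch; real data would need a spatial index).
    for i, (xi, yi) in enumerate(points):
        for j, (xj, yj) in enumerate(points[i + 1:], start=i + 1):
            if math.hypot(xi - xj, yi - yj) <= LINK_RADIUS:
                graph[i].add(j)
                graph[j].add(i)
    return points, graph

day1 = [(0.0, 0.0), (4.0, 0.0)]
day2 = [(2.0, 1.0), (20.0, 20.0)]
points, graph = build_pedestrian_graph([day1, day2])
print(dict(graph))  # point 3 at (20, 20) stays unlinked: no nearby flow
```

Connected components of such a graph trace where pedestrian flow concentrates, which is the kind of structure one would mine for likely street-crossing locations.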


Sensors ◽  
2020 ◽  
Vol 20 (3) ◽  
pp. 899 ◽  
Author(s):  
Veli Ilci ◽  
Charles Toth

Recent developments in sensor technologies such as Global Navigation Satellite Systems (GNSS), Inertial Measurement Units (IMU), Light Detection and Ranging (LiDAR), radar, and cameras have led to emerging state-of-the-art autonomous systems, such as driverless vehicles or UAS (Unmanned Airborne Systems) swarms. These technologies necessitate accurate object space information about the physical environment around the platform. This information is generally provided by the suitable selection of the sensors, including sensor types and capabilities, the number of sensors, and their spatial arrangement. Since all these sensor technologies have different error sources and characteristics, rigorous sensor modeling is needed to eliminate or mitigate errors and obtain an accurate, reliable, and robust integrated solution. Mobile mapping systems are very similar to autonomous vehicles in their ability to reconstruct the environment around the platform, but they differ substantially in operations and objectives. Mobile mapping vehicles use professional-grade sensors, such as geodetic-grade GNSS, tactical-grade IMU, mobile LiDAR, and metric cameras, and the solution is created in post-processing. In contrast, autonomous vehicles use simple, inexpensive sensors, require real-time operation, and are primarily interested in identifying and tracking moving objects. In this study, the main objective was to assess the performance potential of autonomous vehicle sensor systems for producing high-definition maps, using only Velodyne sensor data to create accurate point clouds; no other sensor data were considered in this investigation. The results confirm that cm-level accuracy can be achieved.

