Development of a Predictive Collision Risk Estimation Scheme for Mixed Traffic

Author(s):  
Je Hong Yoo ◽  
Reza Langari

Driven by the emergence of autonomous and semi-autonomous driving technologies, mixed traffic involving autonomous vehicles and human drivers is of considerable significance. Toward this end, it is necessary to better understand human driving characteristics so as to predict the actions of other cars. In this regard, we develop a basic framework for modeling driver behavior in view of the human ability to predict. Through a game-theoretic estimation of the counterpart’s behavior and the corresponding time evolution of unsafe collision areas, we compute an objective collision model. In turn, we design a human-like predictive perception model of collision with an adjacent vehicle, based on the objective collision model and the driver’s subjective level of safety assurance. Since drivers have different safety requirements, the subjective collision estimate is designed as the region of the objective probabilistic collision prediction that offers less safety than the driver’s own requirement. The region that is subjectively perceived based on the driver’s own safety standard is regarded as a deterministic unsafe region for that driver. That is to say, the subjective perception acts as a collision area with a collision probability of 1, which the driver should avoid while driving. In subsequent work, we will address the design of a controller that avoids this subjective collision estimate.
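The step that turns the objective probabilistic prediction into a driver-specific deterministic unsafe region can be sketched as a simple thresholding operation. This is an illustrative sketch only; the function name, grid representation, and threshold semantics are assumptions, not the paper's implementation:

```python
def subjective_unsafe_region(collision_prob, safety_requirement):
    """Map an objective collision-probability field to a driver's
    subjective (deterministic) unsafe region.

    collision_prob: 2D grid (list of lists) of predicted collision
                    probabilities over a discretized road area.
    safety_requirement: driver-specific minimum acceptable safety
                        level in [0, 1]; higher = more cautious.
    Returns a boolean mask: True where predicted safety
    (1 - collision probability) falls below the driver's requirement,
    i.e. the area the driver treats as certain collision (prob = 1).
    """
    return [[(1.0 - p) < safety_requirement for p in row]
            for row in collision_prob]

# A cautious driver (requirement 0.95) marks more of the grid as
# unsafe than a less demanding one (requirement 0.70).
prob = [[0.01, 0.10],
        [0.40, 0.90]]
print(subjective_unsafe_region(prob, 0.95))
print(subjective_unsafe_region(prob, 0.70))
```

The cautious driver's unsafe region always contains the less cautious driver's, which matches the paper's premise that drivers with stricter safety requirements perceive larger deterministic collision areas.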

Author(s):  
Wenhao Deng ◽  
Skyler Moore ◽  
Jonathan Bush ◽  
Miles Mabey ◽  
Wenlong Zhang

In recent years, researchers from both academia and industry have worked on connected and automated vehicles, and they have made great progress toward bringing them into reality. Compared to automated cars, bicycles are more affordable to daily commuters as well as more environmentally friendly. In terms of the risk posed to pedestrians and motorists, automated bicycles are also much safer than autonomous cars, which opens potential applications in smart cities, rehabilitation, and exercise. The biggest challenge in automating bicycles is the inherent problem of staying balanced. This paper presents a modified electric bicycle that allows real-time monitoring of the roll angle and motor-assisted steering. Stable and robust steering controllers for the bicycle are designed and implemented to achieve self-balancing at different forward speeds. Tests at different speeds have been conducted to verify the effectiveness of the hardware development and controller design. A preliminary design using a control moment gyroscope (CMG) to achieve self-balancing at lower speeds is also presented in this work. This work can serve as a solid foundation for future studies of human-robot interaction and autonomous driving.


Author(s):  
Abasi-amefon O. Affia ◽  
Raimundas Matulevičius ◽  
Rando Tõnisson

Abstract: Autonomous vehicles (AVs) are intelligent information systems that perceive, collect, generate, and disseminate information to improve the knowledge needed to act autonomously and to provide their required services of mobility, safety, and comfort to humans. This paper combines the information systems security risk management (ISSRM) and Operationally Critical Threat, Asset, and Vulnerability Evaluation (OCTAVE Allegro) methods to define and assess AV protected assets, security risks, and countermeasures.


Author(s):  
Jiayuan Dong ◽  
Emily Lawson ◽  
Jack Olsen ◽  
Myounghoon Jeon

Driving agents can provide an effective solution to improve drivers’ trust in, and to manage interactions with, autonomous vehicles. Research has focused on voice agents, while few studies have explored robot agents or compared the two. The present study tested two variables, voice gender and agent embodiment, using conversational scripts. Twenty participants experienced autonomous driving in a simulator under four agent conditions and filled out subjective questionnaires on their perception of each agent. Results showed that participants perceived the voice-only female agent as more likeable, more comfortable, and more competent than the other conditions. Their final preference ranking also favored this agent over the others. Interestingly, eye-tracking data showed that the embodied agents did not add more visual distraction than the voice-only agents. The results are discussed in relation to traditional gender stereotypes, the uncanny valley, and participants’ gender. This study can contribute to the design of in-vehicle agents for autonomous vehicles, and future studies are planned to further identify the underlying mechanisms of user perception of different agents.


Sensors ◽  
2021 ◽  
Vol 21 (11) ◽  
pp. 3783
Author(s):  
Sumbal Malik ◽  
Manzoor Ahmed Khan ◽  
Hesham El-Sayed

Sooner than expected, roads will be populated with a plethora of connected and autonomous vehicles serving diverse mobility needs. Rather than operating stand-alone, vehicles will be required to cooperate and coordinate with each other, referred to as cooperative driving, to execute mobility tasks properly. Cooperative driving leverages Vehicle-to-Vehicle (V2V) and Vehicle-to-Infrastructure (V2I) communication technologies to carry out two cooperative functionalities: (i) cooperative sensing and (ii) cooperative maneuvering. To better equip readers with background knowledge on the topic, we first provide a detailed taxonomy describing the underlying concepts and the various aspects of cooperation in cooperative driving. In this survey, we review current solution approaches to cooperation for autonomous vehicles across various cooperative driving applications, i.e., smart car parking, lane change and merge, intersection management, and platooning. The role and functionality of cooperation become more crucial in platooning use-cases, which is why we provide more detail on platooning and focus on one of its challenges: electing a leader in high-level platooning. We then highlight a crucial range of research gaps and open challenges that need to be addressed before cooperative autonomous vehicles hit the roads. We believe that this survey will assist researchers in better understanding vehicular cooperation, its various scenarios, solution approaches, and challenges.


Author(s):  
Gaojian Huang ◽  
Christine Petersen ◽  
Brandon J. Pitts

Semi-autonomous vehicles still require drivers to occasionally resume manual control. However, drivers of these vehicles may be in different mental states. For example, drivers may be engaged in non-driving-related tasks or may exhibit mind-wandering behavior. Also, monitoring monotonous driving environments can result in passive fatigue. Given the potential for different types of mental states to negatively affect takeover performance, it is critical to understand how mental states affect semi-autonomous takeover. A systematic review was conducted to synthesize the literature on mental states (such as distraction, fatigue, and emotion) and takeover performance. This review focuses specifically on five fatigue studies. Overall, the studies were too few to observe consistent findings, but some suggest that response times to takeover alerts and post-takeover performance may be affected by fatigue. Ultimately, this review may help researchers improve and develop real-time mental state monitoring systems for a wide range of application domains.


2021 ◽  
Vol 11 (13) ◽  
pp. 6016
Author(s):  
Jinsoo Kim ◽  
Jeongho Cho

For autonomous vehicles, it is critical to be aware of the driving environment to avoid collisions and drive safely. The recent evolution of convolutional neural networks has contributed significantly to accelerating the development of object detection techniques that enable autonomous vehicles to handle rapid changes in various driving environments. However, collisions in an autonomous driving environment can still occur due to undetected obstacles and various perception problems, particularly occlusion. Thus, we propose a robust object detection algorithm for environments in which objects are truncated or occluded, employing RGB images and light detection and ranging (LiDAR) bird’s-eye view (BEV) representations. This structure combines independent detection results obtained in parallel through “you only look once” networks, using an RGB image and a height map converted from the BEV representation of the LiDAR point cloud data (PCD). The region proposal for an object is determined via non-maximum suppression, which suppresses the bounding boxes of adjacent regions. A performance evaluation of the proposed scheme was performed using the KITTI vision benchmark suite dataset. The results demonstrate that the detection accuracy achieved by integrating PCD BEV representations is superior to that of an RGB camera alone. In addition, robustness is improved, with detection accuracy significantly enhanced even when target objects are partially occluded from the front view, demonstrating that the proposed algorithm outperforms the conventional RGB-based model.
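The fusion step described above, suppressing overlapping boxes pooled from the two parallel detection branches, can be illustrated with a minimal greedy IoU-based non-maximum suppression routine. This is a generic sketch of the standard technique, not the authors' implementation; the box format and threshold value are assumptions:

```python
def iou(a, b):
    """Intersection-over-union of two boxes in (x1, y1, x2, y2) format."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter > 0 else 0.0

def nms(detections, iou_threshold=0.5):
    """Greedy non-maximum suppression over pooled detections.

    detections: list of (box, score) pairs, e.g. from both the RGB
    branch and the LiDAR-BEV height-map branch.
    Keeps the highest-scoring box in each cluster of overlapping boxes.
    """
    dets = sorted(detections, key=lambda d: d[1], reverse=True)
    kept = []
    for box, score in dets:
        if all(iou(box, kb) < iou_threshold for kb, _ in kept):
            kept.append((box, score))
    return kept

# Two overlapping detections of the same car from the two branches,
# plus one distinct detection elsewhere:
rgb_det = ((10, 10, 50, 50), 0.90)
bev_det = ((12, 11, 52, 49), 0.85)
other   = ((100, 100, 140, 140), 0.70)
print(nms([rgb_det, bev_det, other]))  # keeps rgb_det and other
```

The two branch detections of the same object overlap heavily (IoU ≈ 0.86 here), so only the higher-scoring one survives, which is how duplicate proposals from the camera and LiDAR paths collapse into a single region proposal.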


2020 ◽  
Vol 13 (1) ◽  
pp. 89
Author(s):  
Manuel Carranza-García ◽  
Jesús Torres-Mateo ◽  
Pedro Lara-Benítez ◽  
Jorge García-Gutiérrez

Object detection using remote sensing data is a key task of the perception systems of self-driving vehicles. While many generic deep learning architectures have been proposed for this problem, there is little guidance on their suitability for a particular scenario such as autonomous driving. In this work, we aim to assess the performance of existing 2D detection systems on a multi-class problem (vehicles, pedestrians, and cyclists) with images obtained from the on-board camera sensors of a car. We evaluate several one-stage (RetinaNet, FCOS, and YOLOv3) and two-stage (Faster R-CNN) deep learning meta-architectures under different image resolutions and feature extractors (ResNet, ResNeXt, Res2Net, DarkNet, and MobileNet). These models are trained using transfer learning and compared in terms of both precision and efficiency, with special attention to the real-time requirements of this context. For the experimental study, we use the Waymo Open Dataset, which is the largest existing benchmark. Despite the rising popularity of one-stage detectors, our findings show that two-stage detectors still provide the most robust performance. Faster R-CNN models outperform one-stage detectors in accuracy, while also being more reliable in the detection of minority classes. Faster R-CNN Res2Net-101 achieves the best speed/accuracy tradeoff but needs lower-resolution images to reach real-time speed. Furthermore, the anchor-free FCOS detector is a slightly faster alternative to RetinaNet, with similar precision and lower memory usage.


2022 ◽  
Vol 0 (0) ◽  
Author(s):  
Hannes Weinreuter ◽  
Balázs Szigeti ◽  
Nadine-Rebecca Strelau ◽  
Barbara Deml ◽  
Michael Heizmann

Abstract: Autonomous driving is a promising technology to, among many other aspects, improve road safety. There are, however, several scenarios that are challenging for autonomous vehicles. One of these is unsignalized junctions. There exist scenarios in which there is no clear regulation as to who is allowed to drive first. Instead, communication and cooperation are necessary to resolve such scenarios. This is especially challenging when interacting with human drivers. In this work we focus on unsignalized T-intersections. For that scenario we propose a discrete event system (DES) that is able to handle cooperation with human drivers at a T-intersection with limited visibility and no direct communication. The algorithm is validated in a simulation environment, and its parameters are based on an analysis of typical human behavior at intersections using real-world data.


2017 ◽  
Vol 139 (12) ◽  
pp. S21-S23
Author(s):  
Ross McKenzie ◽  
John McPhee

This article presents an overview of the research and educational programs for connected and autonomous vehicles at the University of Waterloo (UWaterloo). UWaterloo is Canada’s largest engineering school, with 9,500 engineering students and 309 engineering faculty. The University of Waterloo Centre for Automotive Research (WatCAR), which brings together faculty, staff, and students, is contributing to the development of in-vehicle systems education programs for connected and autonomous vehicles (CAVs) at Waterloo. Over 130 Waterloo faculty, 110 of them from engineering, are engaged in WatCAR’s automotive and transportation systems research programs. The school’s CAV efforts leverage WatCAR research expertise in five areas: (1) Connected and Autonomous; (2) Software and Data; (3) Lightweighting and Fabrication; (4) Structure and Safety; and (5) Advanced Powertrain and Emissions. Foundational and operational artificial intelligence expertise from the University of Waterloo Artificial Intelligence Institute complements the autonomous driving efforts, in disciplines that include neural networks, pattern analysis, and machine learning.


Sensors ◽  
2021 ◽  
Vol 21 (20) ◽  
pp. 6733
Author(s):  
Min-Joong Kim ◽  
Sung-Hun Yu ◽  
Tong-Hyun Kim ◽  
Joo-Uk Kim ◽  
Young-Min Kim

Today, a great deal of research on autonomous driving technology is being conducted, and various vehicles with autonomous driving functions, such as ACC (adaptive cruise control), are being released. An autonomous vehicle recognizes obstacles ahead by fusing data from various sensors, such as lidar, radar, and camera sensors. As the number of vehicles equipped with such autonomous driving functions increases, securing safety and reliability becomes a major issue. Recently, Mobileye proposed the RSS (responsibility-sensitive safety) model, a white-box mathematical model, to secure the safety of autonomous vehicles and clarify responsibility in the case of an accident. In this paper, a method of applying the RSS model to a variable-focus camera that can cover the recognition ranges of a lidar sensor and a radar sensor with a single camera sensor is considered. The variables of the RSS model suited to the variable-focus camera were defined, their values were determined, and the safe distances for each velocity were derived by applying the determined values. In addition, considering the time required to obtain the data and the time required to change the focal length of the camera, it was confirmed that the response time obtained using the derived safe distance was valid.
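The safe distances referred to above follow from Mobileye's published RSS longitudinal safe-distance formula, which can be sketched directly. The formula is the standard RSS one; the parameter values in the example are illustrative, not the values determined in the paper:

```python
def rss_safe_distance(v_rear, v_front, rho,
                      a_max_accel, a_min_brake, a_max_brake):
    """Minimum longitudinal safe distance from the RSS model.

    v_rear, v_front : speeds of the rear and front vehicles [m/s]
    rho             : response time of the rear vehicle [s]
    a_max_accel     : max acceleration of the rear vehicle during rho [m/s^2]
    a_min_brake     : minimum braking the rear vehicle is assumed to apply [m/s^2]
    a_max_brake     : maximum braking the front vehicle may apply [m/s^2]
    """
    v_rho = v_rear + rho * a_max_accel  # rear speed after the response time
    d = (v_rear * rho                         # distance covered during rho
         + 0.5 * a_max_accel * rho ** 2       # extra distance from accelerating
         + v_rho ** 2 / (2 * a_min_brake)     # rear vehicle's braking distance
         - v_front ** 2 / (2 * a_max_brake))  # front vehicle's braking distance
    return max(d, 0.0)  # RSS clips the distance at zero

# Illustrative values: both vehicles at 20 m/s (72 km/h), with a 0.5 s
# response time standing in for sensing plus focal-length-change latency.
print(rss_safe_distance(20, 20, 0.5, 3.0, 4.0, 8.0))  # → 43.15625
```

A longer response time rho, such as the added latency of refocusing the camera, increases the first three terms and therefore the required safe distance, which is exactly the trade-off the paper examines for the variable-focus camera.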

