Highly Accurate Pose Estimation as a Reference for Autonomous Vehicles in Near-Range Scenarios

2021 ◽  
Vol 14 (1) ◽  
pp. 90
Author(s):  
Ursula Kälin ◽  
Louis Staffa ◽  
David Eugen Grimm ◽  
Axel Wendt

To validate the accuracy and reliability of onboard sensors for object detection and localization in driver assistance and autonomous driving applications under realistic conditions (indoors and outdoors), a novel tracking system is presented. This tracking system determines the position and orientation of a slow-moving vehicle (e.g., a car during parking maneuvers) during test maneuvers within a reference environment, independently of the onboard sensors. One requirement is a 6 degree of freedom (DoF) pose with a position uncertainty below 5 mm (3σ) and an orientation uncertainty below 0.3° (3σ), at a frequency higher than 20 Hz and with a latency smaller than 500 ms. To compare the results from the reference system with the vehicle’s onboard system, synchronization via the Precision Time Protocol (PTP) and interoperability with the Robot Operating System (ROS) are achieved. The developed system combines motion capture cameras, mounted in a 360° panorama view setup on the vehicle, which measure retroreflective markers with known coordinates distributed over the test site, with robotic total stations that measure a prism on the vehicle. A point cloud of the test site serves as a digital twin of the environment, in which the movement of the vehicle is visualized. The results show that the fused measurements of these sensors complement each other, so that the accuracy requirements for the 6 DoF pose are met while allowing flexible installation in different environments.
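The abstract does not detail how the motion-capture and total-station measurements are fused. As a minimal sketch only, two independent position fixes can be combined by inverse-variance weighting, which always yields a lower variance than either input; the function name and the variance values below are illustrative assumptions, not the paper's method.

```python
import numpy as np

def fuse_positions(p_a, var_a, p_b, var_b):
    """Inverse-variance weighted fusion of two independent position
    estimates with per-axis variances; returns fused position and variance."""
    p_a, p_b = np.asarray(p_a, float), np.asarray(p_b, float)
    var_a, var_b = np.asarray(var_a, float), np.asarray(var_b, float)
    w_a = var_b / (var_a + var_b)          # weight grows as the OTHER sensor gets noisier
    fused = w_a * p_a + (1.0 - w_a) * p_b
    fused_var = (var_a * var_b) / (var_a + var_b)
    return fused, fused_var

# Illustrative fixes in mm: a camera-based fix and a total-station fix
p_cam, v_cam = [1000.0, 500.0, 120.0], [4.0, 4.0, 4.0]
p_ts,  v_ts  = [1002.0, 498.0, 121.0], [1.0, 1.0, 1.0]
p, v = fuse_positions(p_cam, v_cam, p_ts, v_ts)
```

With these numbers the fused estimate is pulled toward the more precise total-station fix, and the fused per-axis variance (0.8) is below both inputs.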


2017 ◽  
Vol 17 (AEROSPACE SCIENCES) ◽  
pp. 1-6
Author(s):  
Bahaaeldin Abdelaty ◽  
Ahmed Ouda ◽  
Yehia Elhalwagy ◽  
Gamal Elnashar

Author(s):  
Jiayuan Dong ◽  
Emily Lawson ◽  
Jack Olsen ◽  
Myounghoon Jeon

Driving agents can provide an effective solution for improving drivers’ trust in, and managing their interactions with, autonomous vehicles. Research has focused on voice agents, while few studies have explored robot agents or compared the two. The present study tested two variables, voice gender and agent embodiment, using conversational scripts. Twenty participants experienced autonomous driving in a simulator under four agent conditions and filled out subjective questionnaires on their perception of each agent. Results showed that participants perceived the voice-only female agent as more likeable, more comfortable, and more competent than the other conditions. Their final preference ranking also favored this agent over the others. Interestingly, eye-tracking data showed that the embodied agents did not add more visual distraction than the voice-only agents. The results are discussed in relation to traditional gender stereotypes, the uncanny valley, and participant gender. This study can contribute to the design of in-vehicle agents in autonomous vehicles, and further studies are planned to identify the underlying mechanisms of user perception of different agents.


Sensors ◽  
2021 ◽  
Vol 21 (11) ◽  
pp. 3783
Author(s):  
Sumbal Malik ◽  
Manzoor Ahmed Khan ◽  
Hesham El-Sayed

Sooner than expected, roads will be populated with a plethora of connected and autonomous vehicles serving diverse mobility needs. Rather than operating stand-alone, vehicles will be required to cooperate and coordinate with each other, referred to as cooperative driving, to execute mobility tasks properly. Cooperative driving leverages Vehicle-to-Vehicle (V2V) and Vehicle-to-Infrastructure (V2I) communication technologies to carry out two cooperative functionalities: (i) cooperative sensing and (ii) cooperative maneuvering. To better equip readers with background knowledge on the topic, we first provide a detailed taxonomy describing the underlying concepts and various aspects of cooperation in cooperative driving. In this survey, we review current solution approaches to cooperation for autonomous vehicles across various cooperative driving applications, i.e., smart car parking, lane change and merge, intersection management, and platooning. The role and functionality of such cooperation become most crucial in platooning use-cases, so we describe platooning in more detail and focus on one of its challenges: electing a leader in high-level platooning. We then highlight a crucial range of research gaps and open challenges that need to be addressed before cooperative autonomous vehicles hit the roads. We believe that this survey will assist researchers in better understanding vehicular cooperation, its various scenarios, solution approaches, and challenges.
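The survey highlights leader election in high-level platooning as a key open challenge. As a purely hypothetical illustration (the election criteria below are assumptions, not the survey's algorithm), a platoon could deterministically elect the front-most vehicle whose V2V link quality is acceptable, breaking ties by vehicle ID:

```python
from dataclasses import dataclass

@dataclass
class Vehicle:
    vid: int            # unique vehicle ID
    position: float     # distance along the lane (larger = further ahead)
    link_quality: float # 0..1, V2V communication quality (assumed metric)

def elect_leader(platoon, min_quality=0.5):
    """Elect the front-most vehicle with acceptable link quality;
    ties are broken by the lowest vehicle ID so every node agrees."""
    candidates = [v for v in platoon if v.link_quality >= min_quality]
    if not candidates:
        candidates = list(platoon)  # degrade gracefully if no link is good
    return min(candidates, key=lambda v: (-v.position, v.vid))

platoon = [Vehicle(3, 120.0, 0.9), Vehicle(1, 150.0, 0.4), Vehicle(7, 135.0, 0.8)]
leader = elect_leader(platoon)
```

Because every vehicle applies the same total ordering to the same shared state, all members converge on the same leader without extra negotiation; here vehicle 1 is ahead but excluded by its poor link quality.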


Author(s):  
Gaojian Huang ◽  
Christine Petersen ◽  
Brandon J. Pitts

Semi-autonomous vehicles still require drivers to occasionally resume manual control. However, drivers of these vehicles may be in different mental states. For example, drivers may be engaged in non-driving-related tasks or may exhibit mind-wandering behavior. In addition, monitoring monotonous driving environments can result in passive fatigue. Given the potential for different types of mental states to negatively affect takeover performance, it is critical to understand how mental states affect semi-autonomous takeover. A systematic review was conducted to synthesize the literature on mental states (such as distraction, fatigue, and emotion) and takeover performance. This review focuses specifically on five fatigue studies. Overall, the studies were too few to observe consistent findings, but some suggest that response times to takeover alerts and post-takeover performance may be affected by fatigue. Ultimately, this review may help researchers improve and develop real-time mental state monitoring systems for a wide range of application domains.


2021 ◽  
Vol 11 (13) ◽  
pp. 6016
Author(s):  
Jinsoo Kim ◽  
Jeongho Cho

For autonomous vehicles, it is critical to be aware of the driving environment to avoid collisions and drive safely. The recent evolution of convolutional neural networks has contributed significantly to accelerating the development of object detection techniques that enable autonomous vehicles to handle rapid changes in various driving environments. However, collisions in an autonomous driving environment can still occur due to undetected obstacles and various perception problems, particularly occlusion. Thus, we propose a robust object detection algorithm for environments in which objects are truncated or occluded, employing an RGB image and light detection and ranging (LiDAR) bird’s eye view (BEV) representations. This structure combines independent detection results obtained in parallel through “you only look once” networks using an RGB image and a height map converted from the BEV representation of LiDAR point cloud data (PCD). The region proposal for an object is determined via non-maximum suppression, which suppresses the bounding boxes of adjacent regions. A performance evaluation of the proposed scheme was performed using the KITTI vision benchmark suite dataset. The results demonstrate that detection accuracy with integrated PCD BEV representations is superior to that obtained with an RGB camera alone. In addition, robustness is improved by significantly enhanced detection accuracy even when the target objects are partially occluded when viewed from the front, demonstrating that the proposed algorithm outperforms the conventional RGB-based model.
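The non-maximum suppression step that merges the parallel RGB and BEV detections can be sketched as greedy IoU-based filtering: keep the highest-scoring box, drop any pooled box that overlaps it too heavily. The threshold and box coordinates below are illustrative, not the paper's settings.

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def nms(detections, iou_thresh=0.5):
    """Greedy NMS over pooled (box, score) detections: keep boxes in
    descending score order, discarding any that overlap a kept box."""
    kept = []
    for box, score in sorted(detections, key=lambda d: d[1], reverse=True):
        if all(iou(box, k[0]) < iou_thresh for k in kept):
            kept.append((box, score))
    return kept

# The same car seen by the RGB and BEV branches, plus one distinct object
pooled = [((10, 10, 50, 50), 0.9),    # RGB branch
          ((12, 11, 52, 49), 0.8),    # BEV branch, same object
          ((200, 30, 220, 80), 0.7)]  # separate detection
kept = nms(pooled)
```

The two near-duplicate boxes collapse into the higher-scoring one, while the non-overlapping detection survives, leaving one region proposal per object.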


2020 ◽  
Vol 13 (1) ◽  
pp. 89
Author(s):  
Manuel Carranza-García ◽  
Jesús Torres-Mateo ◽  
Pedro Lara-Benítez ◽  
Jorge García-Gutiérrez

Object detection using remote sensing data is a key task of the perception systems of self-driving vehicles. While many generic deep learning architectures have been proposed for this problem, there is little guidance on their suitability when using them in a particular scenario such as autonomous driving. In this work, we aim to assess the performance of existing 2D detection systems on a multi-class problem (vehicles, pedestrians, and cyclists) with images obtained from the on-board camera sensors of a car. We evaluate several one-stage (RetinaNet, FCOS, and YOLOv3) and two-stage (Faster R-CNN) deep learning meta-architectures under different image resolutions and feature extractors (ResNet, ResNeXt, Res2Net, DarkNet, and MobileNet). These models are trained using transfer learning and compared in terms of both precision and efficiency, with special attention to the real-time requirements of this context. For the experimental study, we use the Waymo Open Dataset, which is the largest existing benchmark. Despite the rising popularity of one-stage detectors, our findings show that two-stage detectors still provide the most robust performance. Faster R-CNN models outperform one-stage detectors in accuracy, being also more reliable in the detection of minority classes. Faster R-CNN Res2Net-101 achieves the best speed/accuracy tradeoff but needs lower resolution images to reach real-time speed. Furthermore, the anchor-free FCOS detector is a slightly faster alternative to RetinaNet, with similar precision and lower memory usage.


2022 ◽  
Vol 0 (0) ◽  
Author(s):  
Hannes Weinreuter ◽  
Balázs Szigeti ◽  
Nadine-Rebecca Strelau ◽  
Barbara Deml ◽  
Michael Heizmann

Autonomous driving is a promising technology for improving road safety, among many other aspects. There are, however, several scenarios that are challenging for autonomous vehicles; one of these is unsignalized junctions. In some scenarios there is no clear regulation as to who is allowed to drive first. Instead, communication and cooperation are necessary to resolve such scenarios, which is especially challenging when interacting with human drivers. In this work we focus on unsignalized T-intersections. For that scenario we propose a discrete event system (DES) that can handle cooperation with human drivers at a T-intersection with limited visibility and no direct communication. The algorithm is validated in a simulation environment, and its parameters are based on an analysis of typical human behavior at intersections using real-world data.
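The paper's actual automaton is not given in the abstract; a toy discrete event system for the ego vehicle at a T-intersection might look like the following, where every state name and event is purely illustrative. A DES of this kind reacts only to discrete events (detections, yields, clearances), not to continuous time.

```python
# Hypothetical miniature DES: (current_state, event) -> next_state.
# States and events are illustrative, not the paper's automaton.
TRANSITIONS = {
    ("approach", "intersection_reached"): "observe",
    ("observe",  "oncoming_detected"):    "yield",
    ("observe",  "gap_clear"):            "proceed",
    ("yield",    "human_yields"):         "proceed",  # cooperation resolved for ego
    ("yield",    "gap_clear"):            "proceed",
    ("proceed",  "merge_complete"):       "done",
}

def run_des(events, state="approach"):
    """Feed an event trace through the automaton; events with no
    defined transition leave the DES in its current state."""
    for ev in events:
        state = TRANSITIONS.get((state, ev), state)
    return state

trace = ["intersection_reached", "oncoming_detected",
         "human_yields", "merge_complete"]
final = run_des(trace)
```

A trace in which the human driver never yields simply keeps the ego vehicle in its yielding state until a gap opens, which mirrors the cooperative, communication-free behavior the abstract describes.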


Author(s):  
Wenhao Deng ◽  
Skyler Moore ◽  
Jonathan Bush ◽  
Miles Mabey ◽  
Wenlong Zhang

In recent years, researchers from both academia and industry have worked on connected and automated vehicles and have made great progress toward bringing them into reality. Compared to automated cars, bicycles are more affordable for daily commuters as well as more environmentally friendly. In terms of the risk posed to pedestrians and motorists, automated bicycles are much safer than autonomous cars, which also enables potential applications in smart cities, rehabilitation, and exercise. The biggest challenge in automating bicycles is the inherent problem of staying balanced. This paper presents a modified electric bicycle that allows real-time monitoring of the roll angle and motor-assisted steering. Stable and robust steering controllers for the bicycle are designed and implemented to achieve self-balancing at different forward speeds. Tests at different speeds were conducted to verify the effectiveness of the hardware development and controller design. A preliminary design using a control moment gyroscope (CMG) to achieve self-balancing at lower speeds is also presented. This work can serve as a solid foundation for future studies of human-robot interaction and autonomous driving.
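The abstract does not specify the controllers used; a common baseline for steering-based self-balancing is a saturated PD law that steers into the fall, moving the tire contact line back under the center of mass. The gains, sign convention, and saturation limit below are illustrative assumptions, not the paper's design.

```python
def steer_command(roll, roll_rate, kp=8.0, kd=2.0, limit=0.6):
    """PD steering command (rad) countering a roll disturbance.
    Positive roll = leaning right; a positive command steers right,
    i.e. *into* the fall. Gains are illustrative, not the paper's."""
    u = kp * roll + kd * roll_rate
    return max(-limit, min(limit, u))  # saturate at the handlebar limit

# Bicycle leaning 0.03 rad to the right and still falling (rate 0.1 rad/s):
# the command is positive, steering right to recover
u = steer_command(0.03, 0.1)
```

The derivative term damps the recovery so the bicycle does not overshoot into a lean on the opposite side; the saturation models the physical handlebar travel and the steering motor's torque limits.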


2017 ◽  
Vol 139 (12) ◽  
pp. S21-S23
Author(s):  
Ross Mckenzie ◽  
John Mcphee

This article presents an overview of the research and educational programs for connected and autonomous vehicles at the University of Waterloo (UWaterloo). UWaterloo is Canada’s largest engineering school, with 9,500 engineering students and 309 engineering faculty. The University of Waterloo Centre for Automotive Research (WatCAR), serving faculty, staff, and students, is contributing to the development of in-vehicle systems education programs for connected and autonomous vehicles (CAVs) at Waterloo. Over 130 Waterloo faculty, 110 of them from engineering, are engaged in WatCAR’s automotive and transportation systems research programs. The school’s CAV efforts leverage WatCAR research expertise in five areas: (1) Connected and Autonomous; (2) Software and Data; (3) Lightweighting and Fabrication; (4) Structure and Safety; and (5) Advanced Powertrain and Emissions. Foundational and operational artificial intelligence expertise from the University of Waterloo Artificial Intelligence Institute complements the autonomous driving efforts, in disciplines including neural networks, pattern analysis, and machine learning.

