Highly Accurate Pose Estimation as Reference for Autonomous Vehicles in Near-Range Scenarios


2021 ◽  
Vol 14 (1) ◽  
pp. 90
Author(s):  
Ursula Kälin ◽  
Louis Staffa ◽  
David Eugen Grimm ◽  
Axel Wendt

To validate the accuracy and reliability of onboard sensors for object detection and localization for driver assistance, as well as autonomous driving applications under realistic conditions (indoors and outdoors), a novel tracking system is presented. This tracking system is developed to determine the position and orientation of a slow-moving vehicle (e.g., a car during parking maneuvers) during test maneuvers within a reference environment, independent of the onboard sensors. One requirement is a 6 degree of freedom (DoF) pose with position uncertainty below 5 mm (3σ), orientation uncertainty below 0.3° (3σ), at a frequency higher than 20 Hz, and with a latency smaller than 500 ms. To compare the results from the reference system with the vehicle’s onboard system, synchronization via the Precision Time Protocol (PTP) and interoperability with the Robot Operating System (ROS) are achieved. The developed system combines motion capture cameras mounted in a 360° panorama view setup on the vehicle, which measure retroreflective markers with known coordinates distributed over the test site, with robotic total stations that measure a prism on the vehicle. A point cloud of the test site serves as a digital twin of the environment, in which the movement of the vehicle is visualized. The results have shown that the fused measurements of these sensors complement each other, so that the accuracy requirements for the 6 DoF pose can be met while allowing a flexible installation in different environments.
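The abstract mentions PTP synchronization and ROS interoperability for the reference pose stream. As a purely illustrative sketch (not the authors' implementation), the following Python/rospy snippet shows how a fused 6 DoF pose with a PTP-synchronized timestamp could be published as a standard PoseStamped message at the required 20 Hz; the topic name, frame name, and helper function are assumptions.

```python
# Minimal sketch (not the authors' code): publishing a fused 6 DoF pose to ROS,
# assuming position comes from the robotic total station and orientation from
# the motion capture cameras, both already transformed into a common frame and
# timestamped by a PTP-synchronized clock.
import rospy
from geometry_msgs.msg import PoseStamped

def publish_fused_pose(position, quaternion, ptp_stamp):
    """position: (x, y, z) in metres; quaternion: (x, y, z, w); ptp_stamp: rospy.Time."""
    msg = PoseStamped()
    msg.header.stamp = ptp_stamp          # PTP-synchronized timestamp
    msg.header.frame_id = "test_site"     # hypothetical reference frame name
    msg.pose.position.x, msg.pose.position.y, msg.pose.position.z = position
    (msg.pose.orientation.x, msg.pose.orientation.y,
     msg.pose.orientation.z, msg.pose.orientation.w) = quaternion
    pub.publish(msg)

if __name__ == "__main__":
    rospy.init_node("reference_pose_publisher")
    pub = rospy.Publisher("/reference/pose", PoseStamped, queue_size=10)
    rate = rospy.Rate(20)                 # >= 20 Hz requirement from the abstract
    while not rospy.is_shutdown():
        # placeholder values; a real system would read the fused measurement here
        publish_fused_pose((0.0, 0.0, 0.0), (0.0, 0.0, 0.0, 1.0), rospy.Time.now())
        rate.sleep()
```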


Sensors ◽  
2020 ◽  
Vol 20 (21) ◽  
pp. 6322
Author(s):  
Razvan-Catalin Miclea ◽  
Ciprian Dughir ◽  
Florin Alexa ◽  
Florin Sandru ◽  
Ioan Silea

Visibility is a critical factor for transportation, whether by air, water, or ground. The biggest trend in the automotive industry is autonomous driving, and the number of autonomous vehicles is expected to increase exponentially, prompting changes in both the industry and the user segment. Unfortunately, these vehicles still have some drawbacks, and one recurring, topical issue is treated in this paper: the visibility distance problem in bad weather conditions, particularly in fog. How quickly and reliably a vehicle can detect objects, obstacles, pedestrians, or traffic signs, especially in poor visibility, determines how the vehicle will behave. In this paper, a new experimental setup is presented for analyzing the effect of fog when laser and LIDAR (Light Detection And Ranging) radiation are used for visibility distance estimation on public roads. Using this experimental setup in the laboratory, the information provided by these measurement systems (laser and LIDAR) is evaluated and compared with the results obtained by human observers under the same fog conditions. The goal is to validate and uniformly apply the results regarding visibility distance, based on information arriving from the different systems able to estimate this parameter in foggy weather conditions, and ultimately to notify drivers of unexpected situations. The approach combines stationary and moving systems: the stationary system will be installed on highways or express roads in areas prone to fog, while the moving systems are, or can be, installed directly on vehicles (autonomous as well as non-autonomous).
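As a hedged illustration of the physics behind laser/LIDAR-based visibility estimation (not the setup or algorithm used in the paper), the following sketch estimates an extinction coefficient from the attenuation of a return off a target at known range and converts it to a visibility distance via Koschmieder's law; the 5% contrast threshold, the function names, and the example intensities are assumptions for illustration.

```python
# Minimal illustrative sketch: visibility distance from laser/LIDAR attenuation.
# Assumes a calibrated target at known range d with reference (clear-air) return
# intensity i_clear and measured return intensity i_fog; the meteorological
# optical range uses a 5% contrast threshold, giving the factor -ln(0.05) ≈ 3.0.
import math

def extinction_coefficient(i_fog, i_clear, d):
    """Round-trip attenuation model: i_fog = i_clear * exp(-2*k*d)  ->  k in 1/m."""
    return -math.log(i_fog / i_clear) / (2.0 * d)

def visibility_distance(k, contrast_threshold=0.05):
    """Koschmieder's law: V = -ln(C_T) / k."""
    return -math.log(contrast_threshold) / k

# Example: a return at 50 m attenuated to 40% of its clear-air intensity
k = extinction_coefficient(i_fog=0.4, i_clear=1.0, d=50.0)
print(f"extinction k = {k:.4f} 1/m, visibility ≈ {visibility_distance(k):.0f} m")
```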


2020 ◽  
Vol 71 (7) ◽  
pp. 828-839
Author(s):  
Thinh Hoang Dinh ◽  
Hieu Le Thi Hong

Autonomous landing of rotary-wing unmanned aerial vehicles is a challenging problem and key to autonomous aerial fleet operation. We propose a method for localizing the UAV around the helipad, that is, estimating the relative position of the helipad with respect to the UAV. This information is highly desirable for designing controllers with robust and consistent control characteristics and can find applications in search-and-rescue operations. An AI-based neural network is set up for helipad detection, followed by optimization through the localization algorithm. The performance of this approach is compared against a fiducial-marker approach, demonstrating good agreement between the two estimates.
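As an illustrative sketch only (the paper's localization algorithm is not reproduced here), the snippet below shows one common way to turn a detected helipad bounding box into a relative position using the pinhole camera model; the helipad diameter, the camera intrinsics, the function name, and the example box are all hypothetical.

```python
# Illustrative sketch: relative helipad position from a detected bounding box,
# assuming a known helipad diameter, known focal lengths in pixels, and a detector
# (e.g. the neural network mentioned above) that supplies the box.
import numpy as np

def helipad_relative_position(box, helipad_diameter_m, fx, fy, cx, cy):
    """box = (xmin, ymin, xmax, ymax) in pixels; returns (X, Y, Z) in metres,
    camera frame: X right, Y down, Z along the optical axis."""
    xmin, ymin, xmax, ymax = box
    width_px = xmax - xmin
    u = 0.5 * (xmin + xmax)                 # box centre (pixels)
    v = 0.5 * (ymin + ymax)
    Z = fx * helipad_diameter_m / width_px  # depth from apparent size
    X = (u - cx) * Z / fx                   # lateral offsets from pixel offsets
    Y = (v - cy) * Z / fy
    return np.array([X, Y, Z])

# Example with hypothetical intrinsics and a 1.5 m helipad
print(helipad_relative_position((300, 220, 420, 340), 1.5, fx=800, fy=800, cx=320, cy=240))
```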


2017 ◽  
Vol 17 (AEROSPACE SCIENCES) ◽  
pp. 1-6
Author(s):  
Bahaaeldin Abdelaty ◽  
Ahmed Ouda ◽  
Yehia Elhalwagy ◽  
Gamal Elnashar

Author(s):  
Jiayuan Dong ◽  
Emily Lawson ◽  
Jack Olsen ◽  
Myounghoon Jeon

Driving agents can provide an effective solution to improve drivers’ trust in, and to manage interactions with, autonomous vehicles. Research has focused on voice agents, while few studies have explored robot agents or compared the two. The present study tested two variables, voice gender and agent embodiment, using conversational scripts. Twenty participants experienced autonomous driving in the simulator under four agent conditions and filled out subjective questionnaires on their perception of each agent. Results showed that participants perceived the voice-only female agent as more likeable, more comfortable, and more competent than the other conditions. Their final preference ranking also favored this agent over the others. Interestingly, eye-tracking data showed that the embodied agents did not add more visual distraction than the voice-only agents. The results are discussed in relation to traditional gender stereotypes, the uncanny valley, and participants’ gender. This study can contribute to the design of in-vehicle agents for autonomous vehicles, and future studies are planned to further identify the underlying mechanisms of user perception of different agents.


Sensors ◽  
2021 ◽  
Vol 21 (11) ◽  
pp. 3783
Author(s):  
Sumbal Malik ◽  
Manzoor Ahmed Khan ◽  
Hesham El-Sayed

Sooner than expected, roads will be populated with a plethora of connected and autonomous vehicles serving diverse mobility needs. Rather than operating stand-alone, vehicles will be required to cooperate and coordinate with each other, referred to as cooperative driving, in order to execute mobility tasks properly. Cooperative driving leverages Vehicle-to-Vehicle (V2V) and Vehicle-to-Infrastructure (V2I) communication technologies to carry out cooperative functionalities: (i) cooperative sensing and (ii) cooperative maneuvering. To better equip readers with background knowledge on the topic, we first provide a detailed taxonomy section describing the underlying concepts and various aspects of cooperation in cooperative driving. In this survey, we review the current solution approaches to cooperation for autonomous vehicles, based on various cooperative driving applications, i.e., smart car parking, lane change and merge, intersection management, and platooning. The role and functionality of such cooperation become more crucial in platooning use cases, which is why we also provide more details on platooning use cases and focus on one of their challenges, electing a leader in high-level platooning. Following this, we highlight a crucial range of research gaps and open challenges that need to be addressed before cooperative autonomous vehicles hit the roads. We believe that this survey will assist researchers in better understanding vehicular cooperation, its various scenarios, solution approaches, and challenges.
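To make the leader-election challenge mentioned above concrete, here is a minimal sketch of one simple election rule (the front-most vehicle along the lane becomes leader, with the lowest ID breaking ties); the rule, the data structure, and the example values are assumptions for illustration, not a scheme taken from the survey.

```python
# Minimal sketch of a simple platoon leader-election rule, assuming each vehicle
# broadcasts its ID and longitudinal position over V2V.
from dataclasses import dataclass

@dataclass
class Vehicle:
    vid: int            # unique vehicle identifier (from V2V beacon)
    s: float            # longitudinal position along the lane in metres

def elect_leader(vehicles):
    """Return the vehicle chosen as platoon leader: front-most, lowest ID on ties."""
    return max(vehicles, key=lambda v: (v.s, -v.vid))

platoon = [Vehicle(7, 120.4), Vehicle(3, 133.1), Vehicle(5, 133.1)]
print(elect_leader(platoon))   # Vehicle(vid=3, s=133.1)
```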


Author(s):  
Gaojian Huang ◽  
Christine Petersen ◽  
Brandon J. Pitts

Semi-autonomous vehicles still require drivers to occasionally resume manual control. However, drivers of these vehicles may be in different mental states. For example, drivers may be engaged in non-driving-related tasks or may exhibit mind-wandering behavior. Also, monitoring monotonous driving environments can result in passive fatigue. Given the potential for different types of mental states to negatively affect takeover performance, it is critical to understand how mental states affect semi-autonomous takeover. A systematic review was conducted to synthesize the literature on mental states (such as distraction, fatigue, and emotion) and takeover performance. This review focuses specifically on five fatigue studies. Overall, the studies were too few to observe consistent findings, but some suggest that response times to takeover alerts and post-takeover performance may be affected by fatigue. Ultimately, this review may help researchers improve and develop real-time mental state monitoring systems for a wide range of application domains.


2021 ◽  
Vol 11 (13) ◽  
pp. 6016
Author(s):  
Jinsoo Kim ◽  
Jeongho Cho

For autonomous vehicles, it is critical to be aware of the driving environment in order to avoid collisions and drive safely. The recent evolution of convolutional neural networks has contributed significantly to accelerating the development of object detection techniques that enable autonomous vehicles to handle rapid changes in various driving environments. However, collisions in an autonomous driving environment can still occur due to undetected obstacles and various perception problems, particularly occlusion. Thus, we propose a robust object detection algorithm for environments in which objects are truncated or occluded, employing RGB images and light detection and ranging (LiDAR) bird’s eye view (BEV) representations. This structure combines independent detection results obtained in parallel through “you only look once” networks using an RGB image and a height map converted from the BEV representation of the LiDAR point cloud data (PCD). The region proposal of an object is determined via non-maximum suppression, which suppresses the bounding boxes of adjacent regions. A performance evaluation of the proposed scheme was performed using the KITTI vision benchmark suite dataset. The results demonstrate that the detection accuracy when PCD BEV representations are integrated is superior to that when only an RGB camera is used. In addition, robustness is improved by significantly enhancing detection accuracy even when the target objects are partially occluded when viewed from the front, which demonstrates that the proposed algorithm outperforms the conventional RGB-based model.
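As a minimal sketch of the suppression step described above (an assumption-laden illustration, not the paper's code), the following snippet merges detections from an RGB branch and a LiDAR BEV branch with plain non-maximum suppression once both sets of boxes are expressed in a common frame; the boxes, scores, and threshold are made up for illustration.

```python
# Minimal NMS sketch over concatenated detections from two parallel branches.
import numpy as np

def iou(a, b):
    """Intersection over union of two boxes (xmin, ymin, xmax, ymax)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def nms(boxes, scores, iou_thresh=0.5):
    """Keep the highest-scoring box and suppress strongly overlapping neighbours."""
    order = np.argsort(scores)[::-1]
    keep = []
    while len(order) > 0:
        i = order[0]
        keep.append(i)
        order = order[1:][[iou(boxes[i], boxes[j]) < iou_thresh for j in order[1:]]]
    return keep

# Detections from both branches are concatenated before suppression
rgb_boxes, rgb_scores = [(100, 80, 220, 200)], [0.91]
bev_boxes, bev_scores = [(105, 85, 225, 205), (400, 300, 460, 380)], [0.84, 0.78]
boxes = np.array(rgb_boxes + bev_boxes, dtype=float)
scores = np.array(rgb_scores + bev_scores)
print(nms(boxes, scores))   # e.g. [0, 2]: one fused car box plus one LiDAR-only box
```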


2020 ◽  
Vol 13 (1) ◽  
pp. 89
Author(s):  
Manuel Carranza-García ◽  
Jesús Torres-Mateo ◽  
Pedro Lara-Benítez ◽  
Jorge García-Gutiérrez

Object detection using remote sensing data is a key task for the perception systems of self-driving vehicles. While many generic deep learning architectures have been proposed for this problem, there is little guidance on their suitability for a particular scenario such as autonomous driving. In this work, we aim to assess the performance of existing 2D detection systems on a multi-class problem (vehicles, pedestrians, and cyclists) with images obtained from the on-board camera sensors of a car. We evaluate several one-stage (RetinaNet, FCOS, and YOLOv3) and two-stage (Faster R-CNN) deep learning meta-architectures under different image resolutions and feature extractors (ResNet, ResNeXt, Res2Net, DarkNet, and MobileNet). These models are trained using transfer learning and compared in terms of both precision and efficiency, with special attention to the real-time requirements of this context. For the experimental study, we use the Waymo Open Dataset, which is the largest existing benchmark. Despite the rising popularity of one-stage detectors, our findings show that two-stage detectors still provide the most robust performance. Faster R-CNN models outperform one-stage detectors in accuracy and are also more reliable in the detection of minority classes. Faster R-CNN with Res2Net-101 achieves the best speed/accuracy trade-off but needs lower-resolution images to reach real-time speed. Furthermore, the anchor-free FCOS detector is a slightly faster alternative to RetinaNet, with similar precision and lower memory usage.
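As a hedged sketch of how such a precision/latency comparison can be set up (not the paper's training or evaluation pipeline, which fine-tunes on Waymo via transfer learning), the snippet below loads pretrained Faster R-CNN, RetinaNet, and FCOS models from recent torchvision releases and times a forward pass on a dummy frame; the resolution and model selection are assumptions.

```python
# Minimal sketch: comparing inference latency of pretrained COCO detectors as
# stand-ins for the meta-architectures discussed above.
import time
import torch
from torchvision.models import detection

models = {
    "faster_rcnn": detection.fasterrcnn_resnet50_fpn(weights="DEFAULT"),
    "retinanet": detection.retinanet_resnet50_fpn(weights="DEFAULT"),
    "fcos": detection.fcos_resnet50_fpn(weights="DEFAULT"),
}

image = [torch.rand(3, 640, 960)]          # one dummy frame at a fixed resolution

for name, model in models.items():
    model.eval()
    with torch.no_grad():
        start = time.perf_counter()
        predictions = model(image)          # list with one dict: boxes, labels, scores
        elapsed = time.perf_counter() - start
    print(f"{name}: {len(predictions[0]['boxes'])} detections in {elapsed*1000:.0f} ms")
```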


2022 ◽  
Vol 0 (0) ◽  
Author(s):  
Hannes Weinreuter ◽  
Balázs Szigeti ◽  
Nadine-Rebecca Strelau ◽  
Barbara Deml ◽  
Michael Heizmann

Autonomous driving is a promising technology to improve, among many other aspects, road safety. There are, however, several scenarios that are challenging for autonomous vehicles. One of these is unsignalized junctions: there exist scenarios in which there is no clear regulation as to who is allowed to drive first. Instead, communication and cooperation are necessary to resolve such scenarios. This is especially challenging when interacting with human drivers. In this work we focus on unsignalized T-intersections. For that scenario we propose a discrete event system (DES) that is able to handle the cooperation with human drivers at a T-intersection with limited visibility and no direct communication. The algorithm is validated in a simulation environment, and its parameters are based on an analysis of typical human behavior at intersections using real-world data.
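As a minimal illustration of what a discrete event system for this scenario can look like (the states, events, and transitions here are assumptions for illustration, not the automaton or parameters derived in the paper), the sketch below drives a tiny state machine for yielding at a T-intersection with a sequence of observed events.

```python
# Minimal discrete-event-system sketch for cooperating with a human driver at a
# T-intersection; events would come from perception and behavior prediction.
TRANSITIONS = {
    # (state, event) -> next state
    ("APPROACH", "intersection_visible"): "OBSERVE",
    ("OBSERVE", "gap_detected"): "PROCEED",
    ("OBSERVE", "human_vehicle_approaching"): "YIELD",
    ("YIELD", "human_yields"): "PROCEED",        # cooperation resolved in our favour
    ("YIELD", "human_proceeds"): "OBSERVE",      # wait and look for the next gap
    ("PROCEED", "intersection_cleared"): "DONE",
}

def run(events, state="APPROACH"):
    """Drive the automaton with a sequence of observed events."""
    for event in events:
        state = TRANSITIONS.get((state, event), state)   # unknown events keep the state
    return state

print(run(["intersection_visible", "human_vehicle_approaching",
           "human_yields", "intersection_cleared"]))   # -> DONE
```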

