Radar Target Simulation for Vehicle-in-the-Loop Testing

Vehicles, 2021, Vol. 3 (2), pp. 257-271
Author(s): Axel Diewald, Clemens Kurz, Prasanna Venkatesan Kannan, Martin Gießler, Mario Pauli, ...

Automotive radar sensors play a vital role in the current development of autonomous driving. Their ability to detect objects even under adverse conditions makes them indispensable for environment-sensing tasks in autonomous vehicles. As their functional operation must be validated in place, a fully integrated test system is required. Radar Target Simulators (RTS) can execute end-of-line, over-the-air validation tests by receiving a radar signal, modifying it, and looping it back to the sensor, and they have previously been incorporated into Vehicle-in-the-Loop (ViL) test beds. However, currently available ViL test beds, and the RTS systems they consist of, cannot generate radar echoes of realistic complexity. This paper reviews the current development stage of research and commercial ViL and RTS systems. Furthermore, the concept and implementation of a new test setup for the rapid prototyping and validation of ADAS functions is presented. It represents the first integrated radar validation test system to comprise multiple angle-resolved radar target channels, each capable of generating multiple radar echoes. A measurement campaign that supports this claim has been conducted.
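For readers unfamiliar with the loop-back principle, a minimal sketch may help: an RTS synthesizes a target by delaying the intercepted signal (range), Doppler-shifting it (radial velocity), and attenuating it (radar cross-section). The function below is an illustrative baseband model under those assumptions, not the authors' implementation; all names and parameter values are placeholders.

```python
# Minimal sketch of the RTS loop-back idea: one synthetic point target per call.
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def simulate_target_echo(tx, fs, target_range, radial_velocity, rcs_gain, fc):
    """Return a loop-back echo for one synthetic point target.

    tx              -- sampled transmit signal (complex baseband)
    fs              -- sample rate, Hz
    target_range    -- simulated range in m (mapped to round-trip delay)
    radial_velocity -- simulated radial velocity in m/s (mapped to Doppler shift)
    rcs_gain        -- amplitude scaling standing in for RCS and path loss
    fc              -- radar carrier frequency, Hz
    """
    delay_s = 2.0 * target_range / C                       # round-trip delay
    delay_samples = min(int(round(delay_s * fs)), len(tx))
    doppler_hz = 2.0 * radial_velocity * fc / C            # Doppler shift
    t = np.arange(len(tx)) / fs
    shifted = tx * np.exp(2j * np.pi * doppler_hz * t)
    echo = np.zeros_like(shifted)
    echo[delay_samples:] = shifted[: len(tx) - delay_samples]
    return rcs_gain * echo

# A multi-target channel, as described above, would sum several such echoes.
```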

Author(s): Andreas Gruber, Michael Gadringer, Helmut Schreiber, Dominik Amschl, Wolfgang Bosch, ...

2021, Vol. 15
Author(s): Javier López-Randulfe, Tobias Duswald, Zhenshan Bing, Alois Knoll

The development of advanced autonomous driving applications is hindered by the complex temporal structure of sensory data, as well as by the limited computational and energy resources of their on-board systems. Currently, neuromorphic engineering is a rapidly growing field that aims to design information processing systems similar to the human brain by leveraging novel algorithms based on spiking neural networks (SNNs). These systems are well suited to recognizing temporal patterns in data while maintaining low energy consumption and offering highly parallel architectures for fast computation. However, the lack of effective algorithms for SNNs impedes their wide use in mobile robot applications. This paper addresses the problem of radar signal processing by introducing a novel SNN that replaces the discrete Fourier transform and the constant false-alarm rate (CFAR) algorithm for processing raw radar data, where the weights and architecture of the SNN are derived from the original algorithms. We demonstrate that the proposed SNN can achieve results competitive with those of the original algorithms in simulated driving scenarios while retaining its spike-based nature.
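As context for what the SNN replaces, the classical pipeline computes a range spectrum with a DFT and then flags targets with cell-averaging CFAR, which thresholds each cell against the noise power estimated from neighboring training cells. The sketch below is a generic version of that pipeline, not the paper's reference implementation; the window sizes and scaling factor are illustrative choices.

```python
# Generic DFT + cell-averaging CFAR pipeline on one chirp of raw radar data.
import numpy as np

def ca_cfar(power, num_train=16, num_guard=4, scale=3.0):
    """Return boolean detections for a 1-D power spectrum via CA-CFAR."""
    n = len(power)
    detections = np.zeros(n, dtype=bool)
    for cut in range(num_train + num_guard, n - num_train - num_guard):
        lead = power[cut - num_guard - num_train : cut - num_guard]
        lag = power[cut + num_guard + 1 : cut + num_guard + num_train + 1]
        noise = (lead.sum() + lag.sum()) / (2 * num_train)  # local noise estimate
        detections[cut] = power[cut] > scale * noise
    return detections

adc = np.random.randn(1024)                  # stand-in for raw ADC samples
spectrum = np.abs(np.fft.rfft(adc)) ** 2     # DFT -> power per range bin
targets = np.nonzero(ca_cfar(spectrum))[0]   # detected range-bin indices
```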


2017
Author(s): Sujeet Patole, Murat Torlak, Dan Wang, Murtaza Ali

Automotive radars, along with other sensors such as lidar (light detection and ranging), ultrasound, and cameras, form the backbone of self-driving cars and advanced driver assistance systems (ADASs). These technological advancements are enabled by extremely complex systems with a long signal processing path from radars/sensors to the controller. Automotive radar systems are responsible for detecting objects and obstacles and estimating their position and speed relative to the vehicle. The development of signal processing techniques, along with progress in millimeter-wave (mm-wave) semiconductor technology, plays a key role in automotive radar systems. Various signal processing techniques have been developed to provide better resolution and estimation performance in all measurement dimensions: range, azimuth-elevation angle, and velocity of the targets surrounding the vehicle. This article summarizes various aspects of automotive radar signal processing, including waveform design, possible radar architectures, estimation algorithms, the implementation complexity-resolution trade-off, and adaptive processing for complex environments, as well as problems unique to automotive radars such as pedestrian detection. We believe that this review will consolidate the many contributions scattered across the literature, serving as a starting point for new researchers and giving a bird's-eye view to the existing research community.
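A compact illustration of one estimation step the article surveys: for an FMCW radar, a 2-D FFT over fast time (within a chirp) and slow time (across chirps) yields a range-Doppler map, and the waveform parameters set the resolution trade-offs. The sketch below uses assumed, typical 77-GHz values; none of the numbers are taken from the article.

```python
# Range-Doppler processing sketch for an FMCW radar (assumed parameters).
import numpy as np

C = 3e8               # speed of light, m/s
BANDWIDTH = 300e6     # chirp sweep bandwidth, Hz
T_CHIRP = 50e-6       # chirp duration, s
FC = 77e9             # carrier frequency, Hz
NUM_CHIRPS = 128      # chirps per coherent processing interval

def range_doppler_map(beat):
    """beat: (NUM_CHIRPS, samples_per_chirp) complex beat-signal matrix."""
    rng = np.fft.fft(beat, axis=1)                          # fast time -> range bins
    rd = np.fft.fftshift(np.fft.fft(rng, axis=0), axes=0)   # slow time -> Doppler bins
    return np.abs(rd)

# Resolution limits implied by the waveform design trade-offs discussed above:
range_res = C / (2 * BANDWIDTH)                  # 0.5 m per range bin
vel_res = C / (2 * FC * NUM_CHIRPS * T_CHIRP)    # ~0.3 m/s per Doppler bin
```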


Author(s): Jiayuan Dong, Emily Lawson, Jack Olsen, Myounghoon Jeon

Driving agents can provide an effective solution for improving drivers' trust in, and managing their interactions with, autonomous vehicles. Research has focused on voice agents, while few studies have explored robot agents or compared the two. The present study tested two variables, voice gender and agent embodiment, using conversational scripts. Twenty participants experienced autonomous driving in a simulator under four agent conditions and filled out subjective questionnaires on their perception of each agent. Results showed that participants perceived the female voice-only agent as more likeable, more comfortable, and more competent than the other conditions, and their final preference ranking also favored this agent. Interestingly, eye-tracking data showed that embodied agents did not add more visual distraction than voice-only agents. The results are discussed in terms of traditional gender stereotypes, the uncanny valley, and participants' gender. This study can contribute to the design of in-vehicle agents for autonomous vehicles, and future studies are planned to further identify the underlying mechanisms of user perception of different agents.


2021, Vol. 13 (6), pp. 1064
Author(s): Zhangjing Wang, Xianhan Miao, Zhen Huang, Haoran Luo

The development of autonomous vehicles and unmanned aerial vehicles has led to a current research focus on improving the environmental perception of automation equipment. The unmanned platform detects its surroundings and then makes decisions based on environmental information. The major challenge of environmental perception is to detect and classify objects precisely; thus, it is necessary to fuse heterogeneous data to achieve complementary advantages. In this paper, a robust object detection and classification algorithm based on millimeter-wave (MMW) radar and camera fusion is proposed. The corresponding regions of interest (ROIs) are accurately calculated from the approximate position of the target detected by the radar and cameras. A joint classification network is used to extract micro-Doppler features from the time-frequency spectrum and texture features from the images within the ROIs. A radar-camera fusion dataset is established using a fusion data acquisition platform and covers intersections, highways, roads, and school playgrounds, during the day and at night. The traditional radar signal algorithm, the Faster R-CNN model, and our proposed fusion network, called RCF-Faster R-CNN, are evaluated on this dataset. The experimental results indicate that the mean average precision (mAP) of our network is up to 89.42% higher than that of the traditional radar signal algorithm and up to 32.76% higher than that of Faster R-CNN, especially in low-light environments with strong electromagnetic clutter.
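As a rough illustration of the ROI step described above, a radar detection in vehicle coordinates can be projected into the image with a pinhole camera model and an assumed extrinsic calibration, with the ROI sized by the expected object extent. The matrices, axis conventions, and object dimensions below are illustrative placeholders, not values from the paper.

```python
# Sketch: project a radar detection into the image and size an ROI around it.
import numpy as np

K = np.array([[800.0, 0.0, 640.0],     # assumed camera intrinsics
              [0.0, 800.0, 360.0],
              [0.0, 0.0, 1.0]])
R_t = np.hstack([np.eye(3), np.array([[0.0], [0.5], [0.0]])])  # assumed extrinsics

def radar_to_roi(x, y, z=0.0, obj_w=1.8, obj_h=1.5):
    """Map a radar point (x forward, y left, z up, in m) to an image ROI."""
    # Reorder to camera axes (right, down, forward); a convention assumed here.
    p_cam = R_t @ np.array([-y, -z, x, 1.0])
    u, v, w = K @ p_cam
    cx, cy = u / w, v / w
    # Box size scales as focal_length * physical_size / depth.
    half_w = 0.5 * K[0, 0] * obj_w / p_cam[2]
    half_h = 0.5 * K[1, 1] * obj_h / p_cam[2]
    return (cx - half_w, cy - half_h, cx + half_w, cy + half_h)

print(radar_to_roi(20.0, -2.0))  # ROI for a target 20 m ahead, 2 m to the right
```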


Sensors, 2021, Vol. 21 (11), pp. 3783
Author(s): Sumbal Malik, Manzoor Ahmed Khan, Hesham El-Sayed

Sooner than expected, roads will be populated with a plethora of connected and autonomous vehicles serving diverse mobility needs. Rather than operating stand-alone, vehicles will be required to cooperate and coordinate with each other, referred to as cooperative driving, to execute mobility tasks properly. Cooperative driving leverages Vehicle-to-Vehicle (V2V) and Vehicle-to-Infrastructure (V2I) communication technologies to carry out two cooperative functionalities: (i) cooperative sensing and (ii) cooperative maneuvering. To better equip readers with background knowledge on the topic, we first provide a detailed taxonomy describing the underlying concepts and various aspects of cooperation in cooperative driving. We then review current solution approaches to cooperation for autonomous vehicles across various cooperative driving applications, i.e., smart car parking, lane change and merge, intersection management, and platooning. The role and functionality of cooperation become more crucial in platooning use cases, which is why we provide further detail on platooning and focus on one of its challenges: electing a leader in high-level platooning. Finally, we highlight a crucial range of research gaps and open challenges that must be addressed before cooperative autonomous vehicles hit the roads. We believe this survey will assist researchers in better understanding vehicular cooperation, its various scenarios, solution approaches, and challenges.
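To make the leader-election challenge concrete, here is a minimal sketch of one plausible election rule (the front-most capable vehicle wins, with the lowest ID breaking ties). This is an illustrative policy for intuition only, not a scheme proposed or analyzed by the survey.

```python
# Illustrative platooning leader election: front-most capable vehicle wins.
from dataclasses import dataclass

@dataclass
class Vehicle:
    vid: int          # unique vehicle identifier from V2V beacons
    position: float   # longitudinal position along the lane, m
    capable: bool     # advertises leader capability (sensors, compute, etc.)

def elect_leader(platoon):
    """Pick the front-most capable vehicle; break ties by lowest ID."""
    candidates = [v for v in platoon if v.capable]
    if not candidates:
        return None
    return max(candidates, key=lambda v: (v.position, -v.vid))

platoon = [Vehicle(3, 120.0, True), Vehicle(1, 150.0, True), Vehicle(7, 150.0, False)]
print(elect_leader(platoon).vid)  # -> 1
```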


Author(s): Gaojian Huang, Christine Petersen, Brandon J. Pitts

Semi-autonomous vehicles still require drivers to occasionally resume manual control. However, drivers of these vehicles may be in different mental states: they may be engaged in non-driving-related tasks, may exhibit mind-wandering behavior, or may experience passive fatigue from monitoring monotonous driving environments. Given the potential for such mental states to negatively affect takeover performance, it is critical to understand how they influence semi-autonomous takeover. A systematic review was conducted to synthesize the literature on mental states (e.g., distraction, fatigue, and emotion) and takeover performance; this review focuses specifically on five fatigue studies. Overall, the studies were too few to yield consistent findings, but some suggest that response times to takeover alerts and post-takeover performance may be affected by fatigue. Ultimately, this review may help researchers improve and develop real-time mental-state monitoring systems for a wide range of application domains.


2021, Vol. 11 (13), pp. 6016
Author(s): Jinsoo Kim, Jeongho Cho

For autonomous vehicles, it is critical to be aware of the driving environment to avoid collisions and drive safely. The recent evolution of convolutional neural networks has contributed significantly to accelerating the development of object detection techniques that enable autonomous vehicles to handle rapid changes in various driving environments. However, collisions can still occur due to undetected obstacles and various perception problems, particularly occlusion. Thus, we propose a robust object detection algorithm for environments in which objects are truncated or occluded, employing RGB images and light detection and ranging (LiDAR) bird's-eye-view (BEV) representations. The method combines independent detection results obtained in parallel from "you only look once" networks applied to an RGB image and to a height map converted from the BEV representation of the LiDAR point cloud data (PCD). The region proposal for an object is determined via non-maximum suppression, which suppresses the bounding boxes of adjacent regions. A performance evaluation of the proposed scheme was carried out on the KITTI vision benchmark suite. The results demonstrate that detection accuracy with the integration of PCD BEV representations is superior to that achieved with an RGB camera alone. In addition, robustness is improved: detection accuracy is significantly enhanced even when target objects are partially occluded from the front view, demonstrating that the proposed algorithm outperforms the conventional RGB-based model.
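The suppression step is standard: boxes from both detector branches are pooled and reduced with non-maximum suppression, keeping the highest-scoring box among overlapping candidates. The sketch below is a generic version of that step, not the authors' exact implementation; the thresholds and example boxes are illustrative.

```python
# Generic NMS over pooled detections from an RGB branch and a LiDAR-BEV branch.
import numpy as np

def iou(a, b):
    """Intersection-over-union of two [x1, y1, x2, y2] boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def nms(boxes, scores, iou_thresh=0.5):
    """Keep the highest-scoring box among overlapping candidates."""
    order = np.argsort(scores)[::-1]
    keep = []
    while len(order) > 0:
        best = order[0]
        keep.append(best)
        order = order[1:][[iou(boxes[best], boxes[i]) < iou_thresh for i in order[1:]]]
    return keep

rgb_boxes = [[100, 80, 180, 160]]                           # from the RGB detector
bev_boxes = [[105, 85, 185, 165], [300, 200, 360, 260]]     # from the BEV detector
boxes = np.array(rgb_boxes + bev_boxes, dtype=float)
scores = np.array([0.90, 0.85, 0.70])
print(nms(boxes, scores))  # -> [0, 2]: the duplicate of box 0 is suppressed
```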

