Constraint-Based Hierarchical Cluster Selection in Automotive Radar Data

Sensors ◽  
2021 ◽  
Vol 21 (10) ◽  
pp. 3410
Author(s):  
Claudia Malzer ◽  
Marcus Baum

High-resolution automotive radar sensors play an increasing role in detection, classification and tracking of moving objects in traffic scenes. Clustering is frequently used to group detection points in this context. However, this is a particularly challenging task due to variations in number and density of available data points across different scans. Modified versions of the density-based clustering method DBSCAN have mostly been used so far, while hierarchical approaches are rarely considered. In this article, we explore the applicability of HDBSCAN, a hierarchical DBSCAN variant, for clustering radar measurements. To improve results achieved by its unsupervised version, we propose the use of cluster-level constraints based on aggregated background information from cluster candidates. Further, we propose the application of a distance threshold to avoid selection of small clusters at low hierarchy levels. Based on exemplary traffic scenes from nuScenes, a publicly available autonomous driving data set, we test our constraint-based approach along with other methods, including label-based semi-supervised HDBSCAN. Our experiments demonstrate that cluster-level constraints help to adjust HDBSCAN to the given application context and can therefore achieve considerably better results than the unsupervised method. However, the approach requires carefully selected constraint criteria that can be difficult to choose in constantly changing environments.
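As a rough illustration of the clustering step described above (not the authors' implementation), the sketch below applies the Python hdbscan library to a handful of hypothetical 2-D radar detection points. The cluster_selection_epsilon parameter plays the role of a distance threshold that prevents small clusters from being selected deep in the hierarchy; the paper's cluster-level constraints have no direct counterpart in the library's API and are omitted here.

```python
# Minimal sketch: HDBSCAN on hypothetical 2-D radar detections (x, y in metres).
import numpy as np
import hdbscan

points = np.array([
    [1.0, 2.1], [1.1, 2.0], [0.9, 2.2],   # cluster candidate A
    [8.0, 3.0], [8.2, 3.1], [7.9, 2.9],   # cluster candidate B
    [15.0, 0.5],                          # isolated detection (likely noise)
])

clusterer = hdbscan.HDBSCAN(
    min_cluster_size=3,             # smallest group accepted as a cluster
    cluster_selection_epsilon=0.5,  # distance threshold against tiny low-level clusters
)
labels = clusterer.fit_predict(points)  # -1 marks noise
print(labels)
```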

2017 ◽  
Vol 14 (1) ◽  
pp. 172988141668713 ◽  
Author(s):  
Seongjo Lee ◽  
Seoungjae Cho ◽  
Sungdae Sim ◽  
Kiho Kwak ◽  
Yong Woon Park ◽  
...  

Obstacle avoidance and available road identification technologies have been investigated for autonomous driving of an unmanned vehicle. In order to apply research results to autonomous driving in real environments, it is necessary to consider moving objects. This article proposes a preprocessing method to identify the dynamic zones where moving objects exist around an unmanned vehicle. This method accumulates three-dimensional points from a light detection and ranging sensor mounted on an unmanned vehicle in voxel space. Next, features are identified from the cumulative data at high speed, and zones with significant feature changes are estimated as zones where dynamic objects exist. The approach proposed in this article can identify dynamic zones even for a moving vehicle and processes data quickly using several features based on the geometry, height map, and distribution of the three-dimensional space data. The experiments evaluating the performance of the proposed approach were conducted using ground-truth data on both a simulated and a real-environment data set.
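A minimal sketch of the voxel-accumulation idea, under assumed grid parameters and a hypothetical change threshold (the authors' features and speed optimizations are not reproduced here):

```python
# Sketch: accumulate LiDAR points into a coarse voxel grid per scan and flag
# voxels whose occupancy changes strongly between scans as candidate dynamic zones.
import numpy as np

VOXEL_SIZE = 0.5              # metres; assumed grid resolution
GRID_SHAPE = (200, 200, 20)   # x, y, z cells around the vehicle

def voxelize(points: np.ndarray) -> np.ndarray:
    """Count points per voxel for one scan (points: Nx3, vehicle-centred)."""
    counts = np.zeros(GRID_SHAPE, dtype=np.int32)
    idx = np.floor(points / VOXEL_SIZE).astype(int) + np.array(GRID_SHAPE) // 2
    valid = np.all((idx >= 0) & (idx < GRID_SHAPE), axis=1)
    np.add.at(counts, tuple(idx[valid].T), 1)
    return counts

def dynamic_zones(prev_counts: np.ndarray, curr_counts: np.ndarray, change_thresh: int = 5):
    """Indices of voxels whose point count changed by more than change_thresh."""
    return np.argwhere(np.abs(curr_counts - prev_counts) > change_thresh)
```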


Author(s):  
Nicolae-Catalin Ristea ◽  
Andrei Anghel ◽  
Radu Tudor Ionescu

The interest of the automotive industry has progressively focused on subjects related to driver assistance systems as well as autonomous cars. In order to achieve remarkable results, cars combine a variety of sensors to perceive their surroundings robustly. Among them, radar sensors are indispensable because of their independence from lighting conditions and their ability to directly measure velocity. However, radar interference is an issue that becomes prevalent with the increasing number of radar systems in automotive scenarios. In this paper, we address this issue for frequency modulated continuous wave (FMCW) radars with fully convolutional neural networks (FCNs), a state-of-the-art deep learning technique. We propose two FCN architectures for interference mitigation that take spectrograms of the beat signals as input and provide the corresponding clean range profiles as output, outperforming the classical zeroing technique. Moreover, considering the lack of databases for this task, we release as open source a large-scale data set that closely replicates real-world automotive scenarios for single-interference cases, allowing others to objectively compare their future work in this domain. The data set is available for download at: http://github.com/ristea/arim.
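The following PyTorch snippet is only a toy illustration of the spectrogram-in, range-profile-out setup described above; the two FCN architectures released by the authors differ and can be found via the linked repository.

```python
# Toy fully convolutional network: maps a 2-channel spectrogram (e.g. real and
# imaginary parts of the beat signal) to a single-channel clean magnitude map.
import torch
import torch.nn as nn

class TinyFCN(nn.Module):
    def __init__(self, in_channels: int = 2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, kernel_size=3, padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

model = TinyFCN()
spectrogram = torch.randn(8, 2, 128, 64)  # batch of hypothetical spectrograms
clean = model(spectrogram)                # same spatial size, one output channel
```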


2012 ◽  
Vol 10 ◽  
pp. 45-55 ◽  
Author(s):  
A. Bartsch ◽  
F. Fitzek ◽  
R. H. Rasshofer

The application of modern series-production automotive radar sensors to pedestrian recognition is an important topic in research on future driver assistance systems. The aim of this paper is to understand the potential and limits of such sensors in pedestrian recognition. This knowledge could be used to develop next-generation radar sensors with improved pedestrian recognition capabilities. A new raw radar data signal processing algorithm is proposed that allows deep insights into the object classification process. The impact of raw radar data properties can be directly observed in every layer of the classification system by avoiding machine learning and tracking. This gives information on the limiting factors of raw radar data in terms of classification decision making. To accomplish the very challenging distinction between pedestrians and static objects, five significant and stable object features from the spatial distribution and Doppler information are found. Experimental results with data from a 77 GHz automotive radar sensor show that over 95% of pedestrians can be classified correctly under optimal conditions, which is comparable to modern machine learning systems. The impact of the pedestrian's direction of movement, occlusion, antenna beam elevation angle, linear vehicle movement, and other factors is investigated and discussed. The results show that under real-life conditions, radar-only pedestrian recognition is limited due to insufficient Doppler frequency and spatial resolution as well as antenna side lobe effects.
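Purely for illustration, the snippet below computes two hand-crafted features of the kind mentioned above (spatial extent and Doppler spread) and applies an assumed threshold rule; the paper's five features and its decision logic are not reproduced here.

```python
# Illustrative hand-crafted features for one group of radar reflections.
import numpy as np

def classify_detection_group(xy: np.ndarray, doppler: np.ndarray) -> str:
    """xy: Nx2 positions of one reflection group; doppler: N radial velocities."""
    spatial_extent = np.linalg.norm(xy.max(axis=0) - xy.min(axis=0))
    doppler_spread = doppler.max() - doppler.min()
    # A walking pedestrian typically shows a small spatial extent but a noticeable
    # micro-Doppler spread from limb motion (thresholds below are assumptions).
    if spatial_extent < 1.0 and doppler_spread > 0.5:
        return "pedestrian"
    return "static object"
```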


Sensors ◽  
2020 ◽  
Vol 20 (16) ◽  
pp. 4463
Author(s):  
Christoph Weber ◽  
Johannes von Eichel-Streiber ◽  
Jesús Rodrigo-Comino ◽  
Jens Altenburg ◽  
Thomas Udelhoven

The use of unmanned aerial vehicles (UAVs) in earth science research has increased drastically during the last decade. The reason is their numerous advantages for detecting and monitoring environmental processes before and after events such as rain, wind, or floods, or for assessing the current status of specific landforms such as gullies, rills, or ravines. The sensors carried by the UAV are key to its success. Besides commonly used sensors such as cameras, radar sensors are another possibility. They are less known for this application but already well established in research. A vast number of research projects use professional radars, but these are expensive and difficult to handle. Therefore, the use of low-cost radar sensors is becoming more relevant. In this article, to make the usage of radar simpler and more efficient, we developed a recording system based on automotive radar technology. We introduce basic radar techniques and present two radar sensors with their specifications. To record the radar data, we developed a system with an integrated camera and sensors. The weight of the whole system is about 315 g with the small radar and 450 g with the large one. The whole system was integrated into a UAV and test flights were performed. Afterwards, several flights were carried out to verify the system with both radar sensors, and the recordings provide an insight into the radar data. We demonstrate that the recording system works and that the radar sensors are suitable for use on a UAV and for future earth science research because of their autonomy, precision, and low weight.


2021 ◽  
Vol 15 ◽  
Author(s):  
Javier López-Randulfe ◽  
Tobias Duswald ◽  
Zhenshan Bing ◽  
Alois Knoll

The development of advanced autonomous driving applications is hindered by the complex temporal structure of sensory data, as well as by the limited computational and energy resources of their on-board systems. Currently, neuromorphic engineering is a rapidly growing field that aims to design information processing systems similar to the human brain by leveraging novel algorithms based on spiking neural networks (SNNs). These systems are well-suited to recognizing temporal patterns in data while maintaining low energy consumption and offering highly parallel architectures for fast computation. However, the lack of effective algorithms for SNNs impedes their wide usage in mobile robot applications. This paper addresses the problem of radar signal processing by introducing a novel SNN that replaces the discrete Fourier transform and the constant false-alarm rate algorithm for raw radar data, where the weights and architecture of the SNN are derived from the original algorithms. We demonstrate that our proposed SNN can achieve results competitive with the original algorithms in simulated driving scenarios while retaining its spike-based nature.
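For reference, here is a compact NumPy sketch of the conventional pipeline the SNN is designed to replace: a range FFT over the beat signal followed by cell-averaging CFAR. Parameter values are placeholders, not those used in the paper.

```python
# Conventional range FFT + cell-averaging CFAR on one chirp's beat signal.
import numpy as np

def range_profile(beat_signal: np.ndarray) -> np.ndarray:
    """Magnitude of the discrete Fourier transform of the beat signal."""
    return np.abs(np.fft.fft(beat_signal))

def ca_cfar(profile: np.ndarray, guard: int = 2, train: int = 8, scale: float = 3.0):
    """Flag range cells that exceed the scaled local noise estimate."""
    detections = []
    for i in range(train + guard, len(profile) - train - guard):
        left = profile[i - guard - train : i - guard]
        right = profile[i + guard + 1 : i + guard + 1 + train]
        noise = np.mean(np.concatenate([left, right]))
        if profile[i] > scale * noise:
            detections.append(i)
    return detections
```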


2021 ◽  
Vol 3 (1) ◽  
Author(s):  
Nicolas Scheiner ◽  
Florian Kraus ◽  
Nils Appenrodt ◽  
Jürgen Dickmann ◽  
Bernhard Sick

Automotive radar perception is an integral part of automated driving systems. Radar sensors benefit from their excellent robustness against adverse weather conditions such as snow, fog, or heavy rain. Despite the fact that machine-learning-based object detection is traditionally a camera-based domain, vast progress has been made for lidar sensors, and radar is also catching up. Recently, several new techniques for using machine learning algorithms towards the correct detection and classification of moving road users in automotive radar data have been introduced. However, most of them have not been compared to other methods or require next generation radar sensors which are far more advanced than current conventional automotive sensors. This article makes a thorough comparison of existing and novel radar object detection algorithms with some of the most successful candidates from the image and lidar domain. All experiments are conducted using a conventional automotive radar system. In addition to introducing all architectures, special attention is paid to the necessary point cloud preprocessing for all methods. By assessing all methods on a large and open real-world data set, this evaluation provides the first representative algorithm comparison in this domain and outlines future research directions.
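As an example of the kind of point cloud preprocessing the article discusses, the sketch below accumulates several radar scans into a single fixed-size point cloud with per-point features; the exact preprocessing applied to each detector in the paper is described there and may differ.

```python
# Sketch: merge several radar scans and pad or subsample to a fixed size.
import numpy as np

def accumulate_scans(scans, max_points: int = 512) -> np.ndarray:
    """scans: list of Nx4 arrays with columns (x, y, radial_velocity, rcs)."""
    cloud = np.concatenate(scans, axis=0)
    if len(cloud) > max_points:
        # Randomly subsample to the fixed size expected by the detector.
        keep = np.random.choice(len(cloud), max_points, replace=False)
        cloud = cloud[keep]
    else:
        # Zero-pad so every sample has the same shape.
        pad = np.zeros((max_points - len(cloud), cloud.shape[1]))
        cloud = np.concatenate([cloud, pad], axis=0)
    return cloud
```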


Author(s):  
Alicja Ossowska ◽  
Leen Sit ◽  
Sarath Manchala ◽  
Thomas Vogler ◽  
Kevin Krupinski ◽  
...  

2019 ◽  
Vol 12 (2) ◽  
pp. 120-127 ◽  
Author(s):  
Wael Farag

Background: In this paper, a Convolutional Neural Network (CNN) that learns safe driving behavior and smooth steering manoeuvring is proposed in support of autonomous driving technologies. The training data is collected from a front-facing camera and the steering commands issued by an experienced driver driving in traffic as well as on urban roads. Methods: This data is then used to train the proposed CNN to facilitate what is called "Behavioral Cloning". The proposed Behavior Cloning CNN is named "BCNet", and its deep seventeen-layer architecture was selected after extensive trials. BCNet is trained using the Adam optimization algorithm, a variant of the Stochastic Gradient Descent (SGD) technique. Results: The paper goes through the development and training process in detail and shows the image processing pipeline harnessed in the development. Conclusion: The proposed approach proved successful in cloning the driving behavior embedded in the training data set after extensive simulations.
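A minimal behavioural-cloning sketch in PyTorch, assuming a small stand-in network rather than the seventeen-layer BCNet: a CNN maps a camera frame to a steering command and is trained with the Adam optimizer, as described in the abstract.

```python
# Behavioural cloning as a regression problem: image in, steering command out.
import torch
import torch.nn as nn

class SteeringCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 24, 5, stride=2), nn.ReLU(),
            nn.Conv2d(24, 36, 5, stride=2), nn.ReLU(),
            nn.Conv2d(36, 48, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Sequential(nn.Flatten(), nn.Linear(48, 1))

    def forward(self, x):
        return self.head(self.features(x))

model = SteeringCNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

images = torch.randn(4, 3, 66, 200)       # hypothetical camera frames
steering = torch.randn(4, 1)              # recorded steering commands
loss = loss_fn(model(images), steering)   # cloning loss against the driver's commands
optimizer.zero_grad()
loss.backward()
optimizer.step()
```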


Sensors ◽  
2021 ◽  
Vol 21 (9) ◽  
pp. 2894
Author(s):  
Minh-Quan Dao ◽  
Vincent Frémont

Multi-Object Tracking (MOT) is an integral part of any autonomous driving pipeline because it produces trajectories of other moving objects in the scene and predicts their future motion. Thanks to the recent advances in 3D object detection enabled by deep learning, track-by-detection has become the dominant paradigm in 3D MOT. In this paradigm, a MOT system is essentially made of an object detector and a data association algorithm which establishes track-to-detection correspondence. While 3D object detection has been actively researched, association algorithms for 3D MOT have settled on bipartite matching formulated as a Linear Assignment Problem (LAP) and solved by the Hungarian algorithm. In this paper, we adapt a two-stage data association method, previously applied successfully to image-based tracking, to the 3D setting, thus providing an alternative data association approach for 3D MOT. Our method outperforms the baseline using one-stage bipartite matching for data association by achieving 0.587 Average Multi-Object Tracking Accuracy (AMOTA) on the NuScenes validation set and 0.365 AMOTA (at level 2) on the Waymo test set.
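The one-stage baseline mentioned above can be sketched in a few lines with SciPy's Hungarian-algorithm solver; the paper's two-stage method adds a second matching round for leftover tracks and detections, which is not shown here. Names and the gating distance are illustrative.

```python
# One-stage bipartite matching between tracks and detections (LAP baseline).
import numpy as np
from scipy.optimize import linear_sum_assignment

def associate(track_centers: np.ndarray, det_centers: np.ndarray, gate: float = 2.0):
    """track_centers: Tx3, det_centers: Dx3 object centres in metres."""
    # Pairwise Euclidean distance as the assignment cost.
    cost = np.linalg.norm(track_centers[:, None, :] - det_centers[None, :, :], axis=2)
    rows, cols = linear_sum_assignment(cost)  # Hungarian algorithm
    # Keep only matches within the gating distance.
    return [(r, c) for r, c in zip(rows, cols) if cost[r, c] < gate]
```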

