Single-Frame Vulnerable Road Users Classification with a 77 GHz FMCW Radar Sensor and a Convolutional Neural Network

Author(s):  
Rodrigo Perez ◽  
Falk Schubert ◽  
Ralph Rasshofer ◽  
Erwin Biebl
2020 ◽  
Vol 12 (21) ◽  
pp. 3508
Author(s):  
Mohammed Elhenawy ◽  
Huthaifa I. Ashqar ◽  
Mahmoud Masoud ◽  
Mohammed H. Almannaa ◽  
Andry Rakotonirainy ◽  
...  

As the Autonomous Vehicle (AV) industry rapidly advances, the classification of non-motorized (vulnerable) road users (VRUs) becomes essential to ensure their safety and the smooth operation of road applications. Typical approaches to non-motorized road user classification usually require significant training time and ignore the temporal evolution and behavior of the signal. In this research effort, we attempt to detect VRUs with high accuracy by proposing a novel framework that uses Deep Transfer Learning, which saves training time and cost, to classify images constructed from Recurrence Quantification Analysis (RQA) that reflect the temporal dynamics and behavior of the signal. Recurrence Plots (RPs) were constructed from low-power smartphone sensors without using GPS data. The resulting RPs were used as inputs to different pre-trained Convolutional Neural Network (CNN) classifiers: 227 × 227 images for AlexNet and SqueezeNet, and 224 × 224 images for VGG16 and VGG19. Results show that the classification accuracy of Convolutional Neural Network Transfer Learning (CNN-TL) reaches 98.70%, 98.62%, 98.71%, and 98.71% for AlexNet, SqueezeNet, VGG16, and VGG19, respectively. Moreover, we trained resnet101 and shufflenet for a very short time, using a single epoch of data, and then used them as weak learners, which yielded 98.49% classification accuracy. To the best of our knowledge, the results of the proposed framework outperform other results in the literature and show that CNN-TL is promising for VRU classification. Because of its relative straightforwardness, generalizability, transferability, and potentially high accuracy, we anticipate that this framework could be applied to various signal-classification problems.
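To make the recurrence-plot step concrete, the sketch below builds a binary RP from a one-dimensional smartphone-sensor signal via time-delay embedding and rescales it to the 227 × 227 input size used for AlexNet. The embedding dimension, delay, threshold heuristic, and the synthetic accelerometer signal are illustrative assumptions, not values reported by the authors.

```python
import numpy as np
from PIL import Image

def embed(signal, dim=3, tau=4):
    """Time-delay embedding of a 1-D signal into dim-dimensional state vectors."""
    n = len(signal) - (dim - 1) * tau
    return np.stack([signal[i * tau : i * tau + n] for i in range(dim)], axis=1)

def recurrence_plot(signal, dim=3, tau=4, eps=None):
    """Binary recurrence plot: R[i, j] = 1 when embedded states i and j lie within eps."""
    states = embed(np.asarray(signal, dtype=float), dim, tau)
    # Pairwise Euclidean distances between embedded states.
    dists = np.linalg.norm(states[:, None, :] - states[None, :, :], axis=-1)
    if eps is None:
        eps = 0.1 * dists.max()  # heuristic threshold (assumption)
    return (dists <= eps).astype(np.uint8)

# Synthetic stand-in for a smartphone accelerometer magnitude stream.
t = np.linspace(0, 10, 500)
accel = np.sin(2 * np.pi * 1.5 * t) + 0.1 * np.random.randn(t.size)

rp = recurrence_plot(accel)
# Scale to the input size expected by the pre-trained network (227 x 227 for AlexNet).
img = Image.fromarray((rp * 255).astype(np.uint8)).resize((227, 227), Image.NEAREST)
img.save("rp_sample.png")
```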


2019 ◽  
Vol 17 ◽  
pp. 129-136 ◽  
Author(s):  
Rodrigo Pérez ◽  
Falk Schubert ◽  
Ralph Rasshofer ◽  
Erwin Biebl

Abstract. This work presents an approach to classify road users as pedestrians, cyclists, or cars using a lidar sensor and a radar sensor. The lidar is used to detect moving road users in the surroundings of the car. A two-dimensional range-Doppler window of the radar power spectrum, a so-called region of interest, centered at the object's position is cut out and fed into a convolutional neural network for classification. With this approach it is possible to classify multiple moving objects within a single radar measurement frame. The convolutional neural network is trained on data gathered with a test vehicle in real urban scenarios. An overall classification accuracy as high as 0.91 is achieved with this approach. The accuracy improves to 0.94 after applying a discrete Bayes filter on top of the classifier.
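The temporal smoothing step lends itself to a compact illustration. The snippet below applies a discrete Bayes filter to per-frame CNN class probabilities for one tracked object over the pedestrian/cyclist/car classes named in the abstract; the transition probabilities and the example softmax outputs are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Classes follow the paper's setup: pedestrian, cyclist, car.
CLASSES = ["pedestrian", "cyclist", "car"]

# Transition model: an object keeps its class with high probability between
# frames; the exact values here are illustrative assumptions.
STAY = 0.95
T = np.full((3, 3), (1.0 - STAY) / 2)
np.fill_diagonal(T, STAY)

def bayes_update(belief, likelihood, transition=T):
    """One discrete Bayes filter step: predict with the transition model,
    then weight by the CNN's per-frame class probabilities and renormalize."""
    predicted = transition.T @ belief
    posterior = predicted * likelihood
    return posterior / posterior.sum()

# Per-frame softmax outputs from the CNN for one tracked object (made-up values).
frames = [
    np.array([0.70, 0.20, 0.10]),
    np.array([0.40, 0.45, 0.15]),  # a noisy frame
    np.array([0.75, 0.15, 0.10]),
]

belief = np.full(3, 1.0 / 3.0)  # uniform prior
for p in frames:
    belief = bayes_update(belief, p)

print(dict(zip(CLASSES, belief.round(3))))
```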


Author(s):  
Mohammed Elhenawy ◽  
Huthaifa Ashqar ◽  
Mahmoud Masoud ◽  
Mohammed Almannaa ◽  
Andry Rakotonirainy ◽  
...  

As the Autonomous Vehicle (AV) industry rapidly advances, the classification of non-motorized (vulnerable) road users (VRUs) becomes essential to ensure their safety and the smooth operation of road applications. Typical approaches to non-motorized road user classification usually require significant training time and ignore the temporal evolution and behavior of the signal. In this research effort, we attempt to detect VRUs with high accuracy by proposing a novel framework that uses Deep Transfer Learning, which saves training time and cost, to classify images constructed from Recurrence Quantification Analysis (RQA) that reflect the temporal dynamics and behavior of the signal. Recurrence Plots (RPs) were constructed from low-power smartphone sensors without using GPS data. The resulting RPs were used as inputs to different pre-trained Convolutional Neural Network (CNN) classifiers: 227 × 227 images for AlexNet and SqueezeNet, and 224 × 224 images for VGG16 and VGG19. Results show that the classification accuracy of Convolutional Neural Network Transfer Learning (CNN-TL) reaches 98.70%, 98.62%, 98.71%, and 98.71% for AlexNet, SqueezeNet, VGG16, and VGG19, respectively. To the best of our knowledge, the results of the proposed framework outperform other results in the literature and show that CNN-TL is promising for VRU classification. Because of its relative straightforwardness, generalizability, transferability, and potentially high accuracy, we anticipate that this framework could be applied to various signal-classification problems.
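As a sketch of the transfer-learning step, the code below loads an ImageNet-pretrained VGG16 in PyTorch, freezes its convolutional features, and replaces the final fully connected layer with a head sized for the VRU classes. The framework (torchvision), the number of classes, and the learning rate are assumptions for illustration, since the abstract does not state them; grayscale RPs would also need to be replicated to three channels to match VGG's expected input.

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 4  # hypothetical number of VRU transport modes

# Load an ImageNet-pretrained backbone and keep its convolutional features.
model = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)

# Freeze the feature extractor so only the new head is trained (fast and cheap).
for p in model.features.parameters():
    p.requires_grad = False

# Replace the final fully connected layer with one sized for the VRU classes.
model.classifier[6] = nn.Linear(model.classifier[6].in_features, NUM_CLASSES)

optimizer = torch.optim.Adam(
    [p for p in model.parameters() if p.requires_grad], lr=1e-4
)
criterion = nn.CrossEntropyLoss()

def train_step(images, labels):
    """images: (N, 3, 224, 224) recurrence-plot tensors; labels: (N,) class ids."""
    model.train()
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```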


2021 ◽  
Author(s):  
Malte Oeljeklaus

This thesis investigates methods for traffic scene perception with monocular cameras for a basic environment model in the context of automated vehicles. The developed approach is designed with special attention to the computational limitations present in practical systems. For this purpose, three different scene representations are investigated: the prevalent road topology as the global scene context, the drivable road area, and the detection and spatial reconstruction of other road users. An approach is developed that allows for the simultaneous perception of all environment representations based on a multi-task convolutional neural network. The obtained results demonstrate the efficiency of the multi-task approach. In particular, sharing image features across the individual scene representations was found to improve computational performance.
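A minimal sketch of the shared-feature idea, assuming a PyTorch implementation: one convolutional backbone is evaluated once per image, and three lightweight heads reuse its features for scene-context classification, drivable-area estimation, and dense object detection. The encoder choice, head designs, and output sizes are placeholders for illustration, not the architecture developed in the thesis.

```python
import torch
import torch.nn as nn
from torchvision import models

class MultiTaskPerception(nn.Module):
    """Shared convolutional backbone with three simplified task heads."""

    def __init__(self, num_topologies=4, num_anchors=9):
        super().__init__()
        backbone = models.resnet18(weights=None)
        # Drop the average-pooling and classification layers; keep the feature map.
        self.encoder = nn.Sequential(*list(backbone.children())[:-2])  # (N, 512, H/32, W/32)

        self.topology_head = nn.Sequential(          # global scene context (road topology)
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(512, num_topologies)
        )
        self.road_head = nn.Conv2d(512, 1, kernel_size=1)           # per-cell drivable-area logit
        self.detection_head = nn.Conv2d(512, num_anchors * 5, 1)    # per-anchor box + objectness

    def forward(self, x):
        feats = self.encoder(x)  # computed once, shared by all heads
        return {
            "topology": self.topology_head(feats),
            "drivable": self.road_head(feats),
            "objects": self.detection_head(feats),
        }

model = MultiTaskPerception()
out = model(torch.randn(1, 3, 256, 512))
print({k: tuple(v.shape) for k, v in out.items()})
```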


Electronics ◽  
2020 ◽  
Vol 9 (4) ◽  
pp. 573 ◽  
Author(s):  
Onur Toker ◽  
Suleiman Alsweiss

In this paper, we propose a novel 77 GHz automotive radar sensor and demonstrate its cyberattack resilience using real measurements. The proposed system is built upon a standard Frequency Modulated Continuous Wave (FMCW) radar RF front end, and the novelty is in the DSP algorithm used at the firmware level. All attack scenarios are based on real radar signals generated by Texas Instruments AWR series 77 GHz radars, and all measurements are done using the same radar family. For sensor networks, including interconnected autonomous vehicles sharing radar measurements, cyberattacks at the network/communication layer are a known critical problem and have been addressed by several different researchers. This paper addresses cyberattacks at the physical layer, that is, adversarial agents generating 77 GHz electromagnetic waves which may cause a false target detection, a false distance/velocity estimate, or the failure to detect an existing target. The main algorithm proposed in this paper is not a predictive-filtering-based cyberattack detection scheme in which an “unusual” difference between measured and predicted values triggers an alarm. The core idea is based on a kind of physical challenge-response authentication and its integration into the radar DSP firmware.
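The abstract does not detail the firmware algorithm, but the general challenge-response idea can be illustrated with a toy consistency check: if the radar randomizes its chirp slope per frame (the challenge), a genuine reflection yields the same range estimate under every slope, whereas a spoofed tone synthesized for a guessed, fixed slope does not. The check below, its tolerance, and all numerical values are illustrative assumptions, not the authors' scheme.

```python
import numpy as np

C = 3e8  # speed of light, m/s

def beat_frequency(target_range_m, slope_hz_per_s):
    """Beat frequency of a genuine FMCW reflection (Doppler ignored): f_b = 2*S*R/c."""
    return 2.0 * slope_hz_per_s * target_range_m / C

def consistent_range(f_beats, slopes, tol_m=0.5):
    """Challenge-response check: range estimates recovered under different,
    randomly chosen chirp slopes must agree for a genuine reflection."""
    ranges = np.array([f * C / (2.0 * s) for f, s in zip(f_beats, slopes)])
    return np.ptp(ranges) < tol_m, ranges

rng = np.random.default_rng(0)
slopes = rng.uniform(20e12, 40e12, size=3)  # randomized chirp slopes in Hz/s (the challenge)

# Genuine target at 30 m: the echo's beat frequency follows the challenge.
genuine = [beat_frequency(30.0, s) for s in slopes]

# Naive spoofer replays a tone computed for a fixed, guessed slope of 30 MHz/us.
spoof = [beat_frequency(30.0, 30e12) for _ in slopes]

print(consistent_range(genuine, slopes))  # (True, ~30 m for every chirp)
print(consistent_range(spoof, slopes))    # (False, range estimates disagree)
```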


IEEE Access ◽  
2020 ◽  
Vol 8 ◽  
pp. 141648-141656
Author(s):  
Heonkyo Sim ◽  
The-Duong Do ◽  
Seongwook Lee ◽  
Yong-Hwa Kim ◽  
Seong-Cheol Kim
