Machine Learning and Deep Learning Techniques for Colocated MIMO Radars: A Tutorial Overview

2021 ◽
Author(s):
Alessandro Davoli ◽
Giorgio Guerzoni ◽  
Giorgio Matteo Vitetta

Radars are expected to become the main sensors in various civilian applications, ranging from healthcare monitoring to autonomous driving. Their success is mainly due to the availability of low-cost integrated devices equipped with compact antenna arrays, together with computationally efficient signal processing techniques. Machine learning and deep learning techniques play an increasingly important role in radar signal processing. Their use was first considered for human gesture and motion recognition and for various healthcare applications; more recently, their exploitation in object detection and localization has also been investigated. The research accomplished in these areas has raised various technical problems that need to be carefully addressed before the above-mentioned techniques can be adopted in real-world radar systems. This manuscript provides a comprehensive overview of the machine learning and deep learning techniques currently being considered for use in radar systems. Moreover, some relevant open problems and current trends in this research area are analysed. Finally, various numerical results, based on both synthetically generated and experimental datasets and referring to two different applications, are illustrated; these allow readers to assess the efficacy of specific methods and to compare them in terms of accuracy and computational effort.
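As a concrete illustration of the kind of deep learning pipeline such overviews cover, the following minimal sketch (not taken from the paper; the layer sizes and the synthetic data generator are assumptions made for demonstration) trains a small PyTorch CNN to decide whether a range-Doppler map contains a target:

```python
import torch
import torch.nn as nn

def synthetic_rd_map(has_target: bool, size: int = 32) -> torch.Tensor:
    """Noise-only range-Doppler map, optionally with a blob standing in for a point target."""
    rd = torch.randn(1, size, size)
    if has_target:
        r, d = torch.randint(4, size - 4, (2,)).tolist()
        rd[0, r - 2:r + 3, d - 2:d + 3] += 4.0  # crude target response above the noise floor
    return rd

class RDClassifier(nn.Module):
    """Small CNN deciding 'target present' vs. 'noise only'."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(), nn.Linear(16 * 8 * 8, 2),
        )

    def forward(self, x):
        return self.net(x)

model = RDClassifier()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
for step in range(200):  # short training loop on freshly drawn synthetic maps
    labels = torch.randint(0, 2, (16,))
    batch = torch.stack([synthetic_rd_map(bool(l)) for l in labels])
    loss = loss_fn(model(batch), labels)
    opt.zero_grad(); loss.backward(); opt.step()
```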



2020 ◽  
Vol 9 (2) ◽  
pp. 1049-1054

In this paper, we predict flight delays using different machine learning and deep learning techniques. Such a model makes it easier to anticipate whether a flight will be delayed or not. Features such as 'WeatherDelay', 'NASDelay', 'Destination' and 'Origin' play a vital role in this model. Using machine learning algorithms such as Random Forest, Support Vector Machine (SVM) and K-Nearest Neighbors (KNN), the F1-score, precision, recall, support and accuracy have been computed. In addition, a Long Short-Term Memory (LSTM) RNN architecture has been employed. The dataset used is from the Bureau of Transportation Statistics (BTS) for Pittsburgh. The results obtained with the above-mentioned algorithms have been compared. Furthermore, the results were visualized for various airlines to find the maximum delay, and the AUC-ROC curve was plotted for the Random Forest algorithm. The aim of our research is to predict delays so as to minimize losses and increase customer satisfaction.
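To make this pipeline concrete, the following minimal sketch (an illustration, not the authors' code) trains a scikit-learn Random Forest on synthetic data with the named feature columns and reports the same metrics; the synthetic data and the labeling rule are assumptions standing in for the real BTS records:

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report, roc_auc_score

rng = np.random.default_rng(0)
n = 5000
df = pd.DataFrame({
    "WeatherDelay": rng.exponential(5, n),
    "NASDelay": rng.exponential(5, n),
    "Origin": rng.integers(0, 20, n),       # label-encoded airport codes
    "Destination": rng.integers(0, 20, n),
})
# Hypothetical rule standing in for the real delay labels:
y = (df["WeatherDelay"] + df["NASDelay"] > 12).astype(int)

X_train, X_test, y_train, y_test = train_test_split(df, y, test_size=0.2, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))  # precision/recall/F1/support
print("AUC:", roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1]))
```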


Change detection determines whether changes have occurred between two different time periods using remote sensing images. Various machine learning and deep learning techniques can be used for change detection analysis on remote sensing images. This paper focuses on the computational and performance analysis of both families of techniques in the change detection application. For each approach, we considered ten different algorithms and evaluated their performance. Moreover, in this research work, we have analysed the merits and demerits of each method used for change detection.
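As a point of reference for the compared methods, the following minimal sketch shows the simplest classical baseline in this application, pixel-wise image differencing with a global threshold; it is an illustrative assumption, not one of the ten algorithms evaluated in the paper:

```python
import numpy as np

def difference_change_map(img_t1: np.ndarray, img_t2: np.ndarray, k: float = 2.0) -> np.ndarray:
    """Return a boolean change mask from two co-registered grayscale images."""
    diff = np.abs(img_t2.astype(float) - img_t1.astype(float))
    threshold = diff.mean() + k * diff.std()   # simple global threshold
    return diff > threshold

# Toy example: a bright square "appears" between the two acquisitions.
t1 = np.zeros((64, 64))
t2 = t1.copy()
t2[20:30, 20:30] = 1.0
mask = difference_change_map(t1, t2)
print("changed pixels:", int(mask.sum()))  # expected: about 100
```

Real remote-sensing pipelines add radiometric correction and smarter decision rules; the learned methods analysed in the paper replace the fixed differencing-and-threshold step with trained models.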


Author(s):  
V. I. Porsev ◽  
A. I. Gelesev ◽  
A. G. Krasko

We analysed existing publications concerning virtual antenna arrays and determined the limitations of using them in radar systems under prior uncertainty regarding the angular positions of signal sources. The paper shows that, in this case, employing a virtual antenna array can increase angular coordinate resolution at the signal-to-noise ratios typical of radar signal processing. We provide numerical signal simulation results that confirm our analytical calculations.
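The following minimal sketch illustrates the underlying idea of a MIMO virtual array, in which each virtual element lies at the vector sum of one transmit and one receive element position; the element spacings below are illustrative assumptions, not values from the paper:

```python
import numpy as np

# Positions in units of wavelength. With M TX and N RX elements, the
# virtual array has up to M*N elements at all pairwise position sums.
tx = np.array([0.0, 2.0, 4.0])        # 3 TX elements, 2-wavelength spacing
rx = np.array([0.0, 0.5, 1.0, 1.5])   # 4 RX elements, half-wavelength spacing

virtual = (tx[:, None] + rx[None, :]).ravel()   # all TX+RX position sums
print(np.sort(virtual))
# 12 distinct virtual positions spanning 0 .. 5.5 wavelengths: a uniform
# half-wavelength array far longer than either physical aperture, which
# is what improves the achievable angular resolution.
```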


Electronics ◽  
2022 ◽  
Vol 11 (1) ◽  
pp. 156
Author(s):  
Wen Jiang ◽  
Yihui Ren ◽  
Ying Liu ◽  
Jiaxu Leng

Radar target detection (RTD) is a fundamental and important process of the radar system, designed to differentiate targets from a complex background and to measure their parameters. Deep learning methods have recently gained great attention and have turned out to be feasible solutions in radar signal processing. Compared with conventional RTD methods, deep learning-based methods can extract features automatically and yield more accurate results. Applying deep learning to RTD is still a novel concept. In this paper, we review the applications of deep learning in the field of RTD and summarize their possible limitations. This work is timely given the increasing number of related studies published in recent years. We hope that this survey will provide guidelines for future studies and applications of deep learning in RTD and related areas of radar signal processing.
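For contrast with the learned detectors surveyed, the following minimal sketch implements the classical baseline they are usually compared against, a 1-D cell-averaging CFAR detector; the window sizes and false-alarm probability are illustrative choices, not values from the paper:

```python
import numpy as np

def ca_cfar(power: np.ndarray, guard: int = 2, train: int = 8, pfa: float = 1e-3) -> np.ndarray:
    """Boolean detections over a 1-D power profile (range cells)."""
    n_train = 2 * train
    alpha = n_train * (pfa ** (-1.0 / n_train) - 1.0)  # CA-CFAR threshold factor
    det = np.zeros_like(power, dtype=bool)
    for i in range(guard + train, len(power) - guard - train):
        leading = power[i - guard - train:i - guard]          # training cells before
        lagging = power[i + guard + 1:i + guard + train + 1]  # training cells after
        noise = (leading.sum() + lagging.sum()) / n_train     # local noise estimate
        det[i] = power[i] > alpha * noise
    return det

# Toy test: exponential noise with one strong target at cell 50.
rng = np.random.default_rng(1)
p = rng.exponential(1.0, 100)
p[50] += 30.0
print(np.nonzero(ca_cfar(p))[0])   # expected to include cell 50
```

Unlike a deep detector, this rule has a fixed, hand-designed decision statistic; the surveyed methods learn the statistic from data instead.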


2017 ◽  
Author(s):  
Sujeet Patole ◽  
Murat Torlak ◽  
Dan Wang ◽  
Murtaza Ali

Automotive radars, along with other sensors such as lidar (light detection and ranging), ultrasound, and cameras, form the backbone of self-driving cars and advanced driver assistance systems (ADASs). These technological advancements are enabled by extremely complex systems with a long signal processing path from radars/sensors to the controller. Automotive radar systems are responsible for detecting objects and obstacles and for estimating their position and speed relative to the vehicle. The development of signal processing techniques, along with progress in millimeter-wave (mm-wave) semiconductor technology, plays a key role in automotive radar systems. Various signal processing techniques have been developed to provide better resolution and estimation performance in all measurement dimensions: range, azimuth-elevation angles, and velocity of the targets surrounding the vehicles. This article summarizes various aspects of automotive radar signal processing techniques, including waveform design, possible radar architectures, estimation algorithms, the implementation complexity-resolution trade-off, and adaptive processing for complex environments, as well as problems unique to automotive radars, such as pedestrian detection. We believe that this review article will bring together the several contributions scattered across the literature, serving as a primary starting point for new researchers and giving a bird's-eye view to the existing research community.
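The following minimal sketch illustrates the core range-velocity estimation step such articles describe for FMCW automotive radar, a 2-D FFT over fast time (range) and slow time (Doppler); all waveform parameters and the single-target simulation are illustrative assumptions:

```python
import numpy as np

c = 3e8                                 # speed of light [m/s]
fs, n_samples, n_chirps = 10e6, 256, 64 # ADC rate, samples/chirp, chirps/frame
slope = 30e12                           # chirp slope [Hz/s]
prf = 1 / 100e-6                        # chirp repetition frequency [Hz]
fc = 77e9                               # carrier frequency [Hz]

# Simulated deramped signal of one target at 40 m moving at 5 m/s.
R, v = 40.0, 5.0
t_fast = np.arange(n_samples) / fs
t_slow = np.arange(n_chirps) / prf
f_beat = 2 * slope * R / c              # beat frequency encodes range
f_dopp = 2 * v * fc / c                 # Doppler shift encodes velocity
cube = np.exp(2j * np.pi * (f_beat * t_fast[None, :] + f_dopp * t_slow[:, None]))

rd_map = np.abs(np.fft.fftshift(np.fft.fft2(cube), axes=0))  # range-Doppler map
dopp_bin, range_bin = np.unravel_index(rd_map.argmax(), rd_map.shape)
range_est = range_bin * fs / n_samples * c / (2 * slope)
vel_est = (dopp_bin - n_chirps / 2) * prf / n_chirps * c / (2 * fc)
print(f"range ~ {range_est:.1f} m, velocity ~ {vel_est:.1f} m/s")
```

Angle estimation adds a third FFT (or a super-resolution algorithm) over the antenna dimension, which is where the complexity-resolution trade-off discussed in the article arises.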


2021 ◽  
pp. 1-12
Author(s):  
Mukul Kumar ◽  
Nipun Katyal ◽  
Nersisson Ruban ◽  
Elena Lyakso ◽  
A. Mary Mekala ◽  
...  

Over the years, the need to differentiate emotions in oral communication has played an important role in emotion-based studies. Different algorithms have been proposed to classify the kinds of emotion. However, there is no measure of the fidelity of the emotion under consideration, primarily because most readily available annotated datasets are produced by actors rather than generated in real-world scenarios. Therefore, the predicted emotion lacks an important aspect called authenticity: whether an emotion is actual or simulated. In this research work, we have developed a hybrid convolutional neural network algorithm, based on transfer learning and style transfer, that classifies both the emotion and its fidelity. The model is trained on features extracted from a dataset that contains simulated as well as actual utterances. We have compared the developed algorithm with conventional machine learning and deep learning techniques using metrics such as accuracy, precision, recall and F1-score; the developed model performs much better than the conventional models. The research aims to dive deeper into human emotion and to build a model that understands it as humans do, achieving precision, recall and F1-score values of 0.994, 0.996 and 0.995 for speech authenticity and 0.992, 0.989 and 0.99 for speech emotion classification, respectively.
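The following minimal sketch (not the authors' model) illustrates the general multi-task idea of predicting both the emotion class and its authenticity from one shared feature extractor; the input shape, layer sizes and class counts are assumptions:

```python
import torch
import torch.nn as nn

class EmotionAuthNet(nn.Module):
    """Shared CNN trunk with one head per task: emotion class and authenticity."""
    def __init__(self, n_emotions: int = 4):
        super().__init__()
        self.trunk = nn.Sequential(          # shared encoder over MFCC "images"
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(4),
            nn.Flatten(),
        )
        self.emotion_head = nn.Linear(32 * 16, n_emotions)
        self.auth_head = nn.Linear(32 * 16, 2)   # acted vs. genuine

    def forward(self, x):
        z = self.trunk(x)
        return self.emotion_head(z), self.auth_head(z)

model = EmotionAuthNet()
x = torch.randn(8, 1, 40, 100)               # batch of 8 MFCC maps (40 x 100)
emo_logits, auth_logits = model(x)
loss = (nn.functional.cross_entropy(emo_logits, torch.randint(0, 4, (8,)))
        + nn.functional.cross_entropy(auth_logits, torch.randint(0, 2, (8,))))
loss.backward()                               # joint training on both tasks
```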

