Parameter Estimation and Target Detection of Phased-MIMO Radar Using Capon Estimator

2020 ◽  
Vol 20 (2) ◽  
pp. 60
Author(s):  
Syahfrizal Tahcfulloh ◽  
Muttaqin Hardiwansyah

Phased-Multiple Input Multiple Output (PMIMO) radar is a multi-antenna radar that combines the main advantages of phased array (PA) and MIMO radars. The advantage of PA radar is its high directional coherent gain, which makes it suitable for detecting distant targets with small radar cross-sections (RCS). The main advantage of MIMO radar is its high waveform diversity gain, which makes it suitable for detecting multiple targets. These advantages are combined by using overlapping subarrays in the transmit (Tx) array to improve parameters such as angular resolution and the accuracy of amplitude and phase estimation, in proportion to the maximum number of detectable targets. This paper derives a parameter estimation formula based on Capon's adaptive estimator and evaluates its performance on these parameters. Expressions for detection performance, namely the probability of false alarm and the probability of detection, are also derived. The effectiveness of the estimator is validated against conventional estimators for other radar types in terms of the number of target angles, the RCS of the targets, and the number of Tx subarrays, while detection performance is evaluated as a function of the signal-to-noise ratio (SNR) and the number of Tx subarrays. The results show that the estimator is superior to conventional estimators for both parameter estimation and detection. Because it produces no sidelobes, the estimator is robust against interference and jamming, which makes it attractive for radar system design. The root mean square errors (RMSE) of magnitude estimation for the least squares (LS) and Capon estimators were 0.033 and 0.062, respectively.
Meanwhile, the detection performance for this radar has a probability of false alarm above 10⁻⁴ and a probability of detection of more than 99%.
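For context, a minimal numpy sketch of the Capon (MVDR) spatial spectrum, P(θ) = 1 / (aᴴ(θ) R⁻¹ a(θ)), for a plain uniform linear array; this is illustrative only, not the paper's phased-MIMO transmit-subarray formulation, and the array size, angle grid, and noise level are assumptions:

```python
import numpy as np

def capon_spectrum(R, grid_deg, n_elem, d=0.5):
    """Capon (MVDR) spatial spectrum P(theta) = 1 / (a(theta)^H R^-1 a(theta))
    for a uniform linear array with element spacing d in wavelengths."""
    R_inv = np.linalg.inv(R)
    P = []
    for theta in np.deg2rad(grid_deg):
        a = np.exp(-2j * np.pi * d * np.arange(n_elem) * np.sin(theta))
        P.append(1.0 / np.real(a.conj() @ R_inv @ a))
    return np.array(P)

def top_peaks(P, grid_deg, min_sep=5, k=2):
    """Pick the k largest spectrum values separated by at least min_sep degrees."""
    peaks = []
    for i in np.argsort(P)[::-1]:
        if all(abs(grid_deg[i] - p) >= min_sep for p in peaks):
            peaks.append(int(grid_deg[i]))
        if len(peaks) == k:
            break
    return sorted(peaks)

# Two unit-power targets at -20 and +30 degrees, 8-element ULA, 200 snapshots.
rng = np.random.default_rng(0)
n_elem, n_snap = 8, 200
A = np.stack([np.exp(-2j * np.pi * 0.5 * np.arange(n_elem)
                     * np.sin(np.deg2rad(t))) for t in (-20.0, 30.0)], axis=1)
S = rng.standard_normal((2, n_snap)) + 1j * rng.standard_normal((2, n_snap))
noise = 0.1 * (rng.standard_normal((n_elem, n_snap))
               + 1j * rng.standard_normal((n_elem, n_snap)))
X = A @ S + noise
R = X @ X.conj().T / n_snap          # sample covariance matrix

grid = np.arange(-90, 91)
peaks = top_peaks(capon_spectrum(R, grid, n_elem), grid)
```

Because the Capon weights adaptively null energy away from the steered direction, the spectrum shows sharp peaks at the target angles rather than the broad sidelobed response of a conventional beamformer.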

2017 ◽  
Vol 2017 ◽  
pp. 1-9
Author(s):  
Cheng Chen ◽  
Junjie He ◽  
Zeshi Yuan ◽  
Xiaohua Zhu ◽  
Hongtao Li

The detection performance of direct data domain (D3) space-time adaptive processing (STAP) is severely degraded when there are mismatches between the actual and presumed signal steering vectors. In this paper, a robust D3 STAP method for multiple-input multiple-output (MIMO) radar is developed. The proposed method uses worst-case performance optimization (WCPO) to prevent the target self-nulling effect. An upper bound on the norm of the signal steering vector error is given to ensure that the WCPO problem has an admissible solution. To obtain better detection performance in low signal-to-noise ratio (SNR) environments, the method also modifies the objective function to minimize the array noise while mitigating interference. Simulation results demonstrate the validity of the proposed method.
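Worst-case performance optimization is closely related to diagonal loading of the covariance matrix. A minimal numpy sketch (not the paper's D3 STAP formulation; the array geometry, mismatch angle, and loading level are assumptions) showing how a mismatched MVDR filter self-nulls the target and how loading prevents it:

```python
import numpy as np

def mvdr_weights(R, a, loading=0.0):
    """MVDR weights w = R^-1 a / (a^H R^-1 a); the diagonal loading term
    emulates the robustness that WCPO-type designs provide."""
    Rl = R + loading * np.eye(R.shape[0])
    Ri_a = np.linalg.solve(Rl, a)
    return Ri_a / (a.conj() @ Ri_a)

n_elem, d = 10, 0.5
def steer(theta_deg):
    return np.exp(-2j * np.pi * d * np.arange(n_elem)
                  * np.sin(np.deg2rad(theta_deg)))

a_true = steer(0.0)
a_pres = steer(2.0)            # presumed steering vector, 2 deg mismatch
sigma_s2 = 10.0                # target power (noise power normalized to 1)
R = sigma_s2 * np.outer(a_true, a_true.conj()) + np.eye(n_elem)

w_plain = mvdr_weights(R, a_pres)               # self-nulls the target
w_robust = mvdr_weights(R, a_pres, loading=10.0)
gain_plain = abs(w_plain.conj() @ a_true)
gain_robust = abs(w_robust.conj() @ a_true)
```

With the target present in the covariance and a slightly wrong steering vector, the unloaded filter treats the true target as interference and suppresses it (small `gain_plain`), while the loaded filter keeps the target in the main beam.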


2021 ◽  
Vol 13 (9) ◽  
pp. 1703
Author(s):  
He Yan ◽  
Chao Chen ◽  
Guodong Jin ◽  
Jindong Zhang ◽  
Xudong Wang ◽  
...  

The traditional method of constant false-alarm rate detection is based on an assumed statistical model of the echo. Against the background of sea clutter and other interference, its target recognition accuracy is low and its false-alarm rate is high. Computer vision techniques are therefore widely discussed as a way to improve detection performance. However, the majority of studies have focused on synthetic aperture radar because of its high resolution; for defense radar, with its low resolution, the detection performance remains unsatisfactory. To this end, we propose a novel target detection method for coastal defense radar based on the faster region-based convolutional neural network (Faster R-CNN). The main processing steps are as follows: (1) Faster R-CNN is selected as the sea-surface target detector because of its high detection accuracy; (2) the network is modified to suit the sparsity and small target sizes of the data set; and (3) soft non-maximum suppression is applied to eliminate possibly overlapping detection boxes. Detailed comparative experiments on a real coastal defense radar data set show that the mean average precision of the proposed method is improved by 10.86% compared with that of the original Faster R-CNN.
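Step (3), soft non-maximum suppression, can be sketched as follows. This is the Gaussian-decay variant: rather than discarding boxes that overlap a higher-scoring box, their scores are decayed by exp(−IoU²/σ). The boxes, scores, and σ below are illustrative assumptions, not values from the paper:

```python
import numpy as np

def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter)

def soft_nms(boxes, scores, sigma=0.5, score_thresh=0.001):
    """Gaussian soft-NMS: repeatedly keep the top-scoring box and decay
    the scores of the remaining boxes by exp(-iou^2 / sigma)."""
    boxes, scores = list(boxes), list(scores)
    keep = []
    while boxes:
        i = int(np.argmax(scores))
        best_box, best_score = boxes.pop(i), scores.pop(i)
        if best_score < score_thresh:
            break
        keep.append((best_box, best_score))
        scores = [s * np.exp(-iou(best_box, b) ** 2 / sigma)
                  for b, s in zip(boxes, scores)]
    return keep

# Two heavily overlapping boxes plus one isolated box.
dets = soft_nms(
    [(0, 0, 10, 10), (1, 1, 11, 11), (20, 20, 30, 30)],
    [0.9, 0.8, 0.7],
)
```

The overlapping second box survives with a demoted score instead of being removed outright, which is why soft-NMS loses fewer closely spaced targets than hard NMS.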


2011 ◽  
Vol 255-260 ◽  
pp. 2898-2903
Author(s):  
Chang Peng Ji ◽  
Mo Gao ◽  
Jie Yang

Double threshold detection based on constraint judgment is proposed for micro-seismic signal detection. The improvement in the probability of false alarm (PFA) and the influence on the probability of detection (PD) introduced by constraint judgment are quantitatively analyzed. Mathematical models of the total PFA and PD of double threshold detection with constraint judgment are built, and their validity is verified by simulation tests and experiments. The results show that introducing constraint judgment into double threshold detection lowers the signal-to-noise ratio required to meet a scheduled PFA and PD, and improves the identification accuracy of micro-seismic signals.
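For context, the total PFA and PD of a generic double-threshold (m-of-n) detector, without the paper's constraint judgment, follow directly from the binomial distribution: a first threshold is applied per sample, and detection is declared when at least m of n samples cross it. A minimal sketch (the per-sample probabilities and m, n are assumed values):

```python
from math import comb

def double_threshold_prob(p_single, n, m):
    """Probability that at least m of n independent samples cross the
    first threshold, given per-sample crossing probability p_single.
    With p_single = per-sample PFA this is the total PFA; with
    p_single = per-sample PD it is the total PD."""
    return sum(comb(n, k) * p_single**k * (1 - p_single) ** (n - k)
               for k in range(m, n + 1))

# Example: per-sample PFA = 0.1, per-sample PD = 0.9, n = 10, m = 5.
total_pfa = double_threshold_prob(0.1, 10, 5)
total_pd = double_threshold_prob(0.9, 10, 5)
```

The second (counting) threshold compresses a modest per-sample PFA into a much smaller total PFA while leaving the total PD close to one, which is the basic mechanism the paper's constraint judgment refines further.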


1989 ◽  
Vol 1989 (1) ◽  
pp. 27-35
Author(s):  
Joseph W. Maresca ◽  
James W. Starr ◽  
Robert D. Roach ◽  
John S. Farlow

A United States Environmental Protection Agency (EPA) research program evaluated the current performance of commercially available volumetric test methods for the detection of small leaks in underground gasoline storage tanks. The evaluations were performed at the EPA Risk Reduction Engineering Laboratory's Underground Storage Tank Test Apparatus in Edison, New Jersey. The methodology used for evaluation made it possible to determine and resolve most of the technological and engineering issues associated with volumetric leak detection, as well as to define the current practice of commercially available test methods. The approach used (1) experimentally validated models of the important sources of ambient noise that affect volume changes in nonleaking and leaking tanks, (2) a large database of product-temperature changes that result from the delivery of product to a tank at a different temperature than the product in the tank, and (3) a mathematical model of each test method to estimate the performance of that method. The test-method model includes the instrumentation noise, the configuration of the sensors, the test protocol, the data analysis algorithms, and the detection criterion. Twenty-five commercially available volumetric leak detection systems were evaluated. The leak rate measurable by these systems ranged from 0.26 to 6.78 L/h (0.07 to 1.79 gal/h), with a probability of detection of 0.95 and a probability of false alarm of 0.05. Five methods achieved a performance between 0.19 L/h (0.05 gal/h) and 0.57 L/h (0.15 gal/h). Only one method was able to detect leaks less than 0.57 L/h (0.15 gal/h) if the probability of detection was increased to 0.99 and the probability of false alarm was decreased to 0.01. The measurable leak rates ranged from 0.45 to 12.94 L/h (0.12 to 3.42 gal/h) with these more stringent detection and false alarm parameters.
The performance of the methods evaluated was primarily limited by test protocol, operational sensor configuration, data analysis, and calibration, rather than by hardware. The experimental analysis and model calculations suggested that substantial performance improvements can be realized by making procedural changes. With modifications, it is estimated that more than 60 percent of the methods should be able to achieve a probability of detection of 0.99 and a probability of false alarm of 0.01 for leak rates between 0.19 L/h (0.05 gal/h) and 0.56 L/h (0.15 gal/h), and 100 percent should be able to achieve this performance for leak rates of approximately 0.76 L/h (0.20 gal/h).


Author(s):  
K. M. Ghylin ◽  
C. G. Drury ◽  
R Batta ◽  
L. Lin

Data from certified screeners performing an x-ray inspection task for 4 hours, or 1000 images, were analyzed to identify the nature of the vigilance decrement. The expected vigilance decrement was found, with performance, measured by the probability of detection (PoD) and the probability of false alarm P(FA), decreasing from hour 1 to hour 4. Correlations between PoD and P(FA) indicate that sensitivity remained the same across hours; however, a shift in criterion (beta) occurred. Significant decreases in both detection and stopping time were found from the first hour to the second, third, and fourth hours. Changes in the search component of the time per item were found to account for part of the vigilance decrement: as the task continued, participants spent less time actively searching the image, as opposed to other activities. This provides evidence for truncation of active search as security inspection continues.
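The sensitivity/criterion decomposition used in such analyses follows from standard signal detection theory: d′ = z(PoD) − z(P(FA)), and beta = exp((z(P(FA))² − z(PoD)²)/2), where z is the inverse standard normal CDF. A minimal sketch with illustrative rates (not the study's data), showing how detections and false alarms can both drop while sensitivity stays constant and the criterion becomes more conservative:

```python
from statistics import NormalDist
from math import exp

def sdt_measures(pod, pfa):
    """Sensitivity d' and criterion beta from a probability of detection
    (hit rate) and a probability of false alarm."""
    z = NormalDist().inv_cdf
    zh, zf = z(pod), z(pfa)
    d_prime = zh - zf
    beta = exp((zf**2 - zh**2) / 2)
    return d_prime, beta

# Illustrative hour-1 vs hour-4 rates: both PoD and P(FA) decrease.
d1, b1 = sdt_measures(0.90, 0.20)   # hour 1
d4, b4 = sdt_measures(0.80, 0.10)   # hour 4
```

Here d′ is identical in both hours while beta roughly doubles, i.e. the observers did not become worse discriminators; they simply required more evidence before reporting a target.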


2014 ◽  
Vol 643 ◽  
pp. 105-110
Author(s):  
Yuan Li ◽  
Jia Yin Chen ◽  
Xiao Feng Liu ◽  
Ming Chuan Yang

The double-threshold detection method has been widely used without a complete mathematical proof or stated conditions of application. This paper proves its correctness in the context of spectrum sensing and states the conditions under which the method works. The proof and simulations show that, compared with traditional energy detection, the method can increase the probability of detection by 27% to at most 42% when the SNR is between −15 dB and −2 dB, while the probability of false alarm increases by less than 2%.


Activity detection based on likelihood ratios in the presence of high-dimensional multimodal data is a challenging problem, because estimating joint probability density functions (pdfs) with intermodal dependence is tedious. Existing methods of this kind perform poorly in the presence of strongly dependent data. This paper proposes a compressive-sensing-based detection method for multi-sensor signals using deep learning. The proposed Tree copula-Grasshopper optimization based Deep Convolutional Neural Network (TC-GO based DCNN) detection method comprises three main steps: compressive sensing, fusion, and detection. The signals are first collected from the sensors and subjected to tensor-based compressive sensing. The compressed signals are then fused using tree copula theory, with the parameters estimated by the Grasshopper optimization algorithm (GOA). Activity detection is finally performed by a DCNN trained with the Stochastic Gradient Descent (SGD) optimizer. Performance is evaluated in terms of the probability of detection and the probability of false alarm: the proposed method attains a highest probability of detection of 0.9083 and a lowest probability of false alarm of 0.0959, demonstrating its effectiveness in activity detection.


2020 ◽  
Vol 24 (06) ◽  
pp. 83-90
Author(s):  
Ali Mohammad A. AL-Hussain ◽  
Maher K. Mahmood

Compressive sensing (CS) is used to address the high sampling rate required for wideband spectrum sensing, where a high-speed analogue-to-digital converter would otherwise be needed. That requirement leads to difficult hardware implementation, long sensing and detection times, and high power consumption. The proposed approach combines energy-based detection with CS and investigates the probability of detection and the probability of false alarm as functions of the SNR, showing the effect of compression on the spectrum sensing performance of a cognitive radio system. The Discrete Cosine Transform (DCT) is used as the sparse representation basis of the received signal, and a random matrix as the compression matrix. An 𝓁1-norm algorithm is used to reconstruct the original signal. Closed-form expressions for the probability of detection and the probability of false alarm are derived. Computer simulations clearly show that the compression ratio, recovery error, and SNR level affect the probability of detection.
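For context, a minimal sketch of the widely used Gaussian (CLT) approximation for a plain energy detector's threshold and PD; the paper's closed forms additionally account for compression and recovery error, which this sketch omits, and the sample count and SNR values are assumptions:

```python
from statistics import NormalDist
from math import sqrt

Q = lambda x: 1 - NormalDist().cdf(x)   # Gaussian tail (Q) function

def energy_detector(n, snr, pfa_target):
    """Energy detection over n samples (noise variance normalized to 1),
    using the Gaussian approximation of the chi-square test statistic.
    Returns the threshold meeting pfa_target and the resulting PD."""
    # Under H0 the energy statistic has mean n and variance 2n.
    thr = n + NormalDist().inv_cdf(1 - pfa_target) * sqrt(2 * n)
    # Under H1 (Gaussian signal): mean n(1+snr), variance 2n(1+snr)^2.
    pd = Q((thr - n * (1 + snr)) / (sqrt(2 * n) * (1 + snr)))
    return thr, pd

thr, pd = energy_detector(n=1000, snr=0.1, pfa_target=0.01)
```

Raising the SNR (or the number of retained samples) moves the H1 distribution away from the fixed threshold, which is why the probability of detection in the paper depends on both the compression ratio and the SNR level.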


Author(s):  
Krishna R Narayanan ◽  
Isabel Frost ◽  
Anoosheh Heidarzadeh ◽  
Katie K Tseng ◽  
Sayantan Banerjee ◽  
...  

Abstract

Background: COVID-19 originated in China and has quickly spread worldwide, causing a pandemic. Countries need rapid data on the prevalence of the virus in communities to enable rapid containment. However, the equipment, human, and laboratory resources required for individual RT-PCR testing are prohibitive. One technique to reduce the number of tests required is the pooling of samples prior to analysis by RT-PCR.

Methods: We conducted a mathematical analysis of pooling strategies for infection rate classification using group testing and for the identification of individuals by testing pooled clusters of samples.

Findings: On the basis of the proposed pooled testing strategy we calculate the probability of false alarm, the probability of detection, and the average number of tests required as a function of the pool size. We find that when the sample size is 256, using a maximum pool size of 64, with only 7.3 tests on average, we can distinguish between prevalences of 1% and 5% with a probability of detection of 95% and a probability of false alarm of 4%.

Interpretation: The pooling of RT-PCR samples is a cost-effective technique for providing much-needed coarse-grained data on the prevalence of COVID-19. This is a powerful tool for providing countries with information that can facilitate an evidence-based response to the pandemic and save the most lives possible with the resources available.

Funding: Bill & Melinda Gates Foundation

Authors' contributions: RL and KRN conceived the study. IF, KT, KRN, SB, and RL all contributed to the writing of the manuscript, and AH and JJ provided comments. KRN and AH conducted the analysis and designed the figures.

Research in context

Evidence before this study: The pooling of RT-PCR samples has been shown to be effective in screening for HIV, chlamydia, malaria, and influenza, among other pathogens in human health. In agriculture, this method has been used to assess the prevalence of many pathogens, including Dichelobacter nodosus, which causes footrot in sheep, postweaning multisystemic wasting syndrome, and antibiotic resistance in swine feces, in addition to the identification of coronaviruses in multiple bat species. In relation to the current pandemic, researchers in multiple countries have begun to employ this technique to investigate samples for COVID-19.

Added value of this study: Given recent interest in this topic, this study provides a mathematical analysis of infection rate classification using group testing and calculates the probability of false alarm, the probability of detection, and the average number of tests required as a function of the pool size. In addition, the identification of individuals by pooled cluster testing is evaluated.

Implications of all the available evidence: This research suggests that the pooling of RT-PCR samples can provide a cheap and effective way of gathering much-needed data on the prevalence of COVID-19 and identifying infected individuals in the community, where it may be infeasible to carry out a high number of tests. This will enable countries to use stretched resources in the most appropriate way, providing valuable data that can inform an evidence-based response to the pandemic.
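The paper's adaptive pooling over groups of up to 64 is more elaborate, but classical two-stage Dorfman pooling conveys the basic economics: test each pool of size s once, then retest the members of positive pools individually. A minimal sketch of the expected number of tests per person (the prevalence and pool-size range are illustrative, not the paper's parameters):

```python
def dorfman_tests_per_person(p, s):
    """Expected tests per person for two-stage Dorfman pooling:
    one pooled test per s people, plus s individual tests whenever the
    pool is positive, which happens with probability 1 - (1-p)^s."""
    return 1.0 / s + 1 - (1 - p) ** s

# At 1% prevalence, search pool sizes 2..64 for the cheapest scheme.
best_s = min(range(2, 65), key=lambda s: dorfman_tests_per_person(0.01, s))
rate = dorfman_tests_per_person(0.01, best_s)
```

At 1% prevalence the optimum lands near pools of 11 with roughly 0.2 tests per person, a five-fold saving over individual testing; at high prevalence (e.g. 50%) the expression exceeds 1 and pooling stops paying off, which is why prevalence information matters when choosing the pool size.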


Sensors ◽  
2020 ◽  
Vol 20 (14) ◽  
pp. 3904
Author(s):  
Jeong Hoon Shin ◽  
Youngjin Choi

The constant false alarm rate (CFAR) process is essential for target detection in radar systems. Although the detection performance of the CFAR process is normally guaranteed in noise-limited environments, it may be dramatically degraded in clutter-limited environments, since the probabilistic characteristics of the clutter are unknown. Sophisticated CFAR processes that suppress the effect of clutter are therefore used in practical applications. However, these methods face a fundamental limitation on detection performance because they lack a feedback structure, in terms of the probability of false alarm, for determining the detection threshold. This paper presents a robust control scheme that adjusts the detection threshold of the CFAR process while estimating the clutter measurement density (CMD) using only the measurement sets over a finite time interval, in order to adapt to time-varying cluttered environments; the probability of target existence with the finite measurement sets required for estimating the CMD is also derived. The improved performance of the proposed method was verified by simulation experiments in heterogeneous situations.
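For context, the baseline that such schemes build on is the cell-averaging CFAR, whose scaling factor fixes the false-alarm rate without knowing the noise power. A minimal numpy sketch (the training/guard sizes, target PFA, and the injected target are illustrative assumptions, not the paper's CMD-feedback scheme):

```python
import numpy as np

def ca_cfar(x, n_train, n_guard, pfa):
    """Cell-averaging CFAR for exponentially distributed (square-law)
    samples. The scaling alpha = N * (pfa^(-1/N) - 1) holds the
    false-alarm rate at pfa regardless of the unknown noise power."""
    N = 2 * n_train
    alpha = N * (pfa ** (-1 / N) - 1)
    half = n_train + n_guard
    detections = []
    for i in range(half, len(x) - half):
        lead = x[i - half : i - n_guard]       # training cells before CUT
        lag = x[i + n_guard + 1 : i + half + 1]  # training cells after CUT
        noise = (lead.sum() + lag.sum()) / N
        if x[i] > alpha * noise:
            detections.append(i)
    return detections

rng = np.random.default_rng(1)
x = rng.exponential(1.0, 1000)    # square-law noise, power unknown to the detector
x[500] += 50.0                    # inject one strong target
hits = ca_cfar(x, n_train=16, n_guard=2, pfa=1e-4)
```

Because the threshold is a fixed multiple of a local noise estimate rather than an absolute level, the same code detects the target whatever the noise power; the degradation the paper addresses arises when clutter in the training cells violates the homogeneous-noise assumption behind alpha.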

