Temporal Effects in a Security Inspection Task: Breakdown of Performance Components

Author(s):  
K. M. Ghylin ◽  
C. G. Drury ◽  
R. Batta ◽  
L. Lin

Data from certified screeners performing an x-ray inspection task for 4 hours, or 1000 images, were analyzed to identify the nature of the vigilance decrement. The expected vigilance decrement was found, with performance, measured by probability of detection (PoD) and probability of false alarm (P(FA)), decreasing from hour 1 to hour 4. Correlations between PoD and P(FA) indicate that sensitivity remained the same between hours; however, a shift in criterion (beta) occurred. Significant decreases in both detection and stopping time were found from the first hour to the second, third, and fourth hours. Changes in the search component of the time per item were found to account for part of the vigilance decrement: as the task continued, participants spent less time actively searching the image, as opposed to other activities. This provides evidence that active search is truncated as security inspection continues.
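The sensitivity-versus-criterion distinction drawn here can be made concrete with standard signal detection theory: d' stays constant when PoD and P(FA) fall together, while the criterion measure beta shifts. A minimal sketch under the equal-variance Gaussian model (the hit and false-alarm rates below are illustrative, not the study's data):

```python
from math import exp
from statistics import NormalDist

def sdt_indices(pod: float, pfa: float) -> tuple[float, float]:
    """Sensitivity d' and criterion beta from a hit rate (PoD) and a
    false-alarm rate (P(FA)), equal-variance Gaussian SDT model."""
    z = NormalDist().inv_cdf
    z_hit, z_fa = z(pod), z(pfa)
    d_prime = z_hit - z_fa
    beta = exp((z_fa ** 2 - z_hit ** 2) / 2)  # likelihood ratio at the criterion
    return d_prime, beta

# Illustrative rates only: PoD and P(FA) both drop from hour 1 to
# hour 4, leaving d' unchanged while beta rises (stricter criterion).
hour1 = sdt_indices(0.90, 0.20)
hour4 = sdt_indices(0.80, 0.10)
```

When both rates fall symmetrically, d' is identical in both hours and only beta moves, which is the signature of a criterion shift rather than a sensitivity loss.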

1989 ◽  
Vol 1989 (1) ◽  
pp. 27-35
Author(s):  
Joseph W. Maresca ◽  
James W. Starr ◽  
Robert D. Roach ◽  
John S. Farlow

A United States Environmental Protection Agency (EPA) research program evaluated the current performance of commercially available volumetric test methods for the detection of small leaks in underground gasoline storage tanks. The evaluations were performed at the EPA Risk Reduction Engineering Laboratory's Underground Storage Tank Test Apparatus in Edison, New Jersey. The methodology used for evaluation made it possible to determine and resolve most of the technological and engineering issues associated with volumetric leak detection, as well as to define the current practice of commercially available test methods. The approach used (1) experimentally validated models of the important sources of ambient noise that affect volume changes in nonleaking and leaking tanks, (2) a large database of product-temperature changes that result from the delivery of product to a tank at a different temperature than the product in the tank, and (3) a mathematical model of each test method to estimate the performance of that method. The test-method model includes the instrumentation noise, the configuration of the sensors, the test protocol, the data analysis algorithms, and the detection criterion. Twenty-five commercially available volumetric leak detection systems were evaluated. The leak rate measurable by these systems ranged from 0.26 to 6.78 L/h (0.07 to 1.79 gal/h), with a probability of detection of 0.95 and a probability of false alarm of 0.05. Five methods achieved a performance between 0.19 L/h (0.05 gal/h) and 0.57 L/h (0.15 gal/h). Only one method was able to detect leaks less than 0.57 L/h (0.15 gal/h) if the probability of detection was increased to 0.99 and the probability of false alarm was decreased to 0.01. The measurable leak rates ranged from 0.45 to 12.94 L/h (0.12 to 3.42 gal/h) with these more stringent detection and false alarm parameters. 
The performance of the methods evaluated was primarily limited by test protocol, operational sensor configuration, data analysis, and calibration, rather than by hardware. The experimental analysis and model calculations suggested that substantial performance improvements can be realized by making procedural changes. With modifications, it is estimated that more than 60 percent of the methods should be able to achieve a probability of detection of 0.99 and a probability of false alarm of 0.01 for leak rates between 0.19 L/h (0.05 gal/h) and 0.56 L/h (0.15 gal/h), and 100 percent should be able to achieve this performance for leak rates of approximately 0.76 L/h (0.20 gal/h).
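The pattern the abstract quantifies, where tightening the criteria from (PD = 0.95, PFA = 0.05) to (0.99, 0.01) inflates the minimum measurable leak rate, follows from a simple threshold test. A hedged sketch under an assumed Gaussian model for the measured volume-change rate (the noise level below is invented for illustration and is not from the EPA study):

```python
from statistics import NormalDist

def min_detectable_leak(sigma: float, pd: float, pfa: float) -> float:
    """Smallest leak rate (L/h) detectable by a threshold test on a
    Gaussian flow-rate measurement with noise std `sigma`, at required
    probability of detection `pd` and false alarm `pfa` (Neyman-Pearson
    threshold set from the false-alarm constraint)."""
    z = NormalDist().inv_cdf
    return sigma * (z(1 - pfa) + z(pd))

# Assumed noise std of 0.1 L/h, for illustration only:
loose = min_detectable_leak(0.1, 0.95, 0.05)   # ~0.33 L/h
strict = min_detectable_leak(0.1, 0.99, 0.01)  # ~0.47 L/h
```

Under this model the stricter criteria raise the minimum detectable leak by the ratio of the z-score sums, which is why reducing measurement noise (better protocol, sensors, and analysis) directly shrinks the detectable leak rate.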


2014 ◽  
Vol 643 ◽  
pp. 105-110
Author(s):  
Yuan Li ◽  
Jia Yin Chen ◽  
Xiao Feng Liu ◽  
Ming Chuan Yang

Aiming at the situation where double-threshold detection has been widely used without a complete mathematical proof or stated conditions of application, this paper proves its correctness in the spectrum-sensing setting and states the conditions under which the method works. The proof and simulations show that, compared with traditional energy detection, this method can increase the probability of detection by 27% to 42% at most when the SNR is between -15 dB and -2 dB, while the probability of false alarm increases by less than 2%.
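For reference, the double-threshold scheme being analyzed can be sketched as follows; the thresholds and the handling of the ambiguous band are illustrative (the paper's contribution is proving when such a rule is valid, not this particular code):

```python
def energy(samples: list[float]) -> float:
    """Average energy of the received samples."""
    return sum(x * x for x in samples) / len(samples)

def double_threshold_detect(samples, lam_low, lam_high):
    """Double-threshold energy detection sketch: a firm decision is made
    only outside the ambiguous band [lam_low, lam_high]; in-band results
    are deferred (e.g. to a cooperative or repeated decision)."""
    e = energy(samples)
    if e >= lam_high:
        return 1       # H1: primary user present
    if e <= lam_low:
        return 0       # H0: channel free
    return None        # ambiguous: defer the decision

double_threshold_detect([2.0] * 16, lam_low=1.0, lam_high=3.0)  # -> 1
```

The single-threshold energy detector is the special case lam_low == lam_high; widening the band trades fewer hard errors for some deferred decisions.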


Activity detection based on the likelihood ratio in the presence of high-dimensional multimodal data is a challenging problem, as estimating joint probability density functions (pdfs) with intermodal dependence is tedious. Existing methods with the above expectations fail due to poor performance in the presence of strongly dependent data. This paper proposes a compressive-sensing-based detection method for multi-sensor signals using deep learning. The proposed Tree copula-Grasshopper optimization based Deep Convolutional Neural Network (TC-GO based DCNN) detection method comprises three main steps: compressive sensing, fusion, and detection. The signals are first collected from the sensors and subjected to tensor-based compressive sensing. The compressed signals are then fused using tree copula theory, and the parameters are estimated with the Grasshopper Optimization Algorithm (GOA). Activity detection is finally performed using a DCNN trained with the Stochastic Gradient Descent (SGD) optimizer. The performance of the proposed method is evaluated with metrics such as probability of detection and probability of false alarm; the method attains a highest probability of detection of 0.9083 and a lowest probability of false alarm of 0.0959, demonstrating its effectiveness in activity detection.


2020 ◽  
Vol 24 (06) ◽  
pp. 83-90
Author(s):  
Ali Mohammad A. AL-Hussain ◽  
Maher K. Mahmood

The compressive sensing (CS) technique is used to address the high sampling rate required for wideband spectrum sensing, where a high-speed analogue-to-digital converter would otherwise be needed. That requirement leads to difficult hardware implementation, long sensing and detection times, and high power consumption. The proposed approach combines energy-based detection with CS and investigates the probability of detection and the probability of false alarm as functions of the SNR, showing the effect of compression on the spectrum-sensing performance of a cognitive radio system. The Discrete Cosine Transform (DCT) is used as the sparse representation basis of the received signal, and a random matrix as the compression matrix. The 𝓁1-norm algorithm is used to reconstruct the original signal. Closed forms of the probability of detection and the probability of false alarm are derived. Computer simulation clearly shows that the compression ratio, recovery error, and SNR level affect the probability of detection.
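A hedged sketch of the kind of closed form involved: under the usual Gaussian approximation, the energy detector's probability of detection at a fixed target false-alarm rate depends on the SNR and on the number of samples M retained after compression, so a smaller compression ratio (smaller M) degrades detection. This is the textbook energy-detector formula, not necessarily the exact expression derived in the paper:

```python
import math
from statistics import NormalDist

def energy_detector_pd(pfa: float, snr: float, m: int) -> float:
    """Gaussian-approximation Pd of an energy detector at target Pfa,
    linear SNR, and m retained samples. With compressive sensing,
    m ~ compression_ratio * N, so heavier compression lowers Pd."""
    nd = NormalDist()
    # Threshold set so the noise-only statistic exceeds it with probability pfa
    lam = 1 + nd.inv_cdf(1 - pfa) * math.sqrt(2 / m)
    # Pd: probability the signal-plus-noise statistic exceeds that threshold
    return 1 - nd.cdf((lam - (1 + snr)) / ((1 + snr) * math.sqrt(2 / m)))

# More retained samples (less compression) -> higher Pd at the same SNR:
pd_full = energy_detector_pd(0.05, snr=0.25, m=400)
pd_half = energy_detector_pd(0.05, snr=0.25, m=200)
```

Reconstruction error from the 𝓁1 recovery would further reduce the effective SNR entering such a formula, which matches the abstract's observation that compression ratio, recovery error, and SNR all affect the probability of detection.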


Author(s):  
Krishna R Narayanan ◽  
Isabel Frost ◽  
Anoosheh Heidarzadeh ◽  
Katie K Tseng ◽  
Sayantan Banerjee ◽  
...  

Abstract

Background: COVID-19 originated in China and has quickly spread worldwide, causing a pandemic. Countries need rapid data on the prevalence of the virus in communities to enable rapid containment. However, the equipment, human, and laboratory resources required for conducting individual RT-PCR tests are prohibitive. One technique to reduce the number of tests required is the pooling of samples for analysis by RT-PCR prior to testing.

Methods: We conducted a mathematical analysis of pooling strategies for infection rate classification using group testing and for the identification of individuals by testing pooled clusters of samples.

Findings: On the basis of the proposed pooled testing strategy, we calculate the probability of false alarm, the probability of detection, and the average number of tests required as a function of the pool size. We find that when the sample size is 256, using a maximum pool size of 64, with only 7.3 tests on average we can distinguish between prevalences of 1% and 5% with a probability of detection of 95% and a probability of false alarm of 4%.

Interpretation: The pooling of RT-PCR samples is a cost-effective technique for providing much-needed coarse-grained data on the prevalence of COVID-19. This is a powerful tool for providing countries with information that can facilitate an evidence-based response to the pandemic and save the most lives possible with the resources available.

Funding: Bill & Melinda Gates Foundation

Authors' contributions: RL and KRN conceived the study. IF, KT, KRN, SB, and RL all contributed to the writing of the manuscript, and AH and JJ provided comments. KRN and AH conducted the analysis and designed the figures.

Research in context

Evidence before this study: The pooling of RT-PCR samples has been shown to be effective in screening for HIV, Chlamydia, malaria, and influenza, among other pathogens in human health. In agriculture, this method has been used to assess the prevalence of many pathogens, including Dichelobacter nodosus, which causes footrot in sheep, postweaning multisystemic wasting syndrome, and antibiotic resistance in swine feces, in addition to the identification of coronaviruses in multiple bat species. In relation to the current pandemic, researchers in multiple countries have begun to employ this technique to investigate samples for COVID-19.

Added value of this study: Given recent interest in this topic, this study provides a mathematical analysis of infection rate classification using group testing and calculates the probability of false alarm, the probability of detection, and the average number of tests required as a function of the pool size. In addition, the identification of individuals by pooled cluster testing is evaluated.

Implications of all the available evidence: This research suggests the pooling of RT-PCR samples for testing can provide a cheap and effective way of gathering much-needed data on the prevalence of COVID-19 and identifying infected individuals in the community, where it may be infeasible to carry out a high number of tests. This will enable countries to use stretched resources in the most appropriate way possible, providing valuable data that can inform an evidence-based response to the pandemic.
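The economics of pooling can be illustrated with the classic two-stage (Dorfman) scheme. This is a hedged sketch of the general idea, not the specific group-testing strategy analyzed in the study:

```python
def dorfman_tests_per_sample(p: float, k: int) -> float:
    """Expected tests per individual under two-stage (Dorfman) pooling:
    one test per pool of k samples, plus k individual retests whenever
    the pool is positive (perfect assay, independent infections)."""
    return 1 / k + 1 - (1 - p) ** k

# At 1% prevalence, pools of 10 cut the testing load about five-fold:
rate = dorfman_tests_per_sample(0.01, 10)  # ~0.196 tests per person
```

The savings shrink as prevalence rises, since pools test positive more often and trigger more retests, which is why the optimal pool size depends on the prevalence being estimated.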


2020 ◽  
Vol 20 (2) ◽  
pp. 60
Author(s):  
Syahfrizal Tahcfulloh ◽  
Muttaqin Hardiwansyah

Phased-Multiple Input Multiple Output (PMIMO) radar is a multi-antenna radar that combines the main advantages of the phased array (PA) and MIMO radars. The advantage of the PA radar is its high directional coherent gain, which makes it suitable for detecting distant targets with small radar cross-sections (RCS). The main advantage of the MIMO radar is its high waveform diversity gain, which makes it suitable for detecting multiple targets. These advantages are combined through the use of overlapping subarrays in the transmit (Tx) array to improve parameters such as angle resolution and detection accuracy in amplitude and phase, in proportion to the maximum number of detectable targets. This paper derives a parameter estimation formula with Capon's adaptive estimator and evaluates its performance on these parameters. Expressions for detection performance, such as the probability of false alarm and the probability of detection, are also derived. The effectiveness and validity of its performance are compared with a conventional estimator for other types of radars in terms of the effect of the number of target angles, the RCS of the targets, and variations in the number of Tx subarrays. Detection performance is evaluated against the signal-to-noise ratio (SNR) and the number of Tx subarrays. The evaluation results show that the proposed estimator is superior to the conventional estimator both for estimating the parameters of this radar and in detection performance. Having no sidelobes makes this estimator robust to interference and jamming, so it is suitable and attractive for the design of radar systems. The root mean square error (RMSE) of magnitude detection from the LS and Capon estimators was 0.033 and 0.062, respectively. 
Meanwhile, the detection performance for this radar shows a probability of false alarm above 10⁻⁴ and a probability of detection of more than 99%.


2017 ◽  
Vol 14 (1) ◽  
pp. 430-434 ◽  
Author(s):  
B Suseela ◽  
D Sivakumar

Spectrum scarcity has become a great challenge in current wireless communication scenarios. Cognitive networks, in turn, have shown considerable growth as a way to optimize spectrum usage. This paper focuses on multichannel optimization with particle swarm optimization in cognitive networks (PSO-CN) and the tree seed algorithm in cognitive networks (TSA-CN). The algorithms aim for a higher probability of detection and higher throughput with a lower probability of false alarm. With TSA-CN, the lower probability of false alarm is achieved without compromising the transmission rate, and convergence is quicker. Results from a MATLAB-based simulator show an increase in throughput and a decrease in false alarms with the TSA algorithm compared with the PSO algorithm.

