false positive error rate
Recently Published Documents

Total documents: 11 (last five years: 4)
H-index: 4 (last five years: 2)

2020, pp. jclinpath-2020-206726
Author(s): Cornelia Margaret Szecsei, Jon D Oxley

Aim: To examine the effects of specialist reporting on error rates in prostate core biopsy diagnosis. Method: Biopsies were reported by eight specialist uropathologists over 3 years. New cancer diagnoses were double-reported and all biopsies were reviewed for the multidisciplinary team (MDT) meeting. Diagnostic alterations were recorded in supplementary reports and error rates were compared with those from a decade previously. Results: 2600 biopsies were reported, of which 64.1% contained adenocarcinoma, a 19.7% increase. The false-positive error rate had fallen from 0.4% to 0.06%. The false-negative error rate had increased from 1.5% to 1.8%, but represented fewer absolute errors owing to the increased cancer incidence. Conclusions: Specialisation and double-reporting have reduced false-positive errors. MDT review of negative cores continues to identify a very low number of false-negative errors. Our data represent a ‘gold standard’ for prostate biopsy diagnostic error rates. Increased use of MRI-targeted biopsies may alter error rates and their future clinical significance.
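The conclusion that a higher false-negative rate can still mean fewer absolute errors is a matter of simple arithmetic on a shrinking pool of benign cores. A minimal sketch, under loudly stated assumptions: the abstract gives no denominators, so the rate is taken here over benign-reported cores, the 19.7% rise in adenocarcinoma is read as an absolute change, and both series are assumed to contain 2600 biopsies.

```python
# Illustrative arithmetic only: denominators and cohort sizes are assumed,
# not reported in the abstract.
def absolute_false_negatives(n_biopsies, cancer_fraction, fn_rate):
    """False negatives if the rate is taken over benign-reported cores."""
    n_benign = n_biopsies * (1 - cancer_fraction)
    return n_benign * fn_rate

decade_ago = absolute_false_negatives(2600, 0.444, 0.015)  # ~21.7 errors
current = absolute_false_negatives(2600, 0.641, 0.018)     # ~16.8 errors
print(f"decade ago: {decade_ago:.1f}, current: {current:.1f}")
```

Under these assumptions the absolute error count drops even though the rate rises, which is the abstract's point.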


2020, Vol 117 (18), pp. 9787-9792
Author(s): Merle Behr, M. Azim Ansari, Axel Munk, Chris Holmes

Tree structures, which show hierarchical relationships and latent group structure among samples, are ubiquitous in genomic and biomedical sciences. A common question in many studies is whether there is an association between a response variable measured on each sample and the latent group structure represented by some given tree. Currently, this is addressed on an ad hoc basis, usually requiring the user to decide on an appropriate number of clusters to prune out of the tree to be tested against the response variable. Here, we present a method with statistical guarantees that tests for association between the response variable and a fixed tree structure across all levels of the tree hierarchy, with high power and while accounting for the overall false positive error rate. This enhances the robustness and reproducibility of such findings.
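The authors' exact test statistic is not reproduced here, but the shape of the problem, one association test per pruning level of a fixed hierarchy with the overall false positive error rate controlled across all levels, can be sketched with off-the-shelf pieces. The ANOVA test and Bonferroni correction below are generic stand-ins, not the method of Behr et al.

```python
# Sketch: test response-vs-cluster association at every pruning level of a
# fixed tree, Bonferroni-corrected so the overall FPR stays at 0.05.
import numpy as np
from scipy.cluster.hierarchy import linkage, cut_tree
from scipy.stats import f_oneway

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 5))      # features the fixed tree is built from
y = rng.normal(size=60)           # response measured on each sample

Z = linkage(X, method="average")  # the given hierarchy
levels = range(2, 11)             # prunings (cluster counts) to test
alpha = 0.05 / len(levels)        # Bonferroni across all tested levels

for k in levels:
    labels = cut_tree(Z, n_clusters=k).ravel()
    groups = [y[labels == g] for g in np.unique(labels)]
    _, p = f_oneway(*groups)      # does y differ across the k clusters?
    if p < alpha:
        print(f"association detected at level k={k} (p={p:.4g})")
```

A Bonferroni split over levels is conservative; the appeal of a purpose-built method like the one in the paper is retaining overall control without that loss of power.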


Author(s): N. U. Bagrov, A. S. Konushin, V. S. Konushin

Abstract. Nowadays, face recognition systems are widely used around the world. In China these systems are used in production in safe-city projects; in Russia they are used mostly in closed-loop systems such as factories, business centers with biometric access control, or stadiums. Closed loop means that we need to identify people from a fixed dataset: in a factory it is the list of employees, in a stadium the list of ticket owners. The most challenging task is to identify people in a large city with an open dataset: there is no fixed set of people in the city, as it changes rapidly due to migration. Another limit is the accuracy of the system: we cannot afford many false positive errors (when a person is incorrectly recognized as another person), because the number of human operators is limited and they are expensive. We propose an approach to maximize face recognition accuracy for a fixed false positive error rate using a limited amount of hardware.
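Operating a recognizer at a fixed false positive error rate usually comes down to threshold calibration on impostor comparisons. The sketch below shows this common procedure with simulated similarity scores; it is not necessarily the authors' pipeline, and the score distributions are assumptions.

```python
# Calibrate the match threshold so only `target_fpr` of impostor
# (different-person) comparisons exceed it, then measure how many genuine
# (same-person) pairs still match. Scores here are simulated.
import numpy as np

rng = np.random.default_rng(1)
impostor_scores = rng.normal(0.2, 0.1, size=100_000)  # different-person pairs
genuine_scores = rng.normal(0.7, 0.1, size=10_000)    # same-person pairs

target_fpr = 1e-3                 # error budget set by operator capacity
threshold = np.quantile(impostor_scores, 1.0 - target_fpr)

tpr = (genuine_scores >= threshold).mean()  # recognition rate at that FPR
print(f"threshold={threshold:.3f}, TPR at FPR={target_fpr:g}: {tpr:.3f}")
```

Maximizing accuracy at a fixed FPR then means pushing the genuine and impostor score distributions apart, which is where the model quality and the hardware budget come in.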


2019, Vol 44 (3), pp. 309-341
Author(s): Jeffrey M. Patton, Ying Cheng, Maxwell Hong, Qi Diao

In psychological and survey research, the prevalence and serious consequences of careless responses from unmotivated participants are well known. In this study, we propose to iteratively detect careless responders and cleanse the data by removing their responses. The careless responders are detected using person-fit statistics. In two simulation studies, the iterative procedure leads to nearly perfect power in detecting extremely careless responders and much higher power than the noniterative procedure in detecting moderately careless responders. Meanwhile, the false-positive error rate is close to the nominal level. In addition, item parameter estimation is much improved by iteratively cleansing the calibration sample. The bias in item discrimination and location parameter estimates is substantially reduced. The standard error estimates, which are spuriously small in the presence of careless responses, are corrected by the iterative cleansing procedure. An empirical example is also presented to illustrate the proposed procedure. These results suggest that the proposed procedure is a promising way to improve item parameter estimation for tests of 20 items or longer when data are contaminated by careless responses.
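The iterate-flag-remove loop is easy to express in code. The sketch below uses a crude Rasch calibration and the standardized log-likelihood (lz) person-fit statistic as stand-ins; the paper's estimation details and flagging criterion may differ, and the cutoff here is an assumption.

```python
# Sketch of iterative data cleansing with a person-fit statistic.
import numpy as np

def item_easiness(resp):
    """Crude Rasch item-easiness estimates from item p-values (logits)."""
    p = resp.mean(axis=0).clip(0.02, 0.98)
    return np.log(p / (1 - p))

def lz_person_fit(resp, easiness):
    """Standardized log-likelihood (lz) person-fit statistic."""
    pc = resp.mean(axis=1).clip(0.02, 0.98)
    theta = np.log(pc / (1 - pc))           # crude ability estimate
    p = 1 / (1 + np.exp(-(theta[:, None] + easiness[None, :])))
    ll = (resp * np.log(p) + (1 - resp) * np.log(1 - p)).sum(axis=1)
    e = (p * np.log(p) + (1 - p) * np.log(1 - p)).sum(axis=1)
    v = (p * (1 - p) * np.log(p / (1 - p)) ** 2).sum(axis=1)
    return (ll - e) / np.sqrt(v)

def iterative_cleanse(resp, z_cut=-2.0, max_rounds=20):
    """Drop flagged respondents and recalibrate until none are flagged."""
    keep = np.ones(len(resp), dtype=bool)
    for _ in range(max_rounds):
        easiness = item_easiness(resp[keep])
        flagged = lz_person_fit(resp[keep], easiness) < z_cut
        if not flagged.any():
            break
        keep[np.flatnonzero(keep)[flagged]] = False  # remove flagged responders
    return keep
```

Recalibrating after each removal is what restores the item parameter estimates: the careless patterns no longer bias the discrimination and location estimates or shrink the standard errors spuriously.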


Author(s): Sebastián García, Alejandro Zunino, Marcelo Campo

The detection of bots and botnets in the network may be improved if the analysis is done on the traffic of one bot alone. While a botnet may be detected by correlating the behavior of several bots across a large amount of traffic, a single bot can be detected by analyzing its unique trends in much less traffic. Algorithms that differentiate the traffic of one bot from the normal traffic of one computer can take advantage of these differences. The authors propose to detect bots in the network by analyzing the relationships between flow features within a time window. The technique is based on the Expectation-Maximization clustering algorithm. To verify the method, the authors designed test-beds and obtained a dataset of six different captures. The results are encouraging, showing a true positive rate of 99.08% with a false positive error rate of 0.7%.
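A minimal version of the clustering step can be written with an off-the-shelf EM implementation. The flow features and the two-component setup below are illustrative assumptions, not the paper's exact feature set.

```python
# Sketch: cluster per-time-window flow features with EM (Gaussian mixture)
# and treat the minority cluster as the bot candidate.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(2)
# Columns: flows per window, mean flow size (bytes), unique destination ports.
normal = rng.normal([50, 800, 5], [10, 200, 2], size=(500, 3))
bot = rng.normal([200, 120, 40], [20, 30, 8], size=(50, 3))  # periodic C&C traffic
X = np.vstack([normal, bot])

gm = GaussianMixture(n_components=2, random_state=0).fit(X)  # EM under the hood
labels = gm.predict(X)
# Assumption: the smaller cluster is bot-like; a deployed detector would
# label clusters using known-bot captures instead.
bot_cluster = np.argmin(np.bincount(labels))
print(f"windows flagged as bot-like: {(labels == bot_cluster).sum()}")
```

The reported error rates come from labeled captures; the simulated features here only show the mechanics of the clustering step.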


Author(s): Sebastián García, Alejandro Zunino, Marcelo Campo

Botnets’ diversity and dynamism challenge detection and classification algorithms that depend heavily on static or protocol-dependent features. Several methods with promising results have been proposed using behavioral-based approaches. The authors analyzed botnets’ and bots’ most inherent characteristics, such as synchronism and network load within specific time windows, to detect them more efficiently. Because it does not rely on any specific protocol, the proposed approach detects infected computers by clustering bots’ network behavioral characteristics with the Expectation-Maximization algorithm. An encouraging false positive error rate of 0.7% shows that bot traffic can be accurately separated by this approach, based on an analysis of several bot and non-botnet network captures and a detailed study of error rates.
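The windowing that produces those characteristics can be sketched independently of the clustering. The field values below are simulated, and the 60 s window length is an assumption.

```python
# Sketch: aggregate raw flow records into fixed time windows to obtain
# synchronism (flow counts) and network-load (bytes) features per window.
import numpy as np

rng = np.random.default_rng(3)
ts = np.sort(rng.uniform(0, 3600, 1000))   # flow start times over 1 h (seconds)
size = rng.integers(60, 1500, 1000)        # bytes per flow

win = (ts // 60).astype(int)               # assign each flow to a 60 s window
n_win = win.max() + 1
n_flows = np.bincount(win, minlength=n_win)             # synchronism proxy
load = np.bincount(win, weights=size, minlength=n_win)  # network load
features = np.column_stack([n_flows, load])             # one row per window
print(features[:5])
```

Rows like these, computed per host, are what an EM clustering such as the one sketched above would separate into bot-like and normal groups.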


2009, Vol 102 (1), pp. 636-643
Author(s): Takuya Sasaki, Genki Minamisawa, Naoya Takahashi, Norio Matsuki, Yuji Ikegaya

We introduce a new method to unveil the network connectivity among dozens of neurons in brain slice preparations. While synaptic inputs were whole-cell recorded from given postsynaptic neurons, the spatiotemporal firing patterns of candidate presynaptic neurons were monitored en masse with functional multineuron calcium imaging, an optical technique that records action potential–evoked somatic calcium transients with single-cell resolution. By statistically screening the neurons that exhibited calcium transients immediately before the postsynaptic inputs, we identified the presynaptic cells that made synaptic connections onto the patch-clamped neurons. To enhance the detection power, we made two refinements: 1) [K+]e was lowered and [Ca2+]e and [Mg2+]e were elevated, to reduce background synaptic activity and minimize the failure rate of synaptic transmission; and 2) a small fraction of presynaptic neurons was specifically activated by glutamate applied iontophoretically through a glass pipette that was moved to survey the presynaptic network of interest (“trawling”). With this approach, we could theoretically detect 96% of the presynaptic neurons activated in the imaged regions at a 1% false-positive error rate. This on-line probing technique would be a promising tool in the study of the wiring topography of neuronal circuits.
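The screening step is essentially a timing test: a candidate is accepted when its transients precede the postsynaptic inputs more often than chance. The sketch below uses a shuffle (resampling) null as a generic stand-in for the authors' statistical screen; window length, recording duration, and event rates are illustrative assumptions.

```python
# Flag a neuron as presynaptic if its calcium transients fall within a short
# window before postsynaptic currents more often than jittered surrogates do.
import numpy as np

rng = np.random.default_rng(6)

def precede_count(transients, psc_times, window=0.05):
    """Transients landing within `window` seconds before a postsynaptic event."""
    d = psc_times[None, :] - transients[:, None]
    return int(((d > 0) & (d < window)).sum())

def screen_neuron(transients, psc_times, n_shuffle=1000, alpha=0.01):
    observed = precede_count(transients, psc_times)
    t_max = psc_times.max()
    null = np.array([
        precede_count(np.sort(rng.uniform(0, t_max, transients.size)), psc_times)
        for _ in range(n_shuffle)
    ])
    p = (null >= observed).mean()   # one-sided empirical p-value
    return p < alpha                # alpha = 0.01 mirrors the 1% FPR target

psc = np.sort(rng.uniform(0, 60, 40))                   # postsynaptic events (s)
pre = np.sort(psc[:20] - rng.uniform(0.001, 0.04, 20))  # a true presynaptic cell
print(screen_neuron(pre, psc))                          # expected: True
```

Setting alpha at 1% is what fixes the false-positive error rate; the 96% detection figure then depends on transmission reliability, which is why the authors tuned the ionic conditions.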


2009, Vol 27 (15_suppl), pp. 6512-6512
Author(s): H. Tang, N. R. Foster, A. Grothey, S. M. Ansell, D. J. Sargent

Background: The use of randomized phase II designs with an experimental arm and a standard-treatment control arm (R2PII) instead of a conventional single-arm design is clearly increasing in oncology. In practice, sample size, related cost issues, the belief that historical controls are adequate, and the use of a standard-treatment control arm in a phase II setting are frequently raised objections to R2PII trials. As the expense and complexity of definitive phase III trials increase, the ability of phase II trials to provide reliable and accurate results is critical. Methods: We investigated the ability of single-arm vs R2PII trials to provide accurate conclusions by modeling variability in historical controls, patient outcome drifts independent of the tested therapy, and patient selection effects. Simulations compared R2PII and single-arm designs with binary endpoints under realistic parameters (e.g., alpha = beta = 0.10, historical control success rate = 20%, target success rate = 40%). Results: In the absence of variability in historical controls, estimated false positive and negative rates in both designs mirror the designated specifications. However, even in the presence of a modest drift effect in the population (mean 5% absolute shift in true control success rate), the false positive rate in single-arm designs is inflated twofold to threefold (to 20%-30%), while the R2PII design retains the desired error rates. Greater confidence in historical controls in the single-arm design corrects only a small portion of the deviations. Increasing the sample size of each trial inflates the false positive error rate further, to as much as 50%. Varying several sets of parameters gave similar results. Conclusions: In the presence of variability in historical controls, patient drift, and/or selection effects, the false positive error rate of a single-arm design is unacceptably high. In contrast, the R2PII design is reliable and robust despite the complexities of patient outcome drift, selection effects, and variability in historical control success rates. Given the rapid improvements in outcomes for many tumor types, the R2PII design should be the preferred method for evaluating novel agents in oncology, in spite of the associated costs and the use of a reference control arm. No significant financial relationships to disclose.
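The drift mechanism is straightforward to reproduce in a small Monte Carlo sketch. Design values below follow the abstract (alpha = 0.10, historical control rate 20%); the per-trial sample size, the exact single-arm decision rule, and the drift distribution are assumptions.

```python
# Single-arm trials tested against a fixed historical 20% while the true
# control rate drifts: the false positive rate inflates well above 0.10.
import numpy as np
from scipy.stats import binomtest

rng = np.random.default_rng(7)
p_hist, n, n_sims = 0.20, 50, 5000
drift_sd = 0.05 * np.sqrt(np.pi / 2)   # sd chosen so mean |shift| is ~5%

false_pos = 0
for _ in range(n_sims):
    # The drug adds nothing (null case); only the population drifts.
    p_true = float(np.clip(p_hist + rng.normal(0, drift_sd), 0.01, 0.99))
    x = rng.binomial(n, p_true)
    # Single-arm rule: declare success if the response rate beats history.
    if binomtest(int(x), n, p_hist, alternative="greater").pvalue < 0.10:
        false_pos += 1
print(f"single-arm false positive rate under drift: {false_pos / n_sims:.2f}")
```

A randomized control arm drawn from the same drifted population removes this inflation, since both arms shift together, which is the abstract's argument for the R2PII design.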

