Specificity of various cluster criteria used for the detection of glaucomatous visual field abnormalities

2019 ◽  
Vol 104 (6) ◽  
pp. 822-826
Author(s):  
Zhichao Wu ◽  
Felipe A Medeiros ◽  
Robert N Weinreb ◽  
Christopher A Girkin ◽  
Linda M Zangwill

Purpose: This study aimed to evaluate the specificity of commonly used cluster criteria for defining the presence of glaucomatous visual field abnormalities and the impact of variations in the criterion used.
Methods: This observational study included 607 eyes of 384 healthy participants and 501 eyes of 345 participants with glaucoma, each with at least two reliable 24-2 visual field tests. An abnormal visual field cluster was defined as the presence of ≥3 contiguous abnormal locations. Variations in this definition were evaluated, including (1) whether abnormalities were based on total deviation and/or pattern deviation values; (2) the probability cut-off used to define an abnormal location; and (3) whether abnormalities were required to be repeatable (within the same hemifield or at the same locations) or not. These definitions were also compared against pattern standard deviation (PSD) values.
Results: False-positive rates of the various cluster criteria ranged between 9% and 46%, depending on the specific definitions used. Only definitions that required abnormalities to be repeatable at the same locations achieved a false-positive rate of ≤6%. The various cluster criteria generally performed similarly to, or worse than, the PSD values at detecting glaucomatous eyes.
Conclusions: Commonly used visual field cluster criteria have high false-positive rates that vary widely depending on the definition used. These findings highlight the need to carefully consider the criteria used when designing and interpreting glaucoma clinical studies.
Trial registration number: NCT00221923.
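
As a rough illustration of the kind of criterion evaluated here, the sketch below flags a field when at least three contiguous test locations are abnormal at a chosen probability cut-off. The 24-2 adjacency map and the deviation p-values are assumed inputs supplied by the caller, and the repeatability requirement discussed above is deliberately left out; this is a minimal sketch, not the study's analysis code.

```python
# Minimal sketch of the core cluster check: flag a visual field as abnormal when
# >= 3 contiguous test locations fall below a probability cut-off.  `p_values`
# and `neighbours` (the 24-2 adjacency map) are assumed inputs; the names are
# illustrative, not taken from the study.
from collections import deque

def has_abnormal_cluster(p_values, neighbours, cutoff=0.05, min_size=3):
    """p_values: {location_id: deviation p-value}; neighbours: {location_id: adjacent ids}."""
    abnormal = {loc for loc, p in p_values.items() if p < cutoff}
    seen = set()
    for start in abnormal:
        if start in seen:
            continue
        # Breadth-first search over contiguous abnormal locations.
        cluster, queue = set(), deque([start])
        while queue:
            loc = queue.popleft()
            if loc in cluster:
                continue
            cluster.add(loc)
            queue.extend(n for n in neighbours.get(loc, ())
                         if n in abnormal and n not in cluster)
        seen |= cluster
        if len(cluster) >= min_size:
            return True
    return False
```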

2021 ◽  
pp. bjophthalmol-2020-318188
Author(s):  
Shotaro Asano ◽  
Hiroshi Murata ◽  
Yuri Fujino ◽  
Takehiro Yamashita ◽  
Atsuya Miki ◽  
...  

Background/Aims: To investigate the clinical validity of the Guided Progression Analysis definition (GPAD) and cluster-based definition (CBD) with the Humphrey Field Analyzer 10-2 test in diagnosing glaucomatous visual field (VF) progression, and to introduce a novel definition with optimised specificity that combines the 'any-location' and 'cluster-based' approaches (hybrid definition).
Methods: 64 400 stable glaucomatous VFs were simulated from 664 pairs of 10-2 tests (10 sets × 10 VF series × 664 eyes; data set 1). Using these simulated VFs, the specificity to detect progression and the effects of changing the parameters (number of test locations or consecutive VF tests, and percentile cut-off values) were investigated. The hybrid definition was designed as the combination whose specificity was closest to 95.0%. Subsequently, another 5000 actual glaucomatous 10-2 tests from 500 eyes (10 VFs each) were collected (data set 2), and the accuracy (sensitivity, specificity and false positive rate) and the time needed to detect VF progression were evaluated.
Results: The specificity values calculated using data set 1 were 99.6% for GPAD and 99.8% for CBD. Using data set 2, the hybrid definition had a higher sensitivity than GPAD and CBD, without detriment to the specificity or false positive rate. The hybrid definition also detected progression significantly earlier than GPAD and CBD (at 3.1 years vs 4.2 years and 4.1 years, respectively).
Conclusions: GPAD and CBD had specificities of 99.6% and 99.8%, respectively. A novel hybrid definition (with a specificity of 95.5%) had higher sensitivity and enabled earlier detection of progression.
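
For orientation, specificity in data set 1 is simply the proportion of simulated stable series that a given progression rule does not flag. The sketch below shows that calculation in schematic form; `flags_progression` is a placeholder for any rule (GPAD, CBD or a hybrid definition), not the authors' implementation.

```python
# Schematic specificity estimate over simulated stable series (data set 1).
# `flags_progression` stands in for any progression rule; it is an assumed placeholder.
def estimate_specificity(stable_series, flags_progression):
    """stable_series: iterable of simulated VF series known to be stable."""
    false_positives = sum(1 for series in stable_series if flags_progression(series))
    return 1.0 - false_positives / len(stable_series)

# A hybrid rule would then be tuned by varying its parameters (test locations,
# consecutive tests, percentile cut-offs) until the estimate is closest to 0.95.
```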


2021 ◽  
Vol 42 (Supplement_1) ◽  
Author(s):  
A Rosier ◽  
E Crespin ◽  
A Lazarus ◽  
G Laurent ◽  
A Menet ◽  
...  

Abstract
Background: Implantable loop recorders (ILRs) are increasingly used and generate a high workload for timely adjudication of ECG recordings. In particular, the excessive false positive rate leads to a significant review burden.
Purpose: A novel machine learning algorithm was developed to reclassify ILR episodes in order to decrease the false positive rate by 80% while maintaining 99% sensitivity. This study aims to evaluate the impact of this algorithm in reducing the number of abnormal episodes reported by Medtronic ILRs.
Methods: Across 20 European centers, all Medtronic ILR patients were enrolled during the second half of 2020. Using a remote monitoring platform, every ILR-transmitted episode was collected and anonymised. For every ILR-detected episode with a transmitted ECG, the new algorithm reclassified it using the same labels as the ILR (asystole, brady, AT/AF, VT, artifact, normal). We measured the number of episodes identified as false positive and reclassified as normal by the algorithm, and their proportion among all episodes.
Results: In 370 patients, ILRs recorded 3755 episodes, including 305 patient-triggered episodes and 629 with no ECG transmitted. 2821 episodes were analyzed by the novel algorithm, which reclassified 1227 episodes as normal rhythm. These reclassified episodes accounted for 43% of analyzed episodes and 32.6% of all episodes recorded.
Conclusion: A novel machine learning algorithm significantly reduces the number of episodes flagged as abnormal and typically reviewed by healthcare professionals.
Funding acknowledgement: None.
Figure 1: ILR episodes analysis.
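
The reported proportions follow directly from the episode counts given above; the quick check below reproduces them to within rounding (small differences from the published figures may reflect slightly different denominators).

```python
# Quick check of the proportions reported in the abstract.
total_recorded = 3755        # all episodes recorded by the ILRs
analyzed = 2821              # episodes with a transmitted ECG, analyzed by the algorithm
reclassified_normal = 1227   # episodes the algorithm reclassified as normal rhythm

print(f"{reclassified_normal / analyzed:.1%} of analyzed episodes")        # ~43.5%, reported as 43%
print(f"{reclassified_normal / total_recorded:.1%} of recorded episodes")  # ~32.7%, reported as 32.6%
```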


2017 ◽  
Author(s):  
Harry Crane

A recent proposal to "redefine statistical significance" (Benjamin et al., Nature Human Behaviour, 2017) claims that false positive rates "would immediately improve" by factors greater than two and that replication rates would double simply by changing the conventional cutoff for 'statistical significance' from P&lt;0.05 to P&lt;0.005. I analyze the veracity of these claims, focusing especially on how Benjamin et al. neglect the effects of P-hacking in assessing the impact of their proposal. My analysis shows that once P-hacking is accounted for, the perceived benefits of the lower threshold all but disappear, prompting two main conclusions: (i) the claimed improvements to the false positive rate and replication rate in Benjamin et al. (2017) are exaggerated and misleading; (ii) there are plausible scenarios under which the lower cutoff will make the replication crisis worse.
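
To make the role of P-hacking concrete, the toy model below (an illustration under a simple independence assumption, not the analysis in the paper) shows how trying several analysis variants and reporting the smallest p-value inflates the false positive rate at any fixed threshold.

```python
# Under the null, a single well-behaved test yields p-values uniform on [0, 1].
# If a researcher tries k independent analysis variants and reports the smallest
# p-value, the false positive rate at threshold alpha is 1 - (1 - alpha)**k.
def hacked_false_positive_rate(alpha, k):
    return 1 - (1 - alpha) ** k

for alpha in (0.05, 0.005):
    for k in (1, 5, 10):
        print(f"alpha={alpha:<5}  variants={k:<2}  "
              f"FPR={hacked_false_positive_rate(alpha, k):.3f}")
```

Real P-hacking is more adaptive than this independence model; the sketch only illustrates that a threshold change by itself does not neutralise researcher degrees of freedom.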


2021 ◽  
Vol 14 (11) ◽  
pp. 2355-2368
Author(s):  
Tobias Schmidt ◽  
Maximilian Bandle ◽  
Jana Giceva

With today's data deluge, approximate filters are particularly attractive for avoiding expensive operations like remote data/disk accesses. Among the many filter variants available, it is non-trivial to find the most suitable one and its optimal configuration for a specific use case. We provide open-source implementations of the most relevant filters (Bloom, Cuckoo, Morton, and Xor filters) and compare them along four key dimensions: false-positive rate, space consumption, build throughput, and lookup throughput. We improve upon existing state-of-the-art implementations with a new optimization, radix partitioning, which boosts the build and lookup throughput of large filters by up to 9x and 5x, respectively. Our in-depth evaluation first studies the impact of each available optimization separately before combining them to determine the optimal filter for specific use cases. While register-blocked Bloom filters offer the highest throughput, the new Xor filters are best suited when optimizing for small filter sizes or low false-positive rates.
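
As a reminder of the trade-off behind the first two dimensions, the classical Bloom filter approximation below relates the space budget to the false-positive rate; it is the textbook formula, not the paper's benchmark code.

```python
# Classical Bloom filter approximation: with m bits, n keys and k hash functions,
# FPR ~= (1 - e^(-k*n/m))**k, minimised near k = (m/n) * ln 2.
import math

def bloom_fpr(m_bits, n_keys, k_hashes):
    return (1 - math.exp(-k_hashes * n_keys / m_bits)) ** k_hashes

def optimal_k(m_bits, n_keys):
    return max(1, round((m_bits / n_keys) * math.log(2)))

n = 1_000_000
for bits_per_key in (8, 10, 16):
    m = bits_per_key * n
    k = optimal_k(m, n)
    print(f"{bits_per_key} bits/key (k={k}): FPR ~= {bloom_fpr(m, n, k):.4f}")
```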


2020 ◽  
Vol 7 (2) ◽  
pp. 190831
Author(s):  
Luis Morís Fernández ◽  
Miguel A. Vadillo

In the present article, we explore the influence of undisclosed flexibility in the analysis of reaction times (RTs). RTs entail some degrees of freedom of their own, owing to their skewed distribution, the potential presence of outliers and the availability of different methods to deal with these issues. Moreover, these degrees of freedom are usually not considered part of the analysis itself, but rather preprocessing steps that are contingent on the data. We analysed the impact of these degrees of freedom on the false-positive rate using simulations over real and simulated data. When several preprocessing methods are used in combination, the false-positive rate can easily rise to 17%. This figure becomes more concerning if we consider that further degrees of freedom await later in the analysis pipeline, potentially making the final false-positive rate much higher.
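
A stripped-down version of the kind of simulation described here is sketched below: both groups are drawn from the same skewed RT distribution, several common outlier-exclusion rules are tried, and a result is declared significant if any of them yields p &lt; .05. The distribution, cut-offs and sample sizes are illustrative assumptions, not the authors' settings.

```python
# Illustration of preprocessing flexibility inflating the false-positive rate.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def preprocess_variants(rt):
    yield rt                                              # no trimming
    yield rt[rt < 2000]                                   # fixed cut-off (ms)
    yield rt[np.abs(rt - rt.mean()) < 2 * rt.std()]       # 2 SD rule
    yield rt[np.abs(rt - rt.mean()) < 3 * rt.std()]       # 3 SD rule

def flexible_test(a, b, alpha=0.05):
    # "Significant" if any preprocessing variant gives p < alpha.
    return any(stats.ttest_ind(x, y).pvalue < alpha
               for x, y in zip(preprocess_variants(a), preprocess_variants(b)))

n_sims = 2000
hits = sum(flexible_test(rng.lognormal(6.2, 0.4, 40),   # both groups share the
                         rng.lognormal(6.2, 0.4, 40))   # same null distribution
           for _ in range(n_sims))
print(f"empirical false-positive rate: {hits / n_sims:.3f}")  # above the nominal .05
```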


2002 ◽  
Vol 41 (01) ◽  
pp. 37-41 ◽  
Author(s):  
S. Shung-Shung ◽  
S. Yu-Chien ◽  
Y. Mei-Due ◽  
W. Hwei-Chung ◽  
A. Kao

Summary
Aim: Even with careful observation, the overall false-positive rate of laparotomy remains 10-15% when acute appendicitis is suspected. We therefore assessed the clinical efficacy of the Tc-99m HMPAO-labelled leukocyte (TC-WBC) scan for the diagnosis of acute appendicitis in patients presenting with atypical clinical findings.
Patients and Methods: Eighty patients presenting with acute abdominal pain and possible acute appendicitis but atypical findings were included in this study. After intravenous injection of TC-WBC, serial anterior abdominal/pelvic images (800k counts) were obtained with a gamma camera at 30, 60, 120 and 240 min. Any abnormal localization of radioactivity in the right lower quadrant of the abdomen, equal to or greater than bone marrow activity, was considered a positive scan.
Results: Thirty-six of the 49 patients with positive TC-WBC scans underwent appendectomy; all had positive pathological findings. Five positive TC-WBC scans were unrelated to acute appendicitis and were due to other pathological lesions. Eight patients were not operated on, and clinical follow-up after one month revealed no acute abdominal condition. Three of the 31 patients with negative TC-WBC scans underwent appendectomy; they also had positive pathological findings. The remaining 28 patients did not undergo operation and showed no evidence of appendicitis after at least one month of follow-up. The overall sensitivity, specificity, accuracy, positive and negative predictive values of the TC-WBC scan for diagnosing acute appendicitis were 92%, 78%, 86%, 82% and 90%, respectively.
Conclusion: The TC-WBC scan provides a rapid and highly accurate method for the diagnosis of acute appendicitis in patients with equivocal clinical findings. It proved useful in reducing the false-positive rate of laparotomy and shortening the time needed for clinical observation.
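
For readers who want to trace the reported figures, the reconstruction below shows how they can be obtained from the counts in the abstract, under the assumption (ours, not stated explicitly) that the five scans positive for other pathology were excluded from the 2 × 2 table; this reproduces the published values approximately.

```python
# Reconstruction of the diagnostic metrics from the counts in the abstract.
# Assumption: the 5 scans positive for other pathology are excluded from the table.
tp, fn = 36, 3    # appendicitis confirmed at appendectomy (positive / negative scans)
fp, tn = 8, 28    # no appendicitis on follow-up (positive / negative scans)

sensitivity = tp / (tp + fn)                   # 36/39 ~ 92%
specificity = tn / (tn + fp)                   # 28/36 ~ 78%
ppv = tp / (tp + fp)                           # 36/44 ~ 82%
npv = tn / (tn + fn)                           # 28/31 ~ 90%
accuracy = (tp + tn) / (tp + tn + fp + fn)     # 64/75 ~ 85%, reported as 86%
print(sensitivity, specificity, ppv, npv, accuracy)
```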


1993 ◽  
Vol 32 (02) ◽  
pp. 175-179 ◽  
Author(s):  
B. Brambati ◽  
T. Chard ◽  
J. G. Grudzinskas ◽  
M. C. M. Macintosh

Abstract: The analysis of the clinical efficiency of a biochemical parameter in the prediction of chromosome anomalies is described, using a database of 475 cases including 30 abnormalities. Two different approaches to the statistical analysis were compared: the use of Gaussian frequency distributions and likelihood ratios, and logistic regression. Both methods computed that, for a 5% false-positive rate, approximately 60% of anomalies are detected on the basis of maternal age and serum PAPP-A. The logistic regression analysis is appropriate where the outcome variable (chromosome anomaly) is binary and the detection rates refer to the original data only. The likelihood ratio method is used to predict the outcome in the general population. The latter method depends on the data, or some transformation of the data, fitting a known frequency distribution (Gaussian in this case). The precision of the predicted detection rates is limited by the small sample of abnormal cases (30). Varying the means and standard deviations (to the limits of their 95% confidence intervals) of the fitted log Gaussian distributions resulted in a detection rate varying between 42% and 79% for a 5% false-positive rate. Thus, although the likelihood ratio method is potentially the better method for determining the usefulness of a test in the general population, larger numbers of abnormal cases are required to stabilise the means and standard deviations of the fitted log Gaussian distributions.
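
The likelihood ratio approach amounts to fixing a cut-off at the 5th percentile of the fitted unaffected distribution and reading the detection rate off the fitted affected distribution. The sketch below shows that calculation with placeholder Gaussian parameters; the means and standard deviations are not the study's fitted values.

```python
# Detection rate at a fixed 5% false-positive rate from two fitted distributions.
# The loc/scale values are placeholders, not the study's fitted parameters.
from scipy import stats

unaffected = stats.norm(loc=0.0, scale=1.0)   # e.g. log PAPP-A MoM, unaffected pregnancies
affected = stats.norm(loc=-1.0, scale=1.0)    # affected pregnancies, lower on average

cutoff = unaffected.ppf(0.05)          # 5% of unaffected fall below this value
detection_rate = affected.cdf(cutoff)  # proportion of affected below the cut-off
print(f"detection rate at 5% FPR: {detection_rate:.0%}")
```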


2019 ◽  
Author(s):  
Amanda Kvarven ◽  
Eirik Strømland ◽  
Magnus Johannesson

Andrews & Kasy (2019) propose an approach for adjusting effect sizes in meta-analysis for publication bias. We use the Andrews-Kasy estimator to adjust the result of 15 meta-analyses and compare the adjusted results to 15 large-scale multiple labs replication studies estimating the same effects. The pre-registered replications provide precisely estimated effect sizes, which do not suffer from publication bias. The Andrews-Kasy approach leads to a moderate reduction of the inflated effect sizes in the meta-analyses. However, the approach still overestimates effect sizes by a factor of about two or more and has an estimated false positive rate of between 57% and 100%.

