Effect of the absolute statistic on gene-sampling gene-set analysis methods

2015 ◽  
Vol 26 (3) ◽  
pp. 1248-1260 ◽  
Author(s):  
Dougu Nam

Gene-set enrichment analysis and its modified versions have commonly been used for identifying altered functions or pathways in disease from microarray data. In particular, simple gene-sampling gene-set analysis methods have been heavily used for datasets with only a few sample replicates. The biggest problem with this approach is its highly inflated false-positive rate. In this paper, the effect of the absolute gene statistic on gene-sampling gene-set analysis methods is systematically investigated. Thus far, the absolute gene statistic has merely been regarded as a supplementary method for capturing bidirectional changes within each gene set. Here, it is shown that incorporating the absolute gene statistic into gene-sampling gene-set analysis substantially reduces the false-positive rate and improves the overall discriminatory ability. Its effect was assessed using power, false-positive rate, and receiver operating characteristic (ROC) curves for a number of simulated and real datasets. The performance of gene-set analysis methods in one-tailed (genome-wide association study) and two-tailed (gene expression data) tests is also compared and discussed.
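
For readers unfamiliar with the gene-sampling approach, the following is a minimal sketch (Python, on hypothetical per-gene statistics) of a gene-sampling test using the absolute gene statistic: the mean |statistic| of a gene set is compared against a null distribution obtained by repeatedly sampling random gene sets of the same size. It illustrates the general technique only, not the paper's exact implementation.

```python
import numpy as np

def gene_sampling_pvalue(gene_stats, set_idx, n_perm=10000, absolute=True, rng=None):
    """Gene-sampling test: compare the mean (absolute) statistic of a gene set
    against gene sets of the same size drawn at random from all genes."""
    rng = np.random.default_rng(rng)
    stats = np.abs(gene_stats) if absolute else gene_stats
    observed = stats[set_idx].mean()
    n_genes, set_size = len(stats), len(set_idx)
    null = np.empty(n_perm)
    for i in range(n_perm):
        sample = rng.choice(n_genes, size=set_size, replace=False)
        null[i] = stats[sample].mean()
    # One-sided p-value: how often a random gene set scores at least as high
    return (1 + np.sum(null >= observed)) / (n_perm + 1)

# Toy example with hypothetical per-gene t-statistics
rng = np.random.default_rng(0)
gene_stats = rng.normal(0, 1, size=5000)
gene_stats[:50] += rng.choice([-2.5, 2.5], size=50)   # a bidirectionally changed set
p = gene_sampling_pvalue(gene_stats, np.arange(50), absolute=True, rng=1)
print(f"absolute-statistic p-value: {p:.4f}")
```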

2018 ◽  
Author(s):  
Farhad Maleki ◽  
Anthony J. Kusalik

Abstract Gene set analysis methods are widely used to analyze data from high-throughput “omics” technologies. One drawback of these methods is their low specificity, or high false-positive rate. Over-representation analysis is one of the most commonly used gene set analysis methods. In this paper, we propose a systematic approach to investigate the hypothesis that gene set overlap is an underlying cause of low specificity in over-representation analysis. We quantify gene set overlap and show that it is a ubiquitous phenomenon across gene set databases. Statistical analysis indicates a strong negative correlation between gene set overlap and the specificity of over-representation analysis. We conclude that gene set overlap is an underlying cause of the low specificity. This result highlights the importance of considering gene set overlap in gene set analysis and explains the lack of specificity of methods that ignore it. This research also points to a direction for developing new gene set analysis methods.
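
As a concrete illustration of the two quantities studied here, the sketch below (Python/SciPy, hypothetical gene identifiers) computes a hypergeometric over-representation p-value and a Jaccard overlap index, and shows how a gene set can appear enriched purely through its overlap with a truly perturbed set. It is an illustrative sketch, not the authors' analysis pipeline.

```python
from scipy.stats import hypergeom

def ora_pvalue(de_genes, gene_set, background):
    """Over-representation analysis: hypergeometric tail probability of seeing
    at least this many differentially expressed (DE) genes inside the gene set."""
    de, gs, bg = set(de_genes), set(gene_set) & set(background), set(background)
    k = len(de & gs)                      # DE genes inside the set
    return hypergeom.sf(k - 1, len(bg), len(gs), len(de & bg))

def jaccard(a, b):
    """Overlap between two gene sets as a Jaccard index."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b)

# Hypothetical gene sets sharing many members
background = [f"g{i}" for i in range(1000)]
set_a = background[0:60]
set_b = background[30:90]                  # overlaps set_a in 30 genes
de_genes = background[0:40]                # DE genes all drawn from set_a
print("Jaccard(A, B):", round(jaccard(set_a, set_b), 2))
print("ORA p, set A:", ora_pvalue(de_genes, set_a, background))
print("ORA p, set B (signal only via overlap):", ora_pvalue(de_genes, set_b, background))
```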


2008 ◽  
Vol 6 ◽  
pp. CIN.S867 ◽  
Author(s):  
Irina Dinu ◽  
Qi Liu ◽  
John D. Potter ◽  
Adeniyi J. Adewale ◽  
Gian S. Jhangri ◽  
...  

Gene-set analysis of microarray data evaluates biological pathways, or gene sets, for their differential expression by a phenotype of interest. In contrast to the analysis of individual genes, gene-set analysis utilizes existing biological knowledge of genes and their pathways in assessing differential expression. This paper evaluates the biological performance of five gene-set analysis methods testing “self-contained null hypotheses” via subject sampling, along with the most popular gene-set analysis method, Gene Set Enrichment Analysis (GSEA). We use three real microarray analyses in which differentially expressed gene sets are predictable biologically from the phenotype. Two types of gene sets are considered for this empirical evaluation: one type contains “truly positive” sets that should be identified as differentially expressed; and the other type contains “truly negative” sets that should not be identified as differentially expressed. Our evaluation suggests advantages of SAM-GS, Global, and ANCOVA Global methods over GSEA and the other two methods.
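
The self-contained, subject-sampling strategy evaluated here can be sketched as follows (Python, toy data): the set statistic is recomputed after permuting sample labels rather than genes. The statistic below, a sum of squared per-gene t-like statistics, is only loosely in the spirit of SAM-GS and is not the published formula.

```python
import numpy as np

def subject_sampling_pvalue(expr, labels, set_idx, n_perm=2000, rng=None):
    """Self-contained gene-set test via subject (sample-label) permutation.
    The set statistic is the sum of squared per-gene t-like statistics
    (a sketch, not the exact SAM-GS formula)."""
    rng = np.random.default_rng(rng)

    def set_stat(y):
        a, b = expr[np.ix_(set_idx, y == 0)], expr[np.ix_(set_idx, y == 1)]
        t = (a.mean(1) - b.mean(1)) / np.sqrt(a.var(1, ddof=1) / a.shape[1]
                                              + b.var(1, ddof=1) / b.shape[1])
        return np.sum(t ** 2)

    observed = set_stat(labels)
    null = np.array([set_stat(rng.permutation(labels)) for _ in range(n_perm)])
    return (1 + np.sum(null >= observed)) / (n_perm + 1)

# Toy data: 200 genes x 20 samples, 10 per group; the first 15 genes form the set
rng = np.random.default_rng(0)
expr = rng.normal(size=(200, 20))
labels = np.array([0] * 10 + [1] * 10)
expr[:15, labels == 1] += 1.0              # shift the tested set in group 1
print(subject_sampling_pvalue(expr, labels, np.arange(15), rng=1))
```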


2002 ◽  
Vol 41 (01) ◽  
pp. 37-41 ◽  
Author(s):  
S. Shung-Shung ◽  
S. Yu-Chien ◽  
Y. Mei-Due ◽  
W. Hwei-Chung ◽  
A. Kao

Summary Aim: Even with careful observation, the overall false-positive rate of laparotomy remains 10-15% when acute appendicitis is suspected. We therefore assessed the clinical efficacy of the Tc-99m HMPAO labeled leukocyte (TC-WBC) scan for diagnosing acute appendicitis in patients presenting with atypical clinical findings. Patients and Methods: Eighty patients presenting with acute abdominal pain and possible acute appendicitis but atypical findings were included in this study. After intravenous injection of TC-WBC, serial anterior abdominal/pelvic images were obtained with a gamma camera at 30, 60, 120 and 240 min (800k counts each). Any abnormal localization of radioactivity in the right lower quadrant of the abdomen, equal to or greater than bone marrow activity, was considered a positive scan. Results: Of the 49 patients with positive TC-WBC scans, 36 underwent appendectomy, and all had positive pathological findings. Five positive TC-WBC scans were unrelated to acute appendicitis and were explained by other pathological lesions. Eight patients were not operated on, and clinical follow-up after one month revealed no acute abdominal condition. Three of the 31 patients with negative TC-WBC scans underwent appendectomy; they also had positive pathological findings. The remaining 28 patients were not operated on and showed no evidence of appendicitis after at least one month of follow-up. The overall sensitivity, specificity, accuracy, positive and negative predictive values of the TC-WBC scan for diagnosing acute appendicitis were 92, 78, 86, 82, and 90%, respectively. Conclusion: The TC-WBC scan provides a rapid and highly accurate method for diagnosing acute appendicitis in patients with an equivocal clinical examination. It proved useful in reducing the false-positive rate of laparotomy and in shortening the time necessary for clinical observation.
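
The reported diagnostic measures can be approximately reproduced from the counts given in the abstract; the sketch below (Python) assumes the five scans positive due to other pathology are excluded from the 2x2 appendicitis table, since that assumption best matches the published figures (accuracy differs by about one percentage point depending on how those five cases are counted).

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Standard test metrics from a 2x2 confusion matrix, as percentages."""
    return {
        "sensitivity": 100 * tp / (tp + fn),
        "specificity": 100 * tn / (tn + fp),
        "accuracy": 100 * (tp + tn) / (tp + fp + tn + fn),
        "PPV": 100 * tp / (tp + fp),
        "NPV": 100 * tn / (tn + fn),
    }

# Counts read off the abstract, assuming the five scans positive due to other
# pathology are excluded from the 2x2 table. This roughly reproduces the
# reported 92/78/86/82/90% (accuracy comes out ~85% under this assumption).
tp, fn = 36, 3    # positive scan with appendicitis; negative scan with appendicitis
fp, tn = 8, 28    # positive scan, no appendicitis; negative scan, no appendicitis
for name, value in diagnostic_metrics(tp, fp, tn, fn).items():
    print(f"{name}: {value:.0f}%")
```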


1993 ◽  
Vol 32 (02) ◽  
pp. 175-179 ◽  
Author(s):  
B. Brambati ◽  
T. Chard ◽  
J. G. Grudzinskas ◽  
M. C. M. Macintosh

Abstract: The analysis of the clinical efficiency of a biochemical parameter in the prediction of chromosome anomalies is described, using a database of 475 cases including 30 abnormalities. Two approaches to the statistical analysis were compared: the use of Gaussian frequency distributions and likelihood ratios, and logistic regression. Both methods indicated that, for a 5% false-positive rate, approximately 60% of anomalies are detected on the basis of maternal age and serum PAPP-A. Logistic regression analysis is appropriate where the outcome variable (chromosome anomaly) is binary, and its detection rates refer to the original data only. The likelihood ratio method is used to predict the outcome in the general population; it depends on the data, or some transformation of the data, fitting a known frequency distribution (Gaussian in this case). The precision of the predicted detection rates is limited by the small sample of abnormal cases (30). Varying the means and standard deviations of the fitted log Gaussian distributions to the limits of their 95% confidence intervals resulted in detection rates ranging from 42% to 79% at a 5% false-positive rate. Thus, although the likelihood ratio method is potentially the better method for determining the usefulness of a test in the general population, larger numbers of abnormal cases are required to stabilise the means and standard deviations of the fitted log Gaussian distributions.
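
The likelihood-ratio approach to estimating a detection rate at a fixed false-positive rate can be sketched with two fitted Gaussians on the log scale (Python/SciPy). The parameters below are illustrative only, not the paper's fitted values, and the sketch uses a single marker rather than the combination of maternal age and serum PAPP-A.

```python
from scipy.stats import norm

# Hypothetical fitted log10-Gaussian parameters for a serum marker (MoM scale):
# unaffected pregnancies vs chromosomally abnormal pregnancies. These numbers
# are illustrative only, not the paper's fitted distributions.
mu_unaff, sd_unaff = 0.00, 0.25
mu_abn, sd_abn = -0.35, 0.30      # abnormal pregnancies tend to have lower marker levels

fpr = 0.05
# Marker threshold (on the log10 MoM scale) below which 5% of unaffected fall
threshold = norm.ppf(fpr, loc=mu_unaff, scale=sd_unaff)
# Detection rate: proportion of abnormal pregnancies falling below that threshold
detection_rate = norm.cdf(threshold, loc=mu_abn, scale=sd_abn)
print(f"threshold (log10 MoM): {threshold:.3f}")
print(f"detection rate at {fpr:.0%} FPR: {detection_rate:.1%}")
```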


2019 ◽  
Author(s):  
Amanda Kvarven ◽  
Eirik Strømland ◽  
Magnus Johannesson

Andrews & Kasy (2019) propose an approach for adjusting effect sizes in meta-analysis for publication bias. We use the Andrews-Kasy estimator to adjust the results of 15 meta-analyses and compare the adjusted results to 15 large-scale multiple-labs replication studies estimating the same effects. The pre-registered replications provide precisely estimated effect sizes that do not suffer from publication bias. The Andrews-Kasy approach leads to a moderate reduction of the inflated effect sizes in the meta-analyses. However, the approach still overestimates effect sizes by a factor of about two or more and has an estimated false-positive rate of between 57% and 100%.


Electronics ◽  
2020 ◽  
Vol 9 (11) ◽  
pp. 1894
Author(s):  
Chun Guo ◽  
Zihua Song ◽  
Yuan Ping ◽  
Guowei Shen ◽  
Yuhei Cui ◽  
...  

Remote Access Trojans (RATs) are among the most serious security threats organizations face today. At present, the two major RAT detection approaches are host-based and network-based. To combine their complementary strengths, this article proposes a phased RAT detection method using double-side features (PRATD). In PRATD, host-side and network-side features are combined to build detection models, which helps distinguish RATs from benign programs, because RATs not only generate traffic on the network but also leave traces on the host at run time. In addition, PRATD trains two different detection models for the two runtime states of RATs to improve the True Positive Rate (TPR). Experiments on network and host records collected from five kinds of benign programs and 20 well-known RATs show that PRATD can effectively detect RATs: it achieves a TPR as high as 93.609% with a False Positive Rate (FPR) as low as 0.407% for known RATs, and a TPR of 81.928% with an FPR of 0.185% for unknown RATs, which suggests it is a competitive candidate for RAT detection.
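
A minimal sketch of the phased, double-side idea is given below (Python/scikit-learn): host-side and network-side feature vectors are concatenated, and a separate model is trained for each assumed runtime state. The phase names, feature layout, and random-forest classifier are placeholders, not PRATD's actual features or models.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def train_phase_models(samples):
    """samples: list of (phase, host_features, network_features, label).
    Trains one classifier per runtime phase on concatenated double-side features."""
    models = {}
    for phase in {s[0] for s in samples}:
        X = np.array([np.concatenate([h, n]) for p, h, n, _ in samples if p == phase])
        y = np.array([lab for p, _, _, lab in samples if p == phase])
        models[phase] = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
    return models

def predict(models, phase, host_features, network_features):
    x = np.concatenate([host_features, network_features]).reshape(1, -1)
    return models[phase].predict(x)[0]     # 1 = RAT, 0 = benign

# Toy usage with random placeholder features and two hypothetical runtime phases
rng = np.random.default_rng(0)
data = [(p, rng.normal(size=5), rng.normal(size=8), rng.integers(0, 2))
        for p in ("connect", "operate") for _ in range(100)]
models = train_phase_models(data)
print(predict(models, "connect", rng.normal(size=5), rng.normal(size=8)))
```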


2020 ◽  
Vol 154 (Supplement_1) ◽  
pp. S5-S5
Author(s):  
Ridin Balakrishnan ◽  
Daniel Casa ◽  
Morayma Reyes Gil

Abstract The diagnostic approach for ruling out suspected acute pulmonary embolism (PE) in the ED setting includes several tests: ultrasound, plasma d-dimer assays, ventilation-perfusion scans, and computed tomography pulmonary angiography (CTPA). Importantly, a pretest probability scoring algorithm is highly recommended to triage high-risk cases while preventing unnecessary testing and harm to low/moderate-risk patients. The d-dimer assay (both ELISA and immunoturbidometric) has been shown to be extremely sensitive for ruling out PE in conjunction with clinical probability. In particular, d-dimer testing is recommended for low/moderate-risk patients, in whom a negative d-dimer essentially rules out PE, sparing these patients CTPA radiation exposure, a longer hospital stay, and anticoagulation. However, a nonspecific increase in fibrin-degradation products is seen with increasing age, resulting in a higher false-positive rate in the older population. This study analyzed five years of patient visits to the ED of a large academic institution and examined the relationship between d-dimer values, age, and CTPA results to better understand the value of age-adjusted d-dimer cut-offs in ruling out PE in the older population. A total of 7660 ED visits had a CTPA done to rule out PE; of these, 1875 cases had a d-dimer done in conjunction with the CT and 5875 had only CTPA done. Of the 1875 cases, 1591 had positive d-dimer results (>0.50 µg/mL FEU), of which 910 (57%) were from patients fifty years of age or older. In these older patients, 779 (86%) had a negative CT result. The statistical measures of the d-dimer test before adjusting for age were: sensitivity 98%, specificity 12%, negative predictive value 98%, and false-positive rate 88%. After adjusting for age in people older than 50 years (d-dimer cut-off = age/100), 138 patients turned out to be d-dimer negative, and all but four had a CT result that was also negative for PE. The four exceptions comprised two non-diagnostic results and two with subacute/chronic/subsegmental PE on imaging; none of these four patients were prescribed anticoagulation. The statistical measures of the d-dimer test after adjusting for age were: sensitivity 96%, specificity 20%, negative predictive value 98%, and a decreased false-positive rate of 80%. Therefore, had age-adjusted d-dimer cut-offs been used, imaging could potentially have been avoided in 138/779 (18%) of these older patients, whose CTPA findings were ultimately negative or not clinically significant. These data strongly support the clinical usefulness of an age-adjusted d-dimer cut-off to rule out PE.
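
A minimal sketch of applying the age-adjusted cut-off described above (d-dimer cut-off = age/100 µg/mL FEU for patients aged 50 and over, conventional 0.50 otherwise) is shown below; whether the comparison at the boundary is strict or inclusive is not specified in the abstract and is assumed here.

```python
def d_dimer_cutoff_ug_ml_feu(age_years):
    """Age-adjusted d-dimer cut-off in µg/mL FEU, as described in the abstract:
    conventional 0.50 below age 50, age/100 at age 50 and above."""
    return 0.50 if age_years < 50 else age_years / 100

def rules_out_pe(d_dimer_ug_ml_feu, age_years):
    """True if the d-dimer falls below the (age-adjusted) cut-off, i.e. PE can be
    ruled out without CTPA in low/moderate pre-test probability patients."""
    return d_dimer_ug_ml_feu < d_dimer_cutoff_ug_ml_feu(age_years)

# Example: a 72-year-old with a d-dimer of 0.65 µg/mL FEU is positive by the
# conventional 0.50 cut-off but negative by the age-adjusted cut-off (0.72).
print(rules_out_pe(0.65, 72))   # True  -> CTPA potentially avoided
print(rules_out_pe(0.65, 45))   # False -> conventional cut-off applies
```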


2021 ◽  
Vol 18 (1) ◽  
Author(s):  
Ulrike Baum ◽  
Sangita Kulathinal ◽  
Kari Auranen

Abstract Background: Non-sensitive and non-specific observation of outcomes in time-to-event data affects event counts as well as the risk sets, thereby biasing the estimation of hazard ratios. We investigate how imperfect observation of incident events affects the estimation of vaccine effectiveness based on hazard ratios. Methods: Imperfect time-to-event data contain two classes of events: a portion of the true events of interest, and false-positive events mistakenly recorded as events of interest. We develop an estimation method utilising a weighted partial likelihood and probabilistic deletion of false-positive events, assuming the sensitivity and the false-positive rate are known. The performance of the method is evaluated using simulated and Finnish register data. Results: The novel method enables unbiased semiparametric estimation of hazard ratios from imperfect time-to-event data. Small false-positive rates can be approximated as zero without inducing bias. The method is robust to misspecification of the sensitivity as long as the ratio of the sensitivity in the vaccinated and the unvaccinated is specified correctly and the cumulative risk of the true event is small. Conclusions: The weighted partial likelihood can be used to adjust for outcome measurement errors in the estimation of hazard ratios and effectiveness, but it requires specifying the sensitivity and the false-positive rate. In the absence of exact information about these parameters, the method works as a tool for assessing the potential magnitude of bias given a range of likely parameter values.
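
The probabilistic-deletion idea can be illustrated roughly as follows (Python, using lifelines for the Cox fit): each recorded event is kept with probability equal to one minus an assumed false-positive share, a standard Cox model is fitted to the thinned data, and the log hazard ratios are averaged over repetitions. This is only a sketch of the deletion step on toy data, not the authors' weighted partial likelihood estimator.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

def probabilistic_deletion_fit(df, p_false_positive, n_rep=20, rng=None):
    """Rough illustration of probabilistic deletion: each recorded event is kept
    with probability (1 - p_false_positive), a standard Cox model is fitted to the
    thinned data, and log hazard ratios are averaged over repetitions."""
    rng = np.random.default_rng(rng)
    coefs = []
    for _ in range(n_rep):
        d = df.copy()
        keep = rng.random(len(d)) < (1 - p_false_positive)
        d["event"] = d["event"] & keep          # delete a share of recorded events
        coefs.append(CoxPHFitter().fit(d, "time", "event").params_["vaccinated"])
    return float(np.mean(coefs))

# Toy data: exponential event times with hazard 0.05 (vaccinated) vs 0.10
# (unvaccinated), i.e. a true hazard ratio of 0.5, censored at t = 5.
rng = np.random.default_rng(0)
n = 4000
vaccinated = rng.integers(0, 2, n)
true_time = rng.exponential(1 / np.where(vaccinated, 0.05, 0.10))
time = np.minimum(true_time, 5.0)
event = (true_time <= 5.0).astype(bool)
df = pd.DataFrame({"time": time, "event": event, "vaccinated": vaccinated})
print("average log HR after probabilistic deletion:", probabilistic_deletion_fit(df, 0.1, rng=1))
```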

