SVM Model for Feature Selection to Increase Accuracy and Reduce False Positive Rate in Falls Detection

Author(s):  
Md Rashed-Al-Mahfuz ◽  
Md. Robiul Hoque ◽  
Bimal Kumar Pramanik ◽  
Md. Ekramul Hamid ◽  
Mohammad Ali Moni
Author(s):  
Chunyong Yin ◽  
Luyu Ma ◽  
Lu Feng

Intrusion detection is a security mechanism used to detect attacks and intrusive behavior. Because existing clonal selection algorithms applied to intrusion detection suffer from low accuracy and a high false positive rate, this paper proposes a feature selection method based on an improved clonal selection algorithm. The improved method detects intrusion behavior by selecting the best individuals across the whole population and cloning them. Experimental results show that the proposed feature selection algorithm outperforms traditional feature selection algorithms across different classifiers, and that the final detection results are better than those of the traditional clonal algorithm, reaching 99.6% accuracy with a 0.1% false positive rate.
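The clonal selection loop the abstract builds on — keep the best individuals overall, clone them, and mutate the clones — can be sketched as follows. This is an illustrative toy (the bit-string individuals and `sum` fitness are stand-ins, not the paper's intrusion detector):

```python
import random

def clonal_selection_step(population, fitness, n_best=3, clones_per=5, mutate_rate=0.1):
    """One generation: keep the best individuals, clone them, mutate the clones."""
    ranked = sorted(population, key=fitness, reverse=True)
    best = ranked[:n_best]
    next_gen = list(best)  # elitism: selected parents survive unchanged
    for parent in best:
        for _ in range(clones_per):
            # flip each bit with probability mutate_rate
            next_gen.append([bit ^ (random.random() < mutate_rate) for bit in parent])
    return next_gen

random.seed(0)
# toy individuals: 8-bit feature masks; toy fitness = number of selected features
pop = [[random.randint(0, 1) for _ in range(8)] for _ in range(10)]
history = []
for _ in range(20):
    pop = clonal_selection_step(pop, fitness=sum)
    history.append(max(sum(ind) for ind in pop))
```

Because the selected parents are carried over unchanged, the best fitness recorded in `history` can never decrease from one generation to the next.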


2021 ◽  
Vol 13 (1) ◽  
pp. 1-6
Author(s):  
Dimaz Arno Prasetio ◽  
Kusrini Kusrini ◽  
M. Rudyanto Arief

This study measures the classification accuracy of XSS attack detection using a combination of two methods for deriving feature characteristics: linguistic computation and feature selection. XSS attacks follow certain patterns in their character arrangement, which a learner can capture with n-gram modeling; in some cases, however, XSS payloads contain particular meta and synthetic characteristics that are better captured through feature selection modeling. The proposed hybrid feature modeling achieves an accuracy of 99.87%, improving on previous studies, whose average remained below 99%. This study also analyzes the false positive rate, since the false positive rate in attack detection strongly affects the workload of the information security team; with the proposed modeling, the false positive rate is very small, namely 0.039%.
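Character n-gram counts of the kind the abstract describes as linguistic features can be extracted in a few lines; the payload and n = 2 here are illustrative, not the paper's dataset or model:

```python
from collections import Counter

def char_ngrams(text, n=2):
    """Count overlapping character n-grams, a simple linguistic feature
    for learning the character-arrangement patterns of XSS payloads."""
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

payload = "<script>alert(1)</script>"
grams = char_ngrams(payload, n=2)
```

Each payload becomes a sparse count vector over n-grams (here, `"sc"` occurs twice because `script` appears in both the opening and closing tag), which a classifier can then consume alongside the selected features.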


2021 ◽  
Author(s):  
Tao Jiang ◽  
Martin Buchkovich ◽  
Alison Motsinger-Reif

Abstract Background: Same-species contamination detection is an important quality control step in genetic data analysis. Because methods to detect and correct for this issue are scarce, same-species contamination is more difficult to detect than cross-species contamination. We introduce a novel machine learning algorithm to detect same-species contamination in next-generation sequencing (NGS) data using a support vector machine (SVM) model. Our approach uniquely detects contamination using variant calling information stored in variant call format (VCF) files for DNA or RNA. Importantly, it can differentiate between same-species contamination and mixtures of tumor and normal cells. In the first stage, a change-point detection method identifies copy number variations (CNVs) and copy number aberrations (CNAs) for filtering. Next, single nucleotide polymorphism (SNP) data are used to test for same-species contamination with an SVM model. Based on the assumption that alternative allele frequencies in NGS follow the beta-binomial distribution, the deviation parameter ρ is estimated by maximum likelihood. All features of a radial basis function (RBF) kernel SVM are generated using publicly available or private training data. Results: We demonstrate our approach in simulation experiments. The datasets combine, in silico, exome sequencing data of DNA from two lymphoblastoid cell lines (NA12878 and NA10855). We generate VCF files using variants identified in these data and then evaluate the power and false-positive rate of our approach. Our approach can detect contamination levels as low as 5% with a reasonable false-positive rate. On real data, the approach achieves sensitivity above 99.99% and specificity of 90.24%, even in the presence of degraded samples with features similar to contaminated samples.
We provide an R software implementation of our approach. Conclusions: Our approach addresses the gap in methods to test for same-species contamination in NGS. Owing to its high sensitivity for degraded and tumor-normal samples, it represents an important tool within the quality control process. Additionally, the user-friendly software has the unique ability to conduct quality control using the VCF format.
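The beta-binomial step can be illustrated with a maximum-likelihood grid search for the deviation parameter ρ. This is a sketch, not the authors' R implementation; the heterozygous mean p = 0.5 and the read counts below are assumed for illustration:

```python
import math

def beta_binom_loglik(rho, p, counts):
    """Log-likelihood of alt-allele counts under a beta-binomial with
    mean p and overdispersion rho (rho -> 0 recovers the binomial)."""
    a = p * (1 - rho) / rho
    b = (1 - p) * (1 - rho) / rho
    ll = 0.0
    for k, n in counts:  # k alt reads out of n total reads at a site
        ll += (math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)
               + math.lgamma(k + a) + math.lgamma(n - k + b) - math.lgamma(n + a + b)
               + math.lgamma(a + b) - math.lgamma(a) - math.lgamma(b))
    return ll

def estimate_rho(counts, p=0.5, grid=None):
    """Maximum-likelihood rho via a simple grid search."""
    grid = grid or [i / 1000 for i in range(1, 500)]
    return max(grid, key=lambda r: beta_binom_loglik(r, p, counts))

# Heterozygous sites in a clean sample cluster near k/n = 0.5 (small rho);
# admixture of a second individual shifts frequencies and inflates rho.
clean = [(48, 100), (52, 100), (50, 100), (49, 100)]
mixed = [(30, 100), (70, 100), (25, 100), (75, 100)]
```

An elevated ρ̂ at heterozygous sites is the signal the SVM features build on.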


2019 ◽  
Vol 12 (2) ◽  
pp. 875-886
Author(s):  
Banda Srinivas Raja ◽  
Tummala Ranga Babu

In the current era, research on automated knowledge extraction from Chronic Obstructive Pulmonary Disease (COPD) images is growing rapidly. COPD has become a highly prevalent disease that burdens both patients and the healthcare system. In various medical applications, image classification algorithms are used to predict disease severity, which can support early diagnosis and the decision-making process. For large-scale and complex medical images, machine learning techniques are also more efficient, accurate, and reliable. Traditional image classification models such as naïve Bayes, neural networks, SVMs, and regression models classify images using annotated ROIs and image texture features. These models serve as diagnostic tools in analyzing COPD and predicting the disease, but they are not suited to classifying COPD by severity level. Moreover, the accuracy and false positive rate of existing classification models remain far from satisfactory, owing to the lack of feature extraction and noise-handling methods. Developing an effective classification model for predicting COPD severity from features derived from CT images is therefore a challenging task. In this paper, an ensemble feature selection based classification model was developed, using image features extracted from COPD patients' CT scans, to classify disease into "severity level" and "normal level" categories representing the risk of suffering from COPD. We applied five different classifier methods and three state-of-the-art ensemble classifiers to the COPD dataset and validated their performance in terms of F-measure and false positive rate. We found that the proposed feature selection based ensemble classifier (F-measure 0.957) had the highest average accuracy for COPD classification.


2002 ◽  
Vol 41 (01) ◽  
pp. 37-41 ◽  
Author(s):  
S. Shung-Shung ◽  
S. Yu-Chien ◽  
Y. Mei-Due ◽  
W. Hwei-Chung ◽  
A. Kao

Summary Aim: Even with careful observation, the overall false-positive rate of laparotomy remains 10-15% when acute appendicitis is suspected. We therefore assessed the clinical efficacy of the Tc-99m HMPAO labeled leukocyte (TC-WBC) scan for diagnosing acute appendicitis in patients presenting with atypical clinical findings. Patients and Methods: Eighty patients presenting with acute abdominal pain and possible acute appendicitis but atypical findings were included in this study. After intravenous injection of TC-WBC, serial anterior abdominal/pelvic images at 30, 60, 120 and 240 min with 800k counts were obtained with a gamma camera. Any abnormal localization of radioactivity in the right lower quadrant of the abdomen, equal to or greater than bone marrow activity, was considered a positive scan. Results: Thirty-six of 49 patients with positive TC-WBC scans underwent appendectomy, and all proved to have positive pathological findings. Five positive TC-WBC scans were unrelated to acute appendicitis, being caused by other pathological lesions. Eight patients were not operated on, and clinical follow-up after one month revealed no acute abdominal condition. Three of 31 patients with negative TC-WBC scans underwent appendectomy; they also showed positive pathological findings. The remaining 28 patients did not undergo surgery and showed no evidence of appendicitis after at least one month of follow-up. The overall sensitivity, specificity, accuracy, positive and negative predictive values of the TC-WBC scan for diagnosing acute appendicitis were 92, 78, 86, 82, and 90%, respectively. Conclusion: The TC-WBC scan provides a rapid and highly accurate method for diagnosing acute appendicitis in patients with equivocal clinical examinations. It proved useful in reducing the false-positive rate of laparotomy and shortening the time needed for clinical observation.
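The reported measures follow from a standard 2×2 table. A short sketch reproduces them to within rounding; the table is reconstructed from the abstract on the assumption that the five scans positive for other pathology are excluded from the appendicitis table:

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard diagnostic-test measures from a 2x2 confusion table."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
        "ppv": tp / (tp + fp),  # positive predictive value
        "npv": tn / (tn + fn),  # negative predictive value
    }

# 36 true positives, 3 false negatives (negative scan, proven appendicitis),
# 28 true negatives, 8 false positives (positive scan, no appendicitis)
m = diagnostic_metrics(tp=36, fp=8, fn=3, tn=28)
```

These counts give roughly 92, 78, 85, 82, and 90%, matching the abstract's figures to within one percentage point of rounding.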


1993 ◽  
Vol 32 (02) ◽  
pp. 175-179 ◽  
Author(s):  
B. Brambati ◽  
T. Chard ◽  
J. G. Grudzinskas ◽  
M. C. M. Macintosh

Abstract: The analysis of the clinical efficiency of a biochemical parameter in the prediction of chromosome anomalies is described, using a database of 475 cases including 30 abnormalities. A comparison was made of two different approaches to the statistical analysis: the use of Gaussian frequency distributions and likelihood ratios, and logistic regression. Both methods computed that for a 5% false-positive rate approximately 60% of anomalies are detected on the basis of maternal age and serum PAPP-A. The logistic regression analysis is appropriate where the outcome variable (chromosome anomaly) is binary and the detection rates refer to the original data only. The likelihood ratio method is used to predict the outcome in the general population. The latter method depends on the data, or some transformation of the data, fitting a known frequency distribution (Gaussian in this case). The precision of the predicted detection rates is limited by the small sample of abnormals (30 cases). Varying the means and standard deviations (to the limits of their 95% confidence intervals) of the fitted log Gaussian distributions resulted in a detection rate varying between 42% and 79% for a 5% false-positive rate. Thus, although the likelihood ratio method is potentially the better method in determining the usefulness of a test in the general population, larger numbers of abnormal cases are required to stabilise the means and standard deviations of the fitted log Gaussian distributions.
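The cut-off logic can be sketched directly from the fitted Gaussians: place a lower-tail threshold giving a 5% false-positive rate in unaffected pregnancies, then read off the detection rate in affected ones. The log10 MoM parameters below are hypothetical, not the paper's fitted values (which the abstract does not give):

```python
from statistics import NormalDist

def detection_rate_at_fpr(mu0, sd0, mu1, sd1, fpr=0.05):
    """Detection rate for a marker that is *lowered* in affected
    pregnancies (like first-trimester PAPP-A): the threshold is the
    fpr-quantile of the unaffected distribution, and a case counts as
    detected when its value falls below that threshold."""
    threshold = NormalDist(mu0, sd0).inv_cdf(fpr)
    return NormalDist(mu1, sd1).cdf(threshold)

# hypothetical log10 MoM distributions: unaffected vs chromosomally abnormal
dr = detection_rate_at_fpr(mu0=0.0, sd0=0.25, mu1=-0.45, sd1=0.30)
```

Small shifts in the abnormal mean or either standard deviation move the detection rate substantially, which mirrors the 42-79% range the abstract reports when the fitted parameters vary within their confidence intervals.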


2019 ◽  
Author(s):  
Amanda Kvarven ◽  
Eirik Strømland ◽  
Magnus Johannesson

Andrews & Kasy (2019) propose an approach for adjusting effect sizes in meta-analysis for publication bias. We use the Andrews-Kasy estimator to adjust the result of 15 meta-analyses and compare the adjusted results to 15 large-scale multiple labs replication studies estimating the same effects. The pre-registered replications provide precisely estimated effect sizes, which do not suffer from publication bias. The Andrews-Kasy approach leads to a moderate reduction of the inflated effect sizes in the meta-analyses. However, the approach still overestimates effect sizes by a factor of about two or more and has an estimated false positive rate of between 57% and 100%.


Electronics ◽  
2020 ◽  
Vol 9 (11) ◽  
pp. 1894
Author(s):  
Chun Guo ◽  
Zihua Song ◽  
Yuan Ping ◽  
Guowei Shen ◽  
Yuhei Cui ◽  
...  

Remote Access Trojans (RATs) are among the most serious security threats organizations face today. At present, the two major RAT detection approaches are host-based and network-based. To combine their complementary strengths, this article proposes a phased RAT detection method combining double-side features (PRATD). In PRATD, both host-side and network-side features are used to build the detection models, which helps distinguish RATs from benign programs because RATs not only generate traffic on the network but also leave traces on the host at run time. In addition, PRATD trains two different detection models for the two runtime states of RATs to improve the True Positive Rate (TPR). Experiments on network and host records collected from five kinds of benign programs and 20 well-known RATs show that PRATD can effectively detect RATs: it achieves a TPR as high as 93.609% with a False Positive Rate (FPR) as low as 0.407% for known RATs, and a TPR of 81.928% with an FPR of 0.185% for unknown RATs, suggesting it is a competitive candidate for RAT detection.


2020 ◽  
Vol 154 (Supplement_1) ◽  
pp. S5-S5
Author(s):  
Ridin Balakrishnan ◽  
Daniel Casa ◽  
Morayma Reyes Gil

Abstract The diagnostic approach for ruling out suspected acute pulmonary embolism (PE) in the ED setting includes several tests: ultrasound, plasma d-dimer assays, ventilation-perfusion scans, and computed tomography pulmonary angiography (CTPA). Importantly, a pretest probability scoring algorithm is highly recommended to triage high-risk cases while also preventing unnecessary testing and harm to low/moderate-risk patients. The d-dimer assay (both ELISA and immunoturbidometric) has been shown to be extremely sensitive for ruling out PE in conjunction with clinical probability. In particular, d-dimer testing is recommended for low/moderate-risk patients, in whom a negative d-dimer essentially rules out PE, sparing these patients CTPA radiation exposure, longer hospital stays, and anticoagulation. However, a nonspecific increase in fibrin-degradation products is seen with increasing age, resulting in a higher false positive rate in the older population. This study analyzed patient visits to the ED of a large academic institution over five years and examined the relationship between d-dimer values, age, and CTPA results to better understand the value of age-adjusted d-dimer cut-offs in ruling out PE in the older population. A total of 7660 ED visits had a CTPA done to rule out PE; of these, 1875 cases had a d-dimer done in conjunction with the CT and 5875 had only CTPA done. Of the 1875 cases, 1591 had positive d-dimer results (>0.50 µg/ml (FEU)), of which 910 (57%) were from patients fifty years of age or older. In these older patients, 779 (86%) had a negative CT result. The statistical measures of the d-dimer test before adjusting for age were: sensitivity (98%), specificity (12%), negative predictive value (98%), and false positive rate (88%).
After adjusting for age in people older than 50 years (d-dimer cut-off = age/100), 138 patients turned out to be d-dimer negative, and every case but four had a CT result that was also negative for PE. The four cases comprised two non-diagnostic results and two with subacute/chronic/subsegmental PE on imaging. None of these four patients were prescribed anticoagulation. The statistical measures of the d-dimer test after adjusting for age were: sensitivity (96%), specificity (20%), negative predictive value (98%), and a decreased false positive rate (80%). Imaging could therefore have been avoided in 138/779 (18%) of the patients in this older population who had eventually negative or not clinically significant findings on CTPA, had age-adjusted d-dimers been used. These data strongly advocate the clinical usefulness of an age-adjusted d-dimer cut-off to rule out PE.
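The age-adjusted rule described above (cut-off = age/100 µg/ml FEU above age 50) is simple enough to state in code; the fixed 0.50 fallback below age 50 and the strict comparison for "negative" are illustrative assumptions:

```python
def d_dimer_cutoff(age_years):
    """Age-adjusted d-dimer cut-off in ug/ml FEU: the conventional 0.50
    up to age 50, then age/100 thereafter."""
    return age_years / 100 if age_years > 50 else 0.50

def d_dimer_negative(d_dimer, age_years):
    """A d-dimer below the cut-off in a low/moderate pretest-probability
    patient rules out PE without CTPA."""
    return d_dimer < d_dimer_cutoff(age_years)

# a 72-year-old with d-dimer 0.65: positive under the fixed 0.50 cut-off,
# but negative under the age-adjusted cut-off of 0.72
```

Patients reclassified from positive to negative by the age adjustment are exactly the group in whom the study estimates CTPA could have been avoided.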

