Tanimoto Gaussian Kernelized Feature Extraction Based Multinomial GentleBoost Machine Learning for Multi-Spectral Aerial Image Classification

Aerial images provide a landscape view of the Earth's surface that is used to monitor large areas. Each aerial image comprises different scenes from which objects on digital maps are identified. Several methodologies have been developed to solve the problem of scene classification using input aerial images, but their classification performance does not improve as the number of aerial images grows. In order to improve classification performance, a Tanimoto Gaussian Kernelized Feature Extraction Based Multinomial GentleBoost Classification (TGKFE-MGBC) technique is introduced. The TGKFE-MGBC technique comprises three major processes, namely object-based segmentation, feature extraction and aerial image scene classification. First, object-based segmentation partitions the aerial image into several sub-bands; an aerial image with more than two objects is called multi-spectral. The objects in the spectral bands are identified by a Tanimoto pixel similarity measure, which helps to reduce feature extraction time. Each object has different features such as shape, size, color and texture. Next, Gaussian Kernelized Feature Extraction is carried out to extract the features from the objects with minimal time. Finally, Multinomial GentleBoost Classification is applied to categorize the scenes into different classes using the extracted features. GentleBoost is an ensemble technique that uses a multinomial naïve Bayes probabilistic classifier as a weak learner and combines weak learners into a strong one for classifying the scenes. The strong classifier improves aerial image scene classification accuracy and minimizes the false positive rate. Simulations are conducted on an aerial image database with different factors such as feature extraction time, aerial image scene classification accuracy and false positive rate.
The results show that the TGKFE-MGBC technique effectively improves aerial image scene classification accuracy and minimizes feature extraction time as well as the false positive rate.
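The abstract does not give the exact formulations, but the two named building blocks have standard definitions: the Tanimoto (extended Jaccard) coefficient for pixel/feature-vector similarity and the Gaussian (RBF) kernel for kernelized feature extraction. A minimal sketch of both, with hypothetical feature vectors:

```python
import math

def tanimoto(a, b):
    # Tanimoto similarity: dot(a, b) / (|a|^2 + |b|^2 - dot(a, b)).
    # Equals 1.0 for identical vectors, approaches 0 for dissimilar ones.
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sum(x * x for x in a) + sum(y * y for y in b) - dot)

def gaussian_kernel(a, b, sigma=1.0):
    # Gaussian (RBF) kernel: exp(-||a - b||^2 / (2 * sigma^2)).
    sq_dist = sum((x - y) ** 2 for x, y in zip(a, b))
    return math.exp(-sq_dist / (2 * sigma ** 2))

# Hypothetical pixel feature vectors from two segmented objects:
obj_a = [0.8, 0.1, 0.3]
obj_b = [0.7, 0.2, 0.3]
print(tanimoto(obj_a, obj_b), gaussian_kernel(obj_a, obj_b))
```

In the segmentation step described above, pixels (or objects) whose Tanimoto similarity exceeds a chosen threshold would be grouped into the same spectral-band object.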

Aerial image scene classification is a key problem to be resolved in image processing. Many research works have been designed for carrying out scene classification, but the accuracy of existing scene classification methods is low. To overcome this limitation, a Robust Regressive Feature Extraction Based Relevance Vector Margin Boosting Scene Classification (RRFE-RVMBSC) technique is proposed. The RRFE-RVMBSC technique is designed to improve the classification performance of aerial images with minimal time and comprises two main processes, namely feature extraction and classification. Initially, the RRFE-RVMBSC technique takes a number of aerial images as input. Then a Robust Regressive Independent Component Analysis Based Feature Extraction process is performed to extract features, i.e. shape, color, texture and size, from each aerial image. After completing the feature extraction process, the RRFE-RVMBSC technique carries out Ensembled Relevance Vector Margin Boosting Classification (ERVMBC), in which all input aerial images are classified into multiple classes with higher accuracy. The RRFE-RVMBSC technique constructs a strong classifier by reducing the training error of weak RVM classifiers for effective aerial image scene categorization. The RRFE-RVMBSC technique is evaluated in simulation using parameters such as feature extraction time, classification accuracy and false positive rate with respect to the number of aerial images.
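The abstract does not specify the margin-boosting update rule, so the following is only a generic AdaBoost-style reweighting sketch of the "reduce the training error of weak learners to build a strong classifier" idea, with trivial stand-in weak learners rather than actual RVM classifiers:

```python
import math

def boost(samples, labels, weak_learners, rounds=3):
    """AdaBoost-style sketch: each round picks the weak learner with the
    lowest weighted error and upweights misclassified samples.
    labels are +1/-1; weak learners map a sample to +1/-1."""
    n = len(samples)
    w = [1.0 / n] * n
    ensemble = []  # list of (alpha, learner)
    for _ in range(rounds):
        best, best_err = None, float("inf")
        for h in weak_learners:
            err = sum(wi for wi, x, y in zip(w, samples, labels) if h(x) != y)
            if err < best_err:
                best, best_err = h, err
        eps = min(max(best_err, 1e-10), 1 - 1e-10)   # avoid log(0)
        alpha = 0.5 * math.log((1 - eps) / eps)
        ensemble.append((alpha, best))
        # Upweight samples the chosen learner got wrong, then renormalize.
        w = [wi * math.exp(-alpha if best(x) == y else alpha)
             for wi, x, y in zip(w, samples, labels)]
        total = sum(w)
        w = [wi / total for wi in w]

    def strong(x):
        return 1 if sum(a * h(x) for a, h in ensemble) >= 0 else -1
    return strong
```

The strong classifier is a weighted vote of the selected weak learners, which is the general shape the ERVMBC step takes; the real method would substitute margin-based RVM weak learners for the stumps used here.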


2021 ◽  
Vol 13 (10) ◽  
pp. 1950
Author(s):  
Cuiping Shi ◽  
Xin Zhao ◽  
Liguo Wang

In recent years, with the rapid development of computer vision, increasing attention has been paid to remote sensing image scene classification. To improve classification performance, many studies have increased the depth of convolutional neural networks (CNNs) and expanded the width of the network to extract more deep features, thereby increasing the complexity of the model. To solve this problem, in this paper we propose a lightweight convolutional neural network based on attention-oriented multi-branch feature fusion (AMB-CNN) for remote sensing image scene classification. Firstly, we propose two convolution combination modules for feature extraction, through which the deep features of images can be fully extracted with multi-convolution cooperation. Then, the feature weights are calculated, and the extracted deep features are sent to the attention mechanism for further feature extraction. Next, all of the extracted features are fused across multiple branches. Finally, depthwise separable convolution and asymmetric convolution are implemented to greatly reduce the number of parameters. The experimental results show that, compared with some state-of-the-art methods, the proposed method retains a great advantage in classification accuracy with very few parameters.
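The parameter savings from depthwise separable convolution can be seen with simple counting: a standard k×k convolution needs C_in·C_out·k² weights, while a depthwise separable one needs C_in·k² (one k×k filter per input channel) plus C_in·C_out (the 1×1 pointwise step). A sketch with illustrative channel counts (not the actual AMB-CNN configuration; biases omitted):

```python
def conv_params(c_in, c_out, k):
    # Standard convolution: every output channel mixes all input channels.
    return c_in * c_out * k * k

def depthwise_separable_params(c_in, c_out, k):
    # Depthwise k x k per input channel, then pointwise 1x1 across channels.
    return c_in * k * k + c_in * c_out

std = conv_params(128, 256, 3)                 # 294912 weights
sep = depthwise_separable_params(128, 256, 3)  # 1152 + 32768 = 33920 weights
print(std / sep)                               # roughly 8.7x fewer parameters
```

This ratio grows with kernel size and output width, which is why stacking such layers keeps the network lightweight.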


2016 ◽  
Vol 46 (4) ◽  
pp. 524-548 ◽  
Author(s):  
Shrawan Kumar Trivedi ◽  
Shubhamoy Dey

Purpose The email is an important medium for sharing information rapidly. However, spam, being a nuisance in such communication, motivates the building of a robust filtering system with high classification accuracy and good sensitivity towards false positives. In that context, this paper aims to present a combined classifier technique using a committee selection mechanism where the main objective is to identify a set of classifiers so that their individual decisions can be combined by a committee selection procedure for accurate detection of spam. Design/methodology/approach For training and testing of the relevant machine learning classifiers, text mining approaches are used in this research. Three data sets (Enron, SpamAssassin and LingSpam) have been used to test the classifiers. Initially, pre-processing is performed to extract the features associated with the email files. In the next step, the extracted features are taken through a dimensionality reduction method where non-informative features are removed. Subsequently, an informative feature subset is selected using genetic feature search. Thereafter, the proposed classifiers are tested on those informative features and the results compared with those of other classifiers. Findings For building the proposed combined classifier, three different studies have been performed. The first study identifies the effect of boosting algorithms on two probabilistic classifiers: Bayesian and Naïve Bayes. In that study, AdaBoost has been found to be the best algorithm for performance boosting. The second study was on the effect of different Kernel functions on support vector machine (SVM) classifier, where SVM with normalized polynomial (NP) kernel was observed to be the best. The last study was on combining classifiers with committee selection where the committee members were the best classifiers identified by the first study i.e. 
Bayesian and Naïve Bayes with AdaBoost, and the committee president was selected from the second study, i.e. SVM with NP kernel. Results show that combining the identified classifiers to form a committee machine gives excellent performance accuracy with a low false positive rate. Research limitations/implications This research is focused on the classification of email spam written in the English language. Only the body (text) parts of the emails have been used. Image spam has not been included in this work. We restricted our work to email messages only; none of the other types of messages, such as short message service or multi-media messaging service, were part of this study. Practical implications This research proposes a method of dealing with the issues and challenges faced by internet service providers and organizations that use email. The proposed model provides not only better classification accuracy but also a low false positive rate. Originality/value The proposed combined classifier is a novel classifier designed for accurate classification of email spam.
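The abstract does not spell out how member votes and the president's vote are combined; one plausible reading of the committee-selection scheme (members vote, president breaks ties) can be sketched as follows, with trivial stand-in classifiers in place of the boosted Bayesian/Naïve Bayes members and the SVM-NP president:

```python
def committee_predict(members, president, email_features):
    """Majority vote among committee members (1 = spam, 0 = ham);
    the 'president' classifier decides when the members tie.
    This tie-break rule is an assumption, not taken from the paper."""
    votes = [m(email_features) for m in members]
    spam_votes = votes.count(1)
    ham_votes = votes.count(0)
    if spam_votes > ham_votes:
        return 1
    if ham_votes > spam_votes:
        return 0
    return president(email_features)

# Hypothetical stand-in classifiers:
member_a = lambda x: 1   # always flags spam
member_b = lambda x: 0   # always passes ham
president = lambda x: 1  # breaks the tie toward spam
print(committee_predict([member_a, member_b], president, {}))  # -> 1
```

With an even number of members, the president's role matters exactly on split decisions, which is where a strong standalone classifier such as SVM with an NP kernel pays off.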


Author(s):  
Oladotun O. Okediran ◽  
Temitope O. Ashaolu ◽  
Elijah O. Omidiora

Face recognition is one of the most reliable biometrics when issues of access control and security are considered. An integral part of a face recognition system is the feature extraction stage, which becomes a critical problem where there is a need to obtain the best features with minimum classification error and low running time. Many existing face recognition systems have adopted different linear discriminant-based algorithms independently for feature extraction and achieved excellent performance, but identifying the most suitable of these variants of linear discriminant-based algorithms for face recognition systems remains an open research question. Therefore, this paper carried out a comparative analysis of the performance of the basic Linear Discriminant Analysis (LDA) algorithm and two of its variants, Kernel Linear Discriminant Analysis (KLDA) and Multiclass Linear Discriminant Analysis (MLDA), in a face recognition application for access control. Three hundred and forty (340) face images were locally acquired with a default size of 1200 x 1200. Two hundred and forty (240) images were used for training while the remaining one hundred (100) images were used for testing. Image enhancement involved converting the acquired images to grayscale and normalizing them using the histogram equalization method. Feature extraction and dimension reduction of the images were done using each of the LDA, KLDA and MLDA algorithms individually. The extracted feature subsets from each of the LDA, KLDA and MLDA algorithms were individually classified using Euclidean distance. The technique was implemented using Matrix Laboratory (MATLAB R2015a). The performance of LDA, KLDA and MLDA was evaluated and compared at 200 x 200 pixel resolution and a 0.57 threshold value using recognition accuracy, sensitivity, specificity, false positive rate, training time and recognition time.
The evaluation results show that the LDA algorithm yielded recognition accuracy, sensitivity, specificity, false positive rate, training time and recognition time of 93.00%, 92.86%, 93.33%, 6.67%, 1311.76 seconds and 67.98 seconds, respectively. KLDA recorded 95.00%, 95.71%, 93.33%, 6.67%, 1393.24 seconds and 63.67 seconds on the same metrics, while the MLDA algorithm yielded 97.00%, 97.14%, 96.67%, 3.33%, 1191.55 seconds and 58.65 seconds, respectively. A t-test between the accuracies of the MLDA and KLDA algorithms reveals that the MLDA algorithm was statistically significant at . Likewise, a t-test between the accuracies of the MLDA and LDA algorithms reveals that the MLDA algorithm was statistically significant at .
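The classification step described above, matching a probe's extracted features to training templates by Euclidean distance and accepting only matches under a threshold, can be sketched as follows. The gallery entries and feature values are hypothetical; in the paper the features would come from LDA/KLDA/MLDA projection and the threshold is the 0.57 value it reports:

```python
import math

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def match(probe, gallery, threshold):
    """Return the label of the nearest gallery template by Euclidean
    distance, or None (rejected) if the best distance exceeds the
    acceptance threshold."""
    best_label, best_dist = None, float("inf")
    for label, features in gallery:
        d = euclidean(probe, features)
        if d < best_dist:
            best_label, best_dist = label, d
    return best_label if best_dist <= threshold else None

# Hypothetical projected feature vectors for two enrolled subjects:
gallery = [("subject_01", [0.0, 0.0]), ("subject_02", [1.0, 1.0])]
print(match([0.1, 0.0], gallery, threshold=0.57))  # -> 'subject_01'
print(match([5.0, 5.0], gallery, threshold=0.57))  # -> None (rejected)
```

The rejection branch is what makes this usable for access control: an unenrolled face should fall outside the threshold of every template.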


2002 ◽  
Vol 41 (01) ◽  
pp. 37-41 ◽  
Author(s):  
S. Shung-Shung ◽  
S. Yu-Chien ◽  
Y. Mei-Due ◽  
W. Hwei-Chung ◽  
A. Kao

Summary Aim: Even with careful observation, the overall false-positive rate of laparotomy remains 10-15% when acute appendicitis is suspected. Therefore, the clinical efficacy of the Tc-99m HMPAO labeled leukocyte (TC-WBC) scan for the diagnosis of acute appendicitis in patients presenting with atypical clinical findings is assessed. Patients and Methods: Eighty patients presenting with acute abdominal pain and possible acute appendicitis but atypical findings were included in this study. After intravenous injection of TC-WBC, serial anterior abdominal/pelvic images at 30, 60, 120 and 240 min with 800k counts were obtained with a gamma camera. Any abnormal localization of radioactivity in the right lower quadrant of the abdomen, equal to or greater than bone marrow activity, was considered a positive scan. Results: 36 of 49 patients showing positive TC-WBC scans received appendectomy; all proved to have positive pathological findings. Five positive TC-WBC scans were not related to acute appendicitis but to other pathological lesions. Eight patients were not operated on, and clinical follow-up after one month revealed no acute abdominal condition. Three of 31 patients with negative TC-WBC scans received appendectomy; they also presented positive pathological findings. The remaining 28 patients did not receive operations and revealed no evidence of appendicitis after at least one month of follow-up. The overall sensitivity, specificity, accuracy, positive and negative predictive values for the TC-WBC scan in diagnosing acute appendicitis were 92, 78, 86, 82, and 90%, respectively. Conclusion: The TC-WBC scan provides a rapid and highly accurate method for the diagnosis of acute appendicitis in patients with equivocal clinical examination. It proved useful in reducing the false-positive rate of laparotomy and shortening the time necessary for clinical observation.


1993 ◽  
Vol 32 (02) ◽  
pp. 175-179 ◽  
Author(s):  
B. Brambati ◽  
T. Chard ◽  
J. G. Grudzinskas ◽  
M. C. M. Macintosh

Abstract: The analysis of the clinical efficiency of a biochemical parameter in the prediction of chromosome anomalies is described, using a database of 475 cases including 30 abnormalities. A comparison was made of two different approaches to the statistical analysis: the use of Gaussian frequency distributions and likelihood ratios, and logistic regression. Both methods computed that, for a 5% false-positive rate, approximately 60% of anomalies are detected on the basis of maternal age and serum PAPP-A. The logistic regression analysis is appropriate where the outcome variable (chromosome anomaly) is binary and the detection rates refer to the original data only. The likelihood ratio method is used to predict the outcome in the general population; it depends on the data, or some transformation of the data, fitting a known frequency distribution (Gaussian in this case). The precision of the predicted detection rates is limited by the small sample of abnormal cases (30). Varying the means and standard deviations of the fitted log Gaussian distributions (to the limits of their 95% confidence intervals) resulted in a detection rate varying between 42% and 79% for a 5% false-positive rate. Thus, although the likelihood ratio method is potentially the better method for determining the usefulness of a test in the general population, larger numbers of abnormal cases are required to stabilise the means and standard deviations of the fitted log Gaussian distributions.
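The likelihood ratio method described above evaluates the marker value under two fitted Gaussian densities, affected and unaffected pregnancies, and takes their ratio; multiplied by the age-related prior odds this gives posterior odds of an anomaly. A minimal sketch with hypothetical distribution parameters (the paper's fitted log-Gaussian means and SDs are not given in the abstract):

```python
import math

def gaussian_pdf(x, mu, sigma):
    # Density of the normal distribution N(mu, sigma^2) at x.
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def likelihood_ratio(x, mu_aff, sd_aff, mu_unaff, sd_unaff):
    """LR = density under the affected distribution over density under
    the unaffected distribution, evaluated at the marker value x
    (e.g. a log-transformed PAPP-A level)."""
    return gaussian_pdf(x, mu_aff, sd_aff) / gaussian_pdf(x, mu_unaff, sd_unaff)

# Hypothetical: affected pregnancies centred at -1.0 on the log scale,
# unaffected at 0.0, both with unit SD. A low marker value raises the odds:
print(likelihood_ratio(-1.0, mu_aff=-1.0, sd_aff=1.0, mu_unaff=0.0, sd_unaff=1.0))
```

The abstract's sensitivity of the 42%-79% detection range to the fitted means and SDs is visible here: small shifts in `mu_aff` or `sd_aff` change the LR, and hence the screen-positive set, substantially.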


2019 ◽  
Author(s):  
Amanda Kvarven ◽  
Eirik Strømland ◽  
Magnus Johannesson

Andrews & Kasy (2019) propose an approach for adjusting effect sizes in meta-analysis for publication bias. We use the Andrews-Kasy estimator to adjust the result of 15 meta-analyses and compare the adjusted results to 15 large-scale multiple labs replication studies estimating the same effects. The pre-registered replications provide precisely estimated effect sizes, which do not suffer from publication bias. The Andrews-Kasy approach leads to a moderate reduction of the inflated effect sizes in the meta-analyses. However, the approach still overestimates effect sizes by a factor of about two or more and has an estimated false positive rate of between 57% and 100%.


Electronics ◽  
2020 ◽  
Vol 9 (11) ◽  
pp. 1894
Author(s):  
Chun Guo ◽  
Zihua Song ◽  
Yuan Ping ◽  
Guowei Shen ◽  
Yuhei Cui ◽  
...  

The Remote Access Trojan (RAT) is one of the most serious security threats that organizations face today. At present, the two major RAT detection approaches are host-based and network-based. To complement one another's strengths, this article proposes a phased RAT detection method combining double-side features (PRATD). In PRATD, both host-side and network-side features are combined to build detection models, which helps distinguish RATs from benign programs because RATs not only generate traffic on the network but also leave traces on the host at run time. In addition, PRATD trains two different detection models for the two runtime states of RATs to improve the True Positive Rate (TPR). Experiments on network and host records collected from five kinds of benign programs and 20 well-known RATs show that PRATD can effectively detect RATs: it achieves a TPR as high as 93.609% with a False Positive Rate (FPR) as low as 0.407% for known RATs, and a TPR of 81.928% with an FPR of 0.185% for unknown RATs, which suggests it is a competitive candidate for RAT detection.


2020 ◽  
Vol 154 (Supplement_1) ◽  
pp. S5-S5
Author(s):  
Ridin Balakrishnan ◽  
Daniel Casa ◽  
Morayma Reyes Gil

Abstract The diagnostic approach for ruling out suspected acute pulmonary embolism (PE) in the ED setting includes several tests: ultrasound, plasma d-dimer assays, ventilation-perfusion scans and computed tomography pulmonary angiography (CTPA). Importantly, a pretest probability scoring algorithm is highly recommended to triage high-risk cases while also preventing unnecessary testing and harm to low/moderate-risk patients. The d-dimer assay (both ELISA and immunoturbidimetric) has been shown to be extremely sensitive for ruling out PE in conjunction with clinical probability. In particular, d-dimer testing is recommended for low/moderate-risk patients, in whom a negative d-dimer essentially rules out PE, sparing these patients from CTPA radiation exposure, longer hospital stay and anticoagulation. However, an unspecific increase in fibrin-degradation-related products has been seen with increasing age, resulting in a higher false positive rate in the older population. This study analyzed patient visits to the ED of a large academic institution over five years and looked at the relationship between d-dimer values, age and CTPA results to better understand the value of age-adjusted d-dimer cut-offs in ruling out PE in the older population. A total of 7660 ED visits had a CTPA done to rule out PE, of which 1875 cases had a d-dimer done in conjunction with the CT and 5875 had only CTPA done. Of the 1875 cases, 1591 had positive d-dimer results (>0.50 µg/ml (FEU)), of which 910 (57%) were from patients fifty years of age or older. In these older patients, 779 (86%) had a negative CT result. The statistical measures of the d-dimer test before adjusting for age were: sensitivity (98%), specificity (12%), negative predictive value (98%) and false positive rate (88%).
After adjusting for age in people older than 50 years (d-dimer cut-off = age/100), 138 patients turned out to be d-dimer negative, and every case but four had a CT result that was also negative for a PE. The four cases comprised two non-diagnostic results and two with subacute/chronic/subsegmental PE on imaging; none of these four patients were prescribed anticoagulation. The statistical measures of the d-dimer test after adjusting for age were: sensitivity (96%), specificity (20%), negative predictive value (98%) and a decreased false positive rate (80%). Therefore, imaging could potentially have been avoided in 138/779 (18%) of the patients in this older population who had eventual negative or not clinically significant findings on CTPA if age-adjusted d-dimers had been used. These data strongly advocate the clinical usefulness of an age-adjusted d-dimer cut-off to rule out PE.
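The age-adjustment rule the study evaluates is simple enough to state directly: keep the conventional 0.50 µg/mL (FEU) threshold up to age 50, and use age/100 µg/mL above it. A sketch (the function and variable names are ours, not from the study):

```python
def d_dimer_cutoff(age_years):
    """Age-adjusted d-dimer cut-off in ug/mL (FEU): the conventional
    0.50 threshold up to age 50, age/100 above it."""
    return age_years / 100 if age_years > 50 else 0.50

def pe_ruled_out(d_dimer_ug_ml, age_years):
    # In low/moderate pretest-probability patients, a d-dimer below the
    # (age-adjusted) cut-off rules out PE without imaging.
    return d_dimer_ug_ml < d_dimer_cutoff(age_years)

# An 80-year-old with a d-dimer of 0.70 ug/mL clears the age-adjusted
# cut-off of 0.80 and would be spared CTPA; a 40-year-old with the same
# value would not:
print(pe_ruled_out(0.70, 80))  # -> True
print(pe_ruled_out(0.70, 40))  # -> False
```

This is exactly the mechanism behind the 138 reclassified patients above: values between 0.50 µg/mL and age/100 µg/mL flip from positive to negative once age is taken into account.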

