Advocacy Learning: Learning through Competition and Class-Conditional Representations

Author(s):  
Ian Fox ◽  
Jenna Wiens

We introduce advocacy learning, a novel supervised training scheme for attention-based classification problems. Advocacy learning relies on a framework consisting of two connected networks: 1) N Advocates (one for each class), each of which outputs an argument in the form of an attention map over the input, and 2) a Judge, which predicts the class label based on these arguments. Each Advocate produces a class-conditional representation with the goal of convincing the Judge that the input example belongs to their class, even when the input belongs to a different class. Applied to several different classification tasks, we show that advocacy learning can lead to small improvements in classification accuracy over an identical supervised baseline. Through a series of follow-up experiments, we analyze when and how such class-conditional representations improve discriminative performance. Though somewhat counter-intuitive, a framework in which subnetworks are trained to competitively provide evidence in support of their class shows promise, in many cases performing on par with standard learning approaches. This provides a foundation for further exploration into competition and class-conditional representations in supervised learning.
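A minimal PyTorch sketch of the Advocate/Judge structure described in the abstract is given below. The layer sizes, the sigmoid attention, and the concatenation of the N attended views before the Judge are illustrative assumptions rather than the authors' exact design, and the competitive training loop (Advocates pushed toward their own class, Judge trained on the true label) is omitted.

# Sketch only: architecture details are assumptions, not the paper's exact model.
import torch
import torch.nn as nn

class Advocate(nn.Module):
    """Produces an attention map over the input and returns the attended input."""
    def __init__(self, in_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, in_dim), nn.Sigmoid(),  # attention weights in [0, 1]
        )

    def forward(self, x):
        return self.net(x) * x  # class-conditional "argument"

class AdvocacyNet(nn.Module):
    def __init__(self, in_dim, n_classes, hidden=64):
        super().__init__()
        self.advocates = nn.ModuleList(
            [Advocate(in_dim, hidden) for _ in range(n_classes)]
        )
        self.judge = nn.Sequential(
            nn.Linear(in_dim * n_classes, hidden), nn.ReLU(),
            nn.Linear(hidden, n_classes),
        )

    def forward(self, x):
        arguments = [adv(x) for adv in self.advocates]   # one argument per class
        return self.judge(torch.cat(arguments, dim=-1))  # Judge weighs all arguments

model = AdvocacyNet(in_dim=20, n_classes=3)
logits = model(torch.randn(8, 20))  # batch of 8 examples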

2021 ◽  
Vol 25 (1) ◽  
pp. 21-34
Author(s):  
Rafael B. Pereira ◽  
Alexandre Plastino ◽  
Bianca Zadrozny ◽  
Luiz H.C. Merschmann

In many important application domains, such as text categorization, biomolecular analysis, scene or video classification and medical diagnosis, instances are naturally associated with more than one class label, giving rise to multi-label classification problems. This has led, in recent years, to a substantial amount of research in multi-label classification. More specifically, feature selection methods have been developed to allow the identification of relevant and informative features for multi-label classification. This work presents a new feature selection method based on the lazy feature selection paradigm and specific to the multi-label context. Experimental results show that the proposed technique is competitive when compared to multi-label feature selection techniques currently used in the literature, and is clearly more scalable as the amount of data increases.
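For readers unfamiliar with the lazy paradigm, the rough sketch below illustrates its shape in a multi-label setting: features are ranked at prediction time, using only the feature values the test instance actually takes. The scoring criterion here (average per-label conditional entropy of that feature value, computed in a binary-relevance fashion) is an illustrative stand-in, not necessarily the criterion proposed in the paper.

# Illustrative lazy (prediction-time) feature ranking; scoring rule is an assumption.
import numpy as np

def entropy(p):
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

def lazy_rank_features(X_train, Y_train, x_test):
    """X_train: (n, d) discrete features; Y_train: (n, q) binary label matrix."""
    n, d = X_train.shape
    q = Y_train.shape[1]
    scores = np.zeros(d)
    for j in range(d):
        mask = X_train[:, j] == x_test[j]   # training rows sharing this feature value
        if mask.sum() == 0:
            scores[j] = np.inf              # no support: treat as uninformative
            continue
        hs = []
        for l in range(q):                  # binary relevance over the q labels
            p1 = Y_train[mask, l].mean()
            hs.append(entropy(np.array([p1, 1 - p1])))
        scores[j] = np.mean(hs)             # lower entropy = more informative
    return np.argsort(scores)               # best features first

X = np.random.randint(0, 3, size=(100, 8))
Y = np.random.randint(0, 2, size=(100, 4))
print(lazy_rank_features(X, Y, X[0])[:3])   # top-3 features for this test instance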


2018 ◽  
Vol 4 (10) ◽  
pp. 112 ◽  
Author(s):  
Mariam Kalakech ◽  
Alice Porebski ◽  
Nicolas Vandenbroucke ◽  
Denis Hamad

In recent years, several supervised scores have been proposed in the literature to select histograms. Applied to color texture classification problems, these scores have improved accuracy by selecting the most discriminant histograms among a set of available ones computed from a color image. In this paper, two new scores are proposed to select histograms: the adapted Variance score and the adapted Laplacian score. Unlike previously proposed scores, these new scores are computed without considering the class labels of the images. Experiments carried out on the OuTex, USPTex, and BarkTex sets show that these unsupervised scores perform as well as the supervised ones for LBP histogram selection.
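The sketch below shows how candidate histograms could be ranked without class labels, in the spirit of the two scores above: the standard Variance and Laplacian scores (He et al., 2005) are computed per bin and averaged over a histogram's bins. The averaging is a stand-in for the paper's adapted versions, whose exact definition may differ.

# Unsupervised histogram ranking sketch; the per-bin averaging is an assumption.
import numpy as np

def laplacian_score(F, k=5, t=1.0):
    """F: (n_images, n_bins) values of one candidate histogram over the image set."""
    n = F.shape[0]
    d2 = ((F[:, None, :] - F[None, :, :]) ** 2).sum(-1)   # pairwise squared distances
    S = np.exp(-d2 / t)
    idx = np.argsort(d2, axis=1)[:, 1:k + 1]               # k nearest neighbours
    W = np.zeros_like(S)
    rows = np.repeat(np.arange(n), k)
    W[rows, idx.ravel()] = S[rows, idx.ravel()]
    W = np.maximum(W, W.T)                                  # symmetrise the graph
    D = np.diag(W.sum(1))
    L = D - W
    scores = []
    for r in range(F.shape[1]):
        f = F[:, r]
        f = f - (f @ D @ np.ones(n)) / D.sum() * np.ones(n)  # remove weighted mean
        denom = f @ D @ f
        scores.append((f @ L @ f) / denom if denom > 0 else np.inf)
    return np.mean(scores)      # lower = bins vary smoothly over the data graph

def variance_score(F):
    return F.var(axis=0).mean() # higher = more spread, hence more discriminant bins

# Rank available histograms (e.g., LBP histograms from different colour spaces).
histograms = {name: np.random.rand(50, 256) for name in ["RGB", "HSV", "Lab"]}
ranking = sorted(histograms, key=lambda h: laplacian_score(histograms[h]))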


Author(s):  
Arunkumar Chinnaswamy ◽  
Ramakrishnan Srinivasan

Feature selection in machine learning reduces the number of features (genes) while maintaining an acceptable level of classification accuracy. This paper discusses filter-based feature selection methods such as Information Gain and the Correlation coefficient. After feature selection is performed, the selected genes are evaluated with five classifiers: Naïve Bayes, Bagging, Random Forest, J48 and Decision Stump. The same experiment is performed on the raw data as well. Experimental results show that the filter-based approaches reduce the number of gene expression levels effectively, yielding a reduced feature subset that produces higher classification accuracy than the same experiment performed on the raw data. In addition, Correlation-based Feature Selection uses far fewer genes and produces higher accuracy than the Information Gain-based approach.
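A scikit-learn sketch of this filter-then-classify pipeline follows. Information Gain is approximated here with mutual information, and the per-feature correlation ranking is a simplification of Correlation-based Feature Selection, which typically evaluates feature subsets rather than individual features; the dataset is synthetic.

# Filter-based selection followed by several classifiers; details are illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier, RandomForestClassifier
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=200, n_features=500, n_informative=20,
                           random_state=0)

# Filter 1: information-gain-style ranking via mutual information.
X_ig = SelectKBest(mutual_info_classif, k=50).fit_transform(X, y)

# Filter 2: absolute Pearson correlation with the class label.
corr = np.abs([np.corrcoef(X[:, j], y)[0, 1] for j in range(X.shape[1])])
X_cc = X[:, np.argsort(corr)[::-1][:50]]

classifiers = {
    "NaiveBayes": GaussianNB(),
    "Bagging": BaggingClassifier(random_state=0),
    "RandomForest": RandomForestClassifier(random_state=0),
    "J48-like tree": DecisionTreeClassifier(random_state=0),
    "DecisionStump": DecisionTreeClassifier(max_depth=1),
}
for name, clf in classifiers.items():
    for label, data in [("raw", X), ("info-gain", X_ig), ("correlation", X_cc)]:
        acc = cross_val_score(clf, data, y, cv=5).mean()
        print(f"{name:14s} {label:12s} {acc:.3f}")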


2019 ◽  
Vol 38 (13) ◽  
pp. 2477-2503 ◽  
Author(s):  
Jialiang Li ◽  
Ming Gao ◽  
Ralph D'Agostino

1997 ◽  
Vol 12 (01) ◽  
pp. 1-40 ◽  
Author(s):  
LEONARD A. BRESLOW ◽  
DAVID W. AHA

Induced decision trees are an extensively researched solution to classification tasks. For many practical tasks, the trees produced by tree-generation algorithms are not comprehensible to users due to their size and complexity. Although many tree induction algorithms have been shown to produce simpler, more comprehensible trees (or data structures derived from trees) with good classification accuracy, tree simplification has usually been of secondary concern relative to accuracy, and no attempt has been made to survey the literature from the perspective of simplification. We present a framework that organizes the approaches to tree simplification and summarize and critique the approaches within this framework. The purpose of this survey is to provide researchers and practitioners with a concise overview of tree-simplification approaches and insight into their relative capabilities. In our final discussion, we briefly describe some empirical findings and discuss the application of tree induction algorithms to case retrieval in case-based reasoning systems.
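As one concrete instance of the simplification approaches such a survey covers, the sketch below applies post-pruning via minimal cost-complexity pruning as exposed by scikit-learn; this is just one family of techniques and stands in for the broader set of methods discussed.

# Post-pruning example: trading tree size against test accuracy.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

full = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)

# Candidate pruning strengths come from the tree's own cost-complexity path.
alphas = full.cost_complexity_pruning_path(X_tr, y_tr).ccp_alphas
for a in alphas[:: max(1, len(alphas) // 5)]:
    pruned = DecisionTreeClassifier(random_state=0, ccp_alpha=a).fit(X_tr, y_tr)
    print(f"alpha={a:.4f}  leaves={pruned.get_n_leaves():3d}  "
          f"test acc={pruned.score(X_te, y_te):.3f}")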


2018 ◽  
Vol 8 (12) ◽  
pp. 2574 ◽  
Author(s):  
Qinghua Mao ◽  
Hongwei Ma ◽  
Xuhui Zhang ◽  
Guangming Zhang

The Skewness Decision Tree Support Vector Machine (SDTSVM) algorithm is a well-known supervised learning model for multi-class classification problems. However, the classification accuracy of the SDTSVM algorithm depends on the proper selection of its parameters and of the classification order. Therefore, an improved SDTSVM (ISDTSVM) algorithm is proposed in order to improve the classification accuracy of steel cord conveyor belt defects. In the proposed model, the classification order is determined by the sum of the Euclidean distances between multi-class sample centers, and the parameters are optimized by the inertia-weight Particle Swarm Optimization (PSO) algorithm. To verify the effectiveness of the ISDTSVM algorithm with different feature spaces, experiments were conducted on multiple UCI (University of California Irvine) data sets and on steel cord conveyor belt defects using the proposed ISDTSVM algorithm and the conventional SDTSVM algorithm respectively. Average five-fold cross-validation classification accuracies were obtained for two kinds of kernel functions. For the Vowel, Zoo, and Wine UCI data sets, as well as the steel cord conveyor belt defects, the ISDTSVM algorithm improved the classification accuracy by 3%, 3%, 1% and 4% respectively, compared to the SDTSVM algorithm. The classification accuracy of the radial basis function kernel was higher than that of the polynomial kernel. The results indicated that the proposed ISDTSVM algorithm improved the classification accuracy significantly compared to the conventional SDTSVM algorithm.
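The sketch below illustrates the class-ordering step described above: classes whose sample centers are, in total, furthest from the other centers are split off first in a decision-tree SVM cascade. The cascade here is a simplified one-vs-rest chain, and the PSO tuning of the SVM parameters is omitted; both are stand-ins rather than the paper's exact procedure.

# Class ordering by summed Euclidean distance between class centers (sketch).
import numpy as np
from sklearn.svm import SVC

def class_order(X, y):
    classes = np.unique(y)
    centers = np.array([X[y == c].mean(axis=0) for c in classes])
    dist_sums = np.array([np.linalg.norm(centers - c, axis=1).sum() for c in centers])
    return classes[np.argsort(dist_sums)[::-1]]   # most separable class first

def fit_cascade(X, y, **svm_params):
    order, nodes = class_order(X, y), []
    Xr, yr = X, y
    for c in order[:-1]:                          # last class needs no separator
        clf = SVC(**svm_params).fit(Xr, (yr == c).astype(int))
        nodes.append((c, clf))
        Xr, yr = Xr[yr != c], yr[yr != c]         # remove the separated class
    return nodes, order[-1]

def predict_cascade(nodes, default, X):
    out = np.full(len(X), default, dtype=object)
    undecided = np.ones(len(X), dtype=bool)
    for c, clf in nodes:
        hit = undecided & (clf.predict(X) == 1)
        out[hit], undecided[hit] = c, False
    return out

X = np.random.randn(300, 4); y = np.random.randint(0, 3, 300)
nodes, default = fit_cascade(X, y, kernel="rbf", C=1.0, gamma=0.5)
preds = predict_cascade(nodes, default, X)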


Author(s):  
Joseph McGrath ◽  
Jonathon Neville ◽  
Tom Stewart ◽  
John Cronin

Inertial measurement units (IMUs) are becoming increasingly popular in activity classification and workload measurement in sport. This systematic literature review focuses on upper body activity classification in court or field-based sports. The aim of this paper is to provide sport scientists and coaches with an overview of the past research in this area, as well as the processes and challenges involved in activity classification. The SPORTDiscus, PubMed and Scopus databases were searched, resulting in 20 articles. Both manually defined algorithms and machine learning approaches have been used to classify IMU data with varying degrees of success. Manually defined algorithms may offer simplicity and reduced computational demand, whereas machine learning may be beneficial for complex classification problems. Inter-study results show that no one machine learning model is best for activity classification; differences in sensor placement, IMU specification and pre-processing decisions can all affect model performance. Accurate classification of sporting activities could benefit players, coaches and team medical personnel by providing an objective estimate of workload. This could help to prevent injuries, enhance performance and provide valuable data to coaching staff.
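The sketch below shows the general shape of the machine-learning pipelines reviewed here: fixed-length windows of IMU signals, simple statistical features per axis, and an off-the-shelf classifier. The window length, feature set, model choice and synthetic data are assumptions for illustration only.

# Illustrative IMU activity-classification pipeline (windowing + features + model).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def window_features(signal, win=100):
    """signal: (n_samples, n_axes) accel/gyro stream -> (n_windows, n_features)."""
    feats = []
    for start in range(0, len(signal) - win + 1, win):
        w = signal[start:start + win]
        feats.append(np.concatenate([w.mean(0), w.std(0), np.abs(w).max(0)]))
    return np.array(feats)

# Fake 6-axis IMU data (3-axis accelerometer + 3-axis gyroscope) with dummy labels.
stream = np.random.randn(10_000, 6)
X = window_features(stream)
y = np.random.randint(0, 3, len(X))   # e.g., serve / pass / other

clf = RandomForestClassifier(random_state=0)
print(cross_val_score(clf, X, y, cv=5).mean())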


Author(s):  
JIA LV ◽  
NAIYANG DENG

Local learning has been successfully applied to transductive classification problems. In this paper, owing to its good classification ability, it is generalized to multi-class transductive learning. Class labels in multi-class classification carry no ordinal meaning; they are discrete nominal variables. However, the common binary-series (one-hot) class label representation places every pair of classes at an equal distance and does not reflect how sparsely or densely the classes are distributed, so a learnable, adjustable nominal class label representation method is presented. Experimental results on a set of benchmark multi-class datasets show the superiority of our algorithm.
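The sketch below illustrates the point about label representations: one-hot codes place every pair of classes at the same distance, whereas a data-driven code lets inter-class distances reflect the class distribution. The class-mean prototypes used here are only a stand-in for the learned, adjustable representation proposed in the paper.

# One-hot label codes vs. a simple data-driven label code (illustrative only).
import numpy as np

def pairwise_distances(codes):
    diff = codes[:, None, :] - codes[None, :, :]
    return np.linalg.norm(diff, axis=-1)

n_classes = 4
one_hot = np.eye(n_classes)
print(pairwise_distances(one_hot))     # all off-diagonal distances equal (sqrt(2))

# Synthetic 2-D data whose class centres lie along a line, unevenly informative.
X = np.random.randn(400, 2) + np.repeat(np.arange(4)[:, None], 100, axis=0)
y = np.repeat(np.arange(4), 100)
prototypes = np.array([X[y == c].mean(0) for c in range(n_classes)])
print(pairwise_distances(prototypes))  # unequal distances: nearby classes are closer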

