An Accurate and Easy to Interpret Binary Classifier Based on Association Rules Using Implication Intensity and Majority Vote

Mathematics, 2021, Vol 9 (12), pp. 1315
Author(s): Souhila Ghanem, Raphaël Couturier, Pablo Gregori

In supervised learning, classifiers range from simpler, more interpretable, and generally less accurate ones (e.g., CART, C4.5, J48) to more complex, less interpretable, and more accurate ones (e.g., neural networks, SVMs). Within this tradeoff between interpretability and accuracy, we propose a new classifier based on association rules, that is, one that is both easy to interpret and competitively accurate. To illustrate the proposal, its performance is compared to that of other widely used methods on six open-access datasets.
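The abstract gives no implementation details, but the core mechanism named in the title, scoring rules by implication intensity and combining them by majority vote, can be sketched briefly. The Python sketch below is illustrative and not the authors' code: Rule, implication_intensity, and predict are hypothetical names, and the intensity formula is one standard formulation of Gras' implication index used as a stand-in for the paper's measure.

# Minimal sketch (not the authors' implementation): classify with association
# rules of the form "feature itemset -> class", filtered by an implication
# intensity score and combined by majority vote. The intensity formula is one
# standard formulation of Gras' implication index; all names are illustrative.
from dataclasses import dataclass
from math import sqrt, erf

@dataclass
class Rule:
    antecedent: frozenset   # set of (feature, value) pairs that must hold
    label: str              # predicted class
    intensity: float        # implication intensity in [0, 1]

def implication_intensity(n, n_a, n_not_b, n_a_not_b):
    """Intensity that antecedent a implies b, from counter-example counts."""
    expected = n_a * n_not_b / n                  # counter-examples expected under independence
    if expected == 0:
        return 1.0
    q = (n_a_not_b - expected) / sqrt(expected)   # standardized counter-example count
    return 0.5 * (1 - erf(q / sqrt(2)))           # 1 - standard normal CDF of q

def predict(example_items, rules, threshold=0.95):
    """Majority vote among the classes of rules that fire on the example."""
    votes = {}
    for r in rules:
        if r.intensity >= threshold and r.antecedent <= example_items:
            votes[r.label] = votes.get(r.label, 0) + 1
    return max(votes, key=votes.get) if votes else None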

2020, Vol 46 (8), pp. 609-618
Author(s): N. Vershkov, M. Babenko, V. Kuchukov, N. Kuchukova

Electronics, 2021, Vol 10 (15), pp. 1807
Author(s): Sascha Grollmisch, Estefanía Cano

Including unlabeled data in the training process of neural networks using Semi-Supervised Learning (SSL) has shown impressive results in the image domain, where state-of-the-art performance was obtained with only a fraction of the labeled data. What recent SSL methods have in common is that they rely strongly on the augmentation of unannotated data, a strategy that remains largely unexplored for audio data. In this work, SSL using the state-of-the-art FixMatch approach is evaluated on three audio classification tasks, covering music, industrial sounds, and acoustic scenes. The performance of FixMatch is compared to Convolutional Neural Networks (CNNs) trained from scratch, Transfer Learning, and SSL using the Mean Teacher approach. Additionally, a simple yet effective approach for selecting suitable augmentation methods for FixMatch is introduced. FixMatch with the proposed modifications always outperformed Mean Teacher and the CNNs trained from scratch. For the industrial sounds and music datasets, the baseline CNN performance obtained with the full dataset was reached with less than 5% of the initial training data, demonstrating the potential of recent SSL methods for audio data. Transfer Learning outperformed FixMatch only on the most challenging dataset, acoustic scene classification, showing that there is still room for improvement.
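For readers unfamiliar with FixMatch, the following is a minimal sketch of a FixMatch-style training step, not the paper's implementation. weak_augment and strong_augment are hypothetical stand-ins for the audio augmentations whose selection the paper addresses, and the model, optimizer, and data loaders are assumed to exist.

# Minimal FixMatch-style training step (a sketch, not the paper's exact code).
# weak_augment / strong_augment stand in for the audio-specific augmentations
# whose selection the paper discusses; model and batches are assumed given.
import torch
import torch.nn.functional as F

def fixmatch_step(model, x_lab, y_lab, x_unlab, weak_augment, strong_augment,
                  threshold=0.95, lambda_u=1.0):
    # Supervised loss on the (weakly augmented) labeled batch.
    logits_lab = model(weak_augment(x_lab))
    loss_sup = F.cross_entropy(logits_lab, y_lab)

    # Pseudo-labels from weakly augmented unlabeled data, no gradients.
    with torch.no_grad():
        probs = torch.softmax(model(weak_augment(x_unlab)), dim=-1)
        max_probs, pseudo = probs.max(dim=-1)
        mask = (max_probs >= threshold).float()   # keep only confident predictions

    # Consistency loss: strongly augmented views must match the pseudo-labels.
    logits_strong = model(strong_augment(x_unlab))
    loss_unsup = (F.cross_entropy(logits_strong, pseudo, reduction="none") * mask).mean()

    return loss_sup + lambda_u * loss_unsup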


Author(s): Carlos Lassance, Vincent Gripon, Antonio Ortega

For the past few years, deep learning (DL) robustness (i.e., the ability to maintain the same decision when inputs are subject to perturbations) has become a question of paramount importance, in particular in settings where misclassification can have dramatic consequences. To address this question, authors have proposed different approaches, such as adding regularizers or training with noisy examples. In this paper we introduce a regularizer based on the Laplacian of similarity graphs obtained from the representation of training data at each layer of the DL architecture. This regularizer penalizes large changes, across consecutive layers in the architecture, in the distance between examples of different classes, and as such enforces smooth variations of the class boundaries. We provide theoretical justification for this regularizer and demonstrate its effectiveness in improving robustness on classical supervised learning vision datasets for various types of perturbations. We also show that it can be combined with existing methods to further increase overall robustness.
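As an illustration of the idea rather than the authors' exact formulation, the sketch below builds an RBF similarity graph from a batch's representation at each layer, measures the Laplacian smoothness of the one-hot class signals, and penalizes changes in that smoothness between consecutive layers. graph_laplacian and laplacian_regularizer are hypothetical helper names.

# Sketch of a Laplacian smoothness regularizer over per-layer similarity graphs
# (an illustration of the idea, not the paper's exact formulation). `features`
# is a list of intermediate representations for one batch, one tensor per layer.
import torch
import torch.nn.functional as F

def graph_laplacian(feats, sigma=1.0):
    """Dense Laplacian of an RBF similarity graph built from batch features."""
    x = feats.flatten(start_dim=1)
    d2 = torch.cdist(x, x).pow(2)                 # pairwise squared distances
    w = torch.exp(-d2 / (2 * sigma ** 2))         # RBF similarity weights
    return torch.diag(w.sum(dim=1)) - w           # L = D - W

def laplacian_regularizer(features, labels, num_classes):
    """Penalize changes in label-signal smoothness between consecutive layers."""
    y = F.one_hot(labels, num_classes).float()    # class indicator signals
    smooth = []
    for f in features:
        L = graph_laplacian(f)
        smooth.append(torch.trace(y.t() @ L @ y)) # sum over classes of y_c^T L y_c
    return sum(abs(smooth[i + 1] - smooth[i]) for i in range(len(smooth) - 1))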


2014, Vol 144, pp. 526-536
Author(s): Jinling Wang, Ammar Belatreche, Liam Maguire, Thomas Martin McGinnity
