associative classifiers
Recently Published Documents

TOTAL DOCUMENTS: 43 (five years: 8)
H-INDEX: 8 (five years: 1)

2022, Vol 12 (1), pp. 0-0

Data mining is an essential task because the digital world creates huge amounts of data daily. Associative classification is a data mining task used to classify data according to the demands of knowledge users. Most associative classification algorithms cannot analyze big data, which is mostly continuous in nature. This motivates both an analysis of existing discretization algorithms, which convert continuous data into discrete values, and the development of a novel discretizer, the Reliable Distributed Fuzzy Discretizer, for big datasets. Many discretizers suffer from over-splitting of partitions. Our proposed method is implemented in a distributed fuzzy environment and avoids over-splitting by introducing a novel stopping criterion. The proposed discretization method is compared with an existing distributed fuzzy partitioning method and achieves good accuracy in the performance of associative classifiers.
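The role of a stopping criterion in avoiding over-splitting can be illustrated with a toy discretizer. This is a minimal sketch only: it uses plain recursive median splits with partition-size and depth limits, omits the fuzzy and distributed aspects entirely, and all names and thresholds are assumptions, not the paper's algorithm.

```python
# Toy discretizer: recursive median splits of a continuous attribute.
# The stopping criterion (min partition size, max depth) is what keeps
# the method from over-splitting partitions. Illustrative only.

def discretize(values, min_size=5, max_depth=3):
    """Return sorted cut points found by recursive median splitting."""
    cuts = []

    def split(vs, depth):
        # Stopping criterion: partition too small or tree too deep.
        if len(vs) < 2 * min_size or depth >= max_depth:
            return
        vs = sorted(vs)
        cut = vs[len(vs) // 2]            # median as candidate cut point
        left = [v for v in vs if v < cut]
        right = [v for v in vs if v >= cut]
        if not left or not right:         # degenerate split: stop
            return
        cuts.append(cut)
        split(left, depth + 1)
        split(right, depth + 1)

    split(list(values), 0)
    return sorted(cuts)

def to_interval(x, cuts):
    """Map a continuous value to its discrete interval index."""
    return sum(1 for c in cuts if x >= c)
```

Raising `min_size` or lowering `max_depth` directly trades granularity for fewer, more reliable partitions, which is the effect a stopping criterion is meant to control.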


2020, Vol 24, pp. 105-122
Author(s): González-Méndez Andy, Martín Diana, Morales Eduardo, García-Borroto Milton

Associative classification is a pattern recognition approach that integrates classification and association rule discovery to build accurate classification models. These models are formed by a collection of contrast patterns that fulfill certain restrictions. In this paper, we present an experimental comparison of the impact of different restrictions on classification accuracy. To the best of our knowledge, this is the first time such an analysis has been performed, and it yields some interesting findings about how restrictions affect classification results. Contrasting these results with previously published papers, we found that their conclusions could be unintentionally biased by the restrictions they used. We found, for example, that the jumping restriction can severely damage pattern quality in the presence of dataset noise. We also found that the minimal support restriction affects the accuracy of two associative classifiers differently, so deciding which one is best depends on the support value. This paper opens some interesting lines of research, mainly in the creation of new restrictions and new pattern types obtained by joining different restrictions.
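The two restrictions the abstract highlights can be made concrete with a small sketch: a minimal-support restriction keeps only itemsets frequent enough in the target class, while the jumping restriction additionally demands zero support in the contrast class. This is an illustrative toy, not the paper's method; function names and the brute-force enumeration are assumptions.

```python
# Toy check of two contrast-pattern restrictions: minimal support in the
# target class, and the "jumping" restriction (zero support in the other
# class). Brute-force enumeration, for illustration only.
from itertools import combinations

def support(pattern, transactions):
    """Fraction of transactions containing every item of `pattern`."""
    p = set(pattern)
    hits = sum(1 for t in transactions if p <= set(t))
    return hits / len(transactions)

def contrast_patterns(pos, neg, min_sup=0.3, jumping=False, max_len=2):
    """Itemsets frequent in `pos`; if `jumping`, also absent from `neg`."""
    items = sorted({i for t in pos for i in t})
    result = []
    for k in range(1, max_len + 1):
        for cand in combinations(items, k):
            if support(cand, pos) < min_sup:
                continue                      # minimal support restriction
            if jumping and support(cand, neg) > 0:
                continue                      # jumping restriction
            result.append(cand)
    return result
```

The sketch also shows why jumping patterns are fragile under noise: a single noisy negative transaction containing the pattern drives its `neg` support above zero and discards it outright.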


2020, Vol 10 (8), pp. 2779
Author(s): Adolfo Rangel-Díaz-de-la-Vega, Yenny Villuendas-Rey, Cornelio Yáñez-Márquez, Oscar Camacho-Nieto, Itzamá López-Yáñez

In this paper, an experimental study was carried out to determine the influence of imbalanced-dataset preprocessing on the performance of associative classifiers, in order to find better computational solutions to the credit scoring problem. To do this, six undersampling algorithms, six oversampling algorithms and four hybrid algorithms were evaluated on 13 imbalanced credit scoring datasets. Then, the performance of four associative classifiers was analyzed. The experiments allowed us to determine which sampling algorithms obtained the best results, as well as their impact on the associative classifiers evaluated. Accordingly, we determined that the Hybrid Associative Classifier with Translation, the Extended Gamma Associative Classifier and the Naïve Associative Classifier do not improve their performance when sampling algorithms are used to balance the credit data. On the other hand, the Smallest Normalized Difference Associative Memory classifier benefited from oversampling and hybrid algorithms.
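The simplest member of the oversampling family evaluated here is random oversampling, which duplicates minority-class examples until class counts match. A minimal sketch, assuming list-of-rows data; this stands in for the paper's more elaborate samplers and all names are illustrative.

```python
# Toy random oversampler for an imbalanced dataset: minority-class rows
# are duplicated (sampled with replacement) until every class has as
# many examples as the majority class. Illustrative only.
import random

def random_oversample(X, y, seed=0):
    rng = random.Random(seed)
    by_class = {}
    for xi, yi in zip(X, y):
        by_class.setdefault(yi, []).append(xi)
    target = max(len(rows) for rows in by_class.values())
    Xb, yb = [], []
    for label, rows in by_class.items():
        extra = [rng.choice(rows) for _ in range(target - len(rows))]
        for row in rows + extra:
            Xb.append(row)
            yb.append(label)
    return Xb, yb
```

Undersampling is the mirror image (discarding majority rows down to the minority count), and hybrid algorithms combine both directions.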


Author(s): Jamolbek Mattiev, Branko Kavsek

Huge amounts of data are being collected and analyzed nowadays. Using popular rule-learning algorithms, the number of rules discovered on such "big" datasets can easily exceed thousands. To produce compact, understandable and accurate classifiers, these rules have to be grouped and pruned, so that only a reasonable number of them is presented to the end user for inspection and further analysis. In this paper, we propose new methods that reduce the number of class association rules produced by "classical" class association rule classifiers, while maintaining an accurate classification model comparable to those generated by state-of-the-art classification algorithms. More precisely, we propose new associative classifiers, called DC, DDC and CDC, that use distance-based agglomerative hierarchical clustering as a post-processing step to reduce the number of rules, and that apply different strategies (based on database coverage and cluster centers) in the rule-selection step of each algorithm. Experimental results on selected datasets from the UCI ML repository show that our classifiers learn models containing significantly fewer rules than state-of-the-art rule-learning algorithms on datasets with a larger number of examples, while their classification accuracy is not significantly different from that of state-of-the-art rule learners on most of the datasets.
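The post-processing idea — cluster similar rules, then keep one representative per cluster — can be sketched in a few lines. This is an illustrative toy, not DC, DDC or CDC: it clusters rules by Jaccard distance between antecedents with naive single linkage and selects cluster medoids, whereas the paper's algorithms use their own distance and selection strategies.

```python
# Toy post-processing of class association rules: single-linkage
# agglomerative clustering by Jaccard distance between antecedent item
# sets, then one representative (medoid) per cluster. Illustrative only.

def jaccard(a, b):
    a, b = set(a), set(b)
    return 1 - len(a & b) / len(a | b)

def cluster_rules(rules, k):
    """Merge the two closest clusters until only `k` clusters remain."""
    clusters = [[r] for r in rules]
    while len(clusters) > k:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                # Single linkage: distance of the closest rule pair.
                d = min(jaccard(a, b)
                        for a in clusters[i] for b in clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        clusters[i] = clusters[i] + clusters[j]
        del clusters[j]
    return clusters

def medoid(cluster):
    """Rule with the smallest summed distance to the rest of its cluster."""
    return min(cluster, key=lambda r: sum(jaccard(r, s) for s in cluster))
```

Keeping one medoid per cluster shrinks a rule set from thousands of rules to `k` representatives, which is the compactness-versus-accuracy trade-off the abstract describes.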


2018
Author(s): Matheus Freitas Da Silva, Veronica Oliveira De Carvalho

Associative classification, which has been widely used in several domains, aims at obtaining a predictive model whose construction is based on the extraction of association rules. The model is generated in stages, one of which is devoted to sorting and pruning a set of rules. Regarding sorting, one solution is to rank the rules by means of objective measures (OMs). The sorting criterion impacts the classifier's accuracy. In the literature, OMs have been explored individually. In view of this, this work aims to explore the aggregation of measures, in which several OMs are considered at the same time, in the context of associative classifiers.
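One simple way to aggregate several objective measures into a single ranking is to min-max-normalize each measure across the rule set and average the results. This is a minimal sketch under that assumption; the measure names (confidence, lift) and the averaging scheme are illustrative choices, not the aggregation studied in the paper.

```python
# Toy aggregation of objective measures for rule ranking: each measure
# is min-max normalized over the rule set, then rules are ordered by
# the mean of their normalized scores. Illustrative only.

def normalize(scores):
    """Min-max normalize a list of measure values to [0, 1]."""
    lo, hi = min(scores), max(scores)
    if hi == lo:
        return [0.0] * len(scores)
    return [(s - lo) / (hi - lo) for s in scores]

def aggregate_rank(rules, measures):
    """Rank rules by the mean of their normalized measure values."""
    columns = [normalize([m(r) for r in rules]) for m in measures]
    agg = [sum(col[i] for col in columns) / len(columns)
           for i in range(len(rules))]
    order = sorted(range(len(rules)), key=lambda i: -agg[i])
    return [rules[i] for i in order]
```

A rule that is merely good on every measure can then outrank one that is best on a single measure, which is the behavior that distinguishes aggregated ranking from ranking by one OM at a time.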

