Decision Tree-Based Multiple Classifier Systems: An FPGA Perspective

Author(s):  
Mario Barbareschi ◽  
Salvatore Del Prete ◽  
Francesco Gargiulo ◽  
Antonino Mazzeo ◽  
Carlo Sansone

Author(s):  
Mario Barbareschi ◽  
Salvatore Barone ◽  
Nicola Mazzocca

Abstract  
So far, multiple classifier systems have increasingly been designed to take advantage of hardware features such as high parallelism and computational power. Indeed, compared to software implementations, hardware accelerators guarantee higher throughput and lower latency. Although the combination of multiple classifiers leads to high classification accuracy, the required area overhead can make the design of a hardware accelerator unfeasible, hindering the adoption of commercial configurable devices. For this reason, in this paper we exploit the approximate computing design paradigm to trade hardware area overhead for classification accuracy. In particular, starting from trained decision tree (DT) models and employing the precision-scaling technique, we explore approximate decision tree variants through a multi-objective optimization problem, demonstrating a significant performance improvement when targeting field-programmable gate array (FPGA) devices.


Entropy ◽  
2020 ◽  
Vol 22 (10) ◽  
pp. 1129
Author(s):  
Jędrzej Biedrzycki ◽  
Robert Burduk

A vital aspect of the construction of Multiple Classifier Systems is the integration of the base models. For example, the Random Forest approach uses the majority voting rule to fuse the base classifiers obtained by bagging the training dataset. In this paper, we propose an algorithm that partitions the feature space according to the decision rules in the nodes of each decision tree serving as a base classification model. After dividing the feature space, the centroid of each new subspace is determined. These centroids are used to determine the weights needed in the integration phase, which is based on the weighted majority voting rule. The proposal was compared with other Multiple Classifier System approaches. Experiments on multiple open-source benchmark datasets demonstrate the effectiveness of our method. To discuss the experimental results, we use micro- and macro-averaged classification performance measures.
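The weighted-majority integration step can be sketched as follows. The abstract does not state how the weights are derived from the centroids, so the inverse-distance rule below (a vote counts more when the sample lies close to the centroid of the region it fell into) is an illustrative assumption, not the authors' formula; the stub classifiers and centroids are likewise toy data.

```python
import math
from collections import defaultdict

def weighted_majority_vote(x, classifiers, centroids):
    """classifiers: list of predict functions; centroids[i]: centroid of the
    subspace of classifier i into which x falls (assumed precomputed)."""
    scores = defaultdict(float)
    for clf, c in zip(classifiers, centroids):
        dist = math.dist(x, c)
        weight = 1.0 / (1.0 + dist)  # closer centroid -> stronger vote (assumed rule)
        scores[clf(x)] += weight
    return max(scores, key=scores.get)

# Toy example: three stub "decision trees" disagreeing on a sample.
clfs = [lambda x: "pos", lambda x: "neg", lambda x: "neg"]
cents = [(0.0, 0.0), (3.0, 4.0), (6.0, 8.0)]
print(weighted_majority_vote((0.1, 0.1), clfs, cents))  # prints: pos
```

Note how the single "pos" vote wins despite being outnumbered: its centroid is much closer to the sample, which is exactly the behaviour that distinguishes weighted from plain majority voting.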


Author(s):  
SIMON GÜNTER ◽  
HORST BUNKE

Handwritten text recognition is one of the most difficult problems in the field of pattern recognition. In this paper, we describe our efforts towards improving the performance of state-of-the-art handwriting recognition systems through the use of classifier ensembles. There are many examples of classification problems in the literature where multiple classifier systems increase the performance over single classifiers. Normally, one of the two following approaches is used to create a multiple classifier system. (1) Several classifiers are developed completely independently of each other and combined in a final step. (2) Several classifiers are created out of one prototype classifier by using so-called classifier ensemble creation methods. In this paper, an algorithm that combines both approaches is introduced and used to increase the recognition rate of a hidden Markov model (HMM) based handwritten word recognizer.


Author(s):  
ROMAN BERTOLAMI ◽  
HORST BUNKE

Current multiple classifier systems for unconstrained handwritten text recognition do not provide a straightforward way to utilize language model information. In this paper, we describe a generic method to integrate a statistical n-gram language model into the combination of multiple offline handwritten text line recognizers. The proposed method first builds a word transition network and then rescores this network with an n-gram language model. Experimental evaluation conducted on a large dataset of offline handwritten text lines shows that the proposed approach improves the recognition accuracy over a reference system as well as over the original combination method that does not include a language model.
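The language-model rescoring step described above can be sketched in miniature. The paper first builds a word transition network from the combined recognizers; the simplified version below just rescores a flat n-best list of text-line hypotheses with a bigram model. The toy bigram table, the back-off floor, and the LM weight `alpha` are all illustrative assumptions.

```python
import math

bigram_logp = {  # toy log-probabilities P(w2 | w1)
    ("<s>", "the"): math.log(0.5), ("the", "cat"): math.log(0.4),
    ("the", "cab"): math.log(0.01), ("cat", "sat"): math.log(0.3),
    ("cab", "sat"): math.log(0.05),
}
FLOOR = math.log(1e-4)  # back-off score for unseen bigrams (assumed)

def lm_score(words):
    """Sum of bigram log-probabilities over the hypothesis, from sentence start."""
    pairs = zip(["<s>"] + words, words)
    return sum(bigram_logp.get(p, FLOOR) for p in pairs)

def rescore(nbest, alpha=0.8):
    """nbest: list of (words, recognizer_log_score); return the best hypothesis
    under the combined recognizer + weighted language-model score."""
    return max(nbest, key=lambda h: h[1] + alpha * lm_score(h[0]))[0]

nbest = [(["the", "cab", "sat"], -1.0),   # slightly preferred by the recognizer
         (["the", "cat", "sat"], -1.2)]
print(" ".join(rescore(nbest)))  # prints: the cat sat
```

Here the linguistically implausible "the cab sat" wins on recognizer score alone, but the bigram model overturns it, which is the effect the paper reports at the scale of full text lines.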

