Combining Classifiers and Learning Mixture-of-Experts

2012 ◽  
pp. 243-252
Author(s):  
Lei Xu ◽  
Shun-ichi Amari

Expert combination is a classic strategy that has been widely used in various problem-solving tasks: a team of individuals with diverse and complementary skills tackles a task jointly, so that by integrating the strengths of the individuals a better performance is achieved than any single individual could attain. Starting in the late 1980s, studies in the handwritten character recognition literature have addressed combining multiple classifiers. From the early 1990s, efforts have also been made in the fields of neural networks and machine learning, under the names of ensemble learning and mixture of experts, on how to jointly learn a mixture of experts (parametric models) and a combining strategy for integrating them in an optimal sense. This article gives a general sketch of these two streams of studies, not only re-elaborating their essential tasks, basic ingredients, and typical combining rules, but also suggesting a general combination framework (in particular a concise and more useful one-parameter modulated special case, called α-integration) that unifies a number of typical classifier combination rules and several mixture-based learning models, as well as the max rule and min rule used in the fuzzy systems literature.
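As a brief sketch of the one-parameter family referenced above, following Amari's formulation (the notation here is assumed: p_i(x) denotes expert i's output for input x and the w_i are nonnegative combining weights summing to one):

```latex
% alpha-integration of M expert outputs p_i(x) with weights w_i (assumed notation)
\[
  m_\alpha(x) = f_\alpha^{-1}\!\Big( \sum_{i=1}^{M} w_i \, f_\alpha\big(p_i(x)\big) \Big),
  \qquad
  f_\alpha(p) =
  \begin{cases}
    p^{(1-\alpha)/2}, & \alpha \neq 1,\\
    \log p,           & \alpha = 1.
  \end{cases}
\]
```

Setting α = -1 recovers the arithmetic mean (sum rule), α = 1 the geometric mean underlying the product rule, and the limits α → +∞ and α → -∞ give the min rule and max rule, respectively.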


2019 ◽  
Vol 9 (13) ◽  
pp. 2758 ◽  
Author(s):  
Mujtaba Husnain ◽  
Malik Muhammad Saad Missen ◽  
Shahzad Mumtaz ◽  
Muhammad Zeeshan Jhanidr ◽  
Mickaël Coustaty ◽  
...  

In the area of pattern recognition and pattern matching, methods based on deep learning models have recently attracted many researchers by achieving impressive performance. In this paper, we propose the use of a convolutional neural network to recognize multifont offline Urdu handwritten characters in an unconstrained environment. We also propose a novel dataset of Urdu handwritten characters, since no publicly available dataset of this kind exists. A series of experiments is performed on the proposed dataset. The character recognition accuracy achieved is among the best reported in the literature for the same task.
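As a minimal illustration of the kind of model described, a small convolutional classifier is sketched below in Keras; this is not the authors' architecture, and the 64x64 grayscale input size and the number of character classes are assumptions.

```python
# Minimal illustrative CNN sketch (not the authors' architecture).
# Assumptions: 64x64 grayscale character images, `num_classes` Urdu character classes.
from tensorflow.keras import layers, models

num_classes = 40  # hypothetical number of character classes

model = models.Sequential([
    layers.Input(shape=(64, 64, 1)),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(num_classes, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_images, train_labels, epochs=..., validation_data=...)
```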


2020 ◽  
Vol 15 (2) ◽  
pp. 136-143
Author(s):  
Omid Akbarzadeh ◽  
Mohammad R. Khosravi ◽  
Mehdi Shadloo-Jahromi

Background: Achieving the best possible classification accuracy is the main purpose of any pattern recognition scheme. An interesting area of classifier design is the design of classifiers for biomedical signal and image processing. Materials and Methods: In the current work, in order to increase recognition accuracy, a theoretical framework for the combination of classifiers is developed. The method uses different pattern representations to show that a wide range of existing algorithms can be incorporated as particular cases of compound classification, in which all the pattern representations are used jointly to make an accurate decision. Results: The results show that the combination rules developed under the naive Bayes and fuzzy integral methods outperform other classifier combination schemes. Conclusion: The performance of different combination schemes has been studied through an experimental comparison of different classifier combination plans. The dataset used in the article was obtained from biological signals.
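For a concrete picture of two of the classic combination rules involved (a minimal sketch, not the paper's exact scheme; the posterior values below are made up), the product rule follows from a naive-Bayes style independence assumption over classifiers, while the sum rule simply averages their posteriors:

```python
# Combining per-classifier posterior estimates: product rule (naive Bayes
# independence assumption) versus sum rule. Values are hypothetical.
import numpy as np

# posteriors[k, c]: classifier k's estimate of P(class c | x)
posteriors = np.array([
    [0.70, 0.20, 0.10],   # classifier 1
    [0.55, 0.30, 0.15],   # classifier 2
    [0.60, 0.25, 0.15],   # classifier 3
])

product_rule = posteriors.prod(axis=0)
product_rule /= product_rule.sum()      # renormalize to a distribution

sum_rule = posteriors.mean(axis=0)

print("product rule decision:", product_rule.argmax())
print("sum rule decision:", sum_rule.argmax())
```

The fuzzy integral combination mentioned in the abstract additionally weights subsets of classifiers by a fuzzy measure and is not shown here.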


Complexity ◽  
2019 ◽  
Vol 2019 ◽  
pp. 1-16 ◽  
Author(s):  
Alejandro Baldominos ◽  
Yago Saez ◽  
Pedro Isasi

Neuroevolution is the field of study that uses evolutionary computation to optimize certain aspects of the design of neural networks, most often their topology and hyperparameters. The field was introduced in the late 1980s, but only in recent years has it become mature enough to enable the optimization of deep learning models, such as convolutional neural networks. In this paper, we build on previous work to apply neuroevolution to optimize the topology of deep neural networks for handwritten character recognition. Moreover, we take advantage of the fact that evolutionary algorithms optimize a population of candidate solutions by combining a set of the best evolved models into a committee of convolutional neural networks. This process is enhanced by using specific mechanisms to preserve the diversity of the population. Additionally, we address one of the disadvantages of neuroevolution: the process is very expensive in terms of computational time. To mitigate this issue, we explore the performance of topology transfer learning: whether the best topology obtained using neuroevolution for a certain domain can be successfully applied to a different domain. By doing so, the expensive process of neuroevolution can be reused to tackle different problems, turning it into a more appealing approach for optimizing the design of neural network topologies. After evaluating our proposal, results show that both the use of neuroevolved committees and the application of topology transfer learning are successful: committees of convolutional neural networks improve classification results when compared to single models, and topologies learned for one problem can be reused for a different problem and dataset with good performance. Additionally, both approaches can be combined by building committees of transferred topologies, and this combination attains results that take the best of both approaches.
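As an illustrative sketch of the neuroevolution loop only (not the authors' algorithm; the genome fields and the `evaluate` placeholder, which would normally decode the genome into a CNN, train it briefly, and return validation accuracy, are hypothetical):

```python
# Tiny genetic loop over CNN topology genomes (illustrative sketch only).
import random

def random_genome():
    return {
        "n_conv": random.randint(1, 4),          # number of convolutional blocks
        "filters": random.choice([16, 32, 64]),  # filters in the first block
        "dense": random.choice([64, 128, 256]),  # units in the fully connected layer
    }

def mutate(genome):
    child = dict(genome)
    key = random.choice(list(child))
    child[key] = random_genome()[key]            # resample one gene
    return child

def evaluate(genome):
    # Placeholder fitness: stands in for validation accuracy after short training.
    return random.random()

population = [random_genome() for _ in range(10)]
for generation in range(20):
    ranked = sorted(population, key=evaluate, reverse=True)
    parents = ranked[:5]                          # keep the best half (elitism)
    population = parents + [mutate(random.choice(parents)) for _ in range(5)]

print("best topology genome:", max(population, key=evaluate))
```

A committee would then be built by averaging the softmax outputs of the best evolved models, in the spirit of the combination rules discussed above.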


2011 ◽  
Vol 368-373 ◽  
pp. 1583-1587
Author(s):  
Jun Ying Chen ◽  
Jing Chen ◽  
Zeng Xi Feng

In this paper, a new shape classification method based on different feature sets and multiple classifiers is proposed. Different feature sets are derived from the shapes using different extraction methods: feature extraction is implemented in two ways, Fourier descriptors and Zernike moments. The multiple classifiers comprise a normal-densities-based linear classifier, a k-nearest-neighbor classifier, a feed-forward neural network, and a radial basis function neural network classifier. Each classifier is trained on the two feature sets separately, yielding two classification results. The final classification results are the combined responses of the individual classifiers under six different classifier combination rules, and these results are compared with those derived from multiple classifiers based on the same feature set and from individual classifiers. In this study we examined different classification tasks on the Kimia dataset. For these tasks the best combination strategy was the product rule, giving an average recognition rate of 95.83%.
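As a brief sketch of one of the two feature extractors mentioned (a generic formulation of Fourier descriptors assumed here, not necessarily the paper's exact normalization):

```python
# Translation-, scale-, and rotation-invariant Fourier descriptors of a closed
# contour given as an (N, 2) array of boundary points (generic formulation).
import numpy as np

def fourier_descriptors(contour, n_coeffs=16):
    z = contour[:, 0] + 1j * contour[:, 1]     # boundary as a complex signal
    coeffs = np.fft.fft(z)
    coeffs[0] = 0.0                            # drop DC term -> translation invariance
    mags = np.abs(coeffs)                      # drop phase -> rotation/start-point invariance
    mags = mags / (mags[1] + 1e-12)            # divide by first harmonic -> scale invariance
    return mags[1:n_coeffs + 1]

# Example: a unit circle sampled at 128 points yields descriptors close to [1, 0, 0, ...].
t = np.linspace(0, 2 * np.pi, 128, endpoint=False)
circle = np.stack([np.cos(t), np.sin(t)], axis=1)
print(fourier_descriptors(circle)[:5])
```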


Author(s):  
HUI-MIN LIU ◽  
PATRICK WANG ◽  
HONG-QIANG WANG ◽  
XIANG LI

Complex recognition systems employ different kinds of multiple classifiers in pursuit of better recognition capabilities. To fully exploit the classifiers' potential as individuals and enable them to work cooperatively for the best classification results, they need to be considered as a whole and managed dynamically according to changing recognition occasions. In this paper, we present the concept of distributed Multiple Classifiers Management (MCM) and a self-adaptive recursive MCM model based on Mixture-of-Experts (ME). The model contains a control subsystem, which allows the classification process to be controlled by the system's prior information when necessary. The model adjusts its parameters dynamically according to the current recognition state and gives the recognition results by combining the current individual classifiers' results with the previous combination result under the control of prior information. An algorithm based on one-step error correction is presented to acquire the model's parameters dynamically: it takes the previous ensemble classification result as true and corrects the current weights of the classifiers. Finally, an experiment on the recognition of space objects is simulated. The experimental results show that the proposed MCM model is effective for complex recognition systems containing heterogeneous classifiers, improving both the recognition rate and robustness.
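As one plausible, simplified reading of the one-step error-correction idea (a sketch under assumed details, not the paper's exact update rule): treat the previous ensemble decision as the true label and nudge each classifier's weight up or down according to whether it agreed with that decision.

```python
# Illustrative weight update only; the paper's exact rule is not reproduced here.
import numpy as np

def update_weights(weights, predictions, ensemble_label, lr=0.1):
    """weights[k]: weight of classifier k; predictions[k]: its predicted label."""
    agree = (np.asarray(predictions) == ensemble_label).astype(float)
    weights = weights + lr * (agree - 0.5)     # reward agreement, penalize disagreement
    weights = np.clip(weights, 1e-6, None)     # keep weights positive
    return weights / weights.sum()             # renormalize

w = np.ones(3) / 3
w = update_weights(w, predictions=[1, 1, 2], ensemble_label=1)
print(w)   # classifiers that agreed with the ensemble gain weight
```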

