Multiobjective Neural Network Ensembles Based on Regularized Negative Correlation Learning

2010 ◽  
Vol 22 (12) ◽  
pp. 1738-1751 ◽  
Author(s):  
Huanhuan Chen ◽  
Xin Yao

Author(s):  
Yong Liu ◽  
Xin Yao ◽  
Tetsuya Higuchi

This chapter describes negative correlation learning for designing neural network ensembles. Negative correlation learning is first analysed in terms of minimising mutual information on a regression task: by minimising the mutual information between the variables extracted by two neural networks, the networks are forced to convey different information about features of their input. Negative correlation learning is then studied on two pattern classification problems through the decision boundaries and correct response sets of the trained ensembles. The purpose of examining the decision boundaries and correct response sets is not only to illustrate the learning behaviour of negative correlation learning, but also to shed light on how to design more effective neural network ensembles. The experimental results showed that the decision boundary of the ensemble trained by negative correlation learning is nearly as good as the optimal decision boundary.
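The training procedure behind this abstract can be sketched briefly. In the usual formulation of negative correlation learning (for regression), each member i minimises its squared error plus a penalty that pushes its output away from the ensemble mean; the abstract's mutual-information view is a way of analysing this diversity pressure. The sketch below is a minimal illustration, assuming linear regressors as ensemble members and illustrative hyperparameters (M, lam, lr), none of which are specified in the text:

```python
import numpy as np

# Minimal sketch of negative correlation learning (NCL) for a regression
# ensemble. Each member i minimises
#   E_i = 1/2 (F_i - y)^2 + lam * (F_i - Fbar) * sum_{j != i} (F_j - Fbar)
# where Fbar is the ensemble mean output; since the deviations from the
# mean sum to zero, the penalty equals -(F_i - Fbar)^2, i.e. it rewards
# members for differing from the ensemble. Linear members are an
# illustrative assumption, not the chapter's actual networks.

rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=200)
y = np.sin(3.0 * X) + 0.1 * rng.normal(size=200)

M, lam, lr = 5, 0.5, 0.05          # ensemble size, NCL strength, step size
W = rng.normal(size=M)              # member slopes, started at different points
B = rng.normal(size=M)              # member intercepts

for _ in range(500):
    F = W[:, None] * X[None, :] + B[:, None]   # (M, n) member outputs
    Fbar = F.mean(axis=0)                      # ensemble output
    # dE_i/dF_i = (F_i - y) + lam * sum_{j != i}(F_j - Fbar)
    #           = (F_i - y) - lam * (F_i - Fbar)
    g = (F - y[None, :]) - lam * (F - Fbar[None, :])
    W -= lr * (g * X[None, :]).mean(axis=1)
    B -= lr * g.mean(axis=1)

F = W[:, None] * X[None, :] + B[:, None]
ensemble_mse = float(((F.mean(axis=0) - y) ** 2).mean())
```

Note the conventional simplification: the gradient treats the other members' outputs (and hence Fbar's dependence on F_i) as fixed, so each member is trained against the current ensemble.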


2009 ◽  
Vol 19 (02) ◽  
pp. 67-89 ◽  
Author(s):  
M. A. H. Akhand ◽  
Md. Monirul Islam ◽  
Kazuyuki Murase

Ensembles of several classifiers (such as neural networks or decision trees) are widely used to improve generalization performance over a single classifier. Adequate diversity among the component classifiers is considered important in ensemble construction, so that the failure of one classifier may be compensated for by the others. Among the various approaches, data sampling, i.e., training different classifiers on different data sets, has been found more effective than the alternatives. A number of ensemble methods have been proposed under the umbrella of data sampling; some are restricted to neural networks or decision trees, while others apply to both types of classifier. We studied prominent data sampling techniques for neural network ensembles and experimentally evaluated their effectiveness on a common test bed. The relation between generalization and diversity is presented in terms of overlap and uncover. Eight ensemble methods were tested on 30 benchmark classification problems. We found that bagging and boosting, the pioneering ensemble methods, still outperform most of the later proposals. However, negative correlation learning, which implicitly encourages different networks toward different parts of the training space, is shown to be better than, or at least comparable to, bagging and boosting, which create different training sets explicitly.
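The data-sampling idea the abstract contrasts with negative correlation learning can be illustrated with bagging, its simplest instance: each member is trained on a bootstrap resample of the data and predictions are combined by majority vote. The base classifier (a decision stump) and the toy data below are illustrative assumptions, not the paper's actual experimental setup:

```python
import numpy as np

# Minimal bagging sketch: each member sees a different bootstrap
# resample of the training set (sampling with replacement), and the
# ensemble predicts by majority vote. Decision stumps are used as an
# illustrative base classifier.

rng = np.random.default_rng(1)

def fit_stump(X, y):
    """Exhaustively pick the (feature, threshold, sign) with lowest training error."""
    best, best_err = (0, 0.0, 1), np.inf
    for f in range(X.shape[1]):
        for t in np.unique(X[:, f]):
            for s in (1, -1):
                pred = (s * (X[:, f] - t) > 0).astype(int)
                err = (pred != y).mean()
                if err < best_err:
                    best, best_err = (f, t, s), err
    return best

def predict_stump(stump, X):
    f, t, s = stump
    return (s * (X[:, f] - t) > 0).astype(int)

# Toy data: the label depends on feature 0, with some label noise.
X = rng.normal(size=(300, 2))
y = (X[:, 0] + 0.3 * rng.normal(size=300) > 0).astype(int)

stumps = []
for _ in range(25):
    idx = rng.integers(0, len(X), size=len(X))   # bootstrap resample
    stumps.append(fit_stump(X[idx], y[idx]))

votes = np.mean([predict_stump(s, X) for s in stumps], axis=0)
acc = float(((votes > 0.5).astype(int) == y).mean())
```

Boosting differs in that resampling (or reweighting) is driven by the previous members' errors rather than done uniformly, and negative correlation learning drops explicit resampling entirely, obtaining diversity through a penalty in each member's training objective.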

