A cross-entropy-based stacking method in ensemble learning
Stacking is a major ensemble learning technique in which a set of base classifiers contribute their outputs to a meta-level classifier, which combines them to produce more accurate classifications. In this paper, we propose a new stacking algorithm that uses cross-entropy as the loss function for the classification problem. The meta-level model is a neural network trained with stochastic gradient descent. A major characteristic of our method is that it treats each meta-instance as a whole within a single optimization model. This differs from stacking methods such as stacking with multi-response linear regression and stacking with multi-response model trees, which split each meta-instance into a set of sub-instances and fit a separate model for each class label, with no connection between the different models. Our joint treatment is likely a better choice for finding suitable weights. Experiments on 22 data sets from the UCI Machine Learning Repository show that the proposed stacking approach performs well: on average, it outperforms all three base classifiers, several state-of-the-art stacking algorithms, and other representative ensemble learning methods.
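To make the meta-level setup concrete, the sketch below illustrates the general idea under stated assumptions; it is not the authors' implementation, and the function names (`softmax`, `train_meta_model`) and hyperparameters (learning rate, epochs, batch size) are all hypothetical. Each meta-instance is the concatenation of the base classifiers' class-probability outputs, and a single softmax model is trained with cross-entropy loss by stochastic gradient descent, so one set of weights couples all class labels.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the last axis.
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def train_meta_model(P, y, n_classes, lr=0.1, epochs=100, batch=32, seed=0):
    """Cross-entropy stacking sketch (hypothetical helper, not the paper's code).

    P: (n_samples, n_base * n_classes) meta-features, i.e. the concatenated
       class-probability outputs of the base classifiers.
    y: (n_samples,) integer class labels.
    Returns the weights W and bias b of a single softmax meta-model that maps
    each whole meta-instance to one distribution over all classes.
    """
    rng = np.random.default_rng(seed)
    n, d = P.shape
    W = 0.01 * rng.standard_normal((d, n_classes))
    b = np.zeros(n_classes)
    Y = np.eye(n_classes)[y]                      # one-hot targets
    for _ in range(epochs):
        order = rng.permutation(n)
        for start in range(0, n, batch):
            idx = order[start:start + batch]
            probs = softmax(P[idx] @ W + b)       # forward pass
            grad = (probs - Y[idx]) / len(idx)    # dCE/dlogits for softmax
            W -= lr * (P[idx].T @ grad)           # SGD weight update
            b -= lr * grad.sum(axis=0)            # SGD bias update
    return W, b
```

Because the single weight matrix `W` maps the full meta-instance to the logits of every class at once, all class outputs share one optimization problem; an MLR-style stacker would instead fit an independent regression per label on sub-instances, with no connection between them.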