An evolutionary algorithm for constructing a decision forest: Combining the classification of disjoint decision trees

2008 ◽  
Vol 23 (4) ◽  
pp. 455-482 ◽  
Author(s):  
Lior Rokach

Sensors ◽  
2021 ◽  
Vol 21 (8) ◽  
pp. 2849
Author(s):  
Sungbum Jun

Due to recent advances in the industrial Internet of Things (IoT) in manufacturing, the vast amount of data from sensors has created a need to leverage such big data for fault detection. In particular, interpretable machine learning techniques, such as tree-based algorithms, have drawn attention for implementing reliable manufacturing systems and identifying the root causes of faults. However, despite the high interpretability of decision trees, tree-based models face a trade-off between accuracy and interpretability. To improve the tree’s performance while maintaining its interpretability, an evolutionary algorithm for the discretization of multiple attributes, called Decision tree Improved by Multiple sPLits with Evolutionary algorithm for Discretization (DIMPLED), is proposed. Experimental results on two real-world sensor datasets showed that the decision tree improved by DIMPLED outperformed the single-decision-tree models widely used in practice (C4.5 and CART) and proved competitive with ensemble methods, which combine multiple decision trees. Even though the ensemble methods could produce slightly better performance, the proposed DIMPLED has a more interpretable structure while maintaining an appropriate performance level.
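The abstract does not spell out DIMPLED's internals, but the core idea it names — an evolutionary search for discretization cut points that improve a tree-style classifier — can be sketched in a few lines. The sketch below is an illustrative toy, not the authors' implementation: a genetic algorithm evolves cut points for one continuous attribute, and each candidate is scored by the accuracy of a majority-class rule within each resulting bin. Population size, mutation rate, and the fitness function are all assumed parameters.

```python
import random

def accuracy_of_cuts(cuts, xs, ys):
    """Fitness: label each bin with its majority class, return training accuracy."""
    bins = {}
    for x, y in zip(xs, ys):
        b = sum(1 for c in sorted(cuts) if x >= c)  # bin index for sample x
        bins.setdefault(b, []).append(y)
    majority = {b: max(set(lab), key=lab.count) for b, lab in bins.items()}
    correct = sum(1 for x, y in zip(xs, ys)
                  if majority[sum(1 for c in sorted(cuts) if x >= c)] == y)
    return correct / len(xs)

def evolve_cuts(xs, ys, n_cuts=2, pop=20, gens=30, seed=0):
    """Genetic search over cut-point vectors (illustrative hyperparameters)."""
    rng = random.Random(seed)
    lo, hi = min(xs), max(xs)
    population = [[rng.uniform(lo, hi) for _ in range(n_cuts)] for _ in range(pop)]
    for _ in range(gens):
        # Elitist selection: keep the fitter half, breed the rest back.
        population.sort(key=lambda c: -accuracy_of_cuts(c, xs, ys))
        survivors = population[:pop // 2]
        children = []
        while len(survivors) + len(children) < pop:
            a, b = rng.sample(survivors, 2)
            child = [rng.choice(pair) for pair in zip(a, b)]       # crossover
            if rng.random() < 0.3:                                 # mutation
                child[rng.randrange(n_cuts)] += rng.gauss(0, (hi - lo) * 0.1)
            children.append(child)
        population = survivors + children
    return max(population, key=lambda c: accuracy_of_cuts(c, xs, ys))
```

On data with three well-separated clusters, the search quickly places the two cuts in the gaps between classes; in a full DIMPLED-style method this fitness would instead come from the induced decision tree's performance.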


Author(s):  
Laurent Vézard ◽  
Pierrick Legrand ◽  
Marie Chavent ◽  
Frédérique Faïta-Aïnseba ◽  
Julien Clauzel ◽  
...  

2008 ◽  
pp. 2978-2992
Author(s):  
Jianting Zhang ◽  
Wieguo Liu ◽  
Le Gruenwald

Decision trees (DT) have been widely used for training and classification of remotely sensed image data due to their capability to generate human-interpretable decision rules and their relatively fast speed in training and classification. This chapter proposes a successive decision tree (SDT) approach in which the samples in the ill-classified branches of a previous decision tree are used to construct a successive decision tree. The decision trees are chained together through pointers and used jointly for classification. SDT aims to construct more interpretable decision trees while attempting to improve classification accuracy. The proposed approach is applied to two real remotely sensed image datasets and evaluated in terms of classification accuracy and the interpretability of the resulting decision rules.
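The chaining idea can be made concrete with a minimal sketch, assuming details the abstract leaves open: here each "tree" is a one-level stump on a scalar feature, a leaf whose class purity falls below a threshold hands its samples to a successor stump, and classification follows the pointer chain. The purity threshold and the stump learner are illustrative choices, not taken from the chapter.

```python
def majority(labels):
    return max(set(labels), key=labels.count)

def purity(labels):
    return labels.count(majority(labels)) / len(labels)

def train_stump(data):
    """One-level tree on a scalar feature: pick the error-minimizing threshold."""
    best = None
    for t in sorted({x for x, _ in data}):
        left = [y for x, y in data if x < t]
        right = [y for x, y in data if x >= t]
        if not left or not right:
            continue
        errors = (len(left) - left.count(majority(left))
                  + len(right) - right.count(majority(right)))
        if best is None or errors < best[0]:
            best = (errors, t, majority(left), majority(right))
    _, t, llab, rlab = best
    return t, llab, rlab

def train_sdt(data, min_purity=0.9, max_stages=3):
    """Chain stumps: samples landing in an impure leaf train the successor."""
    t, llab, rlab = train_stump(data)
    stage = {"t": t, "labels": (llab, rlab), "next": (None, None)}
    if max_stages > 1:
        successors = []
        for side in (0, 1):  # side 0: x < t, side 1: x >= t
            leaf = [(x, y) for x, y in data if (x >= t) == side]
            labels = [y for _, y in leaf]
            if leaf and len(set(labels)) > 1 and purity(labels) < min_purity:
                successors.append(train_sdt(leaf, min_purity, max_stages - 1))
            else:
                successors.append(None)
        stage["next"] = tuple(successors)
    return stage

def classify(stage, x):
    """Follow the pointer chain until a leaf with no successor is reached."""
    side = int(x >= stage["t"])
    successor = stage["next"][side]
    return classify(successor, x) if successor else stage["labels"][side]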


Author(s):  
Charles X. Ling ◽  
John J. Parry ◽  
Handong Wang

Nearest Neighbour (NN) learning algorithms use a distance function to classify test examples, so the attribute weights in the distance function must be set appropriately. We study situations in which a simple approach of setting attribute weights using decision trees does not work well, and we design three improvements. We test these new methods thoroughly on artificially generated datasets and on datasets from the machine learning repository.
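As a hedged illustration of the baseline idea (not the paper's specific schemes or its three improvements): a decision tree selects split attributes by information gain, so one simple way to transfer that signal into an NN distance is to weight each attribute by its information gain, which drives irrelevant attributes toward zero weight. Binary attributes keep the entropy arithmetic short; all names here are assumptions.

```python
import math
from collections import Counter

def entropy(labels):
    n = len(labels)
    return -sum(c / n * math.log2(c / n) for c in Counter(labels).values())

def info_gain(data, attr):
    """Information gain of a binary attribute -- the split criterion a
    decision tree would use to rank attributes."""
    base = entropy([y for _, y in data])
    remainder = 0.0
    for v in (0, 1):
        subset = [y for x, y in data if x[attr] == v]
        if subset:
            remainder += len(subset) / len(data) * entropy(subset)
    return base - remainder

def weighted_nn(train, weights, query):
    """1-NN under a weighted squared Euclidean distance."""
    def dist(x):
        return sum(w * (a - b) ** 2 for w, a, b in zip(weights, x, query))
    return min(train, key=lambda xy: dist(xy[0]))[1]
```

With a dataset where attribute 0 determines the label and attribute 1 is noise, attribute 0 gets gain 1.0 and attribute 1 gets gain 0.0, so the weighted distance ignores the noise attribute entirely; the paper's point is that this simple transfer can fail in some situations, motivating the three improvements.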

