Comparison of Greedy Algorithms for Decision Tree Optimization

Author(s):  
Abdulaziz Alkhalid ◽  
Igor Chikalov ◽  
Mikhail Moshkov
2021 ◽  
Author(s):  
Arman Zharmagambetov ◽  
Suryabhan Singh Hada ◽  
Magzhan Gabidolla ◽  
Miguel A. Carreira-Perpinan

2010 ◽  
Vol 44-47 ◽  
pp. 3448-3452
Author(s):  
Wei Xiang Xu ◽  
Jing Xu ◽  
Xu Min Liu ◽  
Rui Dong

Pruning methods have a strong influence on the quality of a decision tree. Building on misclassification-based pruning, this work introduces the concept of conditional misclassification and refines the pruning criterion. It proposes the conditional misclassification pruning method for decision tree optimization and applies it within the C4.5 algorithm. Experimental results show that conditional misclassification pruning can, to some extent, avoid both over-pruning and under-pruning, and improves classification accuracy.
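The abstract does not give the authors' conditional-misclassification criterion itself, but the family of methods it builds on can be illustrated. The following is a minimal sketch of plain misclassification-based (reduced-error style) pruning, where a subtree is collapsed to a leaf whenever the leaf misclassifies no more held-out samples than the subtree does; the `Node` structure and threshold test are illustrative assumptions, not the paper's method:

```python
# Sketch of misclassification-based (reduced-error) pruning.
# Node layout and the "x[f] <= t" split rule are illustrative assumptions.

class Node:
    def __init__(self, feature=None, threshold=None, left=None,
                 right=None, label=None):
        self.feature = feature      # index of the splitting feature
        self.threshold = threshold  # split point: go left if x[f] <= t
        self.left = left
        self.right = right
        self.label = label          # majority class at this node

    def is_leaf(self):
        return self.left is None and self.right is None

def predict(node, x):
    """Route a sample down the tree to a leaf and return its label."""
    while not node.is_leaf():
        node = node.left if x[node.feature] <= node.threshold else node.right
    return node.label

def errors(node, data):
    """Count misclassified validation samples under this subtree."""
    return sum(predict(node, x) != y for x, y in data)

def prune(node, data):
    """Bottom-up: replace a subtree with a leaf whenever the leaf's
    misclassification count on held-out data is no worse."""
    if node.is_leaf():
        return node
    left_data = [(x, y) for x, y in data if x[node.feature] <= node.threshold]
    right_data = [(x, y) for x, y in data if x[node.feature] > node.threshold]
    node.left = prune(node.left, left_data)
    node.right = prune(node.right, right_data)
    leaf = Node(label=node.label)
    if errors(leaf, data) <= errors(node, data):
        return leaf
    return node
```

On a toy tree whose right branch misclassifies the validation set, `prune` collapses the whole tree to a single leaf carrying the root's majority label.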


2021 ◽  
Author(s):  
Zhuoer Xu ◽  
Guanghui Zhu ◽  
Chunfeng Yuan ◽  
Yihua Huang

Abstract
Decision trees have favorable properties, including interpretability, high computational efficiency, and the ability to learn from little training data. Learning an optimal decision tree is known to be NP-complete, so researchers have proposed many greedy algorithms, such as CART, that learn approximate solutions. Inspired by currently popular neural networks, soft trees that support end-to-end training with back-propagation have attracted growing attention. However, existing soft trees either lose interpretability through continuous relaxation or employ a two-stage method of end-to-end building followed by pruning. In this paper, we propose One-Stage Tree, which builds and prunes the decision tree jointly through a bilevel optimization problem. Moreover, we leverage the reparameterization trick and proximal iterations to keep the tree discrete during end-to-end training. As a result, One-Stage Tree reduces the performance gap between training and testing and retains the advantage of interpretability. Extensive experiments demonstrate that the proposed One-Stage Tree outperforms CART and existing soft trees on classification and regression tasks.
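The greedy baseline this abstract compares against chooses each split locally, with no global objective. As a point of reference, here is a minimal sketch of the CART-style split selection step, scoring every (feature, threshold) candidate by weighted Gini impurity; function names and the exhaustive threshold scan are illustrative assumptions, not the paper's One-Stage Tree:

```python
# Sketch of the greedy split selection that CART-style algorithms
# apply recursively: pick the (feature, threshold) pair that
# minimizes the weighted Gini impurity of the two children.
from collections import Counter

def gini(labels):
    """Gini impurity of a list of class labels."""
    n = len(labels)
    if n == 0:
        return 0.0
    return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values())

def best_split(X, y):
    """Score every candidate split exhaustively and return the one
    with the lowest weighted child impurity."""
    n, d = len(X), len(X[0])
    best = (None, None, float("inf"))
    for f in range(d):
        for t in sorted(set(row[f] for row in X)):
            left = [y[i] for i in range(n) if X[i][f] <= t]
            right = [y[i] for i in range(n) if X[i][f] > t]
            if not left or not right:
                continue  # degenerate split, skip
            score = (len(left) * gini(left) + len(right) * gini(right)) / n
            if score < best[2]:
                best = (f, t, score)
    return best  # (feature, threshold, weighted_impurity)
```

On a perfectly separable toy set such as `X = [[0.1], [0.2], [0.8], [0.9]]`, `y = [0, 0, 1, 1]`, the greedy step finds the pure split at threshold 0.2 with impurity 0; each such choice is locally optimal, which is exactly the myopia that one-stage, jointly optimized trees aim to avoid.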


2019 ◽  
Vol 1255 ◽  
pp. 012012 ◽  
Author(s):  
Irfan Sudahri Damanik ◽  
Agus Perdana Windarto ◽  
Anjar Wanto ◽  
Poningsih ◽  
Sundari Retno Andani ◽  
...  

2019 ◽  
Vol 1255 ◽  
pp. 012056
Author(s):  
Relita Buaton ◽  
Herman Mawengkang ◽  
Muhammad Zarlis ◽  
Syahril Effendi ◽  
Akim Manaor Hara Pardede ◽  
...  
