General criteria on building decision trees for data classification

Author(s): Yo-Ping Huang, Vu Thi Thanh Hoa

2019, pp. 147-169
Author(s): Michael Paluszek, Stephanie Thomas

2020, Vol 8 (2), pp. 100-105
Author(s): Nurazlina Abdul Rashid, Norashikin Nasaruddin, Kartini Kassim, Amirah Hazwani Abdul Rahim

Information, 2018, Vol 9 (11), pp. 284
Author(s): Ahmad Hassanat
Big Data classification has recently received a great deal of attention due to the main properties of Big Data: volume, variety, and velocity. The furthest-pair-based binary search tree (FPBST) shows great potential for Big Data classification. This work attempts to improve the performance of the FPBST in terms of computation time, space consumed, and accuracy. The major enhancement converts the resultant BST into a decision tree, removing the need for the slow K-nearest neighbors (KNN) step and yielding a smaller tree, which reduces memory usage, speeds up both the training and testing phases, and increases classification accuracy. The proposed decision trees are based on calculating the probabilities of each class at each node using various methods; these probabilities are then used in the testing phase to classify an unseen example. Experimental results on several small, intermediate, and big machine learning datasets show the efficiency of the proposed methods in terms of space, speed, and accuracy compared to the FPBST, and suggest potential for further enhancements that would make the proposed methods usable in practice.
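To make the idea concrete, the sketch below (Python/NumPy) illustrates a tree of the kind the abstract describes: each node stores class probabilities, internal nodes split the examples toward the nearer of a furthest pair, and classification descends the tree and reads the stored probabilities instead of running KNN. This is a minimal sketch under stated assumptions, not the paper's implementation: the Euclidean metric, the brute-force furthest-pair search, the stopping rule, and all names (Node, build, classify) are illustrative choices.

import numpy as np

class Node:
    """Tree node holding class probabilities and, for internal nodes, the furthest pair used to split."""
    def __init__(self, probs, left=None, right=None, pivots=None):
        self.probs = probs          # dict: class label -> probability at this node
        self.left = left
        self.right = right
        self.pivots = pivots        # (p1, p2) furthest pair; None for leaves

def class_probs(y):
    # Relative frequency of each class among the examples reaching a node.
    labels, counts = np.unique(y, return_counts=True)
    return dict(zip(labels, counts / counts.sum()))

def furthest_pair(X):
    # Brute-force O(n^2) furthest pair; the FPBST relies on a much cheaper search.
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    i, j = np.unravel_index(np.argmax(d), d.shape)
    return X[i], X[j]

def build(X, y, min_size=2):
    probs = class_probs(y)
    # Stop when the node is pure or too small; the stored probabilities make it a leaf.
    if len(X) <= min_size or len(probs) == 1:
        return Node(probs)
    p1, p2 = furthest_pair(X)
    nearer_p1 = np.linalg.norm(X - p1, axis=1) <= np.linalg.norm(X - p2, axis=1)
    if nearer_p1.all() or (~nearer_p1).all():   # degenerate split: stop here
        return Node(probs)
    return Node(probs,
                left=build(X[nearer_p1], y[nearer_p1], min_size),
                right=build(X[~nearer_p1], y[~nearer_p1], min_size),
                pivots=(p1, p2))

def classify(node, x):
    # Descend toward the nearer pivot at each internal node, then predict the
    # most probable class stored at the reached leaf; no KNN search is needed.
    while node.pivots is not None:
        p1, p2 = node.pivots
        node = node.left if np.linalg.norm(x - p1) <= np.linalg.norm(x - p2) else node.right
    return max(node.probs, key=node.probs.get)

# Example usage on synthetic data:
# X = np.random.rand(200, 4); y = np.random.randint(0, 3, size=200)
# tree = build(X, y); print(classify(tree, X[0]))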

