GIS-Based Landslide Susceptibility Evaluation Using Certainty Factor and Index of Entropy Ensembled with Alternating Decision Tree Models

Author(s):
Wei Chen, Hamid Reza Pourghasemi, Aiding Kornejady, Xiaoshen Xie

2018, Vol 10 (10), pp. 1545
Author(s):
Sung-Jae Park, Chang-Wook Lee, Saro Lee, Moung-Jin Lee

We assessed landslide susceptibility using the Chi-square Automatic Interaction Detection (CHAID), exhaustive CHAID, and Quick, Unbiased, and Efficient Statistical Tree (QUEST) decision tree models in Jumunjin-eup, Gangneung-si, Korea. A total of 548 landslides were identified through interpretation of aerial photographs; half were used for modeling and the remaining half for verification. Twenty landslide control factors, grouped into five categories (topographic elements, hydrological elements, soil maps, forest maps, and geological maps), were used to determine landslide susceptibility. The relationships between landslide occurrence and these control factors were analyzed with the CHAID, exhaustive CHAID, and QUEST models, and the three models were then verified using the area under the curve (AUC) method. The CHAID model (AUC = 87.1%) was more accurate than the exhaustive CHAID (AUC = 86.9%) and QUEST (AUC = 82.8%) models, and the verification confirmed that CHAID had the highest accuracy. Susceptibility to landslides was high in mountainous areas and low in coastal areas. Analyzing the characteristics of the landslide control factors in advance will enable more accurate results.
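The AUC comparison reported above can be reproduced from any model's susceptibility scores against the held-out landslide inventory. A minimal sketch of the rank-based (Mann-Whitney) AUC computation in plain Python; the scores and labels below are hypothetical, not data from the study:

```python
def auc(scores, labels):
    """Area under the ROC curve via the rank-sum (Mann-Whitney U) formula.

    scores: model susceptibility scores; labels: 1 = landslide, 0 = non-landslide.
    """
    # Sort sample indices by score (ascending).
    order = sorted(range(len(scores)), key=lambda i: scores[i])
    # Assign average 1-based ranks so tied scores are handled correctly.
    ranks = [0.0] * len(scores)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and scores[order[j + 1]] == scores[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average 1-based rank of the tie group
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    pos = [r for r, y in zip(ranks, labels) if y == 1]
    n_pos, n_neg = len(pos), len(labels) - len(pos)
    # AUC = (sum of positive ranks - n_pos(n_pos+1)/2) / (n_pos * n_neg)
    return (sum(pos) - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

print(auc([0.9, 0.1, 0.8, 0.2], [1, 0, 0, 1]))  # 0.75
```

An AUC of 0.871, as reported for the CHAID model, means a randomly chosen landslide cell receives a higher susceptibility score than a randomly chosen non-landslide cell 87.1% of the time.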


CATENA
2020, Vol 187, pp. 104396
Author(s):
Yanli Wu, Yutian Ke, Zhuo Chen, Shouyun Liang, Hongliang Zhao, ...

2021, Vol 54 (1), pp. 1-38
Author(s):
Víctor Adrián Sosa Hernández, Raúl Monroy, Miguel Angel Medina-Pérez, Octavio Loyola-González, Francisco Herrera

Experts from different domains have resorted to machine learning techniques to produce explainable models that support decision-making. Among existing techniques, decision trees have been useful for classification in many application domains, as they can express decisions in a language close to that of the experts. Many researchers have attempted to build better decision tree models by improving components of the induction algorithm; one of the most studied components is the evaluation measure for candidate splits. In this article, we first give a tutorial on decision tree induction. We then present an experimental framework to assess the performance of 21 evaluation measures that produce different C4.5 variants, considering 110 databases, two performance measures, and 10 × 10-fold cross-validation. Furthermore, we compare and rank the evaluation measures using a Bayesian statistical analysis. From our experimental results, we present the first two performance rankings of C4.5 variants in the literature, and we organize the evaluation measures into two groups according to their performance. Finally, we introduce meta-models that automatically determine which group of evaluation measures to use when producing a C4.5 variant for a new database, and we outline further opportunities for decision tree models.
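The evaluation measures the survey benchmarks are drop-in replacements for C4.5's default split criterion, the gain ratio. A minimal sketch of that baseline measure for a categorical split (function names are illustrative, not from the article):

```python
from collections import Counter
from math import log2

def entropy(items):
    """Shannon entropy of a list of class labels (or attribute values)."""
    n = len(items)
    return -sum((c / n) * log2(c / n) for c in Counter(items).values())

def gain_ratio(values, labels):
    """C4.5-style gain ratio of a categorical split: information gain
    normalized by the split's own entropy (the 'split information')."""
    n = len(labels)
    # Partition the class labels by attribute value.
    parts = {}
    for v, y in zip(values, labels):
        parts.setdefault(v, []).append(y)
    # Weighted entropy remaining after the split.
    remainder = sum(len(p) / n * entropy(p) for p in parts.values())
    gain = entropy(labels) - remainder
    split_info = entropy(values)  # entropy of the value distribution itself
    return gain / split_info if split_info > 0 else 0.0
```

Replacing `gain_ratio` with a different measure (Gini gain, chi-square, distance-based criteria, and so on) while keeping the rest of the induction loop fixed is exactly how the 21 C4.5 variants in the study are produced.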


Sensors
2021, Vol 21 (8), pp. 2849
Author(s):
Sungbum Jun

Due to recent advances in the industrial Internet of Things (IoT) in manufacturing, the vast amount of data from sensors has triggered the need to leverage such big data for fault detection. In particular, interpretable machine learning techniques, such as tree-based algorithms, have drawn attention as a way to implement reliable manufacturing systems and identify the root causes of faults. However, despite their high interpretability, tree-based models trade accuracy against interpretability. To improve a tree's performance while maintaining its interpretability, an evolutionary algorithm for the discretization of multiple attributes, called Decision tree Improved by Multiple sPLits with Evolutionary algorithm for Discretization (DIMPLED), is proposed. Experimental results on two real-world sensor datasets showed that the decision tree improved by DIMPLED outperformed the single-decision-tree models widely used in practice (C4.5 and CART) and was competitive with ensemble methods that combine multiple decision trees. Even though the ensemble methods produced slightly better performance, DIMPLED has a more interpretable structure while maintaining an appropriate performance level.
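DIMPLED searches for multi-attribute cut points with an evolutionary algorithm; the abstract does not spell out the encoding, so as an illustration of the discretization step only (not the evolutionary search itself), here is an equal-frequency binning baseline that such a search could be seeded from. All names are hypothetical:

```python
def equal_frequency_cuts(values, n_bins):
    """Cut points that split one continuous attribute into n_bins
    bins of (roughly) equal sample counts."""
    s = sorted(values)
    # One cut at the start of each bin boundary after the first bin.
    return [s[k * len(s) // n_bins] for k in range(1, n_bins)]

def discretize(x, cuts):
    """Map a continuous value to its bin index given ascending cut points."""
    return sum(x >= c for c in cuts)
```

An evolutionary discretizer would treat the list of cut points (across all attributes) as the genome and mutate or recombine it, scoring each candidate by the accuracy of the decision tree induced on the discretized data.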


Author(s):
Hamed Fazlollahtabar, Seyed Taghi Akhavan Niaki
