Using Decision Tree Induction Systems for Modeling Space-Time Behavior

2010 ◽  
Vol 32 (4) ◽  
pp. 330-350 ◽  
Author(s):  
T. A. Arentze ◽  
F. Hofman ◽  
H. Mourik ◽  
H.J.P. Timmermans ◽  
G. Wets

2021 ◽
Vol 13 (8) ◽  
pp. 4105
Author(s):  
Yupei Jiang ◽  
Honghu Sun

Leisure walking has been an important topic in space-time behavior and public health research. However, prior studies have paid little attention to integrating and characterizing the diverse, multilevel demands of leisure walking. This study constructs a theoretical framework of leisure walking demands along three dimensions and levels: activity participation, space-time opportunity, and health benefit. On this basis, drawing on a face-to-face survey in Nanjing, China (N = 1168, 2017–2018 data), the study quantitatively analyzes the characteristics of leisure walking demands and the impact of the built environment and individual factors on them. The results show that residents have a high demand for participation in leisure walking and for its health benefits. The residential neighborhood provides ample space opportunities for leisure walking but constrains the choice of walking time to some extent. Residential neighborhoods with medium or large parks are more likely to satisfy residents’ demands for engaging in leisure walking and obtaining high health benefits, whereas neighborhoods with a high density of walking paths tend to limit the satisfaction of demands for space opportunity and health benefit. For residents aged 36 and above, married, or retired, diverse demands for leisure walking are more likely to be fulfilled, while for those with high education, medium-to-high individual income, general-or-better health status, or children (<18 years) they are less likely to be fulfilled. These findings have important implications for planning healthy neighborhoods that fully consider the diverse, multilevel demands of leisure walking behavior.


Author(s):  
Ferdinand Bollwein ◽  
Stephan Westphal

Univariate decision tree induction methods for multiclass classification problems, such as CART, C4.5 and ID3, remain very popular in machine learning because of their major benefit of being easy to interpret. However, as these trees consider only a single attribute per node, they often grow quite large, which lowers their explanatory value. Oblique decision tree building algorithms, which divide the feature space by multidimensional hyperplanes, often produce much smaller trees, but the individual splits are hard to interpret. Moreover, the effort of finding optimal oblique splits is so high that heuristics have to be applied to determine locally optimal solutions. In this work, we introduce an effective branch and bound procedure to determine globally optimal bivariate oblique splits for concave impurity measures. Decision trees based on these bivariate oblique splits remain fairly interpretable due to the restriction to two attributes per split. The resulting trees are significantly smaller and more accurate than their univariate counterparts because they adapt better to the underlying data and capture interactions between attribute pairs. Moreover, our evaluation shows that our algorithm even outperforms algorithms based on heuristically obtained multivariate oblique splits, despite being restricted to two attributes per split.
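As a rough illustration of the idea (not the authors' branch and bound procedure), the Python sketch below scores bivariate oblique splits of the form a·x_i + b·x_j ≤ c with the Gini index, a concave impurity measure. The function names and the naive brute-force angle search are assumptions made for the example only.

```python
import numpy as np

def gini(labels):
    """Gini impurity, a concave impurity measure over class proportions."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - np.sum(p ** 2)

def split_impurity(y_left, y_right):
    """Weighted impurity of a binary split."""
    n = len(y_left) + len(y_right)
    return (len(y_left) / n) * gini(y_left) + (len(y_right) / n) * gini(y_right)

def best_bivariate_oblique_split(X, y, i, j, n_angles=180):
    """Brute-force search (illustrative only, not the paper's branch and bound)
    over oblique splits a*x_i + b*x_j <= c restricted to attributes i and j."""
    best_impurity, best_split = np.inf, None
    for theta in np.linspace(0.0, np.pi, n_angles, endpoint=False):
        a, b = np.cos(theta), np.sin(theta)
        proj = a * X[:, i] + b * X[:, j]
        vals = np.unique(proj)
        # Candidate thresholds: midpoints between consecutive projected values.
        for c in (vals[:-1] + vals[1:]) / 2.0:
            mask = proj <= c
            imp = split_impurity(y[mask], y[~mask])
            if imp < best_impurity:
                best_impurity, best_split = imp, (a, b, c)
    return best_impurity, best_split
```

The brute-force scan only stands in for the exact search: a branch and bound procedure would instead prune regions of the (a, b, c) space whose impurity lower bound already exceeds that of the best split found so far.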


2021 ◽  
Vol 54 (1) ◽  
pp. 1-38
Author(s):  
Víctor Adrián Sosa Hernández ◽  
Raúl Monroy ◽  
Miguel Angel Medina-Pérez ◽  
Octavio Loyola-González ◽  
Francisco Herrera

Experts from different domains have resorted to machine learning techniques to produce explainable models that support decision-making. Among existing techniques, decision trees have been useful for classification in many application domains, because they can express decisions in a language close to that of the experts. Many researchers have attempted to create better decision tree models by improving the components of the induction algorithm. One of the main components that has been studied and improved is the evaluation measure for candidate splits. In this article, we first present a tutorial that explains decision tree induction. Then, we present an experimental framework to assess the performance of 21 evaluation measures that produce different C4.5 variants, considering 110 databases, two performance measures, and 10×10-fold cross-validation. Furthermore, we compare and rank the evaluation measures by means of a Bayesian statistical analysis. From our experimental results, we present the first two performance rankings of C4.5 variants in the literature. Moreover, we organize the evaluation measures into two groups according to their performance. Finally, we introduce meta-models that automatically determine which group of evaluation measures to use when producing a C4.5 variant for a new database, and we point out further opportunities for decision tree models.
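To make "evaluation measure for candidate splits" concrete, the sketch below computes C4.5's default measure, the information gain ratio; swapping this one function for another measure (Gini gain, chi-square, and so on) is, in spirit, how the 21 variants assessed in the article differ. The function names are illustrative assumptions, not part of the article's framework.

```python
import numpy as np

def entropy(labels):
    """Shannon entropy of class labels (the basis of C4.5's default measure)."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def gain_ratio(y, partition):
    """Information gain ratio of a candidate split.

    `partition` is a list of index arrays, one per branch of the split.
    Replacing this measure at each node is what produces a C4.5 variant.
    """
    n = len(y)
    info_gain = entropy(y) - sum(len(idx) / n * entropy(y[idx]) for idx in partition)
    split_info = -sum((len(idx) / n) * np.log2(len(idx) / n)
                      for idx in partition if len(idx) > 0)
    return info_gain / split_info if split_info > 0 else 0.0
```

In a C4.5-style induction loop, the candidate split maximizing this measure is chosen at each node; the article's variants arise from plugging different measures into exactly this step.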


Author(s):  
Rodrigo C. Barros ◽  
Ricardo Cerri ◽  
Pablo A. Jaskowiak ◽  
Andre C. P. L. F. de Carvalho

2017 ◽  
Vol 34 (2) ◽  
pp. 495-514 ◽  
Author(s):  
Mehrin Saremi ◽  
Farzin Yaghmaee
