Comparison of Twelve Machine Learning Regression Methods for Spatial Decomposition of Demographic Data Using Multisource Geospatial Data: An Experiment in Guangzhou City, China

2021, Vol 11 (20), pp. 9424
Author(s):
Guanwei Zhao, Zhitao Li, Muzhuang Yang

The spatial decomposition of demographic data at fine resolution is a classic and crucial problem in the field of geographical information science. The main objective of this study was to compare twelve well-known machine learning regression algorithms for the spatial decomposition of demographic data with multisource geospatial data. Grid search and cross-validation were used to ensure that optimal model parameters were obtained. The results showed that all the global regression algorithms used in the study produced acceptable results, except for the ordinary least squares (OLS) algorithm. In addition, both regularization and subsetting were useful for alleviating overfitting in the OLS model, and the former was better than the latter. The more competitive performance of the nonlinear regression algorithms compared with the linear ones implies that the relationship between population density and its influencing factors is likely nonlinear. Among the global regression algorithms used in the study, the best results were achieved by the k-nearest neighbors (KNN) regression algorithm. It was also found that multisource geospatial data can significantly improve the accuracy of spatial decomposition results, so the proposed method can be applied to spatial decomposition studies in other areas.
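The tuning step described above can be sketched as follows. This is a minimal illustration, assuming scikit-learn and synthetic stand-in covariates, not the study's actual data or parameter grids:

```python
# Illustrative sketch: grid search with 5-fold cross-validation for a KNN
# regressor, standing in for the paper's population-density regression.
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(0)
X = rng.uniform(size=(300, 2))          # stand-ins for geospatial covariates
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2 + 0.05 * rng.normal(size=300)  # nonlinear target

search = GridSearchCV(
    KNeighborsRegressor(),
    param_grid={"n_neighbors": [2, 4, 8, 16], "weights": ["uniform", "distance"]},
    cv=5,                               # 5-fold cross-validation
    scoring="r2",
)
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```

The same search-and-validate loop applies to each of the twelve regressors; only the estimator and parameter grid change.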

2021, Vol 10 (2), pp. 66
Author(s):
Jiawei Zhu, Chao Tao, Xin Lin, Jian Peng, Haozhe Huang, ...

Analyzing the urban spatial structure of a city is a core topic in urban geographical information science, with applications in urban planning, site selection, and location recommendation. Among previous studies, comprehending the functionality of places, that is, understanding how people use them, is a central topic. With the help of big geospatial data, which contain abundant information about human mobility and activity, we propose a novel multiple-subspaces-based model to interpret urban functional regions. The model is based on the assumption that the temporal activity patterns of places lie in a high-dimensional space and can be represented by a union of low-dimensional subspaces. These subspaces are obtained by finding sparse representations using sparse subspace clustering (SSC). The paper details how to use this method to detect functional regions. With these subspaces, we can detect the functionality of urban regions in a designated study area and further explore the characteristics of functional regions. We conducted experiments using real data from Shanghai. The experimental results, and the outperformance of our model relative to the single-subspace-based method, demonstrate its efficacy and feasibility.
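The SSC idea can be made concrete with a minimal sketch: each point is expressed as a sparse combination of the other points, and the resulting sparse codes define an affinity graph that spectral clustering partitions. The Lasso-based solver and the synthetic two-subspace data below are illustrative assumptions, not the paper's implementation:

```python
# Minimal SSC sketch on two noiseless 1-D subspaces (lines) in R^3.
import numpy as np
from sklearn.cluster import SpectralClustering
from sklearn.linear_model import Lasso

rng = np.random.default_rng(1)
d1 = np.array([1.0, 0.0, 0.0])
d2 = np.array([0.0, 1.0, 1.0]) / np.sqrt(2.0)
X = np.vstack([np.outer(rng.uniform(0.5, 2.0, 20), d1),    # subspace 1
               np.outer(rng.uniform(0.5, 2.0, 20), d2)])   # subspace 2

n = X.shape[0]
C = np.zeros((n, n))
for i in range(n):
    mask = np.arange(n) != i
    lasso = Lasso(alpha=0.01, fit_intercept=False, max_iter=10000)
    lasso.fit(X[mask].T, X[i])       # x_i ~ sum_j c_j x_j with sparse c
    C[i, mask] = lasso.coef_

A = np.abs(C) + np.abs(C).T          # symmetric affinity from sparse codes
labels = SpectralClustering(n_clusters=2, affinity="precomputed",
                            random_state=0).fit_predict(A)
print(labels)
```

Because points on one line cannot help represent points on the other, the sparse codes connect only points within the same subspace, and the clustering recovers the two subspaces.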


This article reviews how Geographical Information Systems (GIS) have been applied to spatial decision making, from simple to complex geospatial problems. GIS is usually defined as a computer system used to store, manage, analyze, manipulate, and visualize geospatial data, and it can produce meaningful information for understanding and solving geographic/spatial problems. With advances in hardware and software, GIS has progressed rapidly, even though it began with the simple and straightforward question of where geographic features and events are located. This rapid development has made GIS and spatial data a critical commodity today. However, without basic GIS knowledge and understanding, the actual capabilities of GIS, such as understanding geographical concepts, managing geographic phenomena, and solving geographical problems, remain limited. Worse still, GIS is often seen merely as a tool for map display and simple spatial analysis. Furthermore, professional training on the market emphasizes simple GIS components, such as hardware, software, geospatial data mapping, extracting geographical data from tables (tabular data), simple queries or display, and spatial data editing, mastered using GIS manuals. Thus, this article highlights the impact of implementing GIS without sufficient fundamental knowledge, which results in complicated spatial decision-planning issues.


2021, Vol 11 (1)
Author(s):
Tomoaki Mameno, Masahiro Wada, Kazunori Nozaki, Toshihito Takahashi, Yoshitaka Tsujioka, ...

The purpose of this retrospective cohort study was to create a model for predicting the onset of peri-implantitis using machine learning methods and to clarify interactions between risk indicators. The study evaluated 254 implants, 127 with and 127 without peri-implantitis, from among 1408 implants with at least 4 years in function. Demographic data and parameters known to be risk factors for the development of peri-implantitis were analyzed with three models: logistic regression, support vector machines, and random forests (RF). RF had the highest performance in predicting the onset of peri-implantitis (AUC: 0.71, accuracy: 0.70, precision: 0.72, recall: 0.66, f1-score: 0.69). The factor with the most influence on prediction was implant functional time, followed by oral hygiene. In addition, PCR of more than 50% to 60%, smoking more than 3 cigarettes/day, KMW less than 2 mm, and the presence of fewer than two occlusal supports tended to be associated with an increased risk of peri-implantitis. Moreover, these risk indicators were not independent and had complex effects on each other. The results of this study suggest that peri-implantitis onset was predicted in 70% of cases by RF, which can model nonlinear data with complex interactions.
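An evaluation of this kind can be sketched as follows, assuming scikit-learn and synthetic stand-in risk indicators rather than the clinical dataset:

```python
# Hedged illustration: fit an RF classifier and report the same metrics as the
# study (AUC, accuracy, precision, recall, f1) on held-out data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import (accuracy_score, f1_score, precision_score,
                             recall_score, roc_auc_score)
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(254, 5))        # 5 hypothetical risk indicators
y = (1.2 * X[:, 0] + 0.8 * X[:, 1] - 0.6 * X[:, 2]
     + rng.normal(size=254) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
pred = rf.predict(X_te)
auc = roc_auc_score(y_te, rf.predict_proba(X_te)[:, 1])
acc = accuracy_score(y_te, pred)
print(f"AUC {auc:.2f}, acc {acc:.2f}, "
      f"precision {precision_score(y_te, pred):.2f}, "
      f"recall {recall_score(y_te, pred):.2f}, "
      f"f1 {f1_score(y_te, pred):.2f}")
```

The `feature_importances_` attribute of the fitted forest is what supports the study's ranking of risk indicators such as functional time and oral hygiene.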


2021, Vol 11 (15), pp. 6704
Author(s):
Jingyong Cai, Masashi Takemoto, Yuming Qiu, Hironori Nakajo

Despite being heavily used in the training of deep neural networks (DNNs), multipliers are resource-intensive and often in short supply. Previous work has shown the advantage of computing activation functions, such as the sigmoid, with shift-and-add operations, although such approaches fail to remove multiplications from training altogether. In this paper, we propose an innovative approach that converts all multiplications in the forward and backward passes of DNNs into shift-and-add operations. Because the model parameters and backpropagated errors of a large DNN model are typically clustered around zero, these values can be approximated by their sine values. Multiplications between the weights and error signals then become multiplications of their sine values, which can be replaced with simpler operations with the help of the product-to-sum formula. In addition, a rectified sine activation function is utilized to further convert layer inputs into sine values. In this way, the original multiplication-intensive operations can be computed through simple shift-and-add operations. This trigonometric approximation method provides an efficient training and inference alternative for devices with insufficient hardware multipliers. Experimental results demonstrate that the method achieves performance close to that of classical training algorithms. The approach we propose sheds new light on future hardware customization research for machine learning.
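The core identity can be checked numerically. For weights and errors clustered near zero, w·e ≈ sin(w)·sin(e), and the product-to-sum formula sin(w)·sin(e) = (cos(w − e) − cos(w + e)) / 2 turns the multiplication into adds, two cosine evaluations (in hardware, lookups or shift-and-add approximations), and a one-bit shift for the division by two. The NumPy check below is a sketch of the approximation only, not the paper's hardware pipeline:

```python
# Numerical check: multiplication of near-zero values via the
# product-to-sum identity, with no multiply between w and e.
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(scale=0.05, size=1000)   # weights clustered around zero
e = rng.normal(scale=0.05, size=1000)   # backpropagated errors

approx = (np.cos(w - e) - np.cos(w + e)) / 2   # = sin(w) * sin(e) exactly
max_err = np.max(np.abs(w * e - approx))       # error of sin-based approximation
print(max_err)
```

The identity itself is exact; the only approximation error comes from replacing w and e by sin(w) and sin(e), which is tiny when both are near zero.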


2021, Vol 10 (4), pp. 246
Author(s):
Vagan Terziyan, Anton Nikulin

Operating with ignorance is an important concern of geographical information science when the objective is to discover knowledge from imperfect spatial data. Data mining (driven by knowledge discovery tools) processes available (observed, known, and understood) data samples to build a model (e.g., a classifier) that can handle data samples not yet observed, known, or understood. These tools traditionally take semantically labeled samples of the available data (known facts) as input for learning. We challenge the indispensability of this approach and suggest considering things the other way around. What if the task were to build a model based on the semantics of our ignorance, i.e., by processing the shape of the “voids” within the available data space? Can we improve traditional classification by also modeling the ignorance? In this paper, we provide algorithms for discovering and visualizing ignorance zones in two-dimensional data spaces and design two ignorance-aware smart prototype selection techniques (incremental and adversarial) to improve the performance of nearest neighbor classifiers. We present experiments with artificial and real datasets to test the usefulness of ignorance semantics discovery.
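One crude way to make the notion of an ignorance zone concrete (this is a simplification for illustration, not the paper's algorithms): mark the regions of a 2-D data space whose nearest labeled sample is farther than some radius r, i.e., the "voids" where a classifier has no evidence at all.

```python
# Sketch: flag grid cells in the unit square whose nearest labeled sample
# is farther than r as "ignorance zones". The radius r = 0.15 is arbitrary.
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(size=(15, 2))           # sparse labeled 2-D samples

gx, gy = np.meshgrid(np.linspace(0, 1, 40), np.linspace(0, 1, 40))
grid = np.column_stack([gx.ravel(), gy.ravel()])
d = np.min(np.linalg.norm(grid[:, None, :] - X[None, :, :], axis=2), axis=1)
ignorance = d > 0.15                    # True where the data space is a void
print(f"{ignorance.mean():.0%} of the space is an ignorance zone")
```

Prototype selection could then target these zones, e.g., by placing or keeping prototypes near void boundaries, which is the spirit of the incremental and adversarial techniques the paper proposes.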


Author(s):
Hector Donaldo Mata, Mohammed Hadi, David Hale

Transportation agencies utilize key performance indicators (KPIs) to measure the performance of their traffic networks and business processes. To make effective decisions based on these KPIs, there is a need to align the KPIs at the strategic, tactical, and operational decision levels and to set targets for them. However, there has been no known effort to develop methods that ensure this alignment by producing a correlative model of the relationships among KPIs to support the derivation of KPI targets. Such a development would lead to more realistic target setting and more effective decisions based on those targets, ensuring that agency goals are met subject to the available resources. This paper presents a methodology in which the KPIs are represented in a tree-like structure that depicts the associations between metrics at the strategic, tactical, and operational levels. Utilizing a combination of business intelligence and machine learning tools, this paper demonstrates that it is possible not only to identify such relationships but also to quantify them. The proposed methodology compares the effectiveness and accuracy of multiple machine learning models, including ordinary least squares (OLS) regression, least absolute shrinkage and selection operator (LASSO) regression, and ridge regression, for identifying and quantifying interlevel relationships. The output of the model identifies which metrics have the most influence on the upper-level KPI targets. The analysis can be performed at the system, facility, and segment levels, providing important insights into what investments are needed to improve system performance.
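The model comparison can be sketched as follows, assuming scikit-learn and a synthetic upper-level KPI driven by 2 of 8 hypothetical lower-level metrics. LASSO's zeroed coefficients are what identify the metrics with no influence:

```python
# Hedged sketch: OLS, LASSO, and ridge fits to the same synthetic KPI data.
import numpy as np
from sklearn.linear_model import Lasso, LinearRegression, Ridge

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 8))                     # 8 lower-level metrics
y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + rng.normal(scale=0.5, size=120)

ols = LinearRegression().fit(X, y)                # unpenalized baseline
lasso = Lasso(alpha=0.1).fit(X, y)                # sparse: drops weak metrics
ridge = Ridge(alpha=1.0).fit(X, y)                # shrinks but keeps all metrics
print("LASSO kept metrics:", np.flatnonzero(lasso.coef_))
```

Comparing the three coefficient vectors mirrors the paper's use of the models: OLS quantifies the raw interlevel relationships, while the penalized fits rank which lower-level metrics actually move the upper-level target.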


Author(s):
Charalambos Kyriakou, Symeon E. Christodoulou, Loukas Dimitriou

The paper presents a data-driven framework and related field studies on the use of supervised machine learning and smartphone technology for the spatial condition-assessment mapping of roadway pavement surface anomalies. The study explores the use of data, collected by smartphone sensors and a vehicle’s onboard diagnostic device while the vehicle is in motion, for the detection of roadway anomalies. The research proposes a low-cost, automated method to obtain up-to-date information on roadway pavement surface anomalies using smartphone technology, artificial neural networks, robust regression analysis, and supervised machine learning algorithms for multiclass problems. The technology for the suggested system is readily available and accurate and can be utilized in pavement monitoring systems and geographical information system applications. Further, the proposed methodology has been field-tested, exhibiting accuracy levels higher than 90%, and it is currently being expanded to include larger datasets and a larger number of common roadway pavement surface defect types. The proposed system is of practical importance since it provides continuous information on roadway pavement surface conditions, which can be valuable to pavement engineers and public safety.
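The multiclass classification step can be sketched with a small neural network on hypothetical, already-normalized sensor features. The feature names, class labels, and data are invented for illustration and are not the paper's field data:

```python
# Hedged sketch: multiclass defect classifier on synthetic smartphone-sensor
# features (e.g., vertical-acceleration variance, peak jerk, speed drop).
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
centers = np.array([[0.1, 0.1, 0.1],   # class 0: smooth pavement
                    [0.5, 0.6, 0.3],   # class 1: crack
                    [1.2, 1.5, 0.8]])  # class 2: pothole
X = np.vstack([c + rng.normal(scale=0.08, size=(60, 3)) for c in centers])
y = np.repeat([0, 1, 2], 60)

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=3000, random_state=0)
clf.fit(X, y)
print(f"training accuracy: {clf.score(X, y):.2f}")
```

In a deployed system, each classified segment would then be geotagged with the smartphone's GPS fix to build the spatial condition map the paper describes.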

