Examining the vintage effect in hedonic pricing using spatially varying coefficients models: a case study of single-family houses in the Canton of Zurich

2022 ◽  
Vol 158 (1) ◽  
Author(s):  
Jakob A. Dambon ◽  
Stefan S. Fahrländer ◽  
Saira Karlen ◽  
Manuel Lehner ◽  
Jaron Schlesinger ◽  
...  

Abstract. This article examines the spatially varying effect of age on single-family house (SFH) prices. Age has been shown to be a key driver of house depreciation and is usually associated with a negative price effect. In practice, however, there are deviations from this behavior, which are referred to as vintage effects. We estimate a spatially varying coefficients (SVC) model to investigate the spatial structure of vintage effects on SFH pricing. For SFHs in the Canton of Zurich, Switzerland, we find substantial spatial variation in the age effect. In particular, we find a strong, localized vintage effect primarily in urban areas, compared to purely depreciative age effects in rural locations. Using cross-validation, we assess the potential improvement in predictive performance from incorporating spatially varying vintage effects in hedonic models. We find a substantial improvement in the out-of-sample predictive performance of SVC models over classical spatial hedonic models.
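The SVC idea can be sketched with a kernel-weighted local regression (a geographically weighted regression, one simple member of the spatially varying coefficient family). All data, variable names and magnitudes below are synthetic illustrations, not the paper's model or data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic single-family-house data: price depends on living area and age,
# with the age coefficient varying smoothly over a 1-D "location" axis so
# that a vintage effect (weaker depreciation) appears near the centre.
n = 500
loc = rng.uniform(0, 10, n)                        # house locations
area = rng.uniform(80, 250, n)                     # living area (m^2)
age = rng.uniform(0, 100, n)                       # building age (years)
beta_age = -2.0 + 1.5 * np.exp(-(loc - 5) ** 2)    # true local age effect
price = 2000 * area + beta_age * 1000 * age + rng.normal(0, 5000, n)

def local_age_coefficient(s, bandwidth=1.0):
    """Kernel-weighted least squares centred at location s (GWR-style)."""
    w = np.exp(-0.5 * ((loc - s) / bandwidth) ** 2)
    X = np.column_stack([np.ones(n), area, age])
    Xw = X * w[:, None]
    beta = np.linalg.solve(X.T @ Xw, Xw.T @ price)
    return beta[2] / 1000                          # back to the beta_age scale

# The estimated age effect is much less negative (a vintage effect) at the
# "urban" centre than at the rural edge of the study area.
print(local_age_coefficient(5.0), local_age_coefficient(0.5))
```

A full SVC model would instead place a spatial process prior on the coefficient surface and estimate it jointly, but the local-regression sketch shows why the age effect can change sign of interpretation across space.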

Author(s):  
E. M. Schliep ◽  
A. E. Gelfand ◽  
D. M. Holland

Abstract. There is considerable demand for accurate air quality information in human health analyses. The sparsity of ground monitoring stations across the United States motivates the need for advanced statistical models to predict air quality metrics, such as PM2.5, at unobserved sites. Remote sensing technologies have the potential to expand our knowledge of PM2.5 spatial patterns beyond what we can predict from current PM2.5 monitoring networks. Data from satellites have an additional advantage in not requiring extensive emission inventories necessary for most atmospheric models that have been used in earlier data fusion models for air pollution. Statistical models combining monitoring station data with satellite-obtained aerosol optical thickness (AOT), also referred to as aerosol optical depth (AOD), have been proposed in the literature with varying levels of success in predicting PM2.5. The benefit of using AOT is that satellites provide complete gridded spatial coverage. However, the challenges involved in using it in fusion models are that (1) the correlation between the two data sources varies both in time and in space, (2) the data sources are temporally and spatially misaligned, and (3) there is extensive missingness in the monitoring data and also in the satellite data due to cloud cover. We propose a hierarchical autoregressive spatially varying coefficients model to jointly model the two data sources, which addresses the foregoing challenges. Additionally, we offer formal model comparison for competing models in terms of model fit and out-of-sample prediction of PM2.5. The models are applied to daily observations of PM2.5 and AOT in the summer months of 2013 across the conterminous United States. Most notably, during this time period, we find small in-sample improvement incorporating AOT into our autoregressive model but little out-of-sample predictive improvement.
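The first challenge, correlation that varies in time, can be illustrated with a toy calibration in which the PM2.5–AOT slope follows an AR(1) process. The numbers below are invented for illustration, and day-by-day OLS fits stand in for the full hierarchical model:

```python
import numpy as np

rng = np.random.default_rng(1)

# On each day t, ground-level PM2.5 at the monitoring sites is linearly
# related to satellite AOT, but the calibration slope drifts over time
# following a stationary AR(1) process.
T, n_sites = 60, 40
phi = 0.9
slope = np.empty(T)
slope[0] = 1.0
for t in range(1, T):
    slope[t] = 0.5 + phi * (slope[t - 1] - 0.5) + rng.normal(0, 0.05)

aot = rng.gamma(2.0, 0.3, (T, n_sites))                   # satellite AOT
pm25 = 5.0 + 20 * slope[:, None] * aot + rng.normal(0, 1.0, (T, n_sites))

# Naive day-by-day OLS recovers the drifting slope; the hierarchical model
# in the abstract instead pools the days through the AR(1) structure,
# borrowing strength across time (and, spatially, across sites).
est = np.array([np.polyfit(aot[t], pm25[t], 1)[0] / 20 for t in range(T)])
print(np.corrcoef(est, slope)[0, 1])
```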


2019 ◽  
Vol 17 (4) ◽  
pp. 401-416 ◽  
Author(s):  
Ana Stanojevic ◽  
Aleksandar Kekovic

The preservation of buildings through the conversion of their function has become a domain of interest in the field of industrial heritage. Due to the need to expand existing housing capacities in urban areas, a large number of industrial buildings are nowadays converted into multi-family and single-family housing. The paper analyzes the functional and aesthetic internal transformation of industrial spaces into housing. The research goal is to determine the principles of conceptualizing a housing functional plan within the framework of the original physical structure of an industrial building, at both the level of the architectonic composition and the level of the housing unit (dwelling). In addition, the paper examines whether common patterns exist in the aesthetic transformation of converted spaces, considered across three epochs of the development of industrial architecture: the second half of the 19th century, the first half of the 20th century, and the post-WWII period.


2021 ◽  
Vol 13 (4) ◽  
pp. 1883
Author(s):  
Agnieszka Telega ◽  
Ivan Telega ◽  
Agnieszka Bieda

Cities occupy only about 3% of the Earth’s surface area, but half of the global population lives in them. The high population density in urban areas requires special actions to make these areas develop sustainably. One of the greatest challenges of the modern world is to organize urban spaces in a way that makes them attractive, safe and friendly to the people living in cities. This can be managed with the help of a number of indicators, one of which is walkability. The most complete analyses are based on spatial data, and the easiest way to implement them is using GIS tools. Therefore, the goal of the paper is to present a new approach for measuring walkability, based on density maps of specific urban functions and networks of generally accessible pavements and paths. The method is implemented using open-source data. Density values are interpolated from point data (urban objects featuring specific functions) and polygons (pedestrian infrastructure) using the Kernel Density and Line Density tools in GIS. The obtained values allow the calculation of a synthetic indicator taking into account access, by means of pedestrian infrastructure, to public transport stops, parks and recreation areas, various attractions, shops and services. The proposed method was applied to calculate walkability for Kraków (the second largest city in Poland). The greatest walkability value was obtained for the Main Square (the central part of the Old Town). The least accessible to pedestrians, on the other hand, are areas located on the outskirts of the city, which are occupied by extensive industrial zones, single-family housing or large green areas.
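A minimal sketch of the density-map idea: Gaussian kernel densities of point layers evaluated on a grid and combined into one synthetic indicator. The layers, equal weights and bandwidth are illustrative assumptions, not the paper's Kraków data:

```python
import numpy as np

rng = np.random.default_rng(2)

# A 50 x 50 grid of evaluation points over a 10 x 10 study area.
grid = np.stack(np.meshgrid(np.linspace(0, 10, 50), np.linspace(0, 10, 50)),
                axis=-1).reshape(-1, 2)

def kernel_density(points, bandwidth=1.0):
    """Gaussian kernel density of a point layer at every grid cell."""
    d2 = ((grid[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / bandwidth ** 2).sum(1)

shops = rng.normal([5, 5], 1.0, (60, 2))       # clustered near the centre
stops = rng.uniform(0, 10, (30, 2))            # transit stops, spread out
parks = rng.normal([2, 8], 0.8, (20, 2))       # a park cluster

# Synthetic indicator: normalise each layer to [0, 1] and average, so each
# urban function contributes equally (the weighting is a modelling choice).
layers = [kernel_density(p) for p in (shops, stops, parks)]
walkability = np.mean([l / l.max() for l in layers], axis=0)

centre = walkability[np.argmin(((grid - [5, 5]) ** 2).sum(1))]
edge = walkability[np.argmin(((grid - [9.5, 0.5]) ** 2).sum(1))]
print(centre, edge)                            # centre scores higher
```

The Line Density step for pedestrian infrastructure would add a fourth layer computed from polyline geometry rather than points, but the aggregation into a single score works the same way.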


2019 ◽  
Vol 60 ◽  
pp. 102235 ◽  
Author(s):  
Mark Janko ◽  
Varun Goel ◽  
Michael Emch

REGION ◽  
2020 ◽  
Vol 7 (1) ◽  
pp. 1-19
Author(s):  
Mauricio Sarrias

This study focuses on models with spatially varying coefficients, using simulations. As shown by Sarrias (2019), this modeling strategy is intended to complement existing approaches by using variables at the micro level and by adding flexibility and realism to the potential domain of the coefficients over geographical space. Spatial heterogeneity is modelled by allowing the parameter associated with each observed variable to vary “randomly” across space according to some distribution. To show the main advantages of this modeling strategy, the Rchoice package in R is used.
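A toy simulation of the random-coefficients idea (all numbers synthetic; the simulated-maximum-likelihood fit that Rchoice performs is replaced here by simple unit-by-unit OLS for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)

# Each of 200 spatial units has its own slope beta_i drawn "randomly" from
# a normal distribution; we observe 30 outcomes per unit and try to recover
# the distribution of the slopes from the data.
mu, sigma = 2.0, 0.5
n_units, obs = 200, 30
beta = rng.normal(mu, sigma, n_units)
x = rng.normal(0, 1, (n_units, obs))
y = beta[:, None] * x + rng.normal(0, 0.3, (n_units, obs))

# Unit-by-unit OLS slopes approximate the latent beta_i; their mean and
# spread estimate mu and sigma. Rchoice would instead estimate the
# distribution jointly via simulated maximum likelihood.
b_hat = (x * y).sum(axis=1) / (x ** 2).sum(axis=1)
print(b_hat.mean(), b_hat.std())
```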


2017 ◽  
Author(s):  
Vladimir Gligorijević ◽  
Meet Barot ◽  
Richard Bonneau

Abstract. The prevalence of high-throughput experimental methods has resulted in an abundance of large-scale molecular and functional interaction networks. The connectivity of these networks provides a rich source of information for inferring functional annotations for genes and proteins. An important challenge has been to develop methods for combining these heterogeneous networks to extract useful protein feature representations for function prediction. Most of the existing approaches for network integration use shallow models that cannot capture complex and highly nonlinear network structures. Thus, we propose deepNF, a network fusion method based on multimodal deep autoencoders that extracts high-level features of proteins from multiple heterogeneous interaction networks. We apply this method to combine STRING networks and construct a common low-dimensional representation containing high-level protein features. We use separate layers for different network types in the early stages of the multimodal autoencoder, later connecting all the layers into a single bottleneck layer from which we extract features to predict protein function. We compare the cross-validation and temporal holdout predictive performance of our method with state-of-the-art methods, including the recently proposed method Mashup. Our results show that our method outperforms previous methods on both human and yeast STRING networks. We also show substantial improvement in the performance of our method in predicting GO terms of varying type and specificity.
Availability: deepNF is freely available at: https://github.com/VGligorijevic/deepNF
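The fuse-then-compress idea behind deepNF can be sketched on toy networks. Note the substitution: deepNF uses nonlinear multimodal deep autoencoders, while this sketch uses a linear SVD bottleneck, i.e. exactly the kind of shallow model the abstract argues against, kept here only because it shows the shared-bottleneck idea in a few lines:

```python
import numpy as np

rng = np.random.default_rng(4)

# Two toy "interaction networks" over the same 100 proteins that share a
# latent two-community structure plus independent network-specific noise.
n = 100
z = rng.integers(0, 2, n)                    # hidden community labels

def noisy_network():
    same = z[:, None] == z[None, :]
    a = (rng.random((n, n)) < np.where(same, 0.65, 0.15)).astype(float)
    a = np.triu(a, 1)
    return a + a.T                           # symmetric adjacency matrix

nets = [noisy_network(), noisy_network()]

# Linear stand-in for the multimodal autoencoder bottleneck: concatenate
# each protein's rows from both networks and compress to two dimensions
# with a truncated SVD. (deepNF itself uses separate nonlinear encoder
# layers per network feeding a shared bottleneck.)
fused = np.hstack(nets)
u, s, vt = np.linalg.svd(fused - fused.mean(0), full_matrices=False)
features = u[:, :2] * s[:2]                  # low-dimensional protein features

# The hidden communities are recoverable from the fused low-dim features.
pred = (features[:, 0] > 0).astype(int)
acc = max((pred == z).mean(), (pred != z).mean())
print(acc)
```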


2020 ◽  
Author(s):  
Bryan Strange ◽  
Linda Zhang ◽  
Alba Sierra-Marcos ◽  
Eva Alfayate ◽  
Jussi Tohka ◽  
...  

Identifying measures that predict future cognitive impairment in healthy individuals is necessary to inform treatment strategies for candidate dementia-preventative and disease-modifying interventions. Here, we derive such measures by studying converters who transitioned from cognitively normal at baseline to mild cognitive impairment (MCI) in a longitudinal study of 1213 elderly participants. We first establish reduced grey matter density (GMD) in the left entorhinal cortex (EC) as a biomarker for impending cognitive decline in healthy individuals, employing matched sampling to control for several dementia risk factors, thereby mitigating the potential effects of bias on our statistical tests. Next, we determine the predictive performance of baseline demographic, genetic, neuropsychological and MRI measures by entering these variables into an elastic-net-regularized classifier. Our trained statistical model classified converters and controls with a validation area under the curve (AUC) > 0.9, identifying only delayed verbal memory and left EC GMD as relevant predictors for classification. This performance was maintained on test classification of out-of-sample converters and controls. Our results suggest a parsimonious but powerful predictive model for MCI development in the cognitively healthy elderly.
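The elastic-net-regularized classification step can be sketched on synthetic stand-in data with two informative predictors among noise. This is a from-scratch proximal-gradient logistic regression, not the authors' pipeline, and the feature names and penalty settings are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic converter-vs-control data: two informative predictors (stand-ins
# for delayed verbal memory and left EC grey-matter density) plus 18 noise
# features.
n, p = 300, 20
X = rng.normal(0, 1, (n, p))
w_true = np.zeros(p)
w_true[:2] = [1.5, -1.5]
y = (X @ w_true + rng.normal(0, 0.5, n) > 0).astype(float)

def elastic_net_logistic(X, y, lam=0.2, alpha=0.5, lr=0.1, iters=2000):
    """Logistic regression with an elastic-net penalty, fitted by proximal
    gradient descent; soft-thresholding handles the L1 part and drives
    irrelevant coefficients to exactly zero."""
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        p_hat = 1.0 / (1.0 + np.exp(-X @ w))
        grad = X.T @ (p_hat - y) / len(y) + lam * (1 - alpha) * w
        w = w - lr * grad                                 # gradient step
        w = np.sign(w) * np.maximum(np.abs(w) - lr * lam * alpha, 0.0)
    return w

w = elastic_net_logistic(X, y)
# The sparsity-inducing penalty keeps the two informative coefficients and
# zeroes out the noise features, mirroring the parsimonious model in the
# abstract where only two predictors survived regularization.
print(w[:2], (w[2:] == 0).mean())
```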


2020 ◽  
Vol 10 (1) ◽  
pp. 1-11
Author(s):  
Arvind Shrivastava ◽  
Nitin Kumar ◽  
Kuldeep Kumar ◽  
Sanjeev Gupta

The paper applies Random Forest, a popular machine learning classification algorithm, to predict bankruptcy (distress) for Indian firms. Random Forest orders firms according to their propensity to default, or their likelihood of becoming distressed. It is also useful for explaining the association between the tendency of firm failure and firm features. The results are analyzed vis-à-vis Tree Net, a cutting-edge data mining tool known to provide satisfactory estimation results. Both in-sample and out-of-sample estimations have been performed to compare Random Forest with Tree Net. An exhaustive data set comprising companies from varied sectors has been included in the analysis. It is found that the Tree Net procedure consistently provides better classification and predictive performance than the Random Forest methodology, which may be utilized further by industry analysts and researchers alike for predictive purposes.
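The Random Forest mechanics (bootstrap resampling plus random feature subsets with majority voting) can be sketched from scratch. The financial ratios and data below are invented, and full decision trees are replaced by one-split stumps for brevity:

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy distress data: two financial ratios drive default, six are noise.
n, p = 400, 8
X = rng.normal(0, 1, (n, p))
y = (X[:, 0] - X[:, 1] + rng.normal(0, 0.5, n) > 0).astype(int)

def fit_stump(Xb, yb, feats):
    """Best single-split rule over a random subset of features."""
    best_err, best = 1.1, None
    for j in feats:
        for t in np.quantile(Xb[:, j], [0.25, 0.5, 0.75]):
            pred = (Xb[:, j] > t).astype(int)
            for flip in (0, 1):              # try both split orientations
                err = ((pred ^ flip) != yb).mean()
                if err < best_err:
                    best_err, best = err, (j, t, flip)
    return best

def forest_predict(trees, X):
    """Majority vote over all stumps."""
    votes = np.mean([(X[:, j] > t).astype(int) ^ f for j, t, f in trees],
                    axis=0)
    return (votes > 0.5).astype(int)

trees = []
for _ in range(50):
    idx = rng.integers(0, n, n)              # bootstrap sample of firms
    feats = rng.choice(p, 3, replace=False)  # random feature subset
    trees.append(fit_stump(X[idx], y[idx], feats))

acc = (forest_predict(trees, X) == y).mean() # in-sample accuracy
print(acc)
```

Averaging the votes rather than thresholding them yields the propensity-to-default ordering the abstract mentions; a boosted-tree method such as Tree Net instead fits trees sequentially to the residuals of the current ensemble.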

