Mitigating the zonal effect in modeling urban population density functions by Monte Carlo simulation

2018 ◽  
Vol 46 (6) ◽  
pp. 1061-1078 ◽  
Author(s):  
Fahui Wang ◽  
Cuiling Liu ◽  
Yaping Xu

Most empirical studies indicate that the pattern of declining urban population density with distance from the city center is best captured by a negative exponential function. Such studies usually use aggregated data in census area units, which are subject to several criticisms such as the modifiable areal unit problem, unfair sampling, and uncertainty in distance measurement. To mitigate these concerns, this paper uses Monte Carlo simulation to generate individual residents consistent with known patterns of population distribution. By doing so, we are able to aggregate population back to various uniform area units and examine the scale and zonal effects explicitly. The case study in the Chicago area indicates that the best-fitting density function remains exponential for data in census tracts or block groups; however, the logarithmic function becomes a better fit when uniform area units such as squares, triangles, or hexagons are used. The study also suggests that the scale effect remains to some extent in all area units, whereas the zonal effect is largely mitigated by uniform area units of regular shape.
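
A minimal sketch of the simulation idea described above, with purely hypothetical parameters (density gradient b, resident count n, and square cells only; the paper also uses triangles and hexagons): residents are drawn from a negative exponential density surface, aggregated back to uniform cells, and the gradient is re-estimated from the aggregated data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameters (not from the paper): density gradient b and
# number of simulated residents n.
b, n = 0.15, 200_000            # b in 1/km

# If areal density follows D(r) = D0*exp(-b*r), the radial distance of a
# random resident has pdf proportional to r*exp(-b*r), i.e. Gamma(shape=2, scale=1/b).
r = rng.gamma(shape=2.0, scale=1.0 / b, size=n)
theta = rng.uniform(0.0, 2.0 * np.pi, size=n)
x, y = r * np.cos(theta), r * np.sin(theta)

# Aggregate the simulated residents back to uniform square cells.
cell = 2.0                                   # cell width in km
ix, iy = np.floor(x / cell), np.floor(y / cell)
cells, counts = np.unique(np.stack([ix, iy]), axis=1, return_counts=True)
cx = (cells[0] + 0.5) * cell
cy = (cells[1] + 0.5) * cell
dist = np.hypot(cx, cy)                      # distance from cell centroid to the center
density = counts / cell**2                   # residents per km^2

# Re-estimate the gradient from the log-linear form ln D = ln D0 - b*r.
slope, intercept = np.polyfit(dist, np.log(density), 1)
print("estimated gradient:", -slope, " (true value:", b, ")")
```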

Urban Studies ◽  
2019 ◽  
Vol 57 (10) ◽  
pp. 2147-2162 ◽  
Author(s):  
Yi Qiang ◽  
Jinwen Xu ◽  
Guohui Zhang

The declining pattern of population density from city centres to the outskirts has been widely observed in American cities. Such a pattern reflects a trade-off between housing price/commuting cost and employment. However, most previous studies of urban population density functions are based on Euclidean distance and do not consider commuting cost within cities. This study provides an empirical evaluation of the classic population density functions in 382 metropolitan statistical areas (MSAs) in the USA, using travel time to the city centre as the independent variable. The major findings of the study are: (1) the negative exponential function has the overall best fit for population density in the MSAs; (2) the Gaussian and exponential functions tend to fit larger MSAs better, while the power function performs better for small MSAs; (3) most of the MSAs appear to show a decentralisation trend during 1990–2016, and larger MSAs tend to have a higher rate of decentralisation. This study leverages crowdsourced geospatial data to provide empirical evidence for the classic urban economic models. The findings will increase our understanding of urban morphology, population–job displacement and urban decentralisation. They also provide baseline information for monitoring and predicting changes in urban population distribution that could be driven by future environmental and technological changes.
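
A hedged illustration of the model comparison, using synthetic tract-level observations rather than the study's MSA data; the functional forms are the classic ones, but all parameter values and the noise model are assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

# Candidate density functions with travel time t (minutes) as the independent
# variable; parameter names are ours.
def exponential(t, d0, b):
    return d0 * np.exp(-b * t)

def gaussian(t, d0, b):
    return d0 * np.exp(-b * t**2)

def power(t, d0, b):
    return d0 * np.power(t, -b)

# Hypothetical observations: travel time to the city centre and density.
rng = np.random.default_rng(1)
t_obs = np.linspace(5, 90, 60)
d_obs = 8000 * np.exp(-0.04 * t_obs) * rng.lognormal(0, 0.15, t_obs.size)

# Fit each candidate and compare goodness of fit.
for name, f in [("exponential", exponential), ("gaussian", gaussian), ("power", power)]:
    params, _ = curve_fit(f, t_obs, d_obs, p0=(d_obs.max(), 0.01), maxfev=10_000)
    rss = np.sum((d_obs - f(t_obs, *params)) ** 2)
    r2 = 1 - rss / np.sum((d_obs - d_obs.mean()) ** 2)
    print(f"{name:12s} R^2 = {r2:.3f}")
```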


2020 ◽  
Vol 9 (6) ◽  
pp. 339
Author(s):  
Zengli Wang ◽  
Hong Zhang

Empirical studies have focused on investigating the interactive relationships between crime pairs. However, many other types of crime patterns have not been extensively investigated. In this paper, we introduce three basic crime patterns in four combinations. Based on graph theory, the subgraphs for each pattern were constructed and analyzed using criminology theories. A Monte Carlo simulation was conducted to examine the significance of these patterns. The crime patterns were statistically significant and generated different levels of crime risk. Compared to the classical patterns, the combined patterns create much higher risk levels. Among these patterns, “co-occurrence, repeat, and shift” generated the highest level of crime risk, while “repeat” generated much lower levels of crime risk. “Co-occurrence and shift” and “repeat and shift” showed fluctuating risk levels, while the others showed a continuous decrease. These results underline the importance of the proposed crime patterns and call for differentiated crime prevention strategies. The method can be extended to other research areas that use point events as research objects.
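
A generic sketch of a Monte Carlo significance test for a space-time point pattern, assuming a simple near-repeat count as the test statistic; this illustrates the testing logic only, not the paper's subgraph-based measures, and all thresholds and data are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical crime records: coordinates (km) and day of occurrence.
n = 400
xy = rng.uniform(0, 10, size=(n, 2))
day = rng.integers(0, 365, size=n)

def near_repeat_count(xy, day, d_max=0.2, t_max=14):
    """Count event pairs that are close in both space and time
    (a simple 'repeat'-style statistic, not the paper's subgraph measure)."""
    dist = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    dt = np.abs(day[:, None] - day[None, :])
    close = (dist < d_max) & (dt < t_max)
    return np.triu(close, k=1).sum()

observed = near_repeat_count(xy, day)

# Monte Carlo: break the space-time link by shuffling the event dates.
sims = np.array([near_repeat_count(xy, rng.permutation(day)) for _ in range(999)])
p_value = (1 + (sims >= observed).sum()) / (1 + sims.size)
print(f"observed = {observed}, simulated mean = {sims.mean():.1f}, p = {p_value:.3f}")
```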


1979 ◽  
Vol 11 (6) ◽  
pp. 629-641 ◽  
Author(s):  
K Zielinski

Seven models of the quadratic gamma type (negative-exponential, normal, inverse-power, quadratic negative-exponential, gamma, normal gamma, and quadratic gamma distributions) and the equilibrium models of Amson are tested by use of data from Bristol, Coventry, Derby, Leicester, Nottingham, Leeds, and Bradford. The first five of these cities are tested at two levels: by use of all radial distances and by use of only those less than four kilometres. The object of these tests was to detect differences in goodness of fit at the city centre and overall. The last two cities were used to test a model proposed to describe intercity population distributions.
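
For reference, the seven models can be read as special cases of the quadratic gamma form. The parameterisations below are the standard ones from the urban-density literature; the paper's exact notation may differ.

```latex
% Standard forms (assumed, not quoted from the paper); all are special
% cases of the quadratic gamma density.
\begin{align*}
\rho(r) &= A\,r^{\alpha}\exp(-br - cr^{2}) && \text{quadratic gamma}\\
\rho(r) &= A\exp(-br)                      && \text{negative exponential } (\alpha = c = 0)\\
\rho(r) &= A\exp(-cr^{2})                  && \text{normal } (\alpha = b = 0)\\
\rho(r) &= A\,r^{\alpha}                   && \text{inverse power } (\alpha < 0,\ b = c = 0)\\
\rho(r) &= A\exp(-br - cr^{2})             && \text{quadratic negative exponential } (\alpha = 0)\\
\rho(r) &= A\,r^{\alpha}\exp(-br)          && \text{gamma } (c = 0)\\
\rho(r) &= A\,r^{\alpha}\exp(-cr^{2})      && \text{normal gamma } (b = 0)
\end{align*}
```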


2008 ◽  
Vol 2008 ◽  
pp. 1-22 ◽  
Author(s):  
Yanguang Chen

The method of spectral analysis is employed to study the spatial dynamics of urban population distribution. First, the negative exponential model is derived in a new way, using an entropy-maximizing idea. Then an approximate scaling relation between wave number and spectral density is derived by taking the Fourier transform of the negative exponential model. The theoretical results suggest the locality of urban population activities, so the principle of entropy maximization can be used to interpret the locality and localization of urban morphology. The wave-spectrum model is applied to a real-world city, Hangzhou, China, and the spectral exponents give the dimension values of the fractal lines of urban population profiles. The changing trend of the fractal dimension reflects the localization of urban population growth and diffusion. This research on the spatial dynamics of urban evolution is significant for modeling the spatial complexity and simulating the spatial complication of city systems by cellular automata.
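
A rough numerical sketch of the wave-spectrum idea: build a noisy negative exponential density profile, compute its periodogram, and estimate the spectral exponent from the wave number-spectral density scaling. All parameters are assumptions, and the conversion of the exponent into a fractal dimension is done in the paper, not here.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical 1-D urban density profile: negative exponential trend plus noise.
n = 2048
r = np.linspace(0, 50, n)                     # distance from centre, km
rho = np.exp(-0.1 * r) * (1 + 0.2 * rng.standard_normal(n))

# Periodogram-style estimate of the spectral density.
f = np.fft.rfft(rho - rho.mean())
k = np.fft.rfftfreq(n, d=r[1] - r[0])         # wave number (cycles per km)
S = np.abs(f) ** 2 / n

# Estimate the spectral exponent beta from the scaling S(k) ~ k^(-beta).
mask = k > 0
beta = -np.polyfit(np.log(k[mask]), np.log(S[mask]), 1)[0]
print(f"estimated spectral exponent beta = {beta:.2f}")
```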


Sensors ◽  
2020 ◽  
Vol 20 (18) ◽  
pp. 5032 ◽  
Author(s):  
Qiang Zhou ◽  
Yuanmao Zheng ◽  
Jinyuan Shao ◽  
Yinglun Lin ◽  
Haowei Wang

Previously published studies on population distribution have mostly been conducted at the provincial level, while urban-level studies remain limited. In addition, the coarse spatial resolution of traditional nighttime light (NTL) data has restricted their application to fine-scale population distribution research. To study the spatial distribution of population at the urban scale, we proposed a new index, the road network adjusted human settlement index (RNAHSI), which integrates Luojia 1-01 (LJ 1-01) NTL data, the enhanced vegetation index (EVI), and road network density (RND) data on the basis of population density relationships to depict the spatial distribution of urban human settlements. The RNAHSI updates the human settlement index (HSI) with high-resolution NTL data and incorporates RND data to refine the spatial pattern of urban population distribution. The results indicated that the mean relative error (MRE) between the RNAHSI-based population estimates and the demographic data was 34.80%, lower than that of the HSI and the WorldPop dataset. The index is suited primarily to the study of urban population distribution, as the RNAHSI clearly highlights human activities in areas with dense urban road networks and refines the spatial heterogeneity of impervious areas. In addition, we drew a population density map of the city of Shenzhen at a 100 m spatial resolution for 2018 based on the RNAHSI, which is of great reference value for urban management and urban resource allocation.
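
A hedged sketch of the general dasymetric step that such an index supports: allocating a district's census population to grid cells in proportion to a settlement weight. The weight below is a stand-in combination of normalized NTL, EVI, and road density for illustration only; it is not the paper's RNAHSI formula, and all input values are simulated.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical 100 m grid layers for one district: night-time light radiance,
# enhanced vegetation index, and road network density.
shape = (200, 200)
ntl = rng.gamma(2.0, 5.0, shape)       # radiance, arbitrary units
evi = rng.uniform(0.0, 0.8, shape)     # EVI
rnd = rng.gamma(1.5, 2.0, shape)       # road density, km per km^2

def normalise(a):
    return (a - a.min()) / (a.max() - a.min())

# Stand-in settlement weight (NOT the RNAHSI formula): brighter, less
# vegetated, better-connected cells receive more weight.
weight = normalise(ntl) * (1.0 - evi) * (1.0 + normalise(rnd))

# Dasymetric step: allocate the district's census population to cells in
# proportion to the weight, so the grid sums back to the known total.
district_pop = 350_000
pop_grid = district_pop * weight / weight.sum()

print("allocated population:", round(pop_grid.sum()))
print("max cell population:", round(pop_grid.max(), 1))
```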


Methodology ◽  
2011 ◽  
Vol 7 (3) ◽  
pp. 81-87 ◽  
Author(s):  
Shuyan Sun ◽  
Wei Pan ◽  
Lihshing Leigh Wang

Observed power analysis is recommended by many scholarly journal editors and reviewers, especially for studies with statistically nonsignificant test results. However, researchers may not fully realize that blindly following this recommendation can be an unfruitful effort, despite repeated warnings from methodologists. Through both a review of 14 published empirical studies and a Monte Carlo simulation study, the present study demonstrates that observed power is usually not as informative or helpful as we think, because (a) observed power for a nonsignificant test is generally low and therefore provides no information beyond the test itself; and (b) a low observed power does not always indicate that the test is underpowered. Implications and suggestions of statistical power analysis for quantitative researchers are discussed.
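
A small simulation in the spirit of the study's Monte Carlo component, with assumed effect size, sample size, and replication count: it computes post hoc (observed) power from the observed effect size of a two-sample t-test and shows that nonsignificant results almost always come with low observed power.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

def observed_power(d, n, alpha=0.05):
    """Post hoc power of a two-sample t-test, treating the observed
    effect size d as if it were the true effect."""
    df = 2 * n - 2
    nc = d * np.sqrt(n / 2)                      # noncentrality parameter
    t_crit = stats.t.ppf(1 - alpha / 2, df)
    return (1 - stats.nct.cdf(t_crit, df, nc)) + stats.nct.cdf(-t_crit, df, nc)

# Simulate many small-sample studies with a modest true effect (d = 0.3).
n, true_d, reps = 30, 0.3, 5000
results = []
for _ in range(reps):
    a = rng.normal(0.0, 1.0, n)
    b = rng.normal(true_d, 1.0, n)
    p = stats.ttest_ind(b, a).pvalue
    d_obs = (b.mean() - a.mean()) / np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
    results.append((p, observed_power(abs(d_obs), n)))

p_vals, powers = np.array(results).T
ns_mask = p_vals >= 0.05
print(f"mean observed power when p >= .05: {powers[ns_mask].mean():.2f}")
print(f"mean observed power when p <  .05: {powers[~ns_mask].mean():.2f}")
```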


2021 ◽  
Vol 13 (2) ◽  
pp. 463
Author(s):  
Jian Feng ◽  
Yanguang Chen

Urban population density provides a good perspective for understanding urban growth and socio-spatial dynamics. Based on sub-district data from the five national censuses of 1964, 1982, 1990, 2000, and 2010, this paper analyzes urban growth and the spatial restructuring of population in the city of Hangzhou, China. The research methods are based on mathematical modeling and field investigation. The modeling results show that the negative exponential function and the power-exponential function fit Hangzhou's observed urban density data well. The negative exponential model reflects the expected state, while the power-exponential model reflects the real state of the urban density distribution. The parameters of these models are linearly correlated with the spatial information entropy of the population distribution. The flattening of the density gradient in the negative exponential function during the 1990s and 2000s is closely related to the development of suburbanization. Combining the investigation materials with the changing trends of the model parameters reveals the spatio-temporal features of Hangzhou's urban growth. The main conclusions are as follows. The reform and opening-up policy and the establishment of a market economy improved Hangzhou's development mode. As long as a city has a good social and economic environment, it will automatically tend toward an optimal state through self-organization.
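
For reference, the two models written in the forms commonly used in related work by the same authors; parameter names are ours and the paper's notation may differ.

```latex
% Assumed standard forms: r_0 is a characteristic radius and sigma a latent
% scaling exponent, with sigma = 1 recovering the pure exponential model.
\begin{align*}
\rho(r) &= \rho_0 \exp\!\left(-\frac{r}{r_0}\right)
  && \text{negative exponential (Clark's model)} \\
\rho(r) &= \rho_0 \exp\!\left[-\left(\frac{r}{r_0}\right)^{\sigma}\right]
  && \text{power-exponential model}
\end{align*}
```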


2021 ◽  
Vol 13 (17) ◽  
pp. 9498
Author(s):  
Minjung Kwak

A prevailing assumption in research on remanufactured products is “the cheaper, the better”: customers prefer prices that are as low as possible. Customer price preference is modeled as a linear function of price that reaches its minimum at customers’ willingness to pay (WTP), which is assumed to be homogeneous and constant across the market. However, this linearity assumption is being challenged, as recent empirical studies have documented customer heterogeneity in price perception and demonstrated the existence of too-cheap prices (TC). This study is the first attempt to investigate the validity of the linearity assumption for remanufactured products. A Monte Carlo simulation was conducted to estimate how the average market preference changes with the price of a remanufactured product when TC and WTP are heterogeneous across individual customers. Survey data from a previous study were used to fit and model the distributions of TC and WTP. The results show that a linear or monotonically decreasing relationship between price and customer preference may not hold for remanufactured products. With heterogeneous TC and WTP, the average price preference exhibits an inverted U shape with a peak between the TC and the WTP, independent of product type and the form of individual customers’ preference functions. This implies that a bell-shaped or triangular function may serve as a better alternative to a linear function when modeling market price preference in remanufacturing research.
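
A hedged re-creation of the simulation logic with assumed lognormal TC and WTP distributions (the paper fits these to survey data) and one plausible individual-level preference form; it reproduces the inverted-U shape of the average market preference.

```python
import numpy as np

rng = np.random.default_rng(6)

# Hypothetical heterogeneity (not the paper's fitted survey distributions):
# each customer has a too-cheap threshold TC and a willingness to pay WTP.
n_customers = 100_000
tc = rng.lognormal(mean=np.log(60), sigma=0.35, size=n_customers)
wtp = tc + rng.lognormal(mean=np.log(120), sigma=0.35, size=n_customers)

def preference(price, tc, wtp):
    """One plausible individual-level form: zero below TC (too-cheap prices
    signal poor quality), linearly decreasing from 1 at TC to 0 at WTP."""
    pref = np.clip((wtp - price) / (wtp - tc), 0.0, 1.0)
    pref[price < tc] = 0.0
    return pref

# Average preference across the heterogeneous market at each candidate price.
prices = np.linspace(10, 400, 80)
avg_pref = np.array([preference(p, tc, wtp).mean() for p in prices])

best = prices[avg_pref.argmax()]
print(f"average preference peaks at price ~{best:.0f}, "
      f"between mean TC ({tc.mean():.0f}) and mean WTP ({wtp.mean():.0f})")
```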


Author(s):  
Ryuichi Shimizu ◽  
Ze-Jun Ding

Monte Carlo simulation has become one of the most powerful tools for describing electron scattering in solids, leading to a more comprehensive understanding of the complicated mechanisms by which the various types of signals used in microbeam analysis are generated. The present paper proposes a practical model for the Monte Carlo simulation of the scattering processes of a penetrating electron and the generation of slow secondaries in solids. The model is based on the combined use of Gryzinski’s inner-shell electron excitation function and the dielectric function, which accounts for the valence-electron contribution to inelastic scattering, while cross-sections derived by the partial wave expansion method are used to describe elastic scattering. The benefit of using this elastic scattering cross-section is evident in its success in describing the anisotropy of the angular distribution of electrons elastically backscattered from Au in the low-energy region, shown in Fig. 1. Fig. 1(a) shows the elastic cross-section of a 600 eV electron for a single Au atom, clearly indicating that the angular distribution is no longer smooth, as would be expected from the Rutherford scattering formula, but has so-called lobes appearing at large scattering angles.
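
For contrast with the partial-wave treatment, a minimal sketch of elastic-angle sampling under the screened Rutherford approximation, which is the smooth model the paper improves upon; the screening-parameter expression is the common empirical one from single-scattering Monte Carlo practice, and all values here are assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)

# Screened Rutherford elastic-angle sampling (simplified single-scattering
# model). Unlike partial-wave cross-sections, this form cannot reproduce the
# large-angle lobes seen for Au at low energies in Fig. 1(a).
Z, E_keV = 79, 0.6                      # gold, 600 eV primary electron
alpha = 3.4e-3 * Z**0.67 / E_keV        # empirical screening parameter

def sample_cos_theta(size):
    R = rng.uniform(0.0, 1.0, size)
    return 1.0 - 2.0 * alpha * R / (1.0 + alpha - R)

cos_t = sample_cos_theta(100_000)
theta = np.degrees(np.arccos(cos_t))

# The sampled angular distribution decays monotonically with angle,
# i.e. it stays smooth at all scattering angles.
hist, edges = np.histogram(theta, bins=18, range=(0, 180))
for lo, count in zip(edges[:-1], hist):
    print(f"{lo:5.0f}-{lo + 10:3.0f} deg: {count}")
```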


Author(s):  
D. R. Liu ◽  
S. S. Shinozaki ◽  
R. J. Baird

The epitaxially grown (GaAs)Ge thin film has attracted much interest because it is one of the metastable alloys of III-V compound semiconductors with germanium and a possible candidate for optoelectronic applications. It is important to be able to determine the composition of the film accurately, particularly whether or not the GaAs component is stoichiometric, but x-ray energy dispersive analysis (EDS) cannot meet this need. The thickness of the film is usually about 0.5-1.5 μm. If Kα peaks are used for quantification, the accelerating voltage must be more than 10 kV for these peaks to be excited. At this voltage, the generation depth of x-ray photons approaches 1 μm, as evidenced by a Monte Carlo simulation and actual x-ray intensity measurements discussed below. If a lower voltage is used to reduce the generation depth, the L peaks have to be used instead. But these L peaks merge into one big hump because the atomic numbers of the three elements are relatively small and close together, and the EDS energy resolution is limited.
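
A rough analytic cross-check of the stated generation depth, using the Kanaya-Okayama electron range as an upper bound; this is not the paper's Monte Carlo, and the material constants below are assumed textbook-style averages for the alloy.

```python
# Kanaya-Okayama electron range as an upper bound on x-ray generation depth:
#   R [um] = 0.0276 * A * E^1.67 / (Z^0.889 * rho)
# with E in keV, A in g/mol, rho in g/cm^3. The A, Z, rho values are assumed
# averages for a GaAs-Ge film, not taken from the paper.
A, Z, rho = 72.6, 32, 5.32

def kanaya_okayama_range(e_kev):
    return 0.0276 * A * e_kev**1.67 / (Z**0.889 * rho)

for e in (5, 10, 15, 20):
    print(f"E0 = {e:2d} kV  ->  electron range ~ {kanaya_okayama_range(e):.2f} um")
```

At 10 kV this gives roughly 0.8 μm, consistent with the generation depth of about 1 μm noted above.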

