Construction, Detection, and Interpretation of Crime Patterns over Space and Time

2020 ◽  
Vol 9 (6) ◽  
pp. 339
Author(s):  
Zengli Wang ◽  
Hong Zhang

Empirical studies have focused on investigating the interactive relationships between crime pairs. However, many other types of crime patterns have not been extensively investigated. In this paper, we introduce three basic crime patterns in four combinations. Based on graph theory, the subgraphs for each pattern were constructed and analyzed using criminology theories. A Monte Carlo simulation was conducted to examine the significance of these patterns. The crime patterns were statistically significant and generated different levels of crime risk. Compared with the classical patterns, the combined patterns created much higher risk levels. Among these patterns, “co-occurrence, repeat, and shift” generated the highest level of crime risk, while “repeat” generated much lower levels of crime risk. “Co-occurrence and shift” and “repeat and shift” showed undulating risk levels, while the others showed a continuous decrease. These results underline the importance of the proposed crime patterns and call for differentiated crime prevention strategies. The method can be extended to other research areas that use point events as research objects.
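The kind of Monte Carlo significance test described above can be sketched as follows: an observed count of a space-time pattern is compared against counts obtained after randomly permuting event times, which destroys any space-time coupling. This is only an illustrative sketch, not the authors' subgraph-based implementation; the pattern definition (a "repeat" pair within 0.05 spatial units and 5 days) and the synthetic events are assumptions.

    import numpy as np

    rng = np.random.default_rng(0)

    def count_repeat_pairs(xy, t, r=0.05, w=5.0):
        """Count event pairs that are close in both space (< r) and time (< w days)."""
        d_space = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
        d_time = np.abs(t[:, None] - t[None, :])
        close = (d_space < r) & (d_time < w)
        return int(np.triu(close, k=1).sum())

    # Synthetic point events: locations in the unit square, times in days.
    n_events = 200
    xy = rng.random((n_events, 2))
    t = rng.random(n_events) * 365

    observed = count_repeat_pairs(xy, t)

    # Monte Carlo null distribution: shuffle event times and recount.
    n_sim = 999
    null_counts = np.array([count_repeat_pairs(xy, rng.permutation(t)) for _ in range(n_sim)])
    p_value = (1 + np.sum(null_counts >= observed)) / (n_sim + 1)
    print(f"observed repeat pairs = {observed}, Monte Carlo p = {p_value:.3f}")

The same permutation scheme extends to the combined patterns by counting the corresponding subgraphs instead of simple pairs.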

2020 ◽  
Vol 8 (5) ◽  
pp. 3283-3285

This research investigates the conditioned level in the mid-gestation period using a stochastic model, namely a Markov process, which requires Monte Carlo simulation to obtain the intended results. The simulation of fetal stages addresses the influence of possible risk factors at different levels. Abnormal conditions in mid-pregnancy affect the behavioral randomness of fetal development. The equations for the data are implemented through Monte Carlo simulation. The empirical analysis shows the behavioral changes of fetal development during mid-gestation.
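Since the abstract names a Markov process evaluated by Monte Carlo simulation but does not specify states or transition probabilities, the sketch below uses placeholder condition states and a made-up weekly transition matrix purely to illustrate the mechanics.

    import numpy as np

    rng = np.random.default_rng(1)

    # Hypothetical condition states and weekly transition matrix for the
    # mid-gestation period; the real states and probabilities are not given
    # in the abstract, so these values are placeholders.
    states = ["normal", "mild risk", "abnormal"]
    P = np.array([[0.90, 0.08, 0.02],
                  [0.15, 0.75, 0.10],
                  [0.05, 0.20, 0.75]])

    def simulate_chain(weeks=14, start=0):
        """Simulate one condition trajectory, week by week."""
        s = start
        path = [s]
        for _ in range(weeks):
            s = rng.choice(3, p=P[s])
            path.append(s)
        return path

    # Monte Carlo estimate of the share of trajectories ending in each state.
    n_sim = 10_000
    final = np.array([simulate_chain()[-1] for _ in range(n_sim)])
    for k, name in enumerate(states):
        print(f"P(end in {name!r}) ≈ {np.mean(final == k):.3f}")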


Methodology ◽  
2011 ◽  
Vol 7 (3) ◽  
pp. 81-87 ◽  
Author(s):  
Shuyan Sun ◽  
Wei Pan ◽  
Lihshing Leigh Wang

Observed power analysis is recommended by many scholarly journal editors and reviewers, especially for studies with statistically nonsignificant test results. However, researchers may not fully realize that blind observance of this recommendation can lead to unfruitful effort, despite repeated warnings from methodologists. Through both a review of 14 published empirical studies and a Monte Carlo simulation study, the present study demonstrates that observed power is usually not as informative or helpful as we think because (a) observed power for a nonsignificant test is generally low and therefore provides no information beyond the test itself; and (b) a low observed power does not always indicate that the test is underpowered. Implications and suggestions for statistical power analysis for quantitative researchers are discussed.
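The simulation-based point (a) can be illustrated with a small sketch: for two-group t tests with a modest true effect, compute the post hoc ("observed") power of each nonsignificant result from its observed effect size. This is an illustration of the general idea, not a reproduction of the article's simulation design; the sample size, true effect size, and number of replications are arbitrary.

    import numpy as np
    from scipy import stats
    from statsmodels.stats.power import TTestIndPower

    rng = np.random.default_rng(2)
    power_calc = TTestIndPower()

    n, true_d, alpha, n_sim = 30, 0.3, 0.05, 2000
    obs_powers_nonsig = []

    for _ in range(n_sim):
        a = rng.normal(0.0, 1.0, n)
        b = rng.normal(true_d, 1.0, n)
        t, p = stats.ttest_ind(a, b)
        if p >= alpha:                               # keep only nonsignificant results
            sp = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
            d_obs = abs(a.mean() - b.mean()) / sp    # observed effect size (Cohen's d)
            obs_powers_nonsig.append(power_calc.power(d_obs, nobs1=n, alpha=alpha))

    print(f"median observed power of nonsignificant tests: {np.median(obs_powers_nonsig):.2f}")

Because a test is nonsignificant exactly when its observed effect is small relative to its standard error, the observed power of such tests is mechanically low, which is the redundancy the authors warn about.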


2021 ◽  
pp. 097226292110029
Author(s):  
Akshay Hinduja ◽  
Manju Pandey

Data in multi-criteria decision-making (MCDM) problems are often imprecise and changeable because they depend on human judgement, which is frequently unclear and vague. Moreover, different MCDM methods may produce different results under different levels of uncertainty and require divergent amounts of computational resources. The quality of the decision and the effort it requires are therefore heavily affected by the selection of the MCDM method. With the steady proliferation of such methods and their modifications, it is important to carry out a comparative study that provides comprehensive insight into their performance under uncertain conditions. In this study, we use a randomized quasi-Monte Carlo simulation approach to empirically compare, from a rank-reversal perspective, the results produced by 12 classic and contemporary fuzzy MCDM (FMCDM) approaches under increasing uncertainty in various decision scenarios. Furthermore, this study investigates the similarity between the rankings produced by each pair of methods for the same decision problems, and compares the results obtained by quasi-Monte Carlo simulation with those obtained by Monte Carlo simulation. The findings will assist decision-makers in selecting the most appropriate fuzzy MCDM approach for different decision scenarios. The results are a significant addition to the current body of knowledge in multi-criteria decision analysis and to the information systems literature, and they provide insights for many managerial applications of these MCDM methods.
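A stripped-down version of the comparison logic, using crisp simple additive weighting (SAW) and TOPSIS as stand-ins for the paper's 12 fuzzy methods: criterion scores are perturbed with scrambled Sobol (quasi-Monte Carlo) samples at increasing uncertainty levels, and the agreement between the two methods' rankings is summarized with Spearman's rho. The decision matrix, weights, and noise levels are illustrative assumptions.

    import numpy as np
    from scipy.stats import qmc, spearmanr

    def saw_rank(X, w):
        """Simple additive weighting: rank by weighted sum of normalised scores."""
        Xn = X / X.max(axis=0)
        return np.argsort(np.argsort(-(Xn @ w)))

    def topsis_rank(X, w):
        """TOPSIS: rank by relative closeness to the ideal solution."""
        Xn = X / np.linalg.norm(X, axis=0) * w
        ideal, anti = Xn.max(axis=0), Xn.min(axis=0)
        d_pos = np.linalg.norm(Xn - ideal, axis=1)
        d_neg = np.linalg.norm(Xn - anti, axis=1)
        return np.argsort(np.argsort(-(d_neg / (d_pos + d_neg))))

    rng = np.random.default_rng(3)
    X0 = rng.uniform(1, 10, size=(8, 4))            # 8 alternatives, 4 benefit criteria
    w = np.full(4, 0.25)

    sampler = qmc.Sobol(d=X0.size, scramble=True, seed=3)
    for noise in (0.05, 0.15, 0.30):                # increasing uncertainty
        rhos = []
        for u in sampler.random(256):               # quasi-random perturbations
            X = X0 * (1 + noise * (u.reshape(X0.shape) - 0.5) * 2)
            rhos.append(spearmanr(saw_rank(X, w), topsis_rank(X, w))[0])
        print(f"noise={noise:.2f}  mean rank agreement (Spearman rho) = {np.mean(rhos):.3f}")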


2018 ◽  
Vol 46 (6) ◽  
pp. 1061-1078 ◽  
Author(s):  
Fahui Wang ◽  
Cuiling Liu ◽  
Yaping Xu

Most empirical studies indicate that the pattern of declining urban population density with distance from the city center is best captured by a negative exponential function. Such studies usually use data aggregated in census area units, which are subject to several criticisms such as the modifiable areal unit problem, unfair sampling, and uncertainty in distance measurement. To mitigate these concerns, this paper uses Monte Carlo simulation to generate individual residents consistent with known patterns of population distribution. By doing so, we are able to aggregate population back to various uniform area units and examine the scale and zonal effects explicitly. The case study of the Chicago area indicates that the best-fitting density function remains exponential for data in census tracts or block groups; however, the logarithmic function becomes a better fit when uniform area units such as squares, triangles, or hexagons are used. The study also suggests that the scale effect remains to some extent in all area units, while the zonal effect is largely mitigated by uniform area units of regular shape.
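A minimal sketch of the simulation idea (with made-up parameters rather than the Chicago data): draw individual residents from a negative exponential density surface, aggregate them back to uniform square cells, and compare an exponential fit with a logarithmic fit of density against distance.

    import numpy as np

    rng = np.random.default_rng(4)

    # Simulate residents whose density follows D(r) ~ exp(-b * r) around the centre.
    b, n_people = 0.15, 50_000
    # With areal density ~ exp(-b r), the radial pdf is ~ r exp(-b r), i.e. Gamma(2, 1/b).
    r = rng.gamma(shape=2.0, scale=1.0 / b, size=n_people)
    theta = rng.uniform(0, 2 * np.pi, n_people)
    x, y = r * np.cos(theta), r * np.sin(theta)

    # Aggregate back to uniform square cells and compute density per cell.
    cell = 2.0                                    # assumed cell size in km
    ix, iy = np.floor(x / cell), np.floor(y / cell)
    cells, counts = np.unique(np.stack([ix, iy], axis=1), axis=0, return_counts=True)
    centres = (cells + 0.5) * cell
    dist = np.hypot(centres[:, 0], centres[:, 1])
    density = counts / cell**2

    # Compare the exponential fit ln(D) = ln(a) - b r with a logarithmic fit D = c + d ln(r).
    mask = (density > 0) & (dist > 0)
    r2 = lambda yhat, yv: 1 - np.sum((yv - yhat) ** 2) / np.sum((yv - yv.mean()) ** 2)
    be, ae = np.polyfit(dist[mask], np.log(density[mask]), 1)
    dl, cl = np.polyfit(np.log(dist[mask]), density[mask], 1)
    print("R^2 exponential :", round(r2(np.exp(ae + be * dist[mask]), density[mask]), 3))
    print("R^2 logarithmic :", round(r2(cl + dl * np.log(dist[mask]), density[mask]), 3))

Swapping the square grid for triangles or hexagons, or changing the cell size, reproduces the zonal and scale comparisons the paper describes.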


2021 ◽  
Vol 13 (17) ◽  
pp. 9498
Author(s):  
Minjung Kwak

A prevailing assumption in research on remanufactured products is “the cheaper, the better”: customers prefer prices that are as low as possible. Customer price preference is modeled as a linear function reaching its minimum at customers’ willingness to pay (WTP), which is assumed to be homogeneous and constant in the market. However, this linearity assumption is being challenged, as recent empirical studies have documented customer heterogeneity in price perception and demonstrated the existence of too-cheap prices (TC). This study is the first attempt to investigate the validity of the linearity assumption for remanufactured products. A Monte Carlo simulation was conducted to estimate how the average market preference changes with the price of the remanufactured product when TC and WTP are heterogeneous across individual customers. Survey data from a previous study were used to fit and model the distributions of TC and WTP. The results show that a linear or monotonically decreasing relationship between price and customer preference may not hold for remanufactured products. With heterogeneous TC and WTP, the average price preference exhibits an inverted U shape with a peak between the TC and WTP, independent of product type and of the form of individual customers’ preference functions. This implies that a bell-shaped or triangular function may serve as a better alternative to a linear function when modeling market price preference in remanufacturing research.
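The mechanism behind the inverted U can be sketched with a deliberately simplified preference model: each simulated customer accepts only prices between their own TC and WTP, with both thresholds drawn from heterogeneous (here, assumed lognormal) distributions rather than the survey-fitted ones used in the study. Averaging acceptance across customers then rises and falls with price.

    import numpy as np

    rng = np.random.default_rng(5)

    # Heterogeneous thresholds across simulated customers (illustrative lognormal
    # parameters, not the survey-fitted distributions used in the paper).
    n_customers = 100_000
    tc = rng.lognormal(mean=np.log(40), sigma=0.35, size=n_customers)        # too-cheap price
    wtp = tc + rng.lognormal(mean=np.log(60), sigma=0.35, size=n_customers)  # willingness to pay

    def mean_preference(price):
        """Average preference if each customer simply accepts prices in (TC, WTP)."""
        return np.mean((price > tc) & (price < wtp))

    for p in (20, 40, 60, 80, 100, 120, 140):
        print(f"price {p:>3}: average preference ≈ {mean_preference(p):.2f}")

Even with this crude indicator-style preference, the market average peaks between the typical TC and WTP, which is the shape the study reports.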


2013 ◽  
Vol 21 (2) ◽  
pp. 252-265 ◽  
Author(s):  
Simon Hug

An increasing number of analyses in various subfields of political science employ Boolean algebra as proposed by Ragin's qualitative comparative analysis (QCA). This type of analysis is perfectly justifiable if the goal is to test deterministic hypotheses under the assumption of error-free measures of the employed variables. My contention is, however, that only in a very few research areas are our theories sufficiently advanced to yield deterministic hypotheses. Also, given the nature of our objects of study, error-free measures are largely an illusion. Hence, it is unsurprising that many studies employ QCA inductively and gloss over possible measurement errors. In this article, I address these issues and demonstrate the consequences of these problems with simple empirical examples. In an analysis similar to Monte Carlo simulation, I show that using Boolean algebra in an exploratory fashion without considering possible measurement errors may lead to dramatically misleading inferences. I then suggest remedies that help researchers to circumvent some of these pitfalls.
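A small simulation in the spirit of the article's argument (not Ragin's full QCA algorithm): take a true deterministic rule Y = A AND B, flip each measured value with a given error probability, and track the consistency of the true sufficient condition. Even modest error pushes consistency below the thresholds typically used to accept a solution; the rule, error rates, and case counts are illustrative.

    import numpy as np

    rng = np.random.default_rng(6)

    def qca_consistency(error=0.0, n_cases=50, n_sim=1000):
        """Monte Carlo: consistency of the true sufficient condition A*B -> Y
        after flipping each measured value with probability `error`."""
        scores = []
        for _ in range(n_sim):
            A, B = rng.integers(0, 2, size=(2, n_cases))
            Y = A & B                                  # true deterministic rule
            flip = rng.random((3, n_cases)) < error
            Am, Bm, Ym = A ^ flip[0], B ^ flip[1], Y ^ flip[2]
            member = (Am & Bm).astype(bool)
            if member.any():
                scores.append(Ym[member].mean())       # share of A*B cases with Y = 1
        return np.mean(scores)

    for e in (0.0, 0.05, 0.10, 0.20):
        print(f"measurement error {e:.2f}: mean consistency of A*B -> Y = {qca_consistency(e):.2f}")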


Author(s):  
Ryuichi Shimizu ◽  
Ze-Jun Ding

Monte Carlo simulation has become the most powerful tool for describing electron scattering in solids, leading to a more comprehensive understanding of the complicated mechanisms by which the various types of signals used in microbeam analysis are generated. The present paper proposes a practical model for the Monte Carlo simulation of the scattering processes of a penetrating electron and the generation of slow secondaries in solids. The model is based on the combined use of Gryzinski’s inner-shell electron excitation function and the dielectric function, which accounts for the valence electron contribution to inelastic scattering processes, while cross-sections derived by the partial wave expansion method are used to describe elastic scattering processes. The improvement gained by using this elastic scattering cross-section can be seen in its success in describing the anisotropy of the angular distribution of electrons elastically backscattered from Au in the low-energy region, shown in Fig. 1. Fig. 1(a) shows the elastic cross-section of a 600 eV electron for a single Au atom, clearly indicating that the angular distribution is no longer smooth, as would be expected from the Rutherford scattering formula, but has so-called lobes appearing at large scattering angles.
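For orientation, a toy single-scattering Monte Carlo is sketched below. It keeps only screened-Rutherford angular sampling and replaces the paper's partial-wave elastic cross-sections and dielectric-function energy losses with an assumed fixed mean free path and a fixed fractional energy loss per step, so its output is qualitative at best.

    import numpy as np

    rng = np.random.default_rng(7)

    def new_direction(d, theta, phi):
        """Rotate direction cosines d by polar angle theta and azimuth phi."""
        ct, st = np.cos(theta), np.sin(theta)
        cp, sp = np.cos(phi), np.sin(phi)
        cx, cy, cz = d
        if abs(cz) > 0.99999:                     # travelling (anti)parallel to z
            return np.array([st * cp, st * sp, ct * np.sign(cz)])
        s = np.sqrt(1.0 - cz * cz)
        return np.array([cx * ct + st * (cx * cz * cp - cy * sp) / s,
                         cy * ct + st * (cy * cz * cp + cx * sp) / s,
                         cz * ct - s * st * cp])

    def backscatter_fraction(E0_keV=0.6, Z=79, mfp_um=1.5e-3,
                             loss_per_step=0.02, n_electrons=5000):
        """Toy single-scattering Monte Carlo for electrons entering a semi-infinite
        target along +z. The mean free path and fractional energy loss per step are
        placeholder constants; only the screened-Rutherford angular sampling is kept."""
        n_back = 0
        for _ in range(n_electrons):
            E, z, d = E0_keV, 0.0, np.array([0.0, 0.0, 1.0])
            while E > 0.05:                        # follow each electron down to 50 eV
                z += d[2] * rng.exponential(mfp_um)
                if z < 0.0:                        # re-emerged through the surface
                    n_back += 1
                    break
                alpha = 3.4e-3 * Z**0.67 / E       # screening parameter (E in keV)
                R = rng.random()
                cos_t = 1.0 - 2.0 * alpha * R / (1.0 + alpha - R)
                d = new_direction(d, np.arccos(cos_t), rng.uniform(0, 2 * np.pi))
                E *= (1.0 - loss_per_step)
        return n_back / n_electrons

    print(f"backscatter coefficient (toy model, Au, 600 eV) ≈ {backscatter_fraction():.2f}")

The point of the paper is precisely that, at such low energies, this simple Rutherford-style angular sampling misses the lobes in Fig. 1, which require partial-wave cross-sections.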


Author(s):  
D. R. Liu ◽  
S. S. Shinozaki ◽  
R. J. Baird

Epitaxially grown (GaAs)Ge thin films have attracted much interest because (GaAs)Ge is one of the metastable alloys of III-V compound semiconductors with germanium and a possible candidate for optoelectronic applications. It is important to be able to accurately determine the composition of the film, particularly whether or not the GaAs component is stoichiometric, but x-ray energy dispersive analysis (EDS) cannot meet this need. The thickness of the film is usually about 0.5-1.5 μm. If Kα peaks are used for quantification, the accelerating voltage must be more than 10 kV in order for these peaks to be excited. At this voltage, the generation depth of x-ray photons approaches 1 μm, as evidenced by a Monte Carlo simulation and actual x-ray intensity measurements discussed below. If a lower voltage is used to reduce the generation depth, the L peaks have to be used instead. However, these L peaks merge into one broad hump because the atomic numbers of the three elements are relatively small and close together, and the EDS energy resolution is limited.
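As a rough analytical cross-check of the quoted ~1 μm generation depth (not the Monte Carlo simulation or measurements referred to above), the Kanaya-Okayama electron range for GaAs can be evaluated at a few accelerating voltages; the composition-averaged Z, A, and density below are approximate.

    # Kanaya-Okayama electron range: R (µm) = 0.0276 * A * E^1.67 / (Z^0.89 * rho),
    # with E in keV, A in g/mol, and rho in g/cm^3.
    def kanaya_okayama_range_um(E_keV, Z, A, rho):
        return 0.0276 * A * E_keV**1.67 / (Z**0.89 * rho)

    # Composition-averaged values for GaAs (Z ≈ 32, A ≈ 72.3 g/mol, rho ≈ 5.32 g/cm^3).
    for kV in (5, 10, 15):
        print(f"{kV:>2} kV: electron range ≈ {kanaya_okayama_range_um(kV, 32, 72.3, 5.32):.2f} µm")

The x-ray generation depth is somewhat shallower than the full electron range, but the order of magnitude at 10 kV is consistent with the value quoted above.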

