The Victim's Experience and Fear of Crime

1998 ◽  
Vol 5 (2) ◽  
pp. 93-140 ◽  
Author(s):  
Helmut Kury ◽  
Theodore Ferdinand

With the rapid development of sophisticated victim surveys, the fear of crime has emerged as a fundamental concept in theoretical and practical discourse. Since publication of the President's Commission report The Challenge of Crime in a Free Society (1967), the fear of offenders has become a major public concern in the United States alongside the mounting problem of crime itself. The flourishing of national crime surveys in the United States and in Europe has in turn produced large data sets that examine carefully not only victims' knowledge and experience of criminality but also the fear of offenders and its causes (cf. Herbert and Darwood, 1992, p. 145). We first offer a review of research on these issues in Europe and the United States, and then report our own research, which has probed these issues in a focused manner.

ILR Review ◽  
2017 ◽  
Vol 72 (2) ◽  
pp. 300-322 ◽  
Author(s):  
Ran Abramitzky ◽  
Leah Boustan ◽  
Katherine Eriksson

The authors compile large data sets from Norwegian and US historical censuses to study return migration during the Age of Mass Migration (1850–1913). Norwegian immigrants who returned to Norway held lower-paid occupations than did Norwegian immigrants who stayed in the United States, both before and after their first transatlantic migration, suggesting they were negatively selected from the migrant pool. Upon returning to Norway, return migrants held higher-paid occupations relative to Norwegians who never moved, despite hailing from poorer backgrounds. These patterns suggest that despite being negatively selected, return migrants had been able to accumulate savings and could improve their economic circumstances once they returned home.


2020 ◽  
Vol 45 (s1) ◽  
pp. 535-559
Author(s):  
Christian Pentzold ◽  
Lena Fölsche

Our article examines how journalistic reports and online comments have made sense of computational politics. It treats the discourse around data-driven campaigns as its object of analysis and codifies four main perspectives that have structured the debates about the use of large data sets and data analytics in elections. We study American, British, and German sources on the 2016 United States presidential election, the 2017 United Kingdom general election, and the 2017 German federal election. In these sources, groups of speakers maneuvered between enthusiastic, skeptical, agnostic, and admonitory stances and so cannot be cleanly mapped onto any one of the four discursive positions. Alongside these inconsistent accounts, public sensemaking was marked by an atmosphere of speculation about the substance and effects of computational politics. We conclude that this equivocality helped journalists and commentators to sideline prior reporting on the issue and so repeatedly rediscover practices they had already covered.


Big Data ◽  
2016 ◽  
pp. 2249-2274
Author(s):  
Chinh Nguyen ◽  
Rosemary Stockdale ◽  
Helana Scheepers ◽  
Jason Sargent

The rapid development of technology and the interactive nature of Government 2.0 (Gov 2.0) are generating large data sets for government, resulting in a struggle to control, manage, and extract the right information. Research into these large data sets (termed Big Data) has therefore become necessary. Governments now spend heavily on storing and processing vast amounts of information because of the proliferation and complexity of Big Data and a lack of effective records management. Electronic Records Management (ERM), by contrast, offers an established method for controlling and governing an organisation's important data. This paper investigates the challenges identified in the literature on Gov 2.0, Big Data, and ERM in order to develop a better understanding of how ERM can be applied to Big Data to extract useable information in the context of Gov 2.0. The paper suggests that ERM, with its well-established governance policies, could be a key building block in providing useable information to stakeholders, and it constructs a framework to illustrate the role ERM can play in the context of Gov 2.0. Future research is needed to address the specific constraints and expectations placed on governments in terms of data retention and use.
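As a purely illustrative sketch of what ERM-style governance metadata might look like when attached to a Gov 2.0 item (the field names, classification vocabulary, and retention rule below are assumptions for illustration, not taken from the paper), a captured citizen contribution could be wrapped in a small record structure that carries its retention policy:

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class Gov20Record:
    """Hypothetical ERM-style wrapper around a captured Gov 2.0 content item."""
    record_id: str
    source_channel: str      # e.g. an agency's social-media page (assumed field)
    created_on: date
    content: str             # the captured citizen contribution
    classification: str      # records-schedule class (assumed vocabulary)
    retention_years: int     # retention period set by governance policy

    def disposal_due(self) -> date:
        # Date after which the record becomes eligible for review or disposal
        return self.created_on + timedelta(days=365 * self.retention_years)

# Example: a comment captured from a consultation page, kept under an assumed 7-year schedule
rec = Gov20Record("r-0001", "agency-facebook-page", date(2014, 3, 1),
                  "Comment on the proposed transport plan", "public-consultation", 7)
print(rec.disposal_due())
```

The point of the sketch is only that ERM attaches explicit, queryable governance attributes to content that would otherwise accumulate unmanaged.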


Author(s):  
Kimberly R. Huyser ◽  
Sofia Locklear

American Indian and Alaska Native (AIAN) Peoples are diverse, but their diversity is statistically flattened in national-level survey data and, subsequently, in contemporary understandings of race and inequality in the United States. This chapter demonstrates the utility of disaggregated data for gaining nuanced information on social outcomes such as educational attainment and income levels, and for shaping resource allocation accordingly. Throughout, it explores both reasons and remedies for AIAN invisibility in large data sets. Using their personal identities as a case in point, the authors argue for more refined survey instruments, informed by Indigenous modes of identity and affiliation, not only to raise the statistical salience of AIANs but also to paint a fuller picture of a vibrant, heterogeneous First Peoples all too often dismissed as a vanishing people.


2009 ◽  
Vol 89 (5) ◽  
pp. 543-554 ◽  
Author(s):  
R G Kachanoski

Estimation of fertilizer N requirements of crops remains a challenge. Numerous field studies have been carried out to calibrate soil tests against yield response to applied fertilizer N. Synthesis and identification of common crop fertilizer N responses across large data sets (years, sites) will allow maximum use of this past work and provide a framework for comparing future work. The objective of this paper is to define macro-relationships between the economically optimum fertilizer N rate (EONR) and the yield increase at the EONR, defined as the delta yield, ΔYec, for large data sets of 2nd- and 3rd-order estimates of fertilizer N response functions with both 0th- and 1st-order rate relationships between fertilizer nitrogen use efficiency and applied fertilizer N. The derived macro-relationships are curvilinear, depend on the price ratio R (the price per kilogram of fertilizer N divided by the price per kilogram of grain), and are similar to measurements from data sets of corn fertilizer N response functions spanning more than 20 yr and representing areas in both the United States and Canada. The macro-relationships appear to be robust and therefore useful for quantifying (in post-harvest analysis) soil fertility and crop fertilizer N requirement, and for comparing and classifying N response functions.
Key words: Response function, prediction, efficiency, economic N rate, corn
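As a hedged illustration of how these quantities relate (the quadratic response form and the coefficient and price values below are assumptions, not data from the paper), for a 2nd-order response Y(N) = a + bN + cN² the EONR is the rate at which the marginal yield response equals the price ratio R, i.e. dY/dN = b + 2cN = R, and ΔYec is the yield gain at that rate:

```python
# Illustrative only: quadratic (2nd-order) fertilizer N response with assumed coefficients.
def eonr_and_delta_yield(a, b, c, price_n, price_grain):
    """Return (EONR, delta yield at EONR) for Y(N) = a + b*N + c*N**2 with c < 0."""
    R = price_n / price_grain             # price ratio R (kg grain per kg N)
    n_opt = (R - b) / (2.0 * c)           # solve dY/dN = b + 2*c*N = R
    n_opt = max(n_opt, 0.0)               # a fertilizer rate cannot be negative
    delta_y = (a + b * n_opt + c * n_opt**2) - a   # yield gain over the unfertilized check
    return n_opt, delta_y

# Assumed example: initial response b = 40 kg grain per kg N, curvature c = -0.12,
# fertilizer N at 1.00 $/kg and grain at 0.15 $/kg, so R is about 6.7
print(eonr_and_delta_yield(a=5000.0, b=40.0, c=-0.12, price_n=1.00, price_grain=0.15))
```

Sweeping such response functions across many site-years and plotting EONR against ΔYec for a given R is the kind of macro-relationship the paper characterizes.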


Author(s):  
John A. Hunt

Spectrum-imaging is a useful technique for comparing different processing methods on very large data sets that are identical for each method. This paper is concerned with comparing methods of electron energy-loss spectroscopy (EELS) quantitative analysis on the Al-Li system. The spectrum-image analyzed here was obtained from an Al-10 at% Li foil aged to produce δ' precipitates that can span the foil thickness. Two 1024-channel EELS spectra offset in energy by 1 eV were recorded and stored at each pixel in the 80×80 spectrum-image (25 Mbytes). An energy range of 39-89 eV (20 channels/eV) is represented. During processing, the spectra are either subtracted to create an artifact-corrected difference spectrum, or the energy offset is numerically removed and the spectra are added to create a normal spectrum. The spectrum-images are processed into 2D floating-point images using methods and software described in [1].
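A minimal sketch of the two per-pixel operations described above, assuming the two acquisitions are held as NumPy arrays and that the 1 eV offset corresponds to an integer shift of 20 channels at 20 channels/eV (the array names and the simple roll-based realignment are assumptions for illustration, not the authors' software):

```python
import numpy as np

CHANNELS_PER_EV = 20
OFFSET_CHANNELS = 1 * CHANNELS_PER_EV   # the 1 eV energy offset between the two readouts

def difference_spectrum(spec_a, spec_b):
    """Artifact-corrected difference spectrum: subtract the two offset acquisitions.

    Fixed-pattern (channel-to-channel gain) artifacts common to both readouts
    largely cancel in the subtraction.
    """
    return spec_a - spec_b

def summed_spectrum(spec_a, spec_b):
    """'Normal' spectrum: numerically remove the 1 eV offset, then add."""
    # Crude integer-channel realignment; the shift direction depends on acquisition order
    aligned_b = np.roll(spec_b, OFFSET_CHANNELS)
    aligned_b[:OFFSET_CHANNELS] = 0      # discard channels that wrapped around
    return spec_a + aligned_b

# Example with synthetic 1024-channel spectra
a = np.random.poisson(100.0, 1024).astype(float)
b = np.random.poisson(100.0, 1024).astype(float)
print(difference_spectrum(a, b).shape, summed_spectrum(a, b).shape)
```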


Author(s):  
Thomas W. Shattuck ◽  
James R. Anderson ◽  
Neil W. Tindale ◽  
Peter R. Buseck

Individual particle analysis involves the study of tens of thousands of particles using automated scanning electron microscopy and elemental analysis by energy-dispersive x-ray emission spectroscopy (EDS). EDS produces large data sets that must be analyzed using multivariate statistical techniques. A complete study uses cluster analysis, discriminant analysis, and factor or principal components analysis (PCA). The three techniques are used in the study of particles sampled during the FeLine cruise to the mid-Pacific Ocean in the summer of 1990. The mid-Pacific aerosol provides information on long-range particle transport, iron deposition, sea-salt ageing, and halogen chemistry.
Aerosol particle data sets present a number of difficulties for pattern recognition using cluster analysis. There is a great disparity in the number of observations per cluster and in the range of the variables in each cluster. The variables are not normally distributed, they are subject to considerable experimental error, and many values are zero because of finite detection limits. Many of the clusters show considerable overlap because of natural variability, agglomeration, and chemical reactivity.
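As an illustrative sketch only (the element list, the synthetic data, and the use of scikit-learn are assumptions, not the authors' actual workflow or software), a standard multivariate pipeline for a particles-by-elements composition table chains scaling, PCA, and a clustering step:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

# Hypothetical data: rows = particles, columns = relative EDS intensities per element
rng = np.random.default_rng(0)
elements = ["Na", "Mg", "Al", "Si", "S", "Cl", "K", "Ca", "Fe"]
X = rng.gamma(shape=1.5, scale=1.0, size=(5000, len(elements)))
X[X < 0.2] = 0.0                       # mimic values below finite detection limits

# Scale so no single element dominates despite very different variable ranges
Xs = StandardScaler().fit_transform(X)

# PCA to summarize correlated element signals in a few components
scores = PCA(n_components=3).fit_transform(Xs)

# K-means as one simple clustering choice; real particle classes overlap,
# so cluster counts and methods would need careful validation
labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(scores)
print(np.bincount(labels))
```

The scaling step addresses the disparate variable ranges noted above, while the skewness, zeros, and cluster overlap are exactly why the choice of cluster number and method needs validation rather than being taken at face value.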

