Culture Process and the Interpretation of Radiocarbon Data

Radiocarbon ◽  
2017 ◽  
Vol 60 (2) ◽  
pp. 453-467 ◽  
Author(s):  
Jacob Freeman ◽  
David A Byers ◽  
Erick Robinson ◽  
Robert L Kelly

Abstract Over the last decade, archaeologists have turned to large radiocarbon (14C) data sets to infer prehistoric population size and change. An outstanding question concerns just how direct an estimate 14C dates are for human populations. In this paper we propose that 14C dates are a better estimate of energy consumption than an unmediated, proportional estimate of population size. We use a parametric model to describe the relationship between population size, economic complexity and energy consumption in human societies, and then parametrize the model using data from modern contexts. Our results suggest that energy consumption scales sub-linearly with population size, which means that the analysis of a large 14C time-series has the potential to misestimate rates of population change and absolute population size. Energy consumption is also an exponential function of economic complexity. Thus, the 14C record could change semi-independently of population as complexity grows or declines. Scaling models are an important tool for stimulating future research to tease apart the different effects of population and social complexity on energy consumption, and for explaining variation in the forms of 14C date time-series in different regions.
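The abstract does not give the model's functional form, but its qualitative claims (sub-linear in population, exponential in complexity) are consistent with a scaling relation of the form E = c·N^β·e^(γC) with β < 1. The sketch below uses that assumed form with hypothetical parameter values; it is an illustration of the scaling logic, not the paper's fitted model:

```python
import math

def energy_consumption(population, complexity, c=1.0, beta=0.75, gamma=0.3):
    """Illustrative scaling model: E = c * N**beta * exp(gamma * C).

    beta < 1 gives sub-linear scaling with population size N, and the
    exponential term lets E change with economic complexity C even at
    constant N. All parameter values here are hypothetical.
    """
    return c * population ** beta * math.exp(gamma * complexity)

# Doubling the population raises energy use by only 2**0.75 (about 1.68x),
# so a 14C proxy proportional to E would understate population growth.
ratio = energy_consumption(2000, 2.0) / energy_consumption(1000, 2.0)
```

Under this form, growth in complexity alone also raises E, which is why the abstract cautions that the 14C record can shift semi-independently of population.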

Author(s):  
Chao Chen ◽  
Diane J. Cook

The value of smart environments in understanding and monitoring human behavior has become increasingly obvious in the past few years. Using data collected from sensors in these environments, scientists have been able to recognize activities that residents perform and use the information to provide context-aware services and information. However, less attention has been paid to monitoring and analyzing energy usage in smart homes, despite the fact that electricity consumption in homes has grown dramatically. In this chapter, the authors demonstrate how energy consumption relates to human activity through verifying that energy consumption can be predicted based on the activity that is being performed. The authors then automatically identify novelties in human behavior by recognizing outliers in energy consumption generated by the residents in a smart environment. To validate these approaches, they use real energy data collected in their CASAS smart apartment testbed and analyze the results for two different data sets collected in this smart home.
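The novelty-detection step described above can be illustrated with a generic z-score check on per-activity energy readings; this is a minimal sketch of the idea, not the CASAS implementation:

```python
from statistics import mean, stdev

def energy_outliers(readings, threshold=2.5):
    """Flag energy readings far from the mean for their activity.

    `readings` is a list of (activity, kwh) pairs; a reading is flagged
    when its z-score within its activity group exceeds `threshold`.
    A generic sketch of outlier detection, not the chapter's method.
    """
    by_activity = {}
    for activity, kwh in readings:
        by_activity.setdefault(activity, []).append(kwh)
    outliers = []
    for activity, kwh in readings:
        values = by_activity[activity]
        if len(values) < 3:
            continue  # too few samples to estimate spread
        mu, sigma = mean(values), stdev(values)
        if sigma > 0 and abs(kwh - mu) / sigma > threshold:
            outliers.append((activity, kwh))
    return outliers
```

Grouping by recognized activity is the key move: a 5 kWh reading may be routine during laundry but anomalous during sleep.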


2017 ◽  
Vol 2017 ◽  
pp. 1-9 ◽  
Author(s):  
Hamid H. Hussien ◽  
Fathy H. Eissa ◽  
Khidir E. Awadalla

Malaria is the leading cause of illness and death in Sudan. The entire population is at risk of malaria epidemics, which place a very high burden on the government and the population. Reliable forecasts of the number of future cases would therefore be valuable, motivating the development of a system that can predict future incidence. The objective of this paper is to develop applicable and interpretable time series models and to determine which method best predicts future incidence levels. We used monthly incidence data collected from five states in Sudan with unstable malaria transmission. We tested four forecasting methods: (1) autoregressive integrated moving average (ARIMA); (2) exponential smoothing; (3) transformation model; and (4) moving average. The results showed that the transformation method performed significantly better than the other methods for Gadaref, Gazira, North Kordofan, and Northern, while the moving average model performed significantly better for Khartoum. Future research should combine a number of different and dissimilar time series methods to improve forecast accuracy, with the ultimate aim of developing a simple and useful model for producing reasonably reliable forecasts of malaria incidence in the study area.
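Two of the four methods compared are simple enough to sketch directly; each produces a one-step-ahead forecast from a monthly incidence series (the case counts below are hypothetical, not the Sudanese data):

```python
def moving_average_forecast(series, window=3):
    """One-step-ahead forecast: mean of the last `window` observations."""
    return sum(series[-window:]) / window

def exponential_smoothing_forecast(series, alpha=0.3):
    """Simple exponential smoothing: level_t = alpha*y_t + (1-alpha)*level_{t-1}.

    Larger alpha weights recent months more heavily.
    """
    level = series[0]
    for y in series[1:]:
        level = alpha * y + (1 - alpha) * level
    return level

cases = [120, 150, 170, 160, 180, 200]  # hypothetical monthly case counts
ma = moving_average_forecast(cases)     # (160 + 180 + 200) / 3 = 180.0
es = exponential_smoothing_forecast(cases)
```

ARIMA and transformation models add differencing, autoregressive terms, and variance-stabilizing transforms on top of this kind of smoothing, which is why they can outperform these baselines on unstable transmission series.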


Author(s):  
David West ◽  
Scott Dellana

The quality of treated wastewater has always been an important issue, but it becomes even more critical as human populations increase. Unfortunately, current ability to monitor and control effluent quality from a wastewater treatment process is primitive (Wen & Vassiliadis, 1998). Control is difficult because wastewater treatment consists of complex multivariate processes with nonlinear relationships and time varying dynamics. Consequently, there is a critical need for forecasting models that are effective in predicting wastewater effluent quality. Using data from an urban wastewater treatment plant, we tested several linear and nonlinear models, including ARIMA and neural networks. Our results provide evidence that a nonlinear neural network time series model achieves the most accurate forecast of wastewater effluent quality.
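The linear side of such a comparison can be sketched with a closed-form least-squares fit of an AR(1) model, the kind of baseline against which nonlinear neural network forecasts are judged; this is illustrative, not one of the models tested in the chapter:

```python
def fit_ar1(series):
    """Least-squares fit of the linear model y_t = a + b * y_{t-1}."""
    x, y = series[:-1], series[1:]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    a = my - b * mx
    return a, b

# Noise-free data generated by y_t = 2 + 0.5 * y_{t-1} is recovered exactly.
series = [10.0]
for _ in range(5):
    series.append(2.0 + 0.5 * series[-1])
a, b = fit_ar1(series)
```

A neural network replaces the fixed linear form a + b·y with a learned nonlinear function, which is what lets it track the time-varying dynamics of effluent quality that defeat linear models.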


Author(s):  
Oliver Duke-Williams ◽  
John Stillwell

One of the major problems challenging time series research based on stock and flow data is the inconsistency that occurs over time due to changes in variable definition, data classification and spatial boundary configuration. The census of population is a prime example of a source whose data are fraught with these problems, resulting in even the simplest comparison between the 2001 Census and its predecessor in 1991 being difficult. The first part of this chapter introduces the subject of inconsistencies between related data sets, with general reference to census interaction data. Various types of inconsistency are described. A number of approaches to dealing with inconsistency are then outlined, with examples of how these have been used in practice. The handling of journey to work data of persons who work from home is then used as an illustrative example of the problems posed by inconsistencies in base populations. Home-workers have been treated in different ways in successive UK censuses, a factor which can cause difficulties not only for researchers interested in such working practices, but also for those interested in other aspects of commuting. The latter set of problems is perhaps more pernicious, as users are less likely to be aware of the biases introduced into data sets that are being compared. In the second half of this chapter, we make use of a time series data set of migration interaction data that does have temporal consistency to explore how migration propensities and patterns in England and Wales have changed since 1999 and in particular since the year prior to the 2001 Census. The data used are those produced by the Office for National Statistics, based on comparisons of NHS patient records from one year to the next and adjusted using data on NHS patients re-registering in different health authorities.
The analysis of these data suggests that the massive exodus of individuals from major metropolitan areas across the country that has been identified in previous studies is continuing apace, particularly from London, whose net losses doubled in absolute terms between 1999 and 2004 before reducing marginally in 2005 and 2006. Whilst this pattern of counterurbanisation is evident for all-age flows, it conceals significant variations for certain age groups, not least those aged between 16 and 24, whose migration propensities are high and whose net redistribution is closely connected with the location of universities. The time series analyses are preceded by a comparison of patient register data with corresponding data from the 2001 Census. This suggests strong correlation between the indicators selected and strengthens the argument that patient register data in more recent years provide reliable evidence for researchers and policy makers on how propensities and patterns change over time.


2021 ◽  
Author(s):  
Katherine Eaton ◽  
Leo Featherstone ◽  
Sebastian Duchene ◽  
Ann Carmichael ◽  
Nükhet Varlık ◽  
...  

Abstract Plague has an enigmatic history as a zoonotic pathogen. This potentially devastating infectious disease appears unexpectedly in human populations and disappears just as suddenly. As a result, a long-standing line of inquiry has been to estimate when and where plague appeared in the past. However, there have been significant disparities between phylogenetic studies of the causative bacterium, Yersinia pestis, regarding the timing and geographic origins of its reemergence. Here, we curate and contextualize an updated phylogeny of Y. pestis using 601 genome sequences sampled globally. We perform a detailed Bayesian evaluation of temporal signal in subsets of these data and demonstrate that a Y. pestis-wide molecular clock model is unstable. To resolve this, we devised a new approach in which each Y. pestis population was assessed independently. This enabled us to recover significant temporal signal in five populations, including the ancient pandemic lineages, which we now estimate may have emerged decades, or even centuries, before a pandemic was historically documented from European sources. Despite this, we only obtain robust divergence dates from populations sampled over a period of at least 90 years, indicating that genetic evidence alone is insufficient for accurately reconstructing the timing and spread of short-term plague epidemics. Finally, we identify key historical data sets that can be used in future research, which will complement the strengths and mitigate the weaknesses of genomic data.


2017 ◽  
Vol 65 (11) ◽  
pp. 1483-1512 ◽  
Author(s):  
Jeff A. Bouffard ◽  
LaQuana N. Askew

Sex offender registration and notification (SORN) laws were implemented to protect communities by increasing public awareness, and these laws have expanded over time to include registration by more types of offenders. Despite widespread implementation, research provides only inconsistent support for the impact of SORN laws on the incidence of sexual offending. Using data from a large metropolitan area in Texas over the time period 1977 to 2012, and employing a number of time-series analyses, we examine the impact of the initial SORN implementation and two enhancements to the law. Results reveal no effect of SORN, or its subsequent modifications, on all sexual offenses or on any of several specific offense measures (e.g., crimes by repeat offenders). Implications for effective policy and future research are presented.
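The simplest form of such an interrupted time-series test compares the mean offense level before and after a policy change; the sketch below ignores trend, seasonality, and autocorrelation, which the study's fuller analyses would need to account for:

```python
from statistics import mean

def level_change(series, intervention_index):
    """Naive interrupted time-series estimate: the shift in mean level
    after the intervention at `intervention_index`.

    Ignores trend and autocorrelation, so it is a sketch of the idea
    only; a null result here would mean pre- and post-law offense
    levels are indistinguishable.
    """
    pre = series[:intervention_index]
    post = series[intervention_index:]
    return mean(post) - mean(pre)
```

A finding of "no effect," as reported above, corresponds to an estimated level change near zero once trends and other confounds are modeled.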


2002 ◽  
Vol 1 (3-4) ◽  
pp. 194-210 ◽  
Author(s):  
Matthew O Ward

Glyphs are graphical entities that convey one or more data values via attributes such as shape, size, color, and position. They have been widely used in the visualization of data and information, and are especially well suited for displaying complex, multivariate data sets. The placement or layout of glyphs on a display can communicate significant information regarding the data values themselves as well as relationships between data points, and a wide assortment of placement strategies have been developed to date. Methods range from simply using data dimensions as positional attributes to basing placement on implicit or explicit structure within the data set. This paper presents an overview of multivariate glyphs, a list of issues regarding the layout of glyphs, and a comprehensive taxonomy of placement strategies to assist the visualization designer in selecting the technique most suitable to his or her data and task. Examples, strengths, weaknesses, and design considerations are given for each category of technique. We conclude with some general guidelines for selecting a placement strategy, along with a brief description of some of our future research directions.
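The first placement strategy in the taxonomy, using data dimensions directly as positional attributes, can be sketched as follows; the helper and field names are hypothetical, not from the paper:

```python
def data_driven_glyphs(records, x_dim, y_dim, size_dim):
    """Map two data dimensions straight to x/y position and a third to
    glyph size, each normalized to [0, 1]."""
    def normalize(values):
        lo, hi = min(values), max(values)
        span = (hi - lo) or 1.0  # avoid division by zero on constant dims
        return [(v - lo) / span for v in values]
    xs = normalize([r[x_dim] for r in records])
    ys = normalize([r[y_dim] for r in records])
    sizes = normalize([r[size_dim] for r in records])
    return list(zip(xs, ys, sizes))

# Hypothetical multivariate records; remaining dimensions could be
# encoded as shape or color.
records = [{"gdp": 0, "pop": 10, "area": 1},
           {"gdp": 5, "pop": 20, "area": 3},
           {"gdp": 10, "pop": 30, "area": 5}]
glyphs = data_driven_glyphs(records, "gdp", "pop", "area")
```

Structure-based strategies in the taxonomy replace this direct mapping with layouts derived from implicit or explicit structure in the data, such as grids, trees, or projections.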


2013 ◽  
pp. 1675-1696
Author(s):  
Oliver Duke-Williams ◽  
John Stillwell

One of the major problems challenging time series research based on stock and flow data is the inconsistency that occurs over time due to changes in variable definition, data classification and spatial boundary configuration. The census of population is a prime example of a source whose data are fraught with these problems, resulting in even the simplest comparison between the 2001 Census and its predecessor in 1991 being difficult. The first part of this chapter introduces the subject of inconsistencies between related data sets, with general reference to census interaction data. Various types of inconsistency are described. A number of approaches to dealing with inconsistency are then outlined, with examples of how these have been used in practice. The handling of journey to work data of persons who work from home is then used as an illustrative example of the problems posed by inconsistencies in base populations. Home-workers have been treated in different ways in successive UK censuses, a factor which can cause difficulties not only for researchers interested in such working practices, but also for those interested in other aspects of commuting. The latter set of problems are perhaps more pernicious, as users are less likely to be aware of the biases introduced into data sets that are being compared. In the second half of this chapter, we make use of a time series data set of migration interaction data that does have temporal consistency to explore how migration propensities and patterns in England and Wales have changed since 1999 and in particular since the year prior to the 2001 Census. The data used are those that are produced by the Office of National Statistics based on comparisons of NHS patient records from one year to the next and adjusted using data on NHS patients re-registering in different health authorities. 
The analysis of these data suggests that the massive exodus of individuals from major metropolitan across the country that has been identified in previous studies is continuing apace, particularly from London whose net losses doubled in absolute terms between 1999 and 2004 before reducing marginally in 2005 and 2006. Whilst this pattern of counterurbanisation is evident for all-age flows, it conceals significant variations for certain age groups, not least those aged between 16 and 24, whose migration propensities are high and whose net redistribution is closely connected with the location of universities. The time series analyses are preceded by a comparison of patient register data with corresponding data from the 2001 Census. This suggests strong correlation between the indicators selected and strengthens the argument that patient register data in more recent years provide reliable evidence for researchers and policy makers on how propensities and patterns change over time.


1999 ◽  
Vol 9 (2) ◽  
pp. 167-174 ◽  
Author(s):  
Leslie Picoult-Newberg ◽  
Trey E. Ideker ◽  
Mark G. Pohl ◽  
Scott L. Taylor ◽  
Miriam A. Donaldson ◽  
...  

There is considerable interest in the discovery and characterization of single nucleotide polymorphisms (SNPs) to enable the analysis of the potential relationships between human genotype and phenotype. Here we present a strategy that permits the rapid discovery of SNPs from publicly available expressed sequence tag (EST) databases. From a set of ESTs derived from 19 different cDNA libraries, we assembled 300,000 distinct sequences and identified 850 mismatches from contiguous EST data sets (candidate SNP sites), without de novo sequencing. Through a polymerase-mediated, single-base, primer extension technique, Genetic Bit Analysis (GBA), we confirmed the presence of a subset of these candidate SNP sites and have estimated the allele frequencies in three human populations with different ethnic origins. Altogether, our approach provides a basis for rapid and efficient regional and genome-wide SNP discovery using data assembled from sequences from different libraries of cDNAs. [The SNPs identified in this study can be found in the National Center for Biotechnology Information (NCBI) SNP database under submitter handles ORCHID (SNPS-981210-A) and debnick (SNPS-981209-A and SNPS-981209-B).]
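The mismatch-detection idea can be sketched as a scan of aligned reads for columns in which two different bases are each well supported; this is a toy illustration of candidate-site calling, not the pipeline used in the study:

```python
def candidate_snp_sites(aligned_reads, min_minor_count=2):
    """Scan equal-length aligned sequences for polymorphic columns.

    A column is a candidate SNP site when at least two distinct bases
    each appear `min_minor_count` or more times, guarding against
    single sequencing errors. Gaps and ambiguity codes are skipped.
    """
    sites = []
    for i in range(len(aligned_reads[0])):
        counts = {}
        for read in aligned_reads:
            base = read[i]
            if base in "ACGT":
                counts[base] = counts.get(base, 0) + 1
        well_supported = [b for b, n in counts.items() if n >= min_minor_count]
        if len(well_supported) >= 2:
            sites.append(i)
    return sites
```

Requiring multiple reads per allele is what lets this style of in-silico screen distinguish true polymorphisms from random sequencing errors before wet-lab confirmation with a technique like GBA.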


2019 ◽  
Vol 18 (2) ◽  
pp. es2 ◽  
Author(s):  
Melissa K. Kjelvik ◽  
Elizabeth H. Schultheis

Data are becoming increasingly important in science and society, and thus data literacy is a vital asset to students as they prepare for careers in and outside science, technology, engineering, and mathematics and go on to lead productive lives. In this paper, we discuss why the strongest learning experiences surrounding data literacy may arise when students are given opportunities to work with authentic data from scientific research. First, we explore the overlap between the fields of quantitative reasoning, data science, and data literacy, specifically focusing on how data literacy results from practicing quantitative reasoning and data science in the context of authentic data. Next, we identify and describe features that influence the complexity of authentic data sets (selection, curation, scope, size, and messiness) and implications for data-literacy instruction. Finally, we discuss areas for future research with the aim of identifying the impact that authentic data may have on student learning. These include defining desired learning outcomes surrounding data use in the classroom and identification of teaching best practices when using data in the classroom to develop students’ data-literacy abilities.

