Calibration of radiocarbon dates: tables based on the consensus data of the Workshop on Calibrating the Radiocarbon Time Scale

Radiocarbon ◽  
1982 ◽  
Vol 24 (2) ◽  
pp. 103-150 ◽  
Author(s):  
Jeffrey Klein ◽  
J C Lerman ◽  
P E Damon ◽  
E K Ralph

A calibration is presented for conventional radiocarbon ages ranging from 10 to 7240 years BP and thus covering a calendric range of 8000 years from 6050 BC to AD 1950. Distinctive features of this calibration include 1) an improved data set consisting of 1154 radiocarbon measurements on samples of known age, 2) an extended range over which radiocarbon ages may be calibrated (an additional 530 years), 3) separate 95% confidence intervals (in tabular form) for six different radiocarbon uncertainties (20, 50, 100, 150, 200, 300 years), and 4) an estimate of the non-Poisson errors related to radiocarbon determinations, including an estimate of the systematic errors between laboratories.
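
Calibration tables of this kind are applied by looking up or interpolating the calendar age that corresponds to a conventional age and its uncertainty class. The following is a minimal sketch of that lookup, assuming a hypothetical two-column calibration curve; the anchor values are placeholders, not the published consensus data.

```python
# Minimal sketch of how a calibration table of this kind might be applied.
# The curve values below are hypothetical placeholders, NOT the published
# tables, which additionally give 95% confidence intervals per uncertainty class.
import bisect

# (conventional 14C age BP, calendar age cal BP) - hypothetical anchor points
CURVE = [(1000, 930), (2000, 1950), (3000, 3210), (4000, 4480)]

def calibrate(c14_age_bp):
    """Linearly interpolate a calendar age from a conventional 14C age."""
    xs = [c14 for c14, _ in CURVE]
    ys = [cal for _, cal in CURVE]
    if not xs[0] <= c14_age_bp <= xs[-1]:
        raise ValueError("age outside tabulated range")
    i = bisect.bisect_left(xs, c14_age_bp)
    if xs[i] == c14_age_bp:
        return ys[i]
    x0, x1, y0, y1 = xs[i - 1], xs[i], ys[i - 1], ys[i]
    return y0 + (y1 - y0) * (c14_age_bp - x0) / (x1 - x0)

print(calibrate(2500))  # interpolated calendar age (cal BP)
```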

2018 ◽  
pp. 30-37
Author(s):  
A. P. Aleshkin ◽  
A. A. Makarov ◽  
Yu. F. Matasov

The article deals with the behavior of reduced scalar estimates in the presence of systematic errors in the observational data. The proposed procedure uses a different method of forming the reduction coefficient, and a quasi-optimal variant of forming the compression parameter is considered. Simulation results for different conditions of application of the proposed algorithms are presented. Currently, one way to improve the accuracy of time-scale formation in frequency-time support of users is to average the readings of several generators. As the theory of statistical estimation shows, this approach is effective against the random component of the error of the estimated process. For frequency generators the random error can indeed be compensated effectively over a long span of observations, but the systematic component, the frequency drift, remains a serious problem that averaging can eliminate only under certain conditions. The article therefore proposes a version of the reduced estimate that, as shown, counters the departure of the time scale by introducing a shift into the compression step defined by the reduction procedure. The conditions under which the achieved positive effect is of practical significance are considered.
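
The general mechanism (averaging several generator readings after removing an estimated drift and applying a compression, or reduction, coefficient) can be sketched as follows. This is an illustration only, not the article's algorithm; the drift model, the coefficient value, and the function signature are assumptions.

```python
# Illustrative sketch only: the paper's exact reduction procedure is not
# reproduced here. This shows the general idea of combining several clock
# readings with a compression ("reduction") coefficient and a drift correction.
import numpy as np

def reduced_estimate(offsets, drift_rates, elapsed, k=0.8):
    """
    offsets     : measured time-scale offsets of each generator (s)
    drift_rates : assumed frequency-drift contributions (s per s), hypothetical inputs
    elapsed     : observation interval (s)
    k           : compression (reduction) coefficient, 0 < k <= 1
    """
    offsets = np.asarray(offsets, dtype=float)
    # remove the estimated systematic drift before averaging
    corrected = offsets - np.asarray(drift_rates) * elapsed
    # shrink the ensemble mean by the reduction coefficient
    return k * corrected.mean()

print(reduced_estimate([2.1e-6, 1.8e-6, 2.4e-6],
                       [1e-12, 2e-12, 1.5e-12],
                       elapsed=86400.0))
```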


2020 ◽  
Vol 499 (4) ◽  
pp. 5641-5652
Author(s):  
Georgios Vernardos ◽  
Grigorios Tsagkatakis ◽  
Yannis Pantazis

Gravitational lensing is a powerful tool for constraining substructure in the mass distribution of galaxies, be it from the presence of dark matter sub-haloes or due to physical mechanisms affecting the baryons throughout galaxy evolution. Such substructure is hard to model and is either ignored by traditional, smooth modelling approaches or treated as well-localized massive perturbers. In this work, we propose a deep learning approach to quantify the statistical properties of such perturbations directly from images, where only the extended lensed source features within a mask are considered, without the need for any lens modelling. Our training data consist of mock lensed images assuming perturbing Gaussian Random Fields permeating the smooth overall lens potential and, for the first time, using images of real galaxies as the lensed source. We employ a novel deep neural network that accepts as input arbitrary uncertainty intervals associated with the training data set labels, provides probability distributions as output, and adopts a composite loss function. The method succeeds not only in accurately estimating the actual parameter values, but also reduces the predicted confidence intervals by 10 per cent in an unsupervised manner, i.e. without having access to the actual ground truth values. Our results are invariant to the inherent degeneracy between mass perturbations in the lens and complex brightness profiles for the source. Hence, we can robustly quantify the smoothness of the mass density of thousands of lenses, including confidence intervals, and provide a consistent ranking for follow-up science.
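
One common way to make a regressor output distributions and fold label uncertainties into its loss is a mean-and-variance head trained with a Gaussian negative log-likelihood. The sketch below illustrates that idea only; it is not the authors' architecture or their composite loss, and the layer sizes and the label-variance treatment are assumptions.

```python
# Minimal sketch, NOT the authors' network: one common way to let a regressor
# output a probability distribution (mean and variance per parameter) and to
# fold label uncertainties into a Gaussian negative-log-likelihood term.
import torch
import torch.nn as nn

class ProbabilisticHead(nn.Module):
    def __init__(self, n_features=128, n_params=2):
        super().__init__()
        self.mean = nn.Linear(n_features, n_params)
        self.log_var = nn.Linear(n_features, n_params)

    def forward(self, features):
        # predicted mean and (positive) variance for each target parameter
        return self.mean(features), self.log_var(features).exp()

def loss_fn(pred_mean, pred_var, label, label_var):
    # total variance = predicted variance + variance implied by the label's
    # own uncertainty interval (hypothetical treatment, for illustration)
    nll = nn.GaussianNLLLoss()
    return nll(pred_mean, label, pred_var + label_var)

head = ProbabilisticHead()
feats = torch.randn(4, 128)                       # stand-in for image features
mean, var = head(feats)
print(loss_fn(mean, var, torch.zeros(4, 2), torch.full((4, 2), 0.01)))
```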


2000 ◽  
Vol 66 ◽  
pp. 257-295 ◽  
Author(s):  
Trevor Kirk ◽  
George Williams ◽  
A. Caseldine ◽  
J. Crowther ◽  
I. Darke ◽  
...  

Excavations at the Glandy Cross monumental complex during 1991 and 1992 formed part of an integrated programme of evaluation, rescue, and research by Dyfed Archaeological Trust (DAT). Enclosures, pit circles, standing stones, and cairns were excavated and their environs systematically surveyed. Radiocarbon dates show the monumental complex to have been constructed between c. 2190–1530 cal BC. However, the earliest activity at the site may date to c. 4470–4230 cal BC. A defended enclosure was constructed on the peripheries of the complex c. 830–510 cal BC. The 1991–92 excavation results are presented along with a summary of survey, salvage, and research spanning the period 1981 to 1992. This new data set is tentatively interpreted in terms of historical process and the social practice of monumental construction. A brief commentary on heritage management at Glandy Cross is also presented. A note on authorship: one of the authors (George Williams) directed the Glandy Cross excavations during 1991–92 and prepared an initial draft of the project report. Following his retirement from DAT a project editor (Trevor Kirk) was commissioned by Cadw: Welsh Historic Monuments to guide the project towards publication. This paper was largely penned by the project editor, though the excavation and survey data were produced by George Williams and his fieldwork team. The excavation and survey archives are held at the offices of DAT.


Radiocarbon ◽  
1993 ◽  
Vol 35 (1) ◽  
pp. 25-33 ◽  
Author(s):  
Gordon W. Pearson ◽  
Minze Stuiver

The sole purpose of this paper is to present a previously published 14C data set to which minor corrections have been applied. All basic information previously given is still applicable (Pearson & Stuiver 1986). The corrections are needed because 14C count-rate influences (radon decay in Seattle, and a re-evaluation in Belfast of the corrections applied for previously unrecognized efficiency variation with time) had to be accounted for in more detail. Information on the radon correction is given in Stuiver and Becker (1993). The Belfast corrections were necessary because the original correction for efficiency variations with time was calculated using two standards that recent observations have shown to be suspect, which overweighted the correction. A re-evaluation (Pearson & Qua 1993) now shows the correction to be almost insignificant, and the corrected dates (using the new correction) became older by about 16 years.


2021 ◽  
Vol 28 ◽  
pp. 146-150
Author(s):  
L. A. Atramentova

Using the data obtained in a cytogenetic study as an example, we consider the typical errors made when performing statistical analysis. Widespread but flawed statistical analysis inevitably produces biased results and increases the likelihood of incorrect scientific conclusions. Errors occur when the study design and the structure of the analyzed data are not taken into account. The article shows how numerical imbalance in the data set biases the result and, using a data set as an example, explains how to balance the complex. It shows the advantage of presenting sample estimates with confidence intervals rather than standard errors. Attention is drawn to the need to take the size of the analyzed proportions into account when choosing a statistical method, and it is shown how the same data set can be analyzed in different ways depending on the purpose of the study. The algorithm of correct statistical analysis and the tabular presentation of the results are described. Keywords: data structure, numerically unbalanced complex, confidence interval.
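
As an illustration of the point about confidence intervals versus standard errors for proportions, here is a minimal sketch using a Wilson interval; the counts are invented and the choice of interval is an assumption, not taken from the article.

```python
# Sketch of the point made above: for a sample proportion it is more
# informative to report a confidence interval than "p +/- standard error".
# Wilson interval shown; the sample numbers are invented for illustration.
from math import sqrt

def wilson_ci(k, n, z=1.96):
    """95% Wilson confidence interval for a proportion k/n."""
    p = k / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return centre - half, centre + half

k, n = 12, 150                       # e.g. 12 aberrant cells out of 150 scored
p, se = k / n, sqrt((k / n) * (1 - k / n) / n)
print(f"p = {p:.3f} +/- {se:.3f} (SE)")
print("95% CI: %.3f - %.3f" % wilson_ci(k, n))
```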


2018 ◽  
Vol 233 (9-10) ◽  
pp. 689-694 ◽  
Author(s):  
Julian Henn

For the evaluation of data sets from dynamic structure crystallography, it may be helpful to predict expected $R = I_{ON}/I_{OFF}$-based agreement factors from the observed intensities and their corresponding standard uncertainties with laser ON and with laser OFF. The predicted R factors serve three purposes: (i) they indicate which data sets are suitable and promising for further evaluation, (ii) they give a reference R value for the case of absence of systematic errors in the data, and (iii) they can be compared to the corresponding predicted $F^2$-based R factors. For point (ii) it is essential that the standard uncertainties from the experiment are adequate, i.e. they should adequately describe the noise in the observed intensities and must not be systematically over- or underestimated for a part of the data or the whole data set. It may be this requirement that is currently the largest obstacle to further progress in the field of dynamic structure crystallography.
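
A noise-only reference of this kind can be estimated directly from the standard uncertainties. The sketch below is illustrative only and is not the author's estimator: it assumes a conventional agreement factor of the form $R = \sum|I_{ON} - I_{OFF}| / \sum I_{OFF}$ and Gaussian noise, for which $E|I_{ON} - I_{OFF}| = \sqrt{2/\pi}\,\sqrt{\sigma_{ON}^2 + \sigma_{OFF}^2}$ when there is no photoinduced change.

```python
# Illustrative sketch (not the author's exact estimator): if laser-ON and
# laser-OFF intensities differ only by Gaussian noise with the reported
# standard uncertainties, the expected |I_ON - I_OFF| is
# sqrt(2/pi) * sqrt(sig_ON**2 + sig_OFF**2), giving a noise-only reference
# for an R factor of the form R = sum|I_ON - I_OFF| / sum I_OFF.
import numpy as np

def predicted_r(i_off, sig_on, sig_off):
    expected_abs_diff = np.sqrt(2 / np.pi) * np.sqrt(sig_on**2 + sig_off**2)
    return expected_abs_diff.sum() / i_off.sum()

rng = np.random.default_rng(0)
i_off = rng.uniform(100, 1000, size=500)   # invented intensities
sig = np.sqrt(i_off)                       # Poisson-like standard uncertainties
print(predicted_r(i_off, sig, sig))
```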


Radiocarbon ◽  
1989 ◽  
Vol 31 (2) ◽  
pp. iii-iii
Author(s):  
A J T Jull ◽  
Hans E Suess

Timothy Weiler Linick died on June 4th, 1989. He was a dedicated researcher, and an important part of the NSF Accelerator Facility for Radioisotope Analysis at the University of Arizona. He will be remembered for his care and attention to details, especially in the calculation and reporting of radiocarbon dates. He made important contributions to the fields of oceanography and tree-ring calibration of the 14C time scale.


Radiocarbon ◽  
1973 ◽  
Vol 15 (3) ◽  
pp. 592-598 ◽  
Author(s):  
Charles Bovington ◽  
Azizeh Mahd Avi ◽  
Roghiyeh Masoumi

Ages reported in this date list are calculated using the Libby half-life of 5568 ± 30 years with 1950 as the standard year of reference; results are quoted in years b.p. and on the a.d./b.c. time scale.
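
Written out, that convention gives a conventional age of $-(5568/\ln 2)\,\ln(A_S/A_{mod}) \approx -8033\,\ln(A_S/A_{mod})$ years b.p., counted back from AD 1950. The sketch below is a minimal illustration of this arithmetic; the activity ratio is invented.

```python
# Sketch of the convention stated above: the Libby half-life of 5568 yr gives
# a mean life of 5568 / ln(2) ~ 8033 yr, and ages are counted back from AD 1950.
# The sample activity ratio below is invented.
from math import log

LIBBY_MEAN_LIFE = 5568 / log(2)        # ~8033 years

def conventional_age_bp(activity_ratio):
    """activity_ratio = normalized sample activity / modern standard activity."""
    return -LIBBY_MEAN_LIFE * log(activity_ratio)

age_bp = conventional_age_bp(0.75)     # hypothetical measurement
year = 1950 - age_bp                   # positive = a.d., negative = b.c. (uncalibrated)
print(f"{age_bp:.0f} b.p., i.e. about {'a.d.' if year > 0 else 'b.c.'} {abs(year):.0f}")
```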


2010 ◽  
Vol 31 (1-2) ◽  
pp. 151-159 ◽  
Author(s):  
Gordon J. Ogden

Although nearly 50 years have passed since P.B. Sears introduced pollen analysis to North America, it remains an occult art. Dramatic improvements in sampling and analytic techniques continue to be limited by intractable problems of differential production, dispersal, ballistics, sedimentation, and preservation. It is a basic tenet of pollen stratigraphy that the data set, consisting primarily of microfossils preserved in sediments, is better than anything we have yet been able to do with it. Basic agreement between late- and postglacial pollen records has been confirmed wherever the method has been applied. Quantitative sampling techniques, sample preparation, and analytic procedures, together with multiple radiocarbon dates, permit calculation of sedimentation rates and absolute pollen influx. Of approximately 300 sediment cores from northeastern North America, fewer than 30 have more than 3 radiocarbon determinations from which least squares power curve regressions can be reliably calculated for determining sedimentation rates. Analogy with modern environments represented by surface pollen spectra is limited by an insufficient number of samples of uniform quality to characterize a vegetational mosaic covering 40 degrees of latitude (40-80°N) and longitude (60-100°W). The present surface pollen data bank includes about 700 samples, unevenly spaced and of uneven quality, permitting a grid resolution of no better than 10,000 km².
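
The sedimentation-rate and influx calculation mentioned above can be sketched as a least-squares power-curve fit to an age-depth series; the core depths, ages, and pollen concentration below are invented for illustration.

```python
# Sketch of the calculation described above, with invented core data: fit a
# power curve age = a * depth**b by least squares in log-log space to the
# radiocarbon dates, then derive a sedimentation rate and a pollen influx.
import numpy as np

depth_cm = np.array([50.0, 150.0, 300.0, 480.0])        # hypothetical sample depths
age_bp = np.array([1200.0, 3400.0, 6800.0, 10500.0])    # hypothetical 14C ages

# least-squares fit of log(age) = log(a) + b * log(depth)
b, log_a = np.polyfit(np.log(depth_cm), np.log(age_bp), 1)
a = np.exp(log_a)

def sedimentation_rate(depth):
    """cm per year: inverse slope of the fitted age-depth curve at a given depth."""
    dage_ddepth = a * b * depth**(b - 1)
    return 1.0 / dage_ddepth

pollen_concentration = 50000.0                          # grains per cm^3 (invented)
influx = pollen_concentration * sedimentation_rate(200.0)
print(f"influx ~ {influx:.0f} grains per cm^2 per year at 200 cm")
```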


Antiquity ◽  
1990 ◽  
Vol 64 (245) ◽  
pp. 836-841 ◽  
Author(s):  
Paul Mellars

Over the past 10-20 years archaeologists have become familiar with the problems of potential ‘aberrations’ in the radiocarbon time-scale, arising from factors such as the varying rates of production of 14C in the upper atmosphere, or from the delayed cycling of ‘fossil’ carbon in the overall carbon reservoir. In some cases these aberrations can lead to dramatic ‘wiggles’ in the radiocarbon calibration curves, while in other cases (as, for example, during the Iron Age, around 700 BC) they can lead to substantial ‘plateaux’ during which measured radiocarbon dates show no detectable change over periods of several centuries (Pearson & Stuiver 1986; Stuiver & Pearson 1986).

