Present Status of Radiocarbon Calibration and Comparison Records Based on Polynesian Corals and Iberian Margin Sediments

Radiocarbon ◽  
2004 ◽  
Vol 46 (3) ◽  
pp. 1189-1202 ◽  
Author(s):  
Edouard Bard ◽  
Guillemette Ménot-Combes ◽  
Frauke Rostek

In this paper, we present updated information and results for the radiocarbon records based on Polynesian corals and on Iberian Margin planktonic foraminifera. The latter record was first published by Bard et al. (2004a,b), with the subsequent addition of some data by Shackleton et al. (2004). These data sets are compared with the IntCal98 record (Stuiver et al. 1998) and with data sets based on other archives, such as varves of Lake Suigetsu (Kitagawa and van der Plicht 1998, 2000), speleothems from the Bahamas (Beck et al. 2001), and Cariaco sediments (Hughen et al. 2004). Up to 26,000 cal BP, the Iberian Margin data agree, within errors, with the other records. By contrast, in the interval between 33,000 and 41,000 cal BP, the Iberian Margin record runs between the Lake Suigetsu and Bahamian speleothem data sets, but agrees with the few IntCal98 coral data and with the Cariaco record.
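As a rough illustration of how such records can be compared, the following Python sketch (with synthetic numbers; the function and its inputs are hypothetical, not drawn from the paper) interpolates two radiocarbon data sets onto a common calendar-age grid and flags where they agree within combined 1-sigma errors.

```python
import numpy as np

def compare_records(cal_a, c14_a, err_a, cal_b, c14_b, err_b, grid_step=200):
    """Interpolate two radiocarbon records onto a common calendar-age grid
    and report where they agree within combined 1-sigma errors."""
    lo = max(cal_a.min(), cal_b.min())
    hi = min(cal_a.max(), cal_b.max())
    grid = np.arange(lo, hi, grid_step)          # common cal BP axis
    a = np.interp(grid, cal_a, c14_a)            # 14C ages of record A
    b = np.interp(grid, cal_b, c14_b)            # 14C ages of record B
    sigma = np.sqrt(np.interp(grid, cal_a, err_a) ** 2 +
                    np.interp(grid, cal_b, err_b) ** 2)
    agree = np.abs(a - b) <= sigma               # within combined 1-sigma
    return grid, a - b, agree

# Example with purely synthetic numbers (not real data):
cal = np.linspace(20000, 26000, 30)
rec_a = 0.85 * cal + np.random.normal(0, 80, cal.size)
rec_b = 0.85 * cal + np.random.normal(0, 120, cal.size)
grid, diff, agree = compare_records(cal, rec_a, np.full(cal.size, 100.0),
                                    cal, rec_b, np.full(cal.size, 150.0))
print(f"fraction of grid points in agreement: {agree.mean():.2f}")
```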

2004 ◽  
Vol 61 (2) ◽  
pp. 204-214 ◽  
Author(s):  
Edouard Bard ◽  
Frauke Rostek ◽  
Guillemette Ménot-Combes

We present a new set of 14C ages obtained by accelerator mass spectrometry (AMS) on planktonic foraminifera from a deep-sea core collected off the Iberian Margin (MD952042). This site, at 37°N, is distant from the high-latitude zones where the 14C reservoir age is large and variable. Many independent proxies (alkenones, magnetic susceptibility, ice-rafted debris, foraminiferal stable isotopes, and abundances of foraminifera, pollen, and dinoflagellates) show abrupt changes correlative with the Dansgaard-Oeschger and Heinrich events of the last glacial period. The good stratigraphic agreement of all proxies, from the fine to the coarse size fractions, indicates that the foraminiferal 14C ages are representative of the different sediment fractions. To obtain reliable 14C ages of foraminifera beyond 20,000 14C yr B.P., we leached the shells prior to carbonate hydrolysis and subsequent analysis. For a calendar age scale, we matched the Iberian Margin profile with that of the Greenland Summit δ18O record. Both are proxies for temperature, which in models varies synchronously in the two areas. The match creates no spurious jumps in sedimentation rate and requires only a limited number of tie points. Except for ages older than 40,000 14C yr B.P., Greenland's GISP2 and GRIP records yield similar calendar age scales. The 14C and imported calendar ages of the Iberian Margin record are then compared to data from lacustrine annual varves and from corals and speleothems dated by U-Th that were previously used to extend the calibration beyond 20,000 14C yr B.P. The new record follows a smooth pattern between 23,000 and 50,000 cal yr B.P. We find good agreement with the previous data sets between 23,000 and 31,000 cal yr B.P. In the interval between 33,000 and 41,000 cal yr B.P., for which previous records disagree by up to 5000 cal yr, the Iberian Margin record closely follows the polynomial curve previously defined by interpolation of the coral ages and runs between the Lake Suigetsu and Bahamian speleothem data sets.
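A minimal sketch of the tie-point matching described above, assuming hypothetical depths and calendar ages: linear interpolation between a small number of depth/age tie points transfers the Greenland calendar scale to the core while keeping the implied sedimentation rate smooth.

```python
import numpy as np

# Hypothetical tie points: core depth (cm) matched to Greenland calendar ages (cal yr BP)
tie_depth = np.array([120.0, 480.0, 910.0, 1375.0])
tie_age = np.array([14600.0, 23300.0, 38200.0, 50000.0])

def depth_to_calendar(depth_cm):
    """Linear interpolation between tie points; a limited number of ties
    keeps the implied sedimentation rate free of spurious jumps."""
    return np.interp(depth_cm, tie_depth, tie_age)

sample_depths = np.array([200.0, 650.0, 1100.0])
print(depth_to_calendar(sample_depths))

# Implied sedimentation rate (cm/kyr) between consecutive tie points
sed_rate = np.diff(tie_depth) / (np.diff(tie_age) / 1000.0)
print(sed_rate)
```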


2021 ◽  
Vol 13 (1) ◽  
Author(s):  
Manfred Berres ◽  
Andreas U. Monsch ◽  
René Spiegel

Background: The Placebo Group Simulation Approach (PGSA) aims at partially replacing randomized placebo-controlled trials (RPCTs), making use of data from historical control groups in order to decrease the needed number of study participants exposed to lengthy placebo treatment. PGSA algorithms to create virtual control groups were originally derived from mild cognitive impairment (MCI) data of the Alzheimer's Disease Neuroimaging Initiative (ADNI) database. To produce more generalizable algorithms, we aimed to compile five different MCI databases in a heuristic manner to create a "standard control algorithm" for use in future clinical trials.

Methods: We compared data from two North American cohort studies (n=395 and 4328, respectively), one company-sponsored international clinical drug trial (n=831), and two convenience patient samples, one from Germany (n=726) and one from Switzerland (n=1558).

Results: Despite differences between the five MCI samples regarding inclusion and exclusion criteria, their baseline demographic and cognitive performance data varied less than expected. However, the five samples differed markedly with regard to their subsequent cognitive performance and clinical development: (1) MCI patients from the drug trial did not deteriorate on verbal fluency over 3 years, whereas patients in the other samples did; (2) relatively few patients from the drug trial progressed from MCI to dementia (about 10% after 4 years), in contrast to the other four samples, with progression rates over 30%.

Conclusion: Conventional MCI criteria were insufficient to allow for the creation of well-defined and internationally comparable samples of MCI patients. More recently published criteria for MCI or "MCI due to AD" are unlikely to remedy this situation. The Alzheimer scientific community needs to agree on a standard set of neuropsychological tests, including appropriate selection criteria, to make MCI a scientifically more useful concept. Patient data from different sources would then be comparable, and the scientific merits of algorithm-based study designs such as the PGSA could be properly assessed.
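The published PGSA algorithms are not reproduced here; the sketch below is only a generic, hypothetical illustration of the underlying idea of predicting a virtual control group's cognitive trajectory from historical baseline data (column names, model, and numbers are all invented).

```python
# Generic illustration only: predict 24-month cognitive change for a virtual
# control group from historical baseline covariates. This is NOT the published
# PGSA algorithm; column names and the model are hypothetical.
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
historical = pd.DataFrame({
    "age": rng.normal(72, 6, 300),
    "mmse_baseline": rng.normal(27, 1.5, 300),
    "adas_baseline": rng.normal(11, 4, 300),
})
# Simulated observed 24-month ADAS-cog change in the historical cohort
historical["adas_change_24m"] = (
    0.15 * (historical["age"] - 70)
    - 0.4 * (historical["mmse_baseline"] - 27)
    + rng.normal(2.0, 2.5, 300)
)

model = LinearRegression().fit(
    historical[["age", "mmse_baseline", "adas_baseline"]],
    historical["adas_change_24m"],
)

# Applied to the baseline data of a new trial's treated arm, the model yields
# the expected (virtual placebo) trajectory for comparison.
new_trial_baseline = historical[["age", "mmse_baseline", "adas_baseline"]].head(5)
print(model.predict(new_trial_baseline))
```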


2021 ◽  
Vol 8 (1) ◽  
Author(s):  
Hossein Ahmadvand ◽  
Fouzhan Foroutan ◽  
Mahmood Fathy

Data variety is one of the most important features of Big Data. It results from aggregating data from multiple sources and from the uneven distribution of data, and it causes high variation in the consumption of processing resources such as CPU. This issue has been overlooked in previous works. To address it, in the present work we use Dynamic Voltage and Frequency Scaling (DVFS) to reduce the energy consumption of computation, considering two types of deadlines as constraints. Before applying the DVFS technique to the compute nodes, we estimate the processing time and the frequency needed to meet the deadline. In the evaluation phase, we used a set of data sets and applications. The experimental results show that our proposed approach outperforms the other scenarios in processing real data sets: DV-DVFS achieves up to 15% improvement in energy consumption.
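The paper's DV-DVFS details are not reproduced here; the sketch below illustrates the general deadline-driven idea with hypothetical numbers: estimate the remaining work, derive the minimum frequency that still meets the deadline, and select the lowest available DVFS step at or above it.

```python
# Hypothetical sketch of deadline-aware DVFS: estimate the minimum CPU frequency
# that still meets the deadline and select the lowest available P-state above it.
AVAILABLE_FREQS_GHZ = [1.2, 1.6, 2.0, 2.4, 2.8]  # assumed DVFS steps

def pick_frequency(estimated_cycles, deadline_s, available=AVAILABLE_FREQS_GHZ):
    """estimated_cycles: predicted remaining work for this data partition.
    deadline_s: time left until the (soft or hard) deadline."""
    required_hz = estimated_cycles / deadline_s
    for f in sorted(available):
        if f * 1e9 >= required_hz:
            return f                      # lowest step that still meets the deadline
    return max(available)                 # deadline infeasible: run at maximum speed

# Example: a partition estimated at 3.6e12 cycles with 30 minutes remaining
print(pick_frequency(estimated_cycles=3.6e12, deadline_s=1800))  # -> 2.0
```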


Mathematics ◽  
2021 ◽  
Vol 9 (16) ◽  
pp. 1850
Author(s):  
Rashad A. R. Bantan ◽  
Farrukh Jamal ◽  
Christophe Chesneau ◽  
Mohammed Elgarhy

Unit distributions are commonly used in probability and statistics to describe useful quantities with values between 0 and 1, such as proportions, probabilities, and percentages. Some unit distributions are defined in a natural analytical manner, while others are derived through the transformation of an existing distribution defined on a larger domain. In this article, we introduce the unit gamma/Gompertz distribution, founded on the inverse-exponential scheme and the gamma/Gompertz distribution. The gamma/Gompertz distribution is known to be a very flexible three-parameter lifetime distribution, and we aim to transpose this flexibility to the unit interval. First, we check this aspect through the analytical behavior of the primary functions. It is shown that the probability density function can be increasing, decreasing, "increasing-decreasing" and "decreasing-increasing", with flexible asymmetric properties. On the other hand, the hazard rate function can have monotonically increasing, decreasing, or constant shapes. We complete the theoretical part with some propositions on stochastic ordering, moments, quantiles, and the reliability coefficient. Practically, to estimate the model parameters from unit data, the maximum likelihood method is used. We present some simulation results to evaluate this method. Two applications using real data sets, one on trade shares and the other on flood levels, demonstrate the importance of the new model when compared to other unit models.
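As a hedged illustration (assuming the inverse-exponential scheme means X = exp(-Y) with Y following a gamma/Gompertz(b, s, beta) law; the article's exact parametrization may differ), the sketch below samples from such a unit distribution and recovers the parameters by numerical maximum likelihood.

```python
# Sketch under an assumption: the "inverse-exponential scheme" is taken here to
# mean X = exp(-Y) with Y ~ gamma/Gompertz(b, s, beta); the exact parametrization
# used in the article may differ.
import numpy as np
from scipy.optimize import minimize

def ugg_pdf(x, b, s, beta):
    """Density on (0, 1) obtained by transforming the gamma/Gompertz pdf
    f_Y(y) = b*s*exp(b*y)*beta**s / (beta - 1 + exp(b*y))**(s+1) with y = -ln(x)."""
    return b * s * x ** (-b - 1) * beta ** s / (beta - 1 + x ** (-b)) ** (s + 1)

def ugg_sample(n, b, s, beta, rng):
    """Inverse-CDF sampling of Y, then X = exp(-Y)."""
    u = rng.uniform(size=n)
    y = np.log(beta * (1 - u) ** (-1 / s) - beta + 1) / b
    return np.exp(-y)

def fit_mle(x):
    # Optimize over log-parameters to keep b, s, beta strictly positive
    nll = lambda p: -np.sum(np.log(ugg_pdf(x, *np.exp(p))))
    res = minimize(nll, x0=np.log([1.0, 1.0, 1.0]), method="Nelder-Mead")
    return np.exp(res.x)

rng = np.random.default_rng(1)
data = ugg_sample(2000, b=2.0, s=0.8, beta=1.5, rng=rng)
print(fit_mle(data))   # estimates should be close to (2.0, 0.8, 1.5)
```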


2021 ◽  
Author(s):  
Marion Peral ◽  
Thibaut Caley ◽  
Bruno Malaizé ◽  
Erin McClymont ◽  
Thomas Extier ◽  
...  

The Mid-Pleistocene Transition (MPT) took place between roughly 1,200 and 800 ka (the exact boundaries are still debated). During this transition, the Earth's orbitally paced ice age cycles intensified, lengthened from ~40 ky to ~100 ky, and became distinctly asymmetrical, while Earth's orbital variations remained unchanged. Although orbital variations constitute the first-order forcing on glacial-interglacial oscillations of the late Quaternary, they cannot alone explain the shifts in climatic periodicity and amplitude observed during the MPT. To explain the MPT, the long-term evolution of internal mechanisms and feedbacks has been called upon, in relation to the global cooling trend initiated during the Cenozoic, the expansion of the Antarctic and Greenland ice sheets, and/or the long-term decline in greenhouse gases (particularly CO2). A key point is therefore to accurately reconstruct oceanic temperatures in order to decipher the processes driving climate variations.

In the present work, we studied the marine sediment core MD96-2048, taken from the south Indian Ocean (26°10.482' S, 34°01.148' E) in the region of the Agulhas Current. We compared five paleothermometers: alkenones, TEX86, foraminiferal transfer function, Mg/Ca, and clumped isotopes. Among these approaches, carbonate clumped-isotope thermometry (Δ47) depends only on crystallization temperature, and the relationship between Δ47 and planktonic foraminifer calcification temperature is well defined. Mg/Ca, in contrast, is not only controlled by temperature but is also affected by salinity and pH. The classical δ18O of planktic foraminifera depends on SST and δ18Osw, which is regionally correlated with salinity in the present-day ocean. Assuming that the present-day δ18Osw-salinity relation was the same during the MPT, we are able to separate changes in δ18Osw from temperature effects and reconstruct past salinity. Combining δ18O, Mg/Ca, and Δ47 on planktonic foraminifera therefore allows, in theory, the reconstruction of SST, SSS, and pH.

Here, we measured δ18O, Mg/Ca, and Δ47 on the shallow-dwelling planktonic species Globigerinoides ruber s.s. at the maxima of glacial and interglacial periods over the last 1.2 Ma. Our data set makes it possible to estimate the long-term evolution of SST, salinity, and pH (and thus to gain an insight into atmospheric CO2 concentration) across the MPT. First, strong differences are observed between the five SST reconstructions: alkenones and TEX86 record higher temperatures than the other SST proxies, and alkenone-derived SSTs do not show glacial-interglacial variations within the MPT. The Mg/Ca- and transfer-function-derived SSTs agree well with each other, while the clumped-isotope-derived SSTs are systematically colder than the others. Second, our Δ47-based SST, salinity, and pH results clearly show that the amplitude of glacial-interglacial variations was insignificant between 1.2 and 0.8 Ma (within the MPT) and increased after the MPT. Finally, we also discuss the potential of this unique combination of proxies to reconstruct changes in atmospheric CO2 concentration.
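Purely as an illustration of how the three proxies can be combined, the sketch below chains a clumped-isotope temperature, a δ18O-based δ18Osw estimate, and a δ18Osw-salinity conversion; all calibration coefficients are placeholders, not the calibrations used in this study.

```python
# Illustrative only: every calibration constant below is a placeholder, not the
# calibration used in the study.
import numpy as np

def temp_from_d47(d47):
    """Generic clumped-isotope form D47 = a / T**2 + c (T in kelvin);
    a and c are hypothetical placeholder values."""
    a, c = 0.0422e6, 0.158
    return np.sqrt(a / (d47 - c)) - 273.15          # degrees C

def d18o_sw_from_calcite(d18o_calcite, temp_c):
    """Rearranged palaeotemperature-style equation with placeholder coefficients:
    T = 16.9 - 4.0 * (d18Oc - d18Osw)  =>  d18Osw = d18Oc - (16.9 - T) / 4.0"""
    return d18o_calcite - (16.9 - temp_c) / 4.0

def salinity_from_d18o_sw(d18o_sw, slope=0.5, intercept=0.0):
    """Assumes the present-day regional d18Osw-salinity relation
    d18Osw = slope * (S - 35) + intercept held constant through the MPT."""
    return 35.0 + (d18o_sw - intercept) / slope

# One entirely synthetic sample
t = temp_from_d47(0.70)                 # SST from D47 alone
d18o_sw = d18o_sw_from_calcite(0.8, t)  # remove the temperature effect from d18O
print(t, d18o_sw, salinity_from_d18o_sw(d18o_sw))
```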


2021 ◽  
Vol 48 (4) ◽  
pp. 307-328
Author(s):  
Dominic Farace ◽  
Hélène Prost ◽  
Antonella Zane ◽  
Birger Hjørland ◽  
◽  
...  

This article presents and discusses different kinds of data documents, including data sets, data studies, data papers, and data journals. It provides descriptive and bibliometric data on different kinds of data documents and discusses the theoretical and philosophical problems raised by classifying documents according to the DIKW model (data documents, information documents, knowledge documents, and wisdom documents). Data documents are, on the one hand, an established category today, even with their own Data Citation Index (DCI). On the other hand, data documents have blurred boundaries in relation to other kinds of documents and sometimes seem to be understood from the problematic philosophical assumption that a datum can be understood as "a single, fixed truth, valid for everyone, everywhere, at all times".


2020 ◽  
Vol 13 (1) ◽  
pp. 067
Author(s):  
Christie Andre Souza ◽  
Michelle Simões Reboita

Wind intensity of two tropical cyclones obtained by different data sets

When tropical cyclones reach winds of 119 km/h or more, they develop a structure known as the eye at their center, while the strongest winds of the system are found immediately outside the eye. A recent study of cyclones Haiyan and Haima raised the question of how well the Global Forecast System (GFS) data represent the winds, since the maximum winds appeared inside the eye of the system. This study therefore evaluates how different data sets (GFS, ERA5, ERA-Interim, and CCMP) represent the winds in these two tropical cyclones. ERA5 and GFS show more intense winds in the cyclones than the other two data sets. All data sets except the GFS clearly show weaker winds in the eye of the cyclones.

Keywords: analyses; cyclones; meteorology; reanalysis
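A sketch of how such a comparison could be set up, assuming ERA5-style NetCDF files with 10-m wind components named u10 and v10 (file names, variable names, and the storm centre used here are hypothetical).

```python
# Sketch assuming ERA5-style NetCDF files with 10-m wind components named
# "u10" and "v10"; file names, variable names, and centre coordinates are hypothetical.
import numpy as np
import xarray as xr

def max_wind_and_eye(path, center_lat, center_lon, box_deg=3.0):
    """Return the maximum 10-m wind speed in a box around the cyclone centre
    and the wind speed at the centre itself (the eye)."""
    ds = xr.open_dataset(path)
    # ERA5-style latitude is stored in descending order, hence the slice direction
    sub = ds.sel(latitude=slice(center_lat + box_deg, center_lat - box_deg),
                 longitude=slice(center_lon - box_deg, center_lon + box_deg))
    speed = np.sqrt(sub["u10"] ** 2 + sub["v10"] ** 2)
    eye = speed.sel(latitude=center_lat, longitude=center_lon, method="nearest")
    return float(speed.max()), float(eye)

# for path in ["era5_haiyan.nc", "gfs_haiyan.nc"]:   # hypothetical files
#     print(path, max_wind_and_eye(path, 11.0, 128.0))
```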


2010 ◽  
Vol 23 ◽  
pp. 113-117
Author(s):  
A. Orphanou ◽  
K. Nicolaides ◽  
D. Charalambous ◽  
P. Lingis ◽  
S. C. Michaelides

Abstract. In the present study, the monthly statistical characteristics of the jetlet and tropopause in relation to the development of thunderstorms over Cyprus are examined. For the needs of the study, the 12:00 UTC radiosonde data obtained from the Athalassa station (33.4° E, 35.1° N) for an 11-year period, from 1997 to 2007, were employed. On the basis of this dataset, the height and temperature of the tropopause, as well as the height, wind direction, and speed of the jetlet, were estimated. Additionally, the days in this period with observed thunderstorms were selected and the aforementioned characteristics of the jetlet and tropopause were noted. The two data sets were subsequently contrasted in an attempt to identify possible relations between thunderstorm development, on the one hand, and tropopause and jetlet characteristics, on the other.
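As a simplified illustration of how such characteristics can be extracted from a single sounding (only a reduced form of the WMO lapse-rate rule is applied, and the profile below is invented), consider the following sketch.

```python
# Simplified sketch of extracting tropopause height and the level of maximum
# wind ("jetlet") from one sounding; the arrays below are hypothetical
# radiosonde levels, and only a reduced form of the WMO lapse-rate rule is used.
import numpy as np

def tropopause_height(z_m, t_c):
    """First level above 5 km where the lapse rate drops to 2 K/km or less and
    the mean lapse rate over the following 2 km also stays at or below 2 K/km."""
    lapse = -np.diff(t_c) / (np.diff(z_m) / 1000.0)     # K/km between levels
    for i, lr in enumerate(lapse):
        if z_m[i] > 5000 and lr <= 2.0:
            in_layer = (z_m > z_m[i]) & (z_m <= z_m[i] + 2000)
            if in_layer.any():
                mean_lr = (t_c[i] - t_c[in_layer][-1]) / ((z_m[in_layer][-1] - z_m[i]) / 1000.0)
                if mean_lr <= 2.0:
                    return z_m[i]
    return np.nan

def jet_level(z_m, wind_ms):
    """Height and speed of the strongest wind in the profile."""
    k = int(np.argmax(wind_ms))
    return z_m[k], wind_ms[k]

# Hypothetical 12:00 UTC sounding
z = np.arange(0, 18000, 500.0)
t = 20.0 - 6.5e-3 * z           # constant 6.5 K/km lapse rate below...
t[z > 11000] = t[z == 11000]    # ...an isothermal layer above 11 km
w = 10 + 30 * np.exp(-((z - 10500) / 2000.0) ** 2)
print(tropopause_height(z, t), jet_level(z, w))
```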


1986 ◽  
Vol 59 (2) ◽  
pp. 751-760
Author(s):  
Todd McLin Davis

A problem often not detected in the interpretation of survey research is the potential interaction between subgroups within the sample and aspects of the survey. Potentially interesting interactions are commonly obscured when data are analyzed using descriptive and univariate statistical procedures. This paper suggests the use of cluster analysis as a tool for interpretation of data, particularly when such data take the form of coded categories. An example of the analysis of two data sets with known properties, one random and the other contrived, is presented to illustrate the application of cluster procedures to survey research data.
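A generic illustration of the idea, not the paper's specific procedure: one-hot encode the coded categories, apply hierarchical clustering, then cross-tabulate the clusters against the survey items to expose subgroup/item interactions that univariate summaries can hide.

```python
# Generic illustration of clustering coded categorical survey responses:
# one-hot encode the categories, then apply agglomerative (hierarchical)
# clustering. This is not the specific procedure used in the paper.
import numpy as np
import pandas as pd
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

rng = np.random.default_rng(42)
survey = pd.DataFrame({
    "q1": rng.choice(["agree", "neutral", "disagree"], 120),
    "q2": rng.choice(["yes", "no"], 120),
    "q3": rng.choice(["low", "medium", "high"], 120),
})

onehot = pd.get_dummies(survey)                      # coded categories -> 0/1 matrix
dist = pdist(onehot.to_numpy(dtype=float), metric="jaccard")
tree = linkage(dist, method="average")
survey["cluster"] = fcluster(tree, t=4, criterion="maxclust")

# Cross-tabulating clusters against each item highlights interactions between
# respondent subgroups and survey questions.
print(pd.crosstab(survey["cluster"], survey["q1"]))
```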


Author(s):  
Sylvia L. Osborn

With the widespread use of online systems, there is an increasing focus on maintaining the privacy of individuals and information about them. This is often referred to as a need for privacy protection. The author briefly examines definitions of privacy in this context, roughly distinguishing between keeping facts private and statistical privacy, which deals with what can be inferred from data sets. Many of the mechanisms used to implement what is commonly thought of as access control are the same ones used to protect privacy. This chapter explores when this is not the case and, more generally, the interplay between privacy and access control on the one hand, and the separation of these models from the mechanisms for their implementation on the other.

