A database of marine and terrestrial radiogenic Nd and Sr isotopes for tracing earth-surface processes

2018 ◽  
Author(s):  
Cécile L. Blanchet

Abstract. The database presented here contains radiogenic neodymium and strontium isotope ratios measured on both terrestrial and marine sediments. The main purpose of this dataset is to help assess sediment provenance and transport processes for various time intervals. This can be achieved by mapping the sediment isotopic signature and/or fingerprinting source areas using statistical tools. The database has been built by incorporating data from the literature and the SedDB database and harmonizing the metadata, especially units and geographical coordinates. The original data were processed in three steps. Firstly, specific attention was devoted to providing geographical coordinates for each sample in order to be able to map the data. When available, the original geographical coordinates from the reference (generally DMS coordinates) were converted to the decimal degrees system. When coordinates were not provided, an approximate location was derived from the available information in the original publication. Secondly, all samples were assigned a set of standardized criteria that help split the dataset into specific categories. For instance, samples were discriminated according to their location (Region, Sub-region and Location, which relate to locations at continental to city/river scale) or the sample type (terrestrial samples – aerosols, soil sediments, river sediments – or marine samples – marine sediment or trap sample). Finally, samples were distinguished according to their deposition age, which allowed average values to be computed for specific time intervals. Graphical examples illustrating the functionality of the database are presented, and the validity of the process was tested by comparing the results with published data. The dataset will be updated bi-annually and might be extended to reach global geographical coverage and/or add other types of samples. It is publicly available (under the CC4.0-BY licence) on the GFZ data management service at http://doi.org/10.5880/GFZ.5.2.2018.001.

2019 ◽  
Vol 11 (2) ◽  
pp. 741-759 ◽  
Author(s):  
Cécile L. Blanchet

Abstract. The database presented here contains radiogenic neodymium and strontium isotope ratios measured on both terrestrial and marine sediments. The main purpose of this dataset is to help assess sediment provenance and transport processes for various time intervals. This can be achieved by mapping the sediment isotopic signature and/or fingerprinting source areas using statistical tools. The database has been built by incorporating data from the literature and the SedDB database and harmonizing the metadata, especially units and geographical coordinates. The original data were processed in three steps. Firstly, specific attention was devoted to providing geographical coordinates for each sample in order to be able to map the data. When available, the original geographical coordinates from the reference (generally DMS coordinates) were converted to the decimal degrees system. When coordinates were not provided, an approximate location was derived from the available information in the original publication. Secondly, all samples were assigned a set of standardized criteria that help split the dataset into specific categories. For instance, samples were distinguished according to their location (“Region”, “Sub-region” and “Location”, which relate to locations at continental to city or river scale) or the sample type (terrestrial samples – “aerosols”, “soil sediments”, “river sediments”, “rocks” – or marine samples – “marine sediment” or “trap sample”). Finally, samples were distinguished according to their deposition age, which allowed us to compute average values for specific time intervals. Graphical examples illustrating the functionality of the database are presented, and the validity of the process was tested by comparing the results with published data. The dataset will be updated bi-annually in order to add more data points to increase the sampling density, provide new types of samples (e.g. seawater signatures) and/or integrate additional information regarding the samples. It is publicly available (under the CC4.0-BY licence) from the GFZ data management service at https://doi.org/10.5880/GFZ.4.3.2019.001.
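The DMS-to-decimal-degrees harmonization described above is straightforward to reproduce. The following Python sketch is a minimal illustration, not code from the database project; the function name and sign convention are assumptions:

```python
def dms_to_decimal(degrees: float, minutes: float, seconds: float,
                   hemisphere: str) -> float:
    """Convert a DMS coordinate to decimal degrees.

    Southern and western hemispheres ('S', 'W') yield negative values,
    matching the usual decimal-degrees mapping convention.
    """
    decimal = abs(degrees) + minutes / 60.0 + seconds / 3600.0
    return -decimal if hemisphere.upper() in ("S", "W") else decimal

# Example: 23° 33' 1" S, 46° 38' 2" W (approximately São Paulo)
lat = dms_to_decimal(23, 33, 1, "S")   # -> -23.5503
lon = dms_to_decimal(46, 38, 2, "W")   # -> -46.6339
print(round(lat, 4), round(lon, 4))
```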


1994 ◽  
Vol 144 ◽  
pp. 139-141 ◽  
Author(s):  
J. Rybák ◽  
V. Rušin ◽  
M. Rybanský

Abstract. Fe XIV 530.3 nm coronal emission line observations have been used to estimate the rotation of the green solar corona. A homogeneous data set, created from measurements of the world-wide coronagraphic network, has been examined with the help of correlation analysis to reveal the averaged synodic rotation period as a function of latitude and time over the epoch from 1947 to 1991. The values of the synodic rotation period obtained for this epoch for the whole range of latitudes and for the latitude band ±30° are 27.52±0.12 days and 26.95±0.21 days, respectively. A differential rotation of the green solar corona, with local period maxima around ±60° and a minimum of the rotation period at the equator, was confirmed. No clear cyclic variation of the rotation has been found for the examined epoch, but some monotonic trends for some time intervals are presented. A detailed investigation of the original data and their correlation functions has shown that the existence of sufficiently reliable tracers is not evident for the whole examined data set. This should be taken into account in future, more precise estimations of the green corona rotation period.
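As a schematic illustration of the correlation approach mentioned above (a sketch only; the synthetic series and the ~27-day search window are assumptions, not the study's actual data or method), the synodic period can be estimated from the lag of the first autocorrelation maximum of a daily intensity series:

```python
import numpy as np

# Synthetic daily Fe XIV intensity series with a ~27-day recurrence,
# standing in for the homogeneous coronagraphic data set.
rng = np.random.default_rng(0)
days = np.arange(2000)
intensity = np.sin(2 * np.pi * days / 27.0) + 0.5 * rng.standard_normal(days.size)

# Autocorrelation of the mean-removed series.
x = intensity - intensity.mean()
acf = np.correlate(x, x, mode="full")[x.size - 1:]
acf /= acf[0]

# The lag of the autocorrelation maximum within a plausible window
# (here 20-40 days) estimates the synodic rotation period.
search = np.arange(20, 40)
period = search[np.argmax(acf[search])]
print(f"Estimated synodic rotation period: {period} days")
```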


2021 ◽  
Vol 13 (13) ◽  
pp. 2618
Author(s):  
Carsten Juergens ◽  
M. Fabian Meyer-Heß

This contribution focuses on the utilization of very-high-resolution (VHR) images to identify construction areas and their temporal changes, aiming to estimate investment in construction as a basis for economic forecasts. Triggered by the need to improve macroeconomic forecasts and reduce their time intervals, the idea arose to use frequently available information derived from satellite imagery. For the improvement of macroeconomic forecasts, the period to detect changes between two points in time needs to be rather short, because early identification of such investments is beneficial. Therefore, in this study, it is of interest to identify and quantify new construction areas, which will turn into built-up areas later. A multiresolution segmentation followed by a kNN classification is applied to WorldView images from an area around the southern part of Berlin, Germany. Specific material compositions of construction areas result in typical classification patterns different from other land cover classes. A GIS-based analysis follows to extract specific temporal “patterns of life” in construction areas. With the early identification of such patterns of life, it is possible to predict construction areas that will turn into real estate later. This information serves as an input for macroeconomic forecasts to support quicker forecasts in the future.
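A minimal sketch of the segment classification step, assuming scikit-learn and illustrative per-segment features (the study's actual feature set, class scheme, and parameters are not specified in the abstract):

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Each row holds per-segment features from the multiresolution segmentation,
# e.g. mean band reflectances and a texture measure (illustrative only).
rng = np.random.default_rng(1)
train_features = rng.random((200, 4))
# Hypothetical land cover labels: 0 = construction area, 1 = built-up,
# 2 = vegetation, 3 = other.
train_labels = rng.integers(0, 4, size=200)

knn = KNeighborsClassifier(n_neighbors=5)
knn.fit(train_features, train_labels)

# Classify segments of a newly acquired VHR scene.
new_segments = rng.random((10, 4))
predicted = knn.predict(new_segments)
print(predicted)
```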


2021 ◽  
Author(s):  
Dominik Jaeger ◽  
Roland Stalder ◽  
Cristiano Chiessi ◽  
André Sawakuchi ◽  
Michael Strasser

Trace metal concentrations and associated hydrous lattice point defects (OH defects) in quartz can help reveal its host rock's crystallization history and are easily quantified using electron microprobe and infrared spectroscopy, respectively. These chemical impurities are preserved throughout the sedimentary cycle and thus lend themselves as tracers for sediment provenance analyses, particularly in settings where “traditional” provenance tools, e.g., thermochronology and heavy mineral analysis, are difficult to apply due to factors like low mineral fertility and aggressive tropical weathering.

In this study, we apply this provenance analysis tool to detrital, sand-sized quartz grains from the Amazon River and its major tributaries, draining the Andean orogen as well as the Guiana and Central Brazil Shields. Trace metal and OH defect concentrations from individual catchments are spread over wide and mutually overlapping ranges of values. This means that an individual quartz grain cannot be unequivocally attributed to one catchment. However, evaluation of a statistically sound number of grains reveals that Andean quartz is, on average, richer in the trace metal aluminum (and Al-related OH defects) than quartz derived from the shield sources.

We evaluate our findings in the context of previous provenance studies on Amazon River sediments and discuss a potential future application of analyzing trace metals and OH defects in quartz in the offshore sediment record. Any past, major rearrangements of the Amazon watershed affecting the ratio of Andean- vs. shield-derived quartz grains should be detectable, and our approach may therefore contribute to the reconstruction of Amazon drainage basin evolution.
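The population-level reasoning above can be sketched as follows (synthetic concentrations and a Welch two-sample test chosen for illustration; the study's actual numbers and statistics are not given in the abstract): even when single-grain Al values overlap heavily, catchment means separate once enough grains are measured.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Hypothetical single-grain Al concentrations (ppm): wide, overlapping
# distributions for Andean- and shield-derived quartz.
andean_al = rng.normal(loc=120.0, scale=60.0, size=150).clip(min=1.0)
shield_al = rng.normal(loc=80.0, scale=60.0, size=150).clip(min=1.0)

# Individual grains overlap, but the population means differ significantly.
t_stat, p_value = stats.ttest_ind(andean_al, shield_al, equal_var=False)
print(f"Andean mean: {andean_al.mean():.1f} ppm, "
      f"shield mean: {shield_al.mean():.1f} ppm, p = {p_value:.1e}")
```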


2018 ◽  
Vol 2 ◽  
pp. e25317
Author(s):  
Stijn Van Hoey ◽  
Peter Desmet

The ability to communicate and assess the quality and fitness for use of data is crucial to ensure maximum utility and re-use. Data consumers have certain requirements for the data they seek and need to be able to check if a dataset conforms to these requirements. Data publishers aim to provide data with the highest possible quality and need to be able to identify potential errors that can be addressed with the information available at hand. The development and adoption of data publication guidelines is one approach to define and meet those requirements. However, the use of a guideline, the mapping decisions, and the requirements a dataset is expected to meet are generally not communicated with the provided data. Moreover, these guidelines are typically intended for humans only. In this talk, we will present 'whip': a proposed syntax for data specifications. With whip, one can define column-based constraints for tabular (tidy) data using a number of rules, e.g. how data is structured following Darwin Core, how a term uses controlled vocabulary values, or what the expected minimum and maximum values are. These rules are human- and machine-readable, which communicates the specifications and allows them to be validated automatically in pipelines for data publication and quality assessment, such as Kurator. Whip can be formatted as a (YAML) text file that can be provided with the published data, communicating the specifications a dataset is expected to meet. The scope of these specifications can be specific to a dataset, but they can also be used to express the expected data quality and fitness for use of a publisher, consumer or community, allowing bottom-up and top-down adoption. As such, these specifications are complementary to the core set of data quality tests currently under development by the TDWG Biodiversity Data Quality Task Group 2. Whip rules are currently generic, but more specific ones can be defined to address requirements for biodiversity information.
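To make the idea of column-based constraints concrete, here is a small Python sketch of a whip-style validator. The rule structure and names are illustrative assumptions, not the actual whip syntax (real whip specifications are written in YAML; see the project documentation for the genuine format):

```python
# Whip-style column constraints, expressed here as a plain dict for
# illustration; the Darwin Core terms "sex" and "individualCount" are real.
rules = {
    "sex": {"allowed": ["male", "female", "unknown"]},  # controlled vocabulary
    "individualCount": {"min": 1, "max": 100},          # expected value range
}

def validate(rows: list[dict]) -> list[str]:
    """Check each row against the column constraints; return error messages."""
    errors = []
    for i, row in enumerate(rows):
        for column, rule in rules.items():
            value = row.get(column)
            if "allowed" in rule and value not in rule["allowed"]:
                errors.append(f"row {i}: {column}={value!r} not in vocabulary")
            if "min" in rule and float(value) < rule["min"]:
                errors.append(f"row {i}: {column}={value} below minimum")
            if "max" in rule and float(value) > rule["max"]:
                errors.append(f"row {i}: {column}={value} above maximum")
    return errors

data = [
    {"sex": "female", "individualCount": "2"},
    {"sex": "unkown", "individualCount": "250"},   # two violations
]
print("\n".join(validate(data)))
```

Because such rules are machine-readable, the same file that documents the specification can drive automated validation in a publication pipeline.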


2019 ◽  
pp. 1518-1538
Author(s):  
Sowmyarani C. N. ◽  
Dayananda P.

Privacy attacks on individual records are a major concern in privacy-preserving data publishing. An intruder who wants to learn the private information of a particular person will first acquire background knowledge about that person. This background knowledge may be gained through publicly available information, such as a voter's ID, or through social networks. By combining this background information with the published data, the intruder may obtain the private information, resulting in a privacy attack on that person. There are many privacy attack models; the most popular ones are discussed in this chapter. The study of these attack models plays a significant role in the invention of robust privacy-preserving models.
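A classic instance of such background-knowledge attacks is the linkage attack, sketched below with pandas. The column names and toy records are hypothetical; the technique follows the well-known re-identification of "anonymized" records by joining on quasi-identifiers:

```python
import pandas as pd

# Published "anonymized" data: names removed, but quasi-identifiers kept.
published = pd.DataFrame({
    "zip": ["53715", "53715", "53703"],
    "birth_date": ["1965-02-13", "1971-07-08", "1965-02-13"],
    "sex": ["F", "M", "F"],
    "diagnosis": ["heart disease", "flu", "diabetes"],
})

# Background knowledge, e.g. a public voter registry with names.
voter_registry = pd.DataFrame({
    "name": ["Alice", "Carol"],
    "zip": ["53715", "53703"],
    "birth_date": ["1965-02-13", "1965-02-13"],
    "sex": ["F", "F"],
})

# Joining on the quasi-identifiers re-identifies the individuals and
# reveals their private attribute.
linked = voter_registry.merge(published, on=["zip", "birth_date", "sex"])
print(linked[["name", "diagnosis"]])
```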


2018 ◽  
Vol 193 ◽  
pp. 03007 ◽  
Author(s):  
Sergei Kolodyazhniy ◽  
Vladimir Kozlov

Using an integral mathematical model of a fire, under the assumptions typical of the initial stage of a fire, analytical dependencies were obtained for determining the time at which the density of a smoke screen reaches a critical value in the room containing the fire source and in adjoining rooms. By means of analytical formulas for determining critical evacuation times based on visibility loss, tabulated values were obtained for the various parameters entering the original equations. Simple engineering analytical solutions are presented that, used in combination, describe the dynamics of smoke formation in rooms in case of fire. The obtained dependencies make it possible to identify the critical evacuation time without special PC software, as well as to obtain original data without calculating an anti-smoke ventilation system.


1992 ◽  
Vol 2 (3) ◽  
pp. 175-183 ◽  
Author(s):  
J. Watson

Summary. Population estimates (number of breeding pairs) of Golden Eagles Aquila chrysaetos are given for most countries in Europe, based on recent published accounts. Where published data are not available, information is from local raptor specialists. The “best estimate” of the contemporary European population is 5,600 pairs ± 5%. The largest numbers are in Spain (c. 1,200 pairs), with Norway, European Russia, Scotland and Sweden each holding over 300 pairs. Information on trends reveals that most substantial populations (> 200 pairs) are stable; decreases are reported from some Baltic countries and in parts of southeast Europe. The total population is also shown for five biogeographic regions across Europe. In some cases, such “regions” may be more appropriate for the formulation of conservation priorities and policies than the biologically artificial units defined by national boundaries.


2002 ◽  
Vol 6 (4) ◽  
pp. 655-670 ◽  
Author(s):  
R. J. Abrahart ◽  
L. See

Abstract. This paper evaluates six published data fusion strategies for hydrological forecasting based on two contrasting catchments: the River Ouse and the Upper River Wye. The input level and discharge estimates for each river comprised a mixed set of single-model forecasts. Data fusion was performed using: arithmetic averaging, a probabilistic method in which the best model from the last time step is used to generate the current forecast, two different neural network operations, and two different soft computing methodologies. The results from this investigation are compared and contrasted using statistical and graphical evaluation. Each location demonstrated several options and potential advantages for using data fusion tools to construct superior hydrological forecast estimates. Fusion operations were better in overall terms than their individual modelling counterparts, and two clear winners emerged. Indeed, the six mechanisms on test revealed unequal aptitudes for fixing different categories of problematic catchment behaviour and, in such cases, the best method(s) were a good deal better than their closest rival(s). Neural network fusion of differenced data provided the best solution for a stable regime (with neural network fusion of original data being somewhat similar), whereas a fuzzified probabilistic mechanism produced superior output in a more volatile environment. The need for a data fusion research agenda within the hydrological sciences is discussed and some initial suggestions are presented.

Keywords: data fusion, fuzzy logic, neural network, hydrological modelling
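Two of the simpler fusion strategies named above are easy to sketch (a minimal illustration with synthetic forecasts; the paper's actual implementations and data are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic observed discharge and three single-model forecasts.
observed = np.sin(np.linspace(0, 6, 100)) + 2.0
forecasts = np.stack([
    observed + rng.normal(0, s, observed.size) for s in (0.1, 0.2, 0.3)
])

# Strategy 1: arithmetic averaging of the individual forecasts.
fused_mean = forecasts.mean(axis=0)

# Strategy 2: at each time step, use the model that was best (smallest
# absolute error) at the previous time step.
best_prev = np.abs(forecasts[:, :-1] - observed[:-1]).argmin(axis=0)
fused_best = np.empty_like(observed)
fused_best[0] = fused_mean[0]                 # no previous step: fall back
fused_best[1:] = forecasts[best_prev, np.arange(1, observed.size)]

for name, fused in [("average", fused_mean), ("best-previous", fused_best)]:
    rmse = np.sqrt(np.mean((fused - observed) ** 2))
    print(f"{name}: RMSE = {rmse:.3f}")
```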


2015 ◽  
Vol 28 (9) ◽  
pp. 3786-3805 ◽  
Author(s):  
Han-Ching Chen ◽  
Chung-Hsiung Sui ◽  
Yu-Heng Tseng ◽  
Bohua Huang

Abstract. The Simple Ocean Data Assimilation, version 2.2.4 (SODA 2.2.4), analysis for the period 1960–2010 is used to study the variability of the Pacific subtropical cells (STCs) and their causal relation with tropical climate variability. Results show that the interior STC transport into the equatorial basin through 9°S and 9°N is well connected with equatorial sea surface temperature (SST) (9°S–9°N, 180°–90°W). The highest correlation at interannual time scales is contributed by the western interior STC transport between 160°E and 130°W. It is known that the ENSO recharge–discharge cycle experiences five stages: the recharging stage, recharged stage, warmest SST stage, discharging stage, and discharged stage. A correlation analysis of interior STC transport convergence, equatorial warm water volume (WWV), wind stress curl, and SST identifies the time intervals between the five stages, which are 8, 10, 2, and 8 months, respectively. A composite analysis for El Niño–developing and La Niña–developing events is also performed. The composited ENSO evolutions are in accordance with the recharge–discharge theory, and the corresponding time lags between the five stages denoted above are 4–12, 6, 2, and 4 months, respectively. For stronger El Niño events, the discharge due to interior STC transport at 9°N terminates earlier than that at 9°S because of the southward migration of westerly winds following the El Niño peak phase. This study clarifies subsurface transport processes and their time intervals, which are useful for refining theoretical models and for evaluating coupled ocean–atmosphere general circulation model results.
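The stage intervals above come from lagged correlations between the indices. A compact sketch of the idea (synthetic monthly series standing in for STC transport convergence and equatorial SST; the variable names and the imposed 8-month lag are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic monthly anomalies: SST lags STC transport convergence by 8 months.
n, true_lag = 600, 8
convergence = rng.standard_normal(n)
convergence = np.convolve(convergence, np.ones(5) / 5, mode="same")  # smooth
sst = np.roll(convergence, true_lag) + 0.3 * rng.standard_normal(n)

def lagged_corr(x, y, lag):
    """Correlation of x leading y by `lag` months."""
    return np.corrcoef(x[:-lag], y[lag:])[0, 1] if lag else np.corrcoef(x, y)[0, 1]

lags = range(0, 25)
corrs = [lagged_corr(convergence, sst, k) for k in lags]
best = int(np.argmax(corrs))
print(f"Maximum correlation at lag {best} months (r = {corrs[best]:.2f})")
```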

