The Neotoma Paleoecology Database, a multiproxy, international, community-curated data resource

2018 ◽  
Vol 89 (1) ◽  
pp. 156-177 ◽  
Author(s):  
John W. Williams ◽  
Eric C. Grimm ◽  
Jessica L. Blois ◽  
Donald F. Charles ◽  
Edward B. Davis ◽  
...  

Abstract The Neotoma Paleoecology Database is a community-curated data resource that supports interdisciplinary global change research by enabling broad-scale studies of taxon and community diversity, distributions, and dynamics during the large environmental changes of the past. By consolidating many kinds of data into a common repository, Neotoma lowers the costs of paleodata management, makes paleoecological data openly available, and offers a high-quality, curated resource. Neotoma's distributed scientific governance model is flexible and scalable, with many open pathways for participation by new members, data contributors, stewards, and research communities. The Neotoma data model supports, or can be extended to support, any kind of paleoecological or paleoenvironmental data from sedimentary archives. Data holdings are growing and now include >3.8 million observations, >17,000 datasets, and >9200 sites. Dataset types currently include fossil pollen, vertebrates, diatoms, ostracodes, macroinvertebrates, plant macrofossils, insects, testate amoebae, geochronological data, and the recently added organic biomarkers, stable isotopes, and specimen-level data. Multiple avenues exist for obtaining Neotoma data, including the Explorer map-based interface, an application programming interface, the neotomaR package, and digital object identifiers. As the volume and variety of scientific data grow, community-curated data resources such as Neotoma have become foundational infrastructure for big data science.
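The application programming interface mentioned in the abstract is queried over HTTP. A minimal sketch of composing such a request in Python; the base URL, `sites` endpoint, and `taxa`/`ageyoung`/`ageold` parameter names are assumptions about the public API and should be checked against the current Neotoma API documentation before use:

```python
from urllib.parse import urlencode

# Hypothetical base URL for Neotoma's public API (an assumption; verify
# against the current API documentation).
BASE = "https://api.neotomadb.org/v2.0/data"

def build_sites_query(taxon: str, ageyoung: int, ageold: int) -> str:
    """Compose a sites-query URL filtered by taxon and age range (years BP).

    The parameter names here are illustrative assumptions, not a
    documented contract.
    """
    params = urlencode({"taxa": taxon, "ageyoung": ageyoung, "ageold": ageold})
    return f"{BASE}/sites?{params}"

# Example: sites with Picea records between 1,000 and 15,000 years BP.
url = build_sites_query("Picea", 1000, 15000)
print(url)
```

The response of such an endpoint would typically be JSON, which pairs naturally with the neotomaR package or a DOI-based download for reproducible workflows.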

2021 ◽  
Author(s):  
Tingwan Yang ◽  
Hongyan Zhao ◽  
Zhengyu Xia ◽  
Zicheng Yu ◽  
Hongkai Li ◽  
...  

<p>Montane bogs, peat-forming ecosystems located at high elevation and fed mostly by meteoric waters, are unique archives of past environmental change. Studying these ecosystems and their responses to recent climate warming will improve our understanding of the sensitivity of high-elevation peatlands to regional climate dynamics. Here, we report a post-bomb radiocarbon-dated, high-resolution, multi-proxy record from Laobaishan (LBS) bog, a mountaintop bog in the Changbai Mountains of Northeast China. We analyzed plant macrofossils and testate amoebae in a 41-cm peat core dated to 1970–2009 to document the ecohydrological response of the peatland to anthropogenic warming in recent decades. We quantitatively reconstructed surface wetness changes at LBS bog using the first axis of a detrended correspondence analysis (DCA) of plant macrofossil assemblages and the depth to water table (DWT) inferred from a testate amoeba transfer function. We distinguished two hydroclimate stages: a moist stage before the 1990s and a rapidly drying stage since the 1990s. During the moist stage, plant macrofossil assemblages were characterized by low abundances of <em>Sphagnum capitifolium</em> and <em>Polytrichum strictum</em>, which prefer dry habitats, and testate amoeba assemblages showed low abundances of the dry-adapted <em>Assulina muscorum</em> and <em>Corythion dubium</em>. High DCA axis-1 scores and low DWT also indicated a moist habitat at LBS. After the transition into the drying stage, the abundances of <em>S. capitifolium</em> and <em>P. strictum</em> increased, and those of <em>A. muscorum</em> and <em>C. dubium</em> showed a similar trend. The DCA axis-1 scores and DWT reconstructions show that LBS bog has experienced rapid surface desiccation since the 1990s.
Based on high-resolution gridded reanalysis data, these ecohydrological changes coincided with a rapid increase in temperature (~1°C) but no notable change in total growing-season (May–September) precipitation since the 1990s. In addition, backward trajectory analysis showed no apparent change in atmospheric circulation patterns since the 1990s, supporting our interpretation that the ecohydrological changes in LBS bog were induced by climate warming. These results demonstrate that the plant communities, microbial assemblages, and hydrology of montane peatlands respond sensitively to climate warming, with a response that may be larger in amplitude than that of low-elevation areas.</p>
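The DWT values in this record come from a testate amoeba transfer function. A minimal Python sketch of the simplest member of that family, a weighted-averaging (WA) transfer function, with an invented three-taxon calibration set for illustration (the study itself used its own calibration model, and production work would use an R package such as rioja):

```python
# Invented calibration set: rows are modern samples, columns are relative
# abundances of three hypothetical testate amoeba taxa.
train_abund = [
    [0.7, 0.2, 0.1],
    [0.1, 0.6, 0.3],
    [0.0, 0.3, 0.7],
]
train_dwt = [5.0, 15.0, 30.0]  # measured depth to water table (cm)

n_taxa = len(train_abund[0])
# Taxon optima: abundance-weighted mean DWT across the calibration set.
optima = []
for k in range(n_taxa):
    num = sum(row[k] * dwt for row, dwt in zip(train_abund, train_dwt))
    den = sum(row[k] for row in train_abund)
    optima.append(num / den)

def reconstruct_dwt(fossil_abund):
    """WA reconstruction: abundance-weighted mean of the taxon optima."""
    total = sum(fossil_abund)
    return sum(a * u for a, u in zip(fossil_abund, optima)) / total

# A fossil assemblage richer in dry-adapted taxa yields a deeper (drier) DWT.
print(round(reconstruct_dwt([0.2, 0.5, 0.3]), 2))
```

In practice the WA step is usually followed by deshrinking and cross-validated error estimation, which this sketch omits.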


2020 ◽  
Author(s):  
Agnieszka Mroczkowska ◽  
Piotr Kittel ◽  
Katarzyna Marcisz ◽  
Ekaterina Dolbunova ◽  
Emilie Gauthier ◽  
...  

<p>Peatlands are natural geoarchives whose organic deposits record past environmental changes. Depending on the preserved proxy, we are able to reconstruct various aspects of palaeoenvironmental change, e.g. using pollen (vegetation composition), plant macrofossils (local vegetation changes), testate amoebae and zoological remains (hydrological changes), or XRF scanning (geochemical changes). Here, we investigated changes in land use and climate in western Russia using a range of biotic and abiotic proxies. This part of Europe is characterized by a continental climate, which makes the region very sensitive to climate change, in particular to precipitation fluctuations. Furthermore, strong human impact has been noted in the area over the last two centuries.</p><p>The Serteya kettle hole mire (55°40'N 31°30'E) is situated in the Smolensk Oblast in the Western Dvina Lakeland. The study site lies close to the boundary of plant communities belonging to the hemiboreal zone, making it an ideal location for tracing plant succession in Eastern Europe. Preliminary dating indicates that biogenic deposits accumulated in the basin at an average rate of approximately 1 m per 600 years. Most European peatlands have been transformed to some extent by drainage and land-use practices in their basins. The Serteya kettle hole mire allows us to track accurately how a small ecosystem responds to palaeoenvironmental changes. Preliminary results will show the major fluctuations in mire hydrology accompanied by changes in regional land use. Our goal is also to determine the resistance and resilience of peat bogs to disturbance.</p>


The Holocene ◽  
2021 ◽  
pp. 095968362199464
Author(s):  
Katarzyna Marcisz ◽  
Krzysztof Buczek ◽  
Mariusz Gałka ◽  
Włodzimierz Margielewski ◽  
Matthieu Mulot ◽  
...  

Landslide mountain fens, formed in landslide depressions, are dynamic environments whose development is disturbed by a number of factors, for example landslides, slopewash, and surface run-off. These processes lead to the accumulation of mineral material and wood in peat. Disturbed peatlands are interesting archives of past environmental changes, but they can be challenging sources for quantitative, biotic proxy-based reconstructions. Here we investigate long-term changes in testate amoeba communities from two landslide mountain fens, so far an overlooked habitat for testate amoeba investigations. Our results show that abundances of testate amoebae are extremely low in this type of peatland and are therefore not suitable for quantitative depth-to-water-table reconstructions. However, frequent shifts in dominant testate amoeba species reflect the dynamic lithological setting of the studied fens. We observed that high and stable mineral matter input into the peatlands was associated with high abundances of species producing agglutinated (xenosomic) as well as idiosomic shells, which prevailed in the testate amoeba communities of both analyzed profiles. This is the first study to explore testate amoebae of landslide mountain fens in such detail, providing novel information about the microbial communities of these ecosystems.


2021 ◽  
Vol 22 (S6) ◽  
Author(s):  
Yasmine Mansour ◽  
Annie Chateau ◽  
Anna-Sophie Fiston-Lavier

Abstract Background Meiotic recombination is a vital biological process that plays an essential role in genome structural and functional dynamics. Genomes exhibit highly variable recombination profiles along chromosomes, associated with several chromatin states. However, eu-heterochromatin boundaries are neither available nor easily obtained for non-model organisms, especially newly sequenced ones. Hence, we lack the accurate local recombination rates necessary to address evolutionary questions. Results Here, we propose an automated computational tool, based on the Marey map method, that identifies heterochromatin boundaries along chromosomes and estimates local recombination rates. Our method, called BREC (heterochromatin Boundaries and RECombination rate estimates), is non-genome-specific and runs even on non-model genomes as long as genetic and physical maps are available. BREC is based on pure statistics and is data-driven, which means that good input data quality remains a strong requirement. Therefore, a data pre-processing module (data quality control and cleaning) is provided. Experiments show that BREC handles issues with marker density and distribution. Conclusions BREC's heterochromatin boundaries have been validated against experimentally generated cytological equivalents for the fruit fly Drosophila melanogaster genome, for which BREC returns congruent values. BREC's recombination rate estimates have also been compared with previously reported estimates. Based on these promising results, we believe our tool has the potential to help bring data science into the service of genome biology and evolution. We provide BREC as an R package and as a Shiny web-based, user-friendly application, yielding a fast, easy-to-use, and broadly accessible resource. The BREC R package is available at the GitHub repository https://github.com/GenomeStructureOrganization.
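BREC builds on the Marey map method, in which the local recombination rate is the local slope of genetic map position plotted against physical position. A minimal Python sketch of that principle, using invented marker data and simple central differences; BREC itself applies more elaborate, data-driven statistics on real genetic and physical maps:

```python
# Invented Marey map: genetic position (cM) vs. physical position (Mb).
# Flat stretches at either end mimic recombination-suppressed regions.
phys = [0, 2, 4, 6, 8, 10, 12, 14, 16, 18, 20]          # Mb
gen = [0, 0.1, 0.2, 3, 7, 11, 15, 19, 22, 22.1, 22.2]   # cM

def local_rates(phys, gen):
    """Central-difference slope at each interior marker, in cM/Mb.

    A near-zero slope over an extended region is the signal that
    boundary-detection methods like BREC look for.
    """
    return [(gen[i + 1] - gen[i - 1]) / (phys[i + 1] - phys[i - 1])
            for i in range(1, len(phys) - 1)]

rates = local_rates(phys, gen)
print([round(r, 2) for r in rates])
```

Real pipelines fit a smooth curve (e.g. a polynomial or loess) to the Marey map before differentiating, which is more robust to marker noise than raw finite differences.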


2012 ◽  
Vol 522 ◽  
pp. 770-775
Author(s):  
Yu Zheng ◽  
Yan Rong Ni ◽  
Deng Zhe Ma

To satisfy the need for fast and convenient customization of manufacturing scientific data sharing services, the data service customization process and its key technologies were studied. First, the data resource model and the customization-oriented professional data service model were examined. Then the service customization process, from resource registration and service definition through service parsing to service generation, was analyzed, with emphasis on a parsing engine based on service parsing technology and an incubator based on service generation technology. Finally, a prototype system was developed and validated with an example.


2020 ◽  
Author(s):  
Jenna Marie Reps ◽  
Ross Williams ◽  
Seng Chan You ◽  
Thomas Falconer ◽  
Evan Minty ◽  
...  

Abstract Objective: To demonstrate how the Observational Healthcare Data Science and Informatics (OHDSI) collaborative network and standardization can be utilized to scale up external validation of patient-level prediction models by enabling validation across a large number of heterogeneous observational healthcare datasets. Materials & Methods: Five previously published prognostic models (ATRIA, CHADS2, CHADS2VASC, Q-Stroke and Framingham) that predict future risk of stroke in patients with atrial fibrillation were replicated using the OHDSI frameworks. A network study was run that enabled the five models to be externally validated across nine observational healthcare datasets spanning three countries and five independent sites. Results: The five existing models were integrated into the OHDSI framework for patient-level prediction, and they obtained mean c-statistics ranging between 0.57 and 0.63 across the six databases with sufficient data to predict stroke within 1 year of initial atrial fibrillation diagnosis in females with atrial fibrillation. This was comparable with existing validation studies. The validation network study was run across the nine datasets within 60 days once the models were replicated. An R package for the study was published at https://github.com/OHDSI/StudyProtocolSandbox/tree/master/ExistingStrokeRiskExternalValidation. Discussion: This study demonstrates the ability to scale up external validation of patient-level prediction models using a collaboration of researchers and a data standardization that enables models to be readily shared across data sites. External validation is necessary to understand the transportability and reproducibility of a prediction model, but without collaborative approaches it can take three or more years for a model to be validated by one independent researcher.
Conclusion: In this paper we show that it is possible to both scale up and speed up external validation by demonstrating how validation can be done across multiple databases in less than 2 months. We recommend that researchers developing new prediction models use the OHDSI network to externally validate their models.
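The c-statistic reported above is the probability that a randomly chosen patient who had a stroke received a higher predicted risk than a randomly chosen patient who did not. A minimal Python sketch of that definition with invented risks and outcomes; real OHDSI evaluations use the network's own patient-level prediction tooling:

```python
def c_statistic(risks, outcomes):
    """All-pairs concordance statistic for binary outcomes.

    A case/non-case pair is concordant when the case has the higher
    predicted risk; ties count as 0.5. Equivalent to the ROC AUC.
    """
    cases = [r for r, y in zip(risks, outcomes) if y == 1]
    noncases = [r for r, y in zip(risks, outcomes) if y == 0]
    pairs = len(cases) * len(noncases)
    concordant = 0.0
    for rc in cases:
        for rn in noncases:
            if rc > rn:
                concordant += 1.0
            elif rc == rn:
                concordant += 0.5
    return concordant / pairs

# Invented predicted risks and observed 1-year stroke outcomes.
risks = [0.9, 0.7, 0.4, 0.4, 0.2]
outcomes = [1, 1, 0, 1, 0]
print(c_statistic(risks, outcomes))
```

A value of 0.5 is chance-level discrimination, so the 0.57–0.63 range reported for the five models indicates modest discriminative ability.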


Author(s):  
Jenna Marie Reps ◽  
Ross D Williams ◽  
Seng Chan You ◽  
Thomas Falconer ◽  
Evan Minty ◽  
...  

Abstract Background: To demonstrate how the Observational Healthcare Data Science and Informatics (OHDSI) collaborative network and standardization can be utilized to scale up external validation of patient-level prediction models by enabling validation across a large number of heterogeneous observational healthcare datasets. Methods: Five previously published prognostic models (ATRIA, CHADS2, CHADS2VASC, Q-Stroke and Framingham) that predict future risk of stroke in patients with atrial fibrillation were replicated using the OHDSI frameworks. A network study was run that enabled the five models to be externally validated across nine observational healthcare datasets spanning three countries and five independent sites. Results: The five existing models were integrated into the OHDSI framework for patient-level prediction, and they obtained mean c-statistics ranging between 0.57 and 0.63 across the six databases with sufficient data to predict stroke within 1 year of initial atrial fibrillation diagnosis in females with atrial fibrillation. This was comparable with existing validation studies. The validation network study was run across the nine datasets within 60 days once the models were replicated. An R package for the study was published at https://github.com/OHDSI/StudyProtocolSandbox/tree/master/ExistingStrokeRiskExternalValidation. Conclusion: This study demonstrates the ability to scale up external validation of patient-level prediction models using a collaboration of researchers and a data standardization that enables models to be readily shared across data sites. External validation is necessary to understand the transportability and reproducibility of a prediction model, but without collaborative approaches it can take three or more years for a model to be validated by one independent researcher.
In this paper we show that it is possible to both scale up and speed up external validation by demonstrating how validation can be done across multiple databases in less than 2 months. We recommend that researchers developing new prediction models use the OHDSI network to externally validate their models.


2018 ◽  
Vol 6 (3) ◽  
pp. 669-686 ◽  
Author(s):  
Michael Dietze

Abstract. Environmental seismology is the study of the seismic signals emitted by Earth surface processes. This emerging research field lies at the intersection of seismology, geomorphology, hydrology, meteorology, and further Earth science disciplines. It amalgamates a wide variety of methods from across these disciplines and ultimately fuses them in a common analysis environment. This overarching scope of environmental seismology requires coherent yet integrative software that is accepted by many of the involved scientific disciplines. The statistical software R has gained paramount importance in the majority of data science research fields. R has well-justified advantages over other, mostly commercial, software, which makes it the ideal language on which to base a comprehensive analysis toolbox. This article introduces the avenues and needs of environmental seismology and how these are met by the R package eseis. The conceptual structure, example data sets, and available functions are demonstrated. Worked examples illustrate possible applications of the package and provide in-depth descriptions of the flexible use of its functions. The package has a registered DOI, is available under the GPL licence on the Comprehensive R Archive Network (CRAN), and is maintained on GitHub.


2021 ◽  
pp. 178359172110512
Author(s):  
Lei Huang ◽  
Miltos Ladikas ◽  
Guangxi He ◽  
Julia Hahn ◽  
Jens Schippl

The rapid development of online car-hailing services poses a serious challenge to the existing paradigm of market governance and antitrust policy. However, the debate on the market structure of car-hailing platforms requires more empirical evidence. This research adopts an interdisciplinary methodology based on computer science and economics, including software reverse engineering tools applied to terminal application interoperability and a resource allocation model, to demonstrate the topological market structure of personal data resource allocation in China's car-hailing industry. Within the discussion of the hybrid nature of technology and economy, the results clearly show that China's car-hailing platform services present a multi-sided market structure when seen from the perspective of personal data resource allocation. Personal data resources (PDRs), considered an essential market resource, are transferred unhindered between platforms as assets via application programming interfaces, thus creating a new market allocation mechanism. The connection between car-hailing platforms and social media platforms is an essential aspect of market competition in this domain. As applications of online platforms increase in the global context, this research offers a new perspective on personal data resource allocation, with implications for the governance of the platform economy.

