LabKey Server: An open source platform for scientific data integration, analysis and collaboration

2011, Vol 12 (1)
Author(s): Elizabeth K Nelson, Britt Piehler, Josh Eckels, Adam Rauch, Matthew Bellew, ...
2021
Author(s): Joey O'Dell, Jaap H. Nienhuis, Jana R. Cox, Douglas A. Edmonds, Paolo Scussolini

Abstract. Flood-protection levees have been built along rivers and coastlines globally. Current datasets, however, are generally confined to territorial boundaries (national datasets) and are not always easily accessible, posing limitations for hydrologic models and assessments of flood hazard. Here we present our work to develop a single, open-source global river delta levee data environment (openDELvE), which aims to bridge a data deficiency by collecting and standardising global flood-protection levee data for river deltas. In openDELvE we have aggregated data from national databases as well as data stored in reports, maps, and satellite imagery. The database identifies the river delta land areas that the levees have been designed to protect, and where additional data is available, we record the extent and design specifications of the levees themselves (e.g., levee height, crest width, construction material) in a harmonised format. openDELvE currently contains 5,089 km of levees on deltas, and 44,733.505 km² of leveed area in 1,601 polygons. For the 152 deltas included in openDELvE, on average 19 % of their habitable land area is confined by verifiable flood-protection levees. Globally, we estimate that between 5 % and 54 % of all delta land is confined by flood-protection levees. The data is aligned to the recent standards of Findability, Accessibility, Interoperability and Reuse of scientific data (FAIR) and is open-source. openDELvE is made public on an interactive platform (www.opendelve.eu), which includes a community-driven revision tool to encourage inclusion of new levee data and continuous improvement and refinement of open-source levee data.
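As a rough illustration of the kind of analysis openDELvE enables, the sketch below computes the leveed fraction of habitable delta land from a small, invented table. The column names and all numbers are hypothetical; the actual openDELvE schema and records may differ (the database itself is available at www.opendelve.eu).

# Illustrative sketch only: invented records mimicking openDELvE-style fields.
# Column names and values are hypothetical, not the actual openDELvE schema.
import pandas as pd

records = pd.DataFrame(
    [
        ("Delta A", 1200.0, 9500.0, 12000.0),
        ("Delta B", 950.0, 7800.0, 25000.0),
        ("Delta C", 300.0, 2100.0, 40000.0),
    ],
    columns=["delta", "levee_length_km", "leveed_area_km2", "habitable_area_km2"],
)

# Fraction of habitable delta land confined by flood-protection levees,
# analogous to the 19 % average reported in the abstract.
records["leveed_fraction"] = records["leveed_area_km2"] / records["habitable_area_km2"]
print(records[["delta", "leveed_fraction"]])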


Database, 2020, Vol 2020
Author(s): Hendrikje Seifert, Marc Weber, Frank Oliver Glöckner, Ivaylo Kostadinov

Abstract The Nagoya Protocol on Access and Benefit Sharing is a transparent legal framework that governs the access to genetic resources and the fair and equitable sharing of benefits arising from their utilization. Complying with the Nagoya regulations ensures legal use and re-use of data from genetic resources. Providing detailed provenance information and clear re-usage conditions plays a key role in ensuring the re-usability of research data according to the FAIR (findable, accessible, interoperable and re-usable) Guiding Principles for scientific data management and stewardship. Even with the framework provided by the ABS (access and benefit sharing) Clearing House and the support of the National Focal Points, establishing a direct link between the research data from genetic resources and the relevant Nagoya information remains a challenge. This is particularly true for re-using publicly available data. The Nagoya Lookup Service was developed for stakeholders in biological sciences with the aim of facilitating legal and FAIR data management, specifically for data publication and re-use. The service provides up-to-date information on the Nagoya party status for a geolocation provided by GPS coordinates, directing the user to the relevant local authorities for further information. It integrates open data from the ABS Clearing House, Marine Regions, GeoNames and Wikidata. The service is accessible through a REST API and a user-friendly web form. Stakeholders include data librarians, data brokers, scientists and data archivists who may use this service before, during and after data acquisition or publication to check whether legal documents need to be prepared, considered or verified. The service allows researchers to estimate whether genetic data they plan to produce or re-use might fall under Nagoya regulations, within the limits of the technology and without constituting legal advice. It is implemented using portable Docker containers and can easily be deployed locally or on a cloud infrastructure. The source code for building the service is available under an open-source license on GitHub, with a functional image on Docker Hub, and can be used by anyone free of charge.
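Since the service exposes a REST API that takes GPS coordinates and returns the Nagoya party status, a client can be a few lines of code. The sketch below is a minimal, hedged example; the base URL, parameter names and response shape are placeholders, not the service's documented interface.

# Minimal sketch of a client for a Nagoya-lookup-style REST API.
# The endpoint URL and parameter names below are placeholders, not the
# service's documented API; consult the actual service for the real interface.
import requests

BASE_URL = "https://example.org/nagoya-lookup"  # placeholder endpoint

def nagoya_party_status(lat: float, lon: float) -> dict:
    # Ask the service for the Nagoya party status at a geolocation.
    response = requests.get(BASE_URL, params={"lat": lat, "lon": lon}, timeout=10)
    response.raise_for_status()
    return response.json()

# Example: check a prospective sampling location before data acquisition.
print(nagoya_party_status(lat=54.33, lon=10.18))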


2019, Vol 35 (17), pp. 3055-3062
Author(s): Amrit Singh, Casey P Shannon, Benoît Gautier, Florian Rohart, Michaël Vacher, ...

Abstract
Motivation: In the continuously expanding omics era, novel computational and statistical strategies are needed for data integration and identification of biomarkers and molecular signatures. We present Data Integration Analysis for Biomarker discovery using Latent cOmponents (DIABLO), a multi-omics integrative method that seeks common information across different data types through the selection of a subset of molecular features, while discriminating between multiple phenotypic groups.
Results: Using simulations and benchmark multi-omics studies, we show that DIABLO identifies features with superior biological relevance compared with existing unsupervised integrative methods, while achieving predictive performance comparable to state-of-the-art supervised approaches. DIABLO is versatile, allowing for modular-based analyses and cross-over study designs. In two case studies, DIABLO identified both known and novel multi-omics biomarkers consisting of mRNAs, miRNAs, CpGs, proteins and metabolites.
Availability and implementation: DIABLO is implemented in the mixOmics R Bioconductor package with functions for parameter choice and visualization to assist in the interpretation of the integrative analyses, along with tutorials on http://mixomics.org and in our Bioconductor vignette.
Supplementary information: Supplementary data are available at Bioinformatics online.
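DIABLO itself is implemented in the mixOmics R package; as a language-neutral illustration of the underlying idea (latent components shared across omics blocks, used to discriminate phenotypic groups), here is a conceptual Python sketch built on scikit-learn. It is an analogy under simplified assumptions with simulated data, not the DIABLO algorithm or the mixOmics API.

# Conceptual sketch only: illustrates supervised multi-omics integration in
# the spirit of DIABLO, not its actual algorithm (multiblock sparse PLS-DA
# in the mixOmics R package). All data below are simulated.
import numpy as np
from sklearn.cross_decomposition import PLSCanonical
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 60
y = rng.integers(0, 2, size=n)                 # two phenotypic groups
mrna = rng.normal(size=(n, 200)) + y[:, None]  # omics block 1 (e.g. mRNA)
prot = rng.normal(size=(n, 50)) + y[:, None]   # omics block 2 (e.g. proteins)

# Extract latent components capturing common information across the blocks.
pls = PLSCanonical(n_components=2).fit(mrna, prot)
scores_mrna, scores_prot = pls.transform(mrna, prot)

# Discriminate the groups in the joint latent space.
latent = np.hstack([scores_mrna, scores_prot])
clf = LogisticRegression().fit(latent, y)
print("training accuracy:", clf.score(latent, y))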


SoftwareX, 2019, Vol 9, pp. 328-331
Author(s): Faruk Diblen, Jisk Attema, Rena Bakhshi, Sascha Caron, Luc Hendriks, ...

2020, Vol 98, pp. 29-60
Author(s): Jerrold J. Heindel, Scott Belcher, Jodi A. Flaws, Gail S. Prins, Shuk-Mei Ho, ...

Database, 2013, Vol 2013 (0), pp. bat051-bat051
Author(s): R. Vera, Y. Perez-Riverol, S. Perez, B. Ligeti, A. Kertesz-Farkas, ...

2020
Author(s): Eugene Burger, Benjamin Pfeil, Kevin O'Brien, Linus Kamb, Steve Jones, ...

Data assembly in support of global data products, such as GLODAP, and submission of data to national data centers for long-term preservation demand significant effort. This is in addition to the effort required to perform quality control on the data prior to submission. Delays in data assembly can negatively affect the timely production of scientific indicators that depend upon these datasets, including products such as GLODAP. What if data submission, metadata assembly and quality control could all be rolled into a single application? To support more streamlined data management processes in the NOAA Ocean Acidification Program (OAP), we are developing such an application, which also has the potential to serve a broader community.

This application addresses the need for data contributing to analysis and synthesis products to be high quality, well documented, and accessible from the applications scientists prefer to use. The Scientific Data Integration System (SDIS), developed by the PMEL Science Data Integration Group, allows scientists to submit their data in a number of formats. Submitted data are checked for common errors. Metadata are extracted from the data and can then be complemented into a complete metadata record using the integrated metadata entry tool, which collects rich metadata that meets the carbon science community's requirements. Quality control for standard biogeochemical parameters is still under development and will be integrated into the application; the quality control routines will be implemented in close collaboration with colleagues from the Bjerknes Climate Data Centre (BCDC) within the Bjerknes Centre for Climate Research (BCCR). This presentation will highlight the capabilities that are now available, the implementation of the archive automation workflow, and its potential use in support of GLODAP data assembly efforts.
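To make the "checked for common errors" step concrete, the sketch below shows the kind of validation such a submission pipeline might run on an incoming CSV file. The required columns, value ranges, and function name are hypothetical illustrations, not SDIS's actual checks or interfaces.

# Illustrative sketch of a common-error check for a data submission, in the
# spirit of SDIS. Required columns and value ranges are hypothetical.
import csv

REQUIRED_COLUMNS = {"datetime", "latitude", "longitude", "pco2_uatm"}

def check_submission(path: str) -> list[str]:
    # Return human-readable problems found in a CSV submission.
    problems = []
    with open(path, newline="") as f:
        reader = csv.DictReader(f)
        missing = REQUIRED_COLUMNS - set(reader.fieldnames or [])
        if missing:
            return [f"missing columns: {sorted(missing)}"]
        for lineno, row in enumerate(reader, start=2):  # line 1 is the header
            try:
                lat, lon = float(row["latitude"]), float(row["longitude"])
            except ValueError:
                problems.append(f"line {lineno}: non-numeric coordinates")
                continue
            if not (-90.0 <= lat <= 90.0 and -180.0 <= lon <= 180.0):
                problems.append(f"line {lineno}: coordinates out of range")
    return problems

# Example: validate a file before submission (the path is hypothetical).
print(check_submission("submission.csv"))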

