NIfTI-MRS: A standard format for magnetic resonance spectroscopic data

2021 ◽  
Author(s):  
William T Clarke ◽  
Mark Mikkelsen ◽  
Georg Oeltzschner ◽  
Tiffany Bell ◽  
Amirmohammad Shamaei ◽  
...  

Purpose: The use of multiple data formats in the MRS community currently hinders data sharing and integration. NIfTI-MRS is proposed as a standard MR spectroscopy data format, implemented as an extension to the Neuroimaging Informatics Technology Initiative (NIfTI) format. Using this standardised format will facilitate data sharing, ease algorithm development, and encourage the integration of MRS analysis with other imaging modalities. Methods: A file format based on the NIfTI header extension framework was designed to incorporate essential spectroscopic metadata and additional encoding dimensions. A detailed description of the specification is provided. An open-source command-line conversion program is implemented to enable conversion of single-voxel and spectroscopic imaging data to NIfTI-MRS. To provide visualisation of data in NIfTI-MRS, a dedicated plugin is implemented for FSLeyes, the FSL image viewer. Results: Alongside online documentation, ten example datasets are provided in the proposed format. In addition, minimal examples of NIfTI-MRS readers have been implemented. The conversion software, spec2nii, currently converts fourteen formats to NIfTI-MRS, including DICOM and vendor proprietary formats. Conclusion: The proposed format aims to solve the issue of multiple data formats being used in the MRS community. By providing a single conversion point, it aims to simplify the processing and analysis of MRS data, thereby lowering the barrier to the use of MRS. Furthermore, it can serve as the basis for open data sharing, collaboration, and interoperability of analysis programs. It also opens up the possibility of greater standardisation and harmonisation. By aligning with the dominant format in neuroimaging, NIfTI-MRS enables the use of mature tools present in the imaging community, demonstrated in this work by using a dedicated imaging tool, FSLeyes, as a viewer.
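The header-extension mechanism the format builds on is simple enough to sketch directly: in NIfTI-1, each extension is an 8-byte (esize, ecode) header followed by a payload, with esize a multiple of 16, and NIfTI-MRS carries its spectroscopic metadata as a JSON payload. The snippet below is a minimal illustration of that framing, not the reference implementation; the metadata keys are illustrative, and the ecode (44) is the code registered for NIfTI-MRS as we understand the specification.

```python
import json
import struct

def pack_nifti_extension(ecode: int, payload: dict) -> bytes:
    """Serialise a dict as a NIfTI-1 header extension: esize, ecode, data.

    esize covers the 8-byte (esize, ecode) header plus the payload,
    padded with NULs so the total length is a multiple of 16 bytes.
    """
    raw = json.dumps(payload).encode("utf-8")
    esize = 8 + len(raw)
    esize += (-esize) % 16  # round up to the next multiple of 16
    padded = raw.ljust(esize - 8, b"\x00")
    return struct.pack("<ii", esize, ecode) + padded

# Illustrative NIfTI-MRS-style metadata; ecode 44 per our reading of the spec.
blob = pack_nifti_extension(44, {"SpectrometerFrequency": [297.2],
                                 "ResonantNucleus": ["1H"]})
esize, ecode = struct.unpack_from("<ii", blob)
print(esize % 16 == 0, ecode)  # the packed extension is 16-byte aligned
```

A reader would do the reverse: read the two little-endian int32s, then strip the NUL padding from the payload before JSON-decoding it.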

2021 ◽  
pp. 002203452110202
Author(s):  
F. Schwendicke ◽  
J. Krois

Data are a key resource for modern societies and are expected to improve the quality, accessibility, affordability, safety, and equity of health care. Dental care and research are currently transforming into what we term data dentistry, with 3 main applications: 1) medical data analysis uses deep learning, allowing one to master unprecedented amounts of data (language, speech, imagery) and put them to productive use. 2) Data-enriched clinical care integrates data from the individual (e.g., demographic, social, clinical and omics data, consumer data), setting (e.g., geospatial, environmental, provider-related data), and systems levels (payer or regulatory data to characterize input, throughput, output, and outcomes of health care) to provide a comprehensive and continuous real-time assessment of biologic perturbations, individual behaviors, and context. Such care may contribute to a deeper understanding of health and disease and to more precise, personalized, predictive, and preventive care. 3) Data for research include open research data and data sharing, allowing one to appraise, benchmark, pool, replicate, and reuse data. Concerns about, and limited confidence in, data-driven applications, gaps in stakeholders’ and systems’ capabilities, and a lack of data standardization and harmonization currently limit the development and implementation of data dentistry. Aspects of bias and data-user interaction require attention. Action items for the dental community circle around increasing data availability, refinement, and usage; demonstrating the safety, value, and usefulness of applications; educating the dental workforce and consumers; providing performant and standardized infrastructure and processes; and incentivizing and adopting open data and data sharing.


Author(s):  
Di Xian ◽  
Peng Zhang ◽  
Ling Gao ◽  
Ruijing Sun ◽  
Haizhen Zhang ◽  
...  

Following the progress of satellite data assimilation in the 1990s, the combination of meteorological satellites and numerical models has changed the way scientists understand the earth. With the evolution of numerical weather prediction models and earth system models, meteorological satellites will play an even more important role in the earth sciences in the future. As part of the space-based infrastructure, the Fengyun (FY) meteorological satellites have contributed to earth science sustainability studies through an open data policy and stable data quality since the first launch of the FY-1A satellite in 1988. The capability of earth system monitoring was greatly enhanced after the second-generation polar-orbiting FY-3 satellites and geostationary-orbiting FY-4 satellites were developed. Meanwhile, the quality of the products generated from the FY-3 and FY-4 satellites is comparable to that of the well-known MODIS products. FY satellite data have been utilized broadly in weather forecasting, climate and climate change investigations, environmental disaster monitoring, etc. This article reviews the instruments mounted on the FY satellites. Sensor-dependent level 1 products (radiance data) and inversion algorithm-dependent level 2 products (geophysical parameters) are introduced. As examples, some typical geophysical parameters, such as wildfires, lightning, vegetation indices, aerosol products, soil moisture, and precipitation estimation, have been demonstrated and validated by in-situ observations and other well-known satellite products. To help users access the FY products, a set of data sharing systems has been developed and operated. A newly developed data sharing system based on cloud technology has been shown to improve the efficiency of data delivery.


2019 ◽  
Vol 10 (20) ◽  
pp. 17 ◽  
Author(s):  
Mattia Previtali ◽  
Riccardo Valente

<p>The open data paradigm is changing the research approach in many fields, such as remote sensing and the social sciences. This is supported by governmental decisions and policies that are boosting the open data wave, and archaeology is also affected by this new trend. In many countries, archaeological data are still protected or only limited access is allowed. However, the strong political and economic support for the publication of government data as open data will change accessibility and disciplinary expertise in the archaeological field too. In order to maximize the impact of data, their technical openness is of primary importance. Indeed, since a spreadsheet is more usable than a PDF of a table, the availability of digital archaeological data structured using standardised approaches is essential for the real usability of published data. In this context, the main aim of this paper is to present a workflow for sharing archaeological data as open data with a high level of technical usability and interoperability. Primary data are mainly acquired through the use of digital techniques (e.g. digital cameras and terrestrial laser scanning). The processing of these raw data is performed with commercial software for scan registration and image processing, allowing for a simple and semi-automated workflow. Outputs obtained from this step are then processed in modelling and drawing environments to generate digital models, both 2D and 3D. 
These raw geometric data are then enriched with further information to generate a Geographic Information System (GIS), which is finally published as open data using Open Geospatial Consortium (OGC) standards to maximise interoperability.</p><p><strong>Highlights:</strong></p><ul><li><p>Open data will change accessibility and disciplinary expertise in the archaeological field.</p></li><li><p>The main aim of this paper is to present a workflow for sharing archaeological data as open data with a high level of interoperability.</p></li><li><p>Digital acquisition techniques are used to document archaeological excavations, and a Geographic Information System (GIS) is generated and published as open data.</p></li></ul>
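The final OGC publication step can be illustrated with a minimal sketch of a WMS 1.3.0 GetMap request, the kind of standardised call through which a published excavation layer would be consumed; the endpoint, layer name, and bounding box below are hypothetical.

```python
from urllib.parse import urlencode

# Hypothetical WMS endpoint and layer; the parameter names and values
# follow the OGC WMS 1.3.0 GetMap operation.
params = {
    "SERVICE": "WMS",
    "VERSION": "1.3.0",
    "REQUEST": "GetMap",
    "LAYERS": "excavation:trench_1",   # hypothetical published GIS layer
    "CRS": "EPSG:4326",
    "BBOX": "45.46,9.18,45.47,9.19",   # invented excavation extent
    "WIDTH": "800",
    "HEIGHT": "600",
    "FORMAT": "image/png",
}
url = "https://example.org/geoserver/wms?" + urlencode(params)
print(url)
```

Any OGC-compliant client (QGIS, a web map, another server) can consume such a request, which is what makes the standard the interoperability point of the workflow.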


2021 ◽  
Author(s):  
Anita Bandrowski ◽  
Jeffrey S. Grethe ◽  
Anna Pilko ◽  
Tom Gillespie ◽  
Gabi Pine ◽  
...  

The NIH Common Fund’s Stimulating Peripheral Activity to Relieve Conditions (SPARC) initiative is a large-scale program that seeks to accelerate the development of therapeutic devices that modulate electrical activity in nerves to improve organ function. Integral to the SPARC program are the rich anatomical and functional datasets produced by investigators across the SPARC consortium that provide key details about organ-specific circuitry, including structural and functional connectivity, mapping of cell types, and molecular profiling. These datasets are provided to the research community through an open data platform, the SPARC Portal. To ensure SPARC datasets are Findable, Accessible, Interoperable and Reusable (FAIR), they are all submitted to the SPARC Portal following a standard scheme established by the SPARC Curation Team, called the SPARC Data Structure (SDS). Inspired by the Brain Imaging Data Structure (BIDS), the SDS has been designed to capture the large variety of data generated by SPARC investigators, who come from all fields of biomedical research. Here we present the rationale and design of the SDS, including a description of the SPARC curation process and the automated tools for complying with the SDS, such as the SDS validator and Software to Organize Data Automatically (SODA) for SPARC. The objective is to provide detailed guidelines for anyone who wants to comply with the SDS. Since the SDS is suitable for any type of biomedical research data, it can be adopted by any group that wants to follow the FAIR data principles for managing its data, even outside of the SPARC consortium. Finally, this manuscript provides a foundational framework that can be used by any organization wishing either to adapt the SDS to the specific needs of its data or to design its own FAIR data sharing scheme from scratch.
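The directory-level checks a validator for a BIDS-like scheme performs can be sketched as follows. This is an illustration only, not the official SDS validator, and the required entry names are assumptions based on the general description of the SDS rather than the current specification.

```python
from pathlib import Path

# Assumed top-level entries of an SDS-style dataset (illustrative names,
# not guaranteed to match the official SPARC Data Structure spec).
REQUIRED_FILES = {"dataset_description.xlsx", "subjects.xlsx", "submission.xlsx"}
REQUIRED_DIRS = {"primary"}

def check_sds_layout(root: str) -> list:
    """Return the sorted list of required entries missing from a dataset folder."""
    present = {p.name for p in Path(root).iterdir()}
    return sorted((REQUIRED_FILES | REQUIRED_DIRS) - present)
```

A real validator goes far beyond this (checking metadata fields, file naming, and cross-references), but the principle is the same: a fixed, machine-checkable layout is what makes automated curation and FAIR reuse possible.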


F1000Research ◽  
2017 ◽  
Vol 6 ◽  
pp. 12 ◽  
Author(s):  
Stéphanie Boué ◽  
Thomas Exner ◽  
Samik Ghosh ◽  
Vincenzo Belcastro ◽  
Joh Dokler ◽  
...  

The US FDA defines modified risk tobacco products (MRTPs) as products that aim to reduce harm or the risk of tobacco-related disease associated with commercially marketed tobacco products. Establishing a product’s potential as an MRTP requires scientific substantiation, including toxicity studies and measures of disease risk relative to those of cigarette smoking. Best practices encourage verification of the data from such studies through sharing and open standards. Building on the experience gained from the OpenTox project, a proof-of-concept database and website (INTERVALS) has been developed to share results from both in vivo inhalation studies and in vitro studies conducted by Philip Morris International R&D to assess candidate MRTPs. As datasets are often generated by diverse methods and standards, they need to be traceable and curated, and the methods used need to be well described, so that knowledge can be gained using data science principles and tools. The data-management framework described here accounts for the latest standards of data sharing and research reproducibility. Curated data and methods descriptions have been prepared in ISA-Tab format and stored in a database accessible via a search portal on the INTERVALS website. The portal allows users to browse the data by study or mechanism (e.g., inflammation, oxidative stress) and obtain information relevant to study design, methods, and the most important results. Given the successful development of the initial infrastructure, the goal is to grow this initiative and establish a public repository for 21st-century preclinical systems toxicology MRTP assessment data and results that supports open data principles.
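The ISA-Tab format mentioned above is, at its core, a set of tab-delimited tables with conventionalised column headers linking investigations, studies, and assays. A minimal sketch of such a study table follows; the column names follow ISA-Tab conventions, while the sample rows are invented.

```python
import csv
import io

# An ISA-Tab-style study table: tab-delimited, with Source/Characteristics/
# Sample columns. Row content below is illustrative only.
rows = [
    ["Source Name", "Characteristics[organism]", "Sample Name"],
    ["rat_01", "Rattus norvegicus", "lung_sample_01"],
    ["rat_02", "Rattus norvegicus", "lung_sample_02"],
]
buf = io.StringIO()
csv.writer(buf, delimiter="\t").writerows(rows)
print(buf.getvalue())
```

Because the tables are plain tab-delimited text, any spreadsheet tool or data-science library can read them back, which is precisely what makes the format suitable for traceable, reusable curation.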


Plant Disease ◽  
2018 ◽  
Vol 102 (10) ◽  
pp. 1981-1988 ◽  
Author(s):  
Wei Liu ◽  
Xueren Cao ◽  
Jieru Fan ◽  
Zhenhua Wang ◽  
Zhengyuan Yan ◽  
...  

High-resolution aerial imaging with an unmanned aerial vehicle (UAV) was used to quantify wheat powdery mildew and estimate grain yield. Aerial digital images were acquired at Feekes growth stage (GS) 10.5.4 from flight altitudes of 200, 300, and 400 m during the 2009–10 and 2010–11 seasons; and 50, 100, 200, and 300 m during the 2011–12, 2012–13, and 2013–14 seasons. The image parameter lgR was consistently correlated positively with wheat powdery mildew severity and negatively with wheat grain yield for all combinations of flight altitude and year. Fitting the data with random coefficient regression models showed that the exact relationship of lgR with disease severity and grain yield varied considerably from year to year and to a lesser extent with flight altitude within the same year. The present results raise an important question about the consistency of using remote imaging information to estimate disease severity and grain yield. Further research is needed to understand the nature of interyear variability in the relationship of remote imaging data with disease or grain yield. Only then can we determine how the remote imaging tool can be used in commercial agriculture.


BMJ Open ◽  
2016 ◽  
Vol 6 (10) ◽  
pp. e011784 ◽  
Author(s):  
Anisa Rowhani-Farid ◽  
Adrian G Barnett

Objective: To quantify data sharing trends and data sharing policy compliance at the British Medical Journal (BMJ) by analysing the rate of data sharing practices, and to investigate attitudes and examine barriers towards data sharing. Design: Observational study. Setting: The BMJ research archive. Participants: 160 randomly sampled BMJ research articles from 2009 to 2015, excluding meta-analyses and systematic reviews. Main outcome measures: Percentages of research articles that indicated the availability of their raw data sets in their data sharing statements, and of those that readily made their data sets available on request. Results: 3 articles contained the data in the article. 50 of the remaining 157 articles (32%) indicated the availability of their data sets. 12 used publicly available data and the remaining 38 were sent email requests to access their data sets. Only 1 publicly available data set could be accessed, and only 6 of the 38 shared their data via email. Overall, only 7/157 research articles shared their data sets: 4.5% (95% CI 1.8% to 9%). For the 21 clinical trials bound by the BMJ data sharing policy, the percentage shared was 24% (95% CI 8% to 47%). Conclusions: Despite the BMJ's strong data sharing policy, sharing rates are low. Possible explanations for low data sharing rates include: the wording of the BMJ data sharing policy, which leaves room for individual interpretation and possible loopholes; our email requests ending up in researchers' spam folders; and researchers not being rewarded for sharing their data. It might be time for a more effective data sharing policy and better incentives for health and medical researchers to share their data.
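The headline figure can be reproduced directly: 7 of 157 is 4.5%. The sketch below uses a Wilson score interval as a stand-in; the paper itself quotes an exact binomial interval (1.8% to 9%), so the approximation lands close but not identical at the lower bound.

```python
import math

def wilson_ci(k: int, n: int, z: float = 1.96):
    """Approximate 95% Wilson score interval for a binomial proportion k/n."""
    p = k / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return centre - half, centre + half

lo, hi = wilson_ci(7, 157)
print(f"{7/157:.1%} (95% CI {lo:.1%} to {hi:.1%})")
```

The small-sample subgroup (5 of 21 trials, 24%) shows why the reported intervals are so wide: with n this small, any binomial interval spans tens of percentage points.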


Data & Policy ◽  
2020 ◽  
Vol 2 ◽  
Author(s):  
Sophie Stalla-Bourdillon ◽  
Gefion Thuermer ◽  
Johanna Walker ◽  
Laura Carmichael ◽  
Elena Simperl

Data trusts have been conceived as a mechanism to enable the sharing of data across entities where other formats, such as open data or commercial agreements, are not appropriate, and to make data sharing both easier and more scalable. By our definition, a data trust is a legal, technical, and organizational structure for enabling the sharing of data for a variety of purposes. The concept of the “data trust” requires further disambiguation from other facilitating structures such as data collaboratives. Irrespective of the terminology used, attempting to create trust in order to facilitate data sharing, and to create benefits for individuals, groups of individuals, or society at large, requires at a minimum a process-based mechanism, that is, a workflow with a trustworthiness-by-design approach at its core. Data protection by design should be a key component of such an approach.


2016 ◽  
Vol 375 (5) ◽  
pp. 403-405 ◽  
Author(s):  
Harlan M. Krumholz ◽  
Joanne Waldstreicher
