NRDC Data Visualization Web Suite

10.29007/rkqh ◽  
2020 ◽  
Author(s):  
Andrew Muñoz ◽  
Frederick Harris ◽  
Sergiu Dascalu

The Nevada Research Data Center (NRDC) is a research data management center that collects sensor-based data from various locations throughout the state of Nevada. The measurements collected are specifically environmental data, which are used in cross-disciplinary research across different facilities. Since data are collected at a high rate, it is necessary to be able to visualize them quickly and efficiently. This paper discusses in detail a web application that researchers can use to build visualizations that aid in data comparisons. While other web applications exist that allow researchers to visualize the data, this project expands on that idea by enabling researchers not only to visualize the data but also to make comparisons and predictions.

2021 ◽  
Vol 9 ◽  
Author(s):  
Javad Chamanara ◽  
Jitendra Gaikwad ◽  
Roman Gerlach ◽  
Alsayed Algergawy ◽  
Andreas Ostrowski ◽  
...  

Obtaining fit-for-use data on the diverse aspects of biodiversity, ecology and the environment is challenging, since such data are often fragmented, sub-optimally managed and available only in heterogeneous formats. Recently, with the universal acceptance of the FAIR data principles, the requirements and standards of data publication have changed substantially. Researchers are encouraged to manage data in line with the FAIR data principles and to ensure that raw data, metadata, processed data, software, code and associated material are securely stored and that the data are made available upon completion of the research. We have developed BEXIS2 as an open-source, community-driven, web-based research data management system to support the needs of mid- to large-scale research projects with multiple sub-projects and up to several hundred researchers. BEXIS2 is a modular and extensible system providing a range of functions to realise the complete data lifecycle, from data structure design to data collection, data discovery, dissemination, integration, quality assurance and research planning. It is an extensible and customisable system that allows for the development of new functions and the customisation of its various components, from database schemas to the user interface layout, elements, and look and feel. During the development of BEXIS2, we aimed to incorporate key aspects of what is encoded in the FAIR data principles. To investigate the extent to which BEXIS2 conforms to these principles, we conducted a self-assessment using the FAIR indicators, definitions and criteria provided in the FAIR Data Maturity Model. Even though the FAIR Data Maturity Model was initially developed to judge the conformance of datasets, the self-assessment results indicated that BEXIS2 conforms to and supports the FAIR indicators remarkably well. BEXIS2 conforms strongly to the Findability and Accessibility indicators. The Interoperability indicator is moderately supported as of now; however, for many of the less-supported facets, we have concrete plans for improvement. Reusability (as defined by the FAIR data principles) is partially achieved. This paper also illustrates community deployments of BEXIS2 instances as success stories, exemplifying its capacity to meet the biodiversity and ecological data management needs of differently sized projects and to serve as an organisational research data management system.
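The self-assessment procedure described above can be sketched in code. A minimal sketch, assuming indicators in the style of the RDA FAIR Data Maturity Model: the indicator IDs below imitate the model's naming scheme, and the assigned maturity levels (0-4) are illustrative placeholders, not BEXIS2's actual assessment results.

```python
from collections import defaultdict

# Hypothetical subset of an assessment: (indicator id, principle area,
# priority, assigned maturity level 0-4). Values are illustrative only.
assessment = [
    ("RDA-F1-01M", "F", "essential", 4),
    ("RDA-F2-01M", "F", "essential", 3),
    ("RDA-A1-02M", "A", "essential", 4),
    ("RDA-A1-03M", "A", "essential", 4),
    ("RDA-I1-01M", "I", "important", 2),
    ("RDA-I2-01M", "I", "important", 1),
    ("RDA-R1-01M", "R", "essential", 2),
]

def conformance_by_area(indicators, passing_level=3):
    """Fraction of indicators per principle area (F/A/I/R) whose
    assigned maturity level is at or above passing_level."""
    met, total = defaultdict(int), defaultdict(int)
    for _id, area, _priority, level in indicators:
        total[area] += 1
        if level >= passing_level:
            met[area] += 1
    return {area: met[area] / total[area] for area in total}

scores = conformance_by_area(assessment)
```

With these placeholder levels, the Findability and Accessibility areas score fully while Interoperability and Reusability lag, mirroring the qualitative pattern the abstract reports.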


2020 ◽  
Author(s):  
Ionut Iosifescu-Enescu ◽  
Gian-Kasper Plattner ◽  
Dominik Haas-Artho ◽  
David Hanimann ◽  
Konrad Steffen

EnviDat – www.envidat.ch – is the institutional Environmental Data portal of the Swiss Federal Institute for Forest, Snow and Landscape Research WSL. Launched in 2012 as a small project to explore possible solutions for a generic WSL-wide data portal, it has since evolved into a strategic initiative at the institutional level, tackling issues in the broad areas of Open Research Data and Research Data Management. EnviDat demonstrates our commitment to accessible research data in order to advance environmental science.

EnviDat actively implements the FAIR (Findability, Accessibility, Interoperability and Reusability) principles. Core EnviDat research data management services include the registration, integration and hosting of quality-controlled, publication-ready data from a wide range of terrestrial environmental systems, in order to provide unified access to WSL’s environmental monitoring and research data. The registration of research data in EnviDat results in formal publication with permanent identifiers (EnviDat’s own PIDs as well as DOIs) and the assignment of appropriate citation information.

Innovative EnviDat features that contribute to the global system of modern documentation and exchange of scientific information include: (i) a DataCRediT mechanism designed for specifying data authorship (Collection, Validation, Curation, Software, Publication, Supervision), (ii) the ability to enhance published research data with additional resources, such as model codes and software, (iii) in-depth documentation of data provenance, e.g., through a dataset description as well as related publications and datasets, (iv) unambiguous and persistent identifiers for authors (ORCIDs) and, in the medium term, (v) a decentralized “peer-review” data publication process for safeguarding the quality of datasets available in EnviDat.

More recently, EnviDat development has been moving beyond the set of core features expected from a research data management portal with a built-in publishing repository. This evolution is driven by the diverse set of researchers’ requirements for a specialized environmental data portal that formally cuts across the five WSL research themes (forest, landscape, biodiversity, natural hazards, and snow and ice) and that concerns all research units and central IT services.

Examples of such recent requirements for EnviDat include: (i) immediate access to data collected by automatic measurement stations, (ii) metadata and data visualization on charts and maps, with geoservices for large geodatasets, and (iii) progress towards linked open data (LOD) with curated vocabularies and semantics for the environmental domain.

There are many challenges associated with the developments mentioned above. However, they also represent opportunities for further improving the exchange of scientific information in the environmental domain. Geospatial technologies in particular have the potential to become a central element of any specialized environmental data portal, triggering the convergence between publishing repositories and geoportals. Ultimately, these new requirements demonstrate the raised expectations that institutions and researchers have towards the future capabilities of research data portals and repositories in the environmental domain. With EnviDat, we are ready to take up these challenges over the years to come.


2015 ◽  
Author(s):  
Karlheinz Pappenberger

See video of the presentation. On 17 July 2015 the Ministry of Science, Research and the Arts of Baden-Wuerttemberg, Germany, invited national experts to the presentation of the final report of the ‘bwFDM communities’ project. This 18-month project was launched at the beginning of 2014 to evaluate the services and support that libraries and IT service centres should offer researchers in the area of research data management. Full-time key project staff were established at all 9 universities in the state of Baden-Wuerttemberg to conduct semi-structured personal interviews with all research groups working with research data (in a broad sense, including all areas of science, social science and the humanities) and to document them in the form of user stories. 627 interviews were conducted and more than 2,500 user stories were extracted, showing the wide range of needs and wishes articulated by researchers. On this basis, issues of importance and requirements were identified, categorised into 18 different groups and finalised into an analysis of the status quo along with recommendations for concrete action plans. The results cover the areas ‘general requirements and policy framework’, ‘data collection and data sharing’, ‘technical framework and virtual research environments’, ‘preservation’, ‘IT infrastructure and IT support’, ‘licencing’ and ‘Open Science’. The presentation will give an overview of the project results and will highlight the roles that libraries and IT service centres are expected to play from the researcher’s point of view. As the final report to the Ministry contributes to a comprehensive research data management strategy for the State of Baden-Wuerttemberg, the presentation will also point out the status of the federal strategy in RDM.


Author(s):  
Judith E Pasek ◽  
Jennifer Mayer

Research data management is a prominent and evolving consideration for the academic community, especially in scientific disciplines. This research study surveyed 131 graduate students and 79 faculty members in the sciences at two public doctoral universities to determine the importance, knowledge, and interest levels around research data management training and education. The authors adapted 12 competencies for measurement in the study. Graduate students and faculty ranked the following areas most important among the 12 competencies: ethics and attribution, data visualization, and quality assurance. Graduate students indicated they were least knowledgeable and skilled in data curation and re-use, metadata and data description, data conversion and interoperability, and data preservation. Their responses generally matched the perceptions of faculty. The study also examined how graduate students learn research data management, and how faculty perceive that their students learn research data management. Results showed that graduate students utilize self-learning most often and that faculty may be less influential in research data management education than they perceive. Responses for graduate students between the two institutions were not statistically different, except in the area of perceived deficiencies in data visualization competency.


Author(s):  
Frank Oliver Glöckner ◽  
Michael Diepenbroek

Background: the NFDI process in Germany. The digital revolution is fundamentally transforming research data and methods. Mastering this transformation poses major challenges for stakeholders in the domains of science and policy. The process of digitalisation creates immense opportunities, but it must be structured proactively. To this end, the establishment of effective governance mechanisms for research data management (RDM) is of fundamental importance and will be one key driver of successful research and innovation in the future. In 2016 the German Council for Information Infrastructures (RfII) recommended the establishment of a “Nationale Forschungsdateninfrastruktur” (National Research Data Infrastructure, or NFDI), which will serve as the backbone for research data management in Germany. The NFDI should be implemented as a dynamic national collaborative network that grows over time and is composed of various specialised nodes (consortia). The talk will provide a short overview of the status and objectives of the NFDI. It will commence with a description of the goals of the NFDI4BioDiversity consortium, which was established to provide targeted data management support for the biodiversity community.

The NFDI4BioDiversity consortium: biodiversity, ecology and environmental data. Biodiversity is more than just the diversity of living species. It includes genetic diversity, functional diversity, interactions and the diversity of whole ecosystems. Humankind continues to dramatically impact the Earth’s ecosystems: species are dying out, and genetic diversity as well as whole ecosystems are endangered or already lost. Next to the loss of charismatic species and conspicuous changes in ecosystems, we are experiencing a quiet loss of common species, and together these losses have captured high-level policy attention. This has impacts on vital ecosystem services that provide the foundation of human well-being. A general understanding of the status, trends and drivers of biodiversity on Earth is urgently needed to devise conservation responses. Besides the fact that data are often scattered across repositories or not accessible at all, the main challenge for integrative studies is the heterogeneity of measurement and observation types, combined with a substantial lack of documentation. This leads to inconsistencies and incompatibilities in data structures, interfaces and semantics, and thus hinders the re-usability of data to answer scientifically and socially relevant questions. Synthesis as well as hypothesis generation will only proceed when data are compliant with the FAIR (Findable, Accessible, Interoperable and Re-usable) data principles. Over the last five years, these key challenges have been addressed by the DFG-funded German Federation for Biological Data (GFBio) project. GFBio encompasses technical, organisational, financial and community aspects to raise awareness for research data management in biodiversity research and the environmental sciences. To foster sustainability across this federated infrastructure, the not-for-profit association “Gesellschaft für biologische Daten e.V.” (GFBio e.V.) was set up in 2016 as an independent legal entity. NFDI4BioDiversity builds on the experience and established user community of GFBio and takes advantage of GFBio e.V. GFBio already comprises data centers for nucleotide and environmental data as well as the seven well-established data centers of Germany’s largest natural science research facilities and museums, and the world’s most diverse microbiological resource collection. The network is now being extended to include the network of botanical gardens and the largest collections of crop plants and their wild relatives. Together, these collections host more than 75% of all museum objects (150 million) in Germany and more than 80% of all described microbial species. They represent the largest and most internationally relevant data repositories.

NFDI4BioDiversity will extend its community engagement at the science-society-policy interface by including farm animal biology, crop sciences, biodiversity monitoring and citizen science, as well as systems biology, encompassing world-leading tools and collections for FAIR data management. Partners of the German Network for Bioinformatics Infrastructure (de.NBI) provide large-scale data analysis and storage capacities in the cloud, as well as extensive continuous training and education experience. Dedicated personnel will be responsible for the mutual exchange of data and experiences with NFDI4Life-Umbrella, NFDI4Earth, NFDI4Chem, NFDI4Health and beyond. As the digitalisation and liberation of data proceed, NFDI4BioDiversity will foster community standards, quality management and documentation, as well as the harmonisation and synthesis of heterogeneous data. It will proactively engage the user community to build a coordinated data management platform for all types of biodiversity data as a dedicated added-value service for all users of NFDI.


2020 ◽  
Author(s):  
Alexander Götz ◽  
Johannes Munke ◽  
Mohamad Hayek ◽  
Hai Nguyen ◽  
Tobias Weber ◽  
...  

LTDS (“Let the Data Sing”) is a lightweight, microservice-based Research Data Management (RDM) architecture which augments previously isolated data stores (“data silos”) with FAIR research data repositories. The core components of LTDS include a metadata store as well as dissemination services such as a landing page generator and an OAI-PMH server. As these core components were designed to be independent of one another, a central control system has been implemented which handles data flows between components. LTDS is developed at LRZ (Leibniz Supercomputing Centre, Garching, Germany) with the aim of allowing researchers to make massive amounts of data (e.g. HPC simulation results) on different storage backends FAIR. Owing to their size, such data often cannot easily be transferred into conventional repositories. As a result, they remain “hidden” while only, e.g., final results are published - a massive problem for the reproducibility of simulation-based science. The LTDS architecture uses open-source and standardized components and follows best practices in FAIR data (and metadata) handling. We present our experience with our first three use cases: the Alpine Environmental Data Analysis Centre (AlpEnDAC) platform, the ClimEx dataset with 400 TB of climate ensemble simulation data, and the Virtual Water Value (ViWA) hydrological model ensemble.
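OAI-PMH, one of the dissemination services named above, is a well-specified HTTP protocol, so the harvester side of the exchange is easy to sketch. A minimal sketch of building OAI-PMH 2.0 request URLs; the base URL is a hypothetical placeholder, since the actual LTDS endpoint is not given in the text.

```python
from urllib.parse import urlencode

# Hypothetical base URL; the real LTDS OAI-PMH endpoint is not stated here.
BASE_URL = "https://example.lrz.de/oai"

def oai_request(verb, **kwargs):
    """Build an OAI-PMH 2.0 request URL: a base URL plus the mandatory
    'verb' argument and any verb-specific arguments."""
    params = {"verb": verb, **kwargs}
    return f"{BASE_URL}?{urlencode(params)}"

# Harvest Dublin Core metadata records...
url = oai_request("ListRecords", metadataPrefix="oai_dc")
# ...and continue a partial result set via a resumption token.
resume = oai_request("ListRecords", resumptionToken="token123")
```

Fetching each URL (e.g. with `urllib.request`) returns an XML response that a harvester such as a registry or aggregator can parse, which is what makes the augmented data stores findable from outside.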


2021 ◽  
Vol 13 (2) ◽  
pp. 50
Author(s):  
Hamed Z. Jahromi ◽  
Declan Delaney ◽  
Andrew Hines

Content is a key influencing factor in Web Quality of Experience (QoE) estimation. A web user’s satisfaction can be influenced by how long it takes to render and visualize the visible parts of the web page in the browser, referred to as the Above-the-Fold (ATF) time. SpeedIndex (SI) has been widely used to estimate the perceived loading speed of ATF content and as a proxy metric for Web QoE estimation. Web application developers have been actively introducing innovative interactive features, such as animated and multimedia content, aiming to capture users’ attention and improve the functionality and utility of web applications. However, the literature shows that, for websites with animated content, the ATF time estimated using state-of-the-art metrics may not accurately match the completed ATF time as perceived by users. This study introduces a new metric, Plausibly Complete Time (PCT), that estimates ATF time for a user’s perception of websites with and without animations. PCT can be integrated with SI and web QoE models. The accuracy of the proposed metric is evaluated on two publicly available datasets. The proposed metric holds a high positive Spearman’s correlation (rs = 0.89) with the perceived ATF reported by users for websites with and without animated content. This study demonstrates that using PCT as a KPI in QoE estimation models can improve the robustness of QoE estimation in comparison to the state-of-the-art ATF time metric. Furthermore, experimental results showed that estimating SI using PCT improves the robustness of SI for websites with animated content. The PCT estimation allows web application designers to identify where poor design has significantly increased ATF time and to refactor their implementation before it impacts the end-user experience.
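As a point of reference for the SI metric this abstract builds on: SpeedIndex is conventionally defined (e.g., in WebPageTest) as the integral over load time of visual incompleteness, so lower values mean faster perceived loading. A minimal sketch of that computation follows; the `frames` samples are invented for illustration and are not taken from the paper’s datasets, and the PCT metric itself is not reimplemented here.

```python
def speed_index(samples):
    """Approximate SpeedIndex from (time_ms, visual_completeness) samples.

    visual_completeness is the fraction (0.0-1.0) of the above-the-fold
    area that matches the final rendered frame. SI is the integral of
    (1 - completeness) over time, approximated here as a step function
    that holds each sample's value until the next sample.
    """
    si = 0.0
    for (t0, vc0), (t1, _vc1) in zip(samples, samples[1:]):
        si += (1.0 - vc0) * (t1 - t0)
    return si

# Illustrative page reaching full visual completeness at 2000 ms.
frames = [(0, 0.0), (500, 0.4), (1000, 0.8), (2000, 1.0)]
```

For these samples the result is 1000 ms-equivalent units; a page that painted the same frames earlier would score lower, which is why animations that keep changing the viewport can distort SI and motivate a complement such as PCT.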


Author(s):  
Fabian Cremer ◽  
Silvia Daniel ◽  
Marina Lemaire ◽  
Katrin Moeller ◽  
Matthias Razum ◽  
...  
