The Front Office–Back Office Model: Supporting Research Data Management in the Netherlands

2014
Vol 9 (2)
pp. 39-46
Author(s):  
Ingrid Dillo ◽  
Peter Doorn

High-quality and timely data management and secure storage of data, both during and after completion of research, are essential prerequisites for sharing that data. It is therefore crucial that universities and research institutions themselves formulate a clear policy on data management within their organization. For the implementation of this data management policy, high-quality support for researchers and an adequate technical infrastructure are indispensable. This practice paper will present an overview of the emerging federated data infrastructure in the Netherlands, with its front office–back office model, as a use case of an efficient and effective national support infrastructure for researchers. We will elaborate on the stakeholders involved, on the services they offer each other, and on the benefits of this model not only for the front and back offices themselves but also for the researchers. We will also pay attention to a number of challenges that we are facing, such as the implementation of a technical infrastructure for automatic data ingest and integrated access to research data.

2020
Vol 6
Author(s):  
Christoph Steinbeck ◽  
Oliver Koepler ◽  
Felix Bach ◽  
Sonja Herres-Pawlis ◽  
Nicole Jung ◽  
...  

The vision of NFDI4Chem is the digitalisation of all key steps in chemical research to support scientists in their efforts to collect, store, process, analyse, disclose and re-use research data. Measures to promote Open Science and Research Data Management (RDM) in agreement with the FAIR data principles are fundamental aims of NFDI4Chem, serving the chemistry community with a holistic concept for access to research data. To this end, the overarching objective is the development and maintenance of a national research data infrastructure for the research domain of chemistry in Germany, and to enable innovative, easy-to-use services and novel scientific approaches based on the re-use of research data. NFDI4Chem intends to represent all disciplines of chemistry in academia and aims to collaborate closely with thematically related consortia. In the initial phase, NFDI4Chem focuses on data related to molecules and reactions, including data for their experimental and theoretical characterisation. This overarching goal is pursued through a number of key objectives:

Key Objective 1: Establish a virtual environment of federated repositories for storing, disclosing, searching and re-using research data across distributed data sources. Connect existing data repositories and, based on a requirements analysis, establish domain-specific research data repositories for the national research community, and link them to international repositories.

Key Objective 2: Initiate international community processes to establish minimum information (MI) standards for data and machine-readable metadata, as well as open data standards, in key areas of chemistry. Identify and recommend open data standards in key areas of chemistry in order to support the FAIR principles for research data, and develop new standards where they are lacking (a minimal example of such a record follows this abstract).

Key Objective 3: Foster cultural and digital change towards Smart Laboratory Environments by promoting the use of digital tools in all stages of research, and promote subsequent RDM at all levels of academia, beginning in undergraduate curricula.

Key Objective 4: Engage with the chemistry community in Germany through a wide range of measures to create awareness of, and foster the adoption of, FAIR data management. Initiate processes to integrate RDM and data science into curricula. Offer a wide range of training opportunities for researchers.

Key Objective 5: Explore synergies with other consortia and promote cross-cutting development within the NFDI.

Key Objective 6: Provide a legally reliable framework of policies and guidelines for FAIR and open RDM.
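As a rough illustration of the machine-readable metadata called for in Key Objective 2, the sketch below builds a record for a hypothetical NMR dataset using schema.org Dataset and MolecularEntity terms with an InChI identifier. The field choices, dataset name and repository URL are assumptions made for illustration, not a metadata standard prescribed by NFDI4Chem.

```python
# A minimal sketch of a machine-readable metadata record for a chemistry dataset,
# expressed with schema.org terms; values are illustrative placeholders.
import json

record = {
    "@context": "https://schema.org",
    "@type": "Dataset",
    "name": "NMR characterisation of caffeine",          # hypothetical dataset
    "license": "https://creativecommons.org/licenses/by/4.0/",
    "measurementTechnique": "1H NMR",
    "about": {
        "@type": "MolecularEntity",
        "name": "caffeine",
        # Standard InChI identifies the molecule unambiguously and machine-readably.
        "inChI": "InChI=1S/C8H10N4O2/c1-10-4-9-6-5(10)7(13)12(3)8(14)11(6)2/h4H,1-3H3",
    },
    "distribution": {
        "@type": "DataDownload",
        "encodingFormat": "application/zip",
        # Placeholder URL; a real record would point at a repository landing page.
        "contentUrl": "https://repository.example.org/datasets/caffeine-nmr.zip",
    },
}

print(json.dumps(record, indent=2))
```

A record like this could be harvested by the federated repositories of Key Objective 1 and indexed for search without any manual curation.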


Author(s):  
Josiline Phiri Chigwada

The chapter seeks to analyze how librarians in Zimbabwe are responding to their increasing roles in the provision of research data services. The study sought to ascertain librarians' awareness of and preparedness to offer research data management services at their institutions, and to determine the support librarians require to deliver research data services effectively. Participants were invited to respond to the survey, and SurveyMonkey was used to administer the online questionnaire. The collected data were analyzed using content analysis and presented thematically. Findings revealed that librarians in Zimbabwe are aware of their role in research data management, but the majority are not prepared to offer research data management services due to a lack of the required skills and resources. Challenges noted include the lack of research data management policies at the institutional level and information technology issues such as obsolescence and security.


2020
Vol 37 (4)
pp. 1-5
Author(s):  
Nove E. Variant Anna ◽  
Endang Fitriyah Mannan

Purpose – The purpose of this paper is to analyse publications on big data in libraries from the Scopus database by looking at the publication period of the papers, the authors' countries, the most frequently occurring keywords, the article themes, the journal publishers and the clusters of keywords in big data articles. The study takes a quantitative approach, extracting data from Scopus publications retrieved with the keywords "big data" and "library" in May 2019. The collected data were analysed using VOSviewer software to visualise the keywords and terms. The results show that articles on big data have appeared since 2012 and are increasing in number every year. The big data authors are mostly from China and America. The keywords that appear most often in the terminology visualisation include "big data", "libraries", "library", "data handling", "data mining", "university libraries", "digital libraries", "academic libraries", "big data applications" and "data management". It can be concluded that the number of publications on big data in libraries is still small, and many gaps in the topic remain to be researched. Libraries can use the results of the research when applying big data to the development of library innovation.

Design/methodology/approach – The Scopus database was accessed on 24 May 2019 using the keywords "big data" and "library" in the search box. The authors included only papers whose titles concern big data in libraries. Of the 74 papers retrieved, one was dropped because it did not meet the criteria (affiliation and abstract were not available). The papers consist of journal articles, conference papers, book chapters, editorials and reviews. The data were then extracted into Excel and analysed by year, author country, theme and publisher. Following that, the collected data were analysed using VOSviewer software to see the relationships between big data terminology and the library, the term clusters, the keywords that occur most often, the countries publishing on big data, the number of big data authors, the years of publication and the journals that publish big data and library articles (Alagu and Thanuskodi, 2019).

Findings – The implementation of big data in libraries is still at an early stage, as shown by the limited number of practical implementations of big data analytics in libraries. Few libraries use big data to support innovation and services, since librarians lack big data analytics skills and library managers still do not regard big data as a necessity. Academic libraries are advised to start adopting big data analytics to support library services, especially research data. To do so, librarians can enhance their skills and knowledge through training in big data analytics or research data management. The information technology infrastructure also needs to be upgraded, since big data requires substantial IT capacity. Finally, a big data management policy should be established to ensure that implementation goes well.

Originality/value – This paper examines the adoption and implementation of big data in libraries, whereas most papers discuss big data in business and technology contexts. It offers libraries, especially academic libraries, new ideas about adopting big data to support their services: they can adopt the big data analytics technologies and techniques that suit their library.
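As a rough sketch of the keyword analysis described above, the snippet below counts keyword frequencies and keyword co-occurrence pairs, which is the kind of term-map data that tools such as VOSviewer visualise. The records are invented examples, not the actual Scopus export used in the study.

```python
# A minimal sketch of keyword frequency and co-occurrence counting over
# bibliographic records; the sample records below are hypothetical.
from collections import Counter
from itertools import combinations

records = [
    {"year": 2018, "keywords": ["big data", "academic libraries", "data mining"]},
    {"year": 2019, "keywords": ["big data", "university libraries", "data management"]},
    {"year": 2019, "keywords": ["big data", "academic libraries", "data management"]},
]

keyword_freq = Counter()
co_occurrence = Counter()
for rec in records:
    kws = sorted(set(rec["keywords"]))       # deduplicate within one record
    keyword_freq.update(kws)                  # how often each keyword appears
    co_occurrence.update(combinations(kws, 2))  # every unordered keyword pair

print(keyword_freq.most_common(3))
print(co_occurrence.most_common(3))
```

The co-occurrence counts are what a term map clusters and scales: keywords that appear together in many records end up close together and drawn larger.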


2021
Author(s):  
Damien Graux ◽  
Sina Mahmoodi

The growing web of data warrants better data management strategies. Data silos are single points of failure, and they face availability problems that lead to broken links. Furthermore, the dynamic nature of some datasets increases the need for a versioning scheme. In this work, we propose a novel architecture for a linked open data infrastructure built on open decentralized technologies: IPFS is used for storage and retrieval of data, and the public Ethereum blockchain is used for naming, versioning and storing the metadata of datasets. We furthermore exploit two mechanisms for maintaining a collection of relevant, high-quality datasets in a distributed manner in which participants are incentivized. The platform is shown to have a low barrier to entry, to be censorship-resistant, and to benefit from the fault tolerance of its underlying technologies. Finally, we validate the approach by implementing our solution.
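A minimal sketch of the two roles in such an architecture, assuming a local IPFS daemon (reached via the ipfshttpclient package) and an Ethereum node (via web3.py), plus a hypothetical DatasetRegistry contract exposing a register(name, cid) function. The contract address, ABI and account handling are placeholders for illustration, not the authors' implementation.

```python
# Sketch: store a dataset on IPFS, then record its name and CID on Ethereum.
import ipfshttpclient          # pip install ipfshttpclient
from web3 import Web3          # pip install web3

# Hypothetical registry contract; address and ABI are placeholders.
REGISTRY_ADDRESS = "0x0000000000000000000000000000000000000000"
REGISTRY_ABI = [{
    "name": "register", "type": "function", "stateMutability": "nonpayable",
    "inputs": [{"name": "name", "type": "string"},
               {"name": "cid", "type": "string"}],
    "outputs": [],
}]

def publish_dataset(name: str, triples: str) -> str:
    """Store an RDF dataset on IPFS and record its CID in the on-chain registry."""
    # 1. Content-addressed storage: IPFS returns a CID derived from the data itself,
    #    so any copy of the data can be verified and served by any node.
    ipfs = ipfshttpclient.connect()            # local daemon on port 5001
    cid = ipfs.add_str(triples)

    # 2. Naming and versioning: write (name, cid) to Ethereum, so later versions of
    #    the dataset append new CIDs under the same name in the registry.
    w3 = Web3(Web3.HTTPProvider("http://localhost:8545"))
    registry = w3.eth.contract(address=REGISTRY_ADDRESS, abi=REGISTRY_ABI)
    tx = registry.functions.register(name, cid).transact({"from": w3.eth.accounts[0]})
    w3.eth.wait_for_transaction_receipt(tx)
    return cid

if __name__ == "__main__":
    print(publish_dataset("example-lod-dataset",
                          "<http://ex.org/s> <http://ex.org/p> <http://ex.org/o> ."))
```

The division of labour mirrors the abstract: bulk data lives in the content-addressed IPFS network, while the comparatively tiny name-to-CID mappings and version history live on the blockchain.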


2013
Vol 8 (2)
pp. 235-246
Author(s):  
James A. J. Wilson ◽  
Paul Jeffreys

Since presenting a paper at the International Digital Curation Conference 2010 entitled 'An Institutional Approach to Developing Research Data Management Infrastructure', the University of Oxford has come a long way in developing research data management (RDM) policy, tools and training to address the various phases of the research data lifecycle. Work has now begun on integrating these various elements into a unified infrastructure for the whole university, under the aegis of the Data Management Roll-out at Oxford (Damaro) Project.

This paper will explain the process and motivation behind the project and describe our vision for the future. It will also introduce the new tools and processes created by the university to tie the individual RDM components together. Chief among these is DataFinder: a hierarchically structured metadata cataloguing system that will enable researchers to search for and locate research datasets hosted in a variety of different datastores, from institutional repositories, through Web 2.0 services, to filing cabinets standing in departmental offices. DataFinder will be able to pull and associate research metadata from research information databases and data management plans, and is intended to be CERIF-compatible. DataFinder is being designed so that it can be deployed at different levels within different contexts, with higher-level instances harvesting information from lower-level instances: for example, an academic department can deploy one instance of DataFinder, which can then be harvested by an institutional instance, which can in turn be harvested by a national one.

The paper will also consider the requirements of embedding tools and training within an institution and address the difficulty of ensuring the sustainability of an RDM infrastructure at a time when funding for such endeavours is limited. Our research shows that researchers (and indeed departments) are at present not exposed to the true costs of their (often suboptimal) data management solutions, whereas when data management services are centrally provided the full costs are visible and off-putting. There is therefore a need to sell the benefits of centrally provided infrastructure to researchers. Furthermore, there is a distinction between training and services that can be most effectively provided at the institutional level and those that need to be provided at the divisional or departmental level in order to be relevant and applicable to researchers. This is being addressed in principle by Oxford's research data management policy, and in practice by the planning and piloting aspects of the Damaro Project.
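The hierarchical deployment described above can be pictured with a small sketch of the harvesting pattern, in which lower-level catalogue instances are pulled into higher-level ones. The class and field names here are invented for illustration and are not DataFinder's actual data model.

```python
# Illustrative sketch of hierarchical metadata harvesting: departmental catalogues
# are merged into an institutional catalogue, which could itself be harvested
# nationally. Names and fields are hypothetical.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class DatasetRecord:
    identifier: str          # e.g. a DOI or local handle
    title: str
    datastore: str           # repository, web service, or even "filing cabinet"

@dataclass
class CatalogueInstance:
    level: str                                          # "department", "institution", ...
    records: Dict[str, DatasetRecord] = field(default_factory=dict)
    children: List["CatalogueInstance"] = field(default_factory=list)

    def register(self, record: DatasetRecord) -> None:
        self.records[record.identifier] = record

    def harvest(self) -> None:
        """Pull metadata from every lower-level instance into this one."""
        for child in self.children:
            child.harvest()                      # harvest bottom-up first
            self.records.update(child.records)   # then merge upwards

# Usage: a departmental catalogue harvested by an institutional one.
dept = CatalogueInstance("department")
dept.register(DatasetRecord("doi:10.0000/example", "Survey data 2013", "institutional repository"))
inst = CatalogueInstance("institution", children=[dept])
inst.harvest()
print(sorted(inst.records))
```

The same pattern repeats upward: a national instance would simply list institutional instances among its children and call harvest() on them.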


Author(s):  
Barb Znamirowski

In March 2021, the Tri-Agency released its Research Data Management Policy, including its three pillar requirements. This article reviews key points from the Alliance RDM (Portage Network) workshop "Putting the Tri-Agency Policy into Practice: Workshopping Your Institutional Research Data Management Strategy."


Author(s):  
Laure Perrier ◽  
Leslie Barnes

This mixed-methods study determined the essential tools and services required for research data management to aid academic researchers in fulfilling emerging funding agency and journal requirements. Focus groups were conducted, and a rating exercise was designed to rank potential services. Faculty conducting research at the University of Toronto were recruited; 28 researchers participated in four focus groups from June to August 2016. Two investigators independently coded the transcripts from the focus groups and identified four themes: 1) seamless infrastructure, 2) data security, 3) developing skills and knowledge, and 4) anxiety about releasing data. Researchers require assistance with the secure storage of data and favour tools that are easy to use. Increasing knowledge of best practices in research data management is necessary and can be supported by the library using multiple strategies. These findings help our library identify and prioritize tools and services in order to allocate resources in support of research data management on campus.

