Neuroforum ◽  
2021 ◽  
Vol 0 (0) ◽  
Author(s):  
Michael Denker ◽  
Sonja Grün ◽  
Thomas Wachtler ◽  
Hansjörg Scherberger

Abstract Preparing a neurophysiological data set with the aim of sharing and publishing it is hard. Many of the available tools and services that could provide a smooth workflow for data publication are still maturing and not well integrated. Moreover, best practices and concrete examples of how to create a rigorous and complete package of an electrophysiology experiment are still lacking. Given the heterogeneity of the field, such unifying guidelines and processes can only be formulated as a community effort. One of the goals of the NFDI-Neuro consortium initiative is to build such a community for systems and behavioral neuroscience. NFDI-Neuro aims to address the community's needs, to make data management easier, and to tackle these challenges in collaboration with various international initiatives (e.g., INCF, EBRAINS). This will give scientists the opportunity to spend more time analyzing the wealth of electrophysiological data they leverage, rather than dealing with data formats and data integrity.
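The data-integrity burden mentioned in the abstract is commonly eased by recording content checksums alongside each data file, so that corruption is detected before analysis. A minimal standard-library sketch of this practice (the file layout and manifest name are illustrative, not part of any NFDI-Neuro tool):

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream a file through SHA-256 so large recordings never sit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def write_manifest(data_dir: Path, manifest: Path) -> None:
    """Write one 'checksum  filename' line per data file in data_dir."""
    lines = [
        f"{sha256_of(p)}  {p.name}"
        for p in sorted(data_dir.glob("*"))
        if p.is_file()
    ]
    manifest.write_text("\n".join(lines) + "\n")
```

Re-running the checksum on a copied or downloaded file and comparing it against the manifest entry confirms the data arrived intact.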


2021 ◽  
Author(s):  
G. Agoua ◽  
P. Cauchois ◽  
O. Chaouy ◽  
I. Gazeau ◽  
B. Grossin

2018 ◽  
Vol 2018 ◽  
pp. 1-13 ◽  
Author(s):  
Li Kuang ◽  
Yujia Zhu ◽  
Shuqi Li ◽  
Xuejin Yan ◽  
Han Yan ◽  
...  

With the rapid development of sensor acquisition technology, more and more data are collected, analyzed, and encapsulated into application services. However, most applications are developed by untrusted third parties, so protecting users’ privacy in data publication has become an urgent problem. Since an attacker may identify a user from a combination of the user’s quasi-identifiers, and fewer quasi-identifier fields mean a lower probability of privacy leaks, in this paper we investigate the optimal number of quasi-identifier fields under the trade-off between service quality and privacy protection. We first propose modelling the service development process as a cooperative game between the data owner and consumers, and employing the Stackelberg game model to determine the number of quasi-identifiers that are published to the data development organization. We then propose a way to identify when new data should be learned, as well as a way to update the parameters involved in the model, so that an updated strategy on quasi-identifier fields can be delivered. Our experiments first analyse the validity of the proposed model and then compare it with the traditional privacy protection approach; the results show that the data loss of our model is less than that of traditional k-anonymity, especially when strong privacy protection is applied.
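The k-anonymity baseline the paper compares against can be stated compactly: a table is k-anonymous with respect to a chosen set of quasi-identifier fields if every combination of values in those fields occurs in at least k rows. A minimal sketch of that check (the column names and records are illustrative, not taken from the paper):

```python
from collections import Counter

def is_k_anonymous(rows: list[dict], quasi_identifiers: list[str], k: int) -> bool:
    """True if every quasi-identifier value combination appears in >= k rows."""
    groups = Counter(
        tuple(row[q] for q in quasi_identifiers) for row in rows
    )
    return all(count >= k for count in groups.values())

# Illustrative, already-generalized records (zip codes masked, ages binned).
records = [
    {"zip": "130**", "age": "20-29", "disease": "flu"},
    {"zip": "130**", "age": "20-29", "disease": "cold"},
    {"zip": "148**", "age": "30-39", "disease": "flu"},
]
```

Here the combination ("148**", "30-39") occurs only once, so the table is 1-anonymous but not 2-anonymous over those two quasi-identifiers; publishing fewer quasi-identifier fields makes the groups larger and the check easier to satisfy, which is exactly the trade-off the paper optimizes.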


Author(s):  
Armando Barbosa ◽  
Ig Ibert Bittencourt ◽  
Sean Wolfgand Matsui Siqueira ◽  
Rafael de Amorim Silva ◽  
Ivo Calado

To reduce the complexity intrinsic to Linked Data (LD) manipulation, software tools are used to publish or consume data associated with LD activities. However, few developers have a broad understanding of how software tools may be used in the publication or consumption of Linked Data. The goal of this work is to investigate the use of software tools in Linked Data publication and consumption processes, and more specifically to understand how these tools relate to the individual steps of publishing or consuming LD. To meet this goal, the authors conducted a Systematic Literature Review (SLR) to identify studies on the use of software tools in these processes. The SLR gathered 6473 studies, of which only 80 remained for final analysis (1.25% of the original sample). The highlights of the study are: (1) the initial steps of the publication process are fairly well supported by software tools; (2) non-RDF serialization is fairly well supported by software tools in both the publication and consumption processes; and (3) some steps in the consumption and publication processes are not supported by the tools at all.


2017 ◽  
Vol 12 (1) ◽  
pp. 88-105 ◽  
Author(s):  
Sünje Dallmeier-Tiessen ◽  
Varsha Khodiyar ◽  
Fiona Murphy ◽  
Amy Nurnberger ◽  
Lisa Raymond ◽  
...  

The data curation community has long encouraged researchers to document collected research data during active stages of the research workflow, to provide robust metadata earlier, and support research data publication and preservation. Data documentation with robust metadata is one of a number of steps in effective data publication. Data publication is the process of making digital research objects ‘FAIR’, i.e. findable, accessible, interoperable, and reusable; attributes increasingly expected by research communities, funders and society. Research data publishing workflows are the means to that end. Currently, however, much published research data remains inconsistently and inadequately documented by researchers. Documentation of data closer in time to data collection would help mitigate the high cost that repositories associate with the ingest process. More effective data publication and sharing should in principle result from early interactions between researchers and their selected data repository. This paper describes a short study undertaken by members of the Research Data Alliance (RDA) and World Data System (WDS) working group on Publishing Data Workflows. We present a collection of recent examples of data publication workflows that connect data repositories and publishing platforms with research activity ‘upstream’ of the ingest process. We re-articulate previous recommendations of the working group to account for the varied upstream service components and platforms that support the flow of contextual and provenance information downstream. These workflows should be open and loosely coupled to support interoperability, including with preservation and publication environments. Our recommendations aim to stimulate further work on researchers’ views of data publishing and the extent to which available services and infrastructure facilitate the publication of FAIR data. We also aim to stimulate further dialogue about, and definition of, the roles and responsibilities of research data services and platform providers for the ‘FAIRness’ of research data publication workflows themselves.
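The "robust metadata earlier" step can be made concrete: repositories typically require a small set of descriptive fields at ingest, loosely along the lines of DataCite's mandatory properties. A sketch of such a record and a completeness check (the field names follow DataCite's mandatory set; the values and the checking function are illustrative):

```python
# Minimal descriptive metadata, loosely modeled on DataCite's mandatory
# properties (identifier, creators, title, publisher, publication year,
# resource type). All values below are illustrative placeholders.
record = {
    "identifier": {"value": "10.1234/example-doi", "type": "DOI"},
    "creators": [{"name": "Doe, Jane"}],
    "title": "Example research dataset",
    "publisher": "Example Data Repository",
    "publicationYear": 2017,
    "resourceType": "Dataset",
}

REQUIRED = {
    "identifier", "creators", "title",
    "publisher", "publicationYear", "resourceType",
}

def missing_fields(metadata: dict) -> set[str]:
    """Fields a repository would flag as absent at ingest time."""
    return REQUIRED - metadata.keys()
```

Running such a check at data-collection time, rather than at submission, is the kind of upstream interaction between researchers and repositories that the paper argues reduces ingest costs.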

