OSD2F: An Open-Source Data Donation Framework

2021
Author(s): Theo Araujo, Jef Ausloos, Wouter van Atteveldt, Felicia Loecherbach, Judith Moeller, et al.

The digital traces that people leave through their use of various online platforms provide tremendous opportunities for studying human behavior. However, the collection of these data is hampered by legal, ethical and technical challenges. We present a framework and tool for collecting these data through a data donation platform where consenting participants can securely submit their digital traces. This approach leverages recent developments in data rights that have given people more control over their own data, such as legislation that now requires companies to make digital trace data available on request in a machine-readable format. By transparently requesting access to specific parts of these data for clearly communicated academic purposes, the data ownership and privacy of participants are respected, and researchers become less dependent on commercial organizations that store these data in proprietary archives. In this paper we outline the general design principles, the current state of the tool, and future development goals.
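
To make the donation flow concrete, here is a minimal sketch of the kind of client-side filtering such a platform could apply before upload, so that only the fields participants consented to ever leave their machine. The file name, export layout, and field whitelist are hypothetical illustrations, not taken from the paper.

```python
import json

# Hypothetical whitelist: the fields the study asked consent for.
WHITELIST = {"timestamp", "url", "action"}

def filter_donation(path: str) -> list[dict]:
    """Keep only whitelisted fields from each record of a platform export.

    Assumes the export is a JSON list of flat records, which is one common
    shape for data-download packages; real exports vary by platform.
    """
    with open(path, encoding="utf-8") as f:
        records = json.load(f)
    return [{k: v for k, v in rec.items() if k in WHITELIST} for rec in records]

if __name__ == "__main__":
    cleaned = filter_donation("browsing_history.json")  # illustrative file name
    print(f"{len(cleaned)} records ready for donation")
```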

2021, Vol 22 (14), pp. 7590
Author(s): Liza Vinhoven, Frauke Stanke, Sylvia Hafkemeyer, Manuel Manfred Nietert

Several causative therapeutics have been developed for cystic fibrosis (CF) patients, but there are still no mutation-specific therapeutics for some patients, especially those with rare CFTR mutations. To address this, high-throughput screens have been performed, yielding various candidate compounds with mostly unclear modes of action. In order to elucidate the mechanism of action of promising candidate substances and to predict possible synergistic effects of substance combinations, we used a systems biology approach to create a model of the CFTR maturation pathway in cells in a standardized, human- and machine-readable format. It is composed of a core map, manually curated from small-scale experiments in human cells, and a coarse map including interactors identified in large-scale efforts. The manually curated core map includes 170 different molecular entities and 156 reactions from 221 publications; the coarse map encompasses 1384 unique proteins from four publications, and the overlap between the two data sources amounts to 46 proteins. The CFTR Lifecycle Map can be used to support the identification of potential targets inside the cell and to elucidate the mode of action of candidate substances. It thereby provides a backbone for structuring available data as well as a tool for developing hypotheses about novel therapeutics.
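
The abstract does not name the exchange format; SBML is a common choice for standardized, machine-readable pathway maps, so the sketch below assumes an SBML file (the file name is invented) and uses the python-libsbml bindings to inspect its entities and reactions.

```python
import libsbml

# Assumption for illustration: the map is distributed as an SBML file.
doc = libsbml.readSBML("cftr_lifecycle_map.xml")
if doc.getNumErrors() > 0:
    doc.printErrors()

model = doc.getModel()
print(f"{model.getNumSpecies()} molecular entities, "
      f"{model.getNumReactions()} reactions")

# List the reactions, e.g. steps of the CFTR maturation pathway.
for i in range(model.getNumReactions()):
    rxn = model.getReaction(i)
    print(rxn.getId(), rxn.getName())
```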


2021, pp. 79-90
Author(s): Christian Zinke-Wehlmann, Amit Kirschenbaum, Raul Palma, Soumya Brahma, Karel Charvát, et al.

Abstract: Data is the basis for creating information and knowledge. Having data in a structured and machine-readable format facilitates its processing and analysis. Moreover, metadata (data about the data) can help in discovering data based on features such as by whom the data were created, when, or for which purpose. These associated features make the data more interpretable and assist in turning them into useful information. This chapter briefly introduces the concepts of metadata and Linked Data (highly structured and interlinked data), their standards and their uses, with some elaboration on the role of Linked Data in bioeconomy.
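
As a small hedged example of metadata expressed as Linked Data: a dataset described with Dublin Core terms in Turtle, then queried with rdflib. The dataset URI and property values are invented for illustration; only the Dublin Core vocabulary and the Turtle/SPARQL standards come from the chapter's subject matter.

```python
from rdflib import Graph

# An invented dataset description using the standard Dublin Core terms.
TTL = """
@prefix dct: <http://purl.org/dc/terms/> .
<http://example.org/dataset/soil-2020>
    dct:title   "Soil moisture measurements 2020" ;
    dct:creator "Field Station A" ;
    dct:issued  "2020-11-01" .
"""

g = Graph()
g.parse(data=TTL, format="turtle")

# Metadata-driven discovery: ask who created which dataset.
QUERY = """
PREFIX dct: <http://purl.org/dc/terms/>
SELECT ?title ?creator WHERE {
    ?d dct:title ?title ; dct:creator ?creator .
}
"""
for title, creator in g.query(QUERY):
    print(title, "by", creator)
```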


2020, Vol 18 (1), pp. 87-106
Author(s): Jill E B Coster van Voorhout

Abstract: Human trafficking, a crime with enormous human cost, is mostly money-driven. Focusing on its financial aspects, this article argues that enhancing financial investigations by making them proactive is crucial to combating this low-risk, high-profit offence holistically. Drawing on research carried out since 2015 in the framework of an anti-human-trafficking public-private financial partnership in the Netherlands, this article reports concrete improvements in detecting potential victims in bank records, following the financial flows from there, and ultimately discerning the structures, networks, interactions and patterns of this offence. While these findings relate specifically to the Dutch context, this contribution sets out actionable ways forward for other states as well. After detailing how to approach hard, soft and open-source data, this article also explains how to improve international cooperation throughout the entire chain, from banks to financial intelligence units and law enforcement. Such enhancements are urgently needed if we are to live up to the international community's pledge under three Sustainable Development Goals to combat human trafficking, so as to leave no one behind in our global economy.
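
The article does not disclose the partnership's actual red-flag indicators, so the rule below is an invented stand-in showing only how a proactive screen over bank records might be expressed in code, with flagged accounts going to human review rather than automated action.

```python
import pandas as pd

# Toy transaction data; columns and values are entirely illustrative.
tx = pd.DataFrame({
    "account": ["A", "A", "A", "B"],
    "amount":  [45.0, 50.0, 40.0, 1200.0],
    "hour":    [2, 3, 2, 14],  # hour of day of the transaction
})

# Invented toy rule: accounts with 3+ small deposits during night hours.
night = tx[(tx["hour"] < 6) & (tx["amount"] < 100)]
flagged = night.groupby("account").size()
print(flagged[flagged >= 3])  # candidate accounts for human review
```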



1977, Vol 35, pp. 104-119
Author(s): Anne B. Underhill, Jaylee M. Mead

Abstract: Many catalogues of astronomical data appear in book form as well as in a machine-readable format. The latter form is popular because of the convenience of handling large bodies of data by machine and because it is an efficient way to transmit and make accessible data in books which are now out of print or very difficult to obtain. Some new catalogues are prepared entirely in a machine-readable form, and the book form, if it exists at all, is of secondary importance for the preservation of the data.

In this paper, comments are given about the importance of prefaces for transmitting the results of a critical evaluation of a body of data, and it is noted that this type of documentation must be transferred with any machine-readable catalogue. The types of error sometimes encountered in handling machine-readable catalogues are noted. The procedures followed in developing the Goddard Cross Index of eleven star catalogues are outlined as one example of how star catalogues can be compared using computers. The classical approach to evaluating data critically is reviewed, and the types of question one should ask and answer for particular types of data are listed. Finally, a specific application of these precepts to the problem of line identifications is given.
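
In the spirit of the cross-index described here, this is a toy sketch of matching entries across two machine-readable catalogues by position; the catalogue records, identifiers, and tolerance are invented, and a real cross-index would handle epochs, proper motions, and ambiguous matches.

```python
import math

# Invented miniature catalogues: (identifier, RA in degrees, Dec in degrees).
cat_a = [("HD 1", 10.684, 41.269), ("HD 2", 56.871, 24.105)]
cat_b = [("SAO 9", 10.685, 41.268), ("SAO 7", 80.000, -5.000)]

TOL_DEG = 0.01  # positional tolerance for calling two entries the same star

def close(ra1, dec1, ra2, dec2):
    # Crude small-angle separation, adequate for a tight-tolerance toy example.
    dra = (ra1 - ra2) * math.cos(math.radians((dec1 + dec2) / 2))
    return math.hypot(dra, dec1 - dec2) < TOL_DEG

for name_a, ra_a, dec_a in cat_a:
    for name_b, ra_b, dec_b in cat_b:
        if close(ra_a, dec_a, ra_b, dec_b):
            print(name_a, "matches", name_b)
```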


2020
Author(s): Tim Cernak, Babak Mahjour

High-throughput experimentation (HTE) is an increasingly important tool in the study of chemical synthesis. While the hardware for running HTE in the synthesis lab has evolved significantly in recent years, there remains a need for software solutions to navigate data-rich experiments. We have developed the software phactor™ to facilitate the performance and analysis of HTE in a chemical laboratory. phactor™ allows experimentalists to rapidly design arrays of chemical reactions in 24-, 96-, 384-, or 1,536-well plates. Users can access online reagent data, such as a lab inventory, to populate wells with experiments and produce instructions to perform the screen manually or with the assistance of a liquid handling robot. After completion of the screen, analytical results can be uploaded for facile evaluation and to guide the next series of experiments. All chemical data, metadata, and results are stored in a machine-readable format.
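
A sketch of the kind of well-plate array such software helps design: an 8 x 12 grid of reagent combinations on a 96-well plate, written to a machine-readable CSV. The reagent names and file layout are illustrative assumptions, not phactor™'s own format.

```python
import csv
import itertools
import string

# Illustrative reagent lists: 8 catalysts span rows A-H, 12 ligands span
# columns 1-12, giving one unique combination per well of a 96-well plate.
catalysts = [f"cat_{i}" for i in range(1, 9)]
ligands = [f"lig_{j}" for j in range(1, 13)]

rows = string.ascii_uppercase[:8]  # "A".."H"
with open("plate_design.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["well", "catalyst", "ligand"])
    for (row, cat), (col, lig) in itertools.product(
            zip(rows, catalysts), enumerate(ligands, start=1)):
        writer.writerow([f"{row}{col}", cat, lig])
```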


2011, Vol 7 (4), pp. 26-46
Author(s): M. Thangamani, P. Thangaraj

The increase in the number of documents has aggravated the difficulty of classifying them according to specific needs. Clustering analysis in a distributed environment is a thrust area in artificial intelligence and data mining. Its fundamental task is to use object features to compute the degree of relatedness between objects and to accomplish automatic classification without prior knowledge. Document clustering gathers highly similar documents together by computing their resemblance. Recent studies have shown that ontologies are useful in improving the performance of document clustering: an ontology is a conceptualization of a domain into an individually identifiable, machine-readable format containing entities, attributes, relationships, and axioms. By analyzing different techniques for document clustering, a better clustering technique based on a Genetic Algorithm (GA) is determined. The Non-Dominated Ranked Genetic Algorithm (NRGA) is used in this paper for clustering, as it is capable of providing better classification results. An experiment on the 20 Newsgroups data set evaluates the proposed technique; the results show that the proposed approach is very effective in clustering documents in a distributed environment.
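
To make GA-based document clustering concrete, here is a minimal single-objective sketch over TF-IDF similarity; it is deliberately not the paper's NRGA, which adds non-dominated ranking over multiple objectives, and the documents and parameters are invented.

```python
import random
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = ["genetic algorithms evolve solutions",
        "clustering groups similar documents",
        "documents are clustered by similarity",
        "evolutionary search uses selection and mutation"]
K, POP, GENS = 2, 30, 60  # clusters, population size, generations

sim = cosine_similarity(TfidfVectorizer().fit_transform(docs))

def fitness(labels):
    # Reward cosine similarity inside clusters, penalize it across clusters.
    score = 0.0
    for i in range(len(docs)):
        for j in range(i + 1, len(docs)):
            score += sim[i, j] if labels[i] == labels[j] else -sim[i, j]
    return score

# Each individual is a cluster label per document.
pop = [[random.randrange(K) for _ in docs] for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:POP // 2]          # elitist truncation selection
    children = []
    while len(children) < POP - len(parents):
        a, b = random.sample(parents, 2)
        cut = random.randrange(1, len(docs))
        child = a[:cut] + b[cut:]     # one-point crossover
        if random.random() < 0.2:     # mutation: reassign one document
            child[random.randrange(len(docs))] = random.randrange(K)
        children.append(child)
    pop = parents + children

print(max(pop, key=fitness))  # best cluster label per document
```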


2021
Author(s): F. Stuker, F. Rinderer, P. Blattner

For further efforts in saving energy in lighting installations, intelligent sensors are indispensable. Although such sensors are available, precise and reproducible measurements of their coverage areas have been missing. Therefore, METAS decided in 2020 to install the world's first manufacturer-independent, fully automated measurement system to test passive infrared motion and presence sensors according to the recently published standard IEC 63180:2020. The test facility allows reproducible measurements of radial and tangential movements by moving differently scaled and heated dummies on linear tracks. Furthermore, presence detection can be measured with an automated robot arm. The test results are provided in a digital test report, and the data are available in a machine-readable format for further processing in design and planning software.
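
The paper states that results are exported in a machine-readable format but does not specify a schema, so the JSON structure below is a hypothetical example of how planning software might consume such a test report.

```python
import json

# Invented report schema: coverage distances per movement type and angle.
report = json.loads("""
{
  "sensor": "PIR-example",
  "standard": "IEC 63180:2020",
  "radial_coverage_m":     {"0deg": 9.5, "45deg": 8.0, "90deg": 6.5},
  "tangential_coverage_m": {"0deg": 12.0, "45deg": 10.5, "90deg": 7.0}
}
""")

# For example, take the worst-case radial range as the safe planning value.
worst = min(report["radial_coverage_m"].values())
print(f"{report['sensor']}: plan with radial coverage <= {worst} m")
```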

