ID+Lab – Analyzing, Modeling and Designing Interdisciplinarity

Artnodes ◽  
2018 ◽  
Author(s):  
Michael Dürfeld ◽  
Anika Schultz ◽  
Christian Stein ◽  
Benjamin Thomack ◽  
Nadia Zeissig

Interdisciplinary collaboration is the key to solving complex tasks. At the same time, the communication and cooperation within such a collaboration are themselves characterized by multifaceted, dynamic processes. The ID+Lab explores the structures of interdisciplinary cooperation in order to better understand, visualize and test them. The ID+Model developed for this purpose enables an altogether new and detailed view of these structures: it allows complex collaborations to be analyzed, modeled and designed as a network of actors and connections. The basic elements are eleven ID+Actors that have proven to be essential in interdisciplinary collaborations: people, organizations, events, tasks, methods, tools, money, topics, time, sources and places. The ID+Actors form a network through semantically defined ID+Ties with differing status, value and intensity. The formal definition of the model in the ID+Ontology makes research data available for the Semantic Web and the Linked Open Data Cloud. Based on the ID+Model, the ID+Lab is developing a semantic research contextualization platform, the ID+Stage. The ID+Backstage modeling tool uses a structured question dialogue to gather all the critical actors and connections of a publication, translate them in the background into a machine-readable format, and store them. When a research result is connected to the modeled context of its origin, an ID+Publication is formed. Consequently, not only are the research results disclosed, but so are their development processes. The ID+Publications are published on the ID+Stage. As the modeling foundation, the ID+Ontology enables the semantic recording and searchability of all data.
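
The abstract does not publish the ID+Ontology itself, but the general approach (typed actors linked by semantically defined ties, exposed as Linked Open Data) can be illustrated with a small RDF sketch in Python using rdflib. The namespace, class and property names below (idp:Person, idp:Task, idp:worksOn) are hypothetical placeholders, not the actual ID+Ontology terms.

from rdflib import Graph, Namespace, Literal, RDF

IDP = Namespace("https://example.org/idplus#")  # placeholder namespace, not the real ID+Ontology

g = Graph()
g.bind("idp", IDP)

# Two ID+Actors: a person and a task (illustrative instances only)
g.add((IDP.alice, RDF.type, IDP.Person))
g.add((IDP.alice, IDP.name, Literal("Alice Example")))
g.add((IDP.modelReview, RDF.type, IDP.Task))

# A semantically defined ID+Tie connecting the two actors
g.add((IDP.alice, IDP.worksOn, IDP.modelReview))

print(g.serialize(format="turtle"))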

2018 ◽  
Vol 38 (3) ◽  
pp. 295-314 ◽  
Author(s):  
Marko Marković ◽  
Stevan Gostojić

Open data has gained considerable traction in government, nonprofit, and for-profit organizations over the last several years. Open judicial data increase the transparency of the judiciary and are an integral part of open justice. This article identifies relevant judicial data set types, reviews widely used open government data evaluation methodologies, selects a methodology for evaluating judicial data sets, uses the methodology to evaluate the openness of judicial data sets in chosen countries, and suggests actions to improve the efficiency and effectiveness of open data initiatives. Our findings show that judicial data sets should at least include court decisions, case registers, filed document records, and statistical data. The Global Open Data Index methodology is the most suitable for the task. We suggest considering actions to enable more effective and efficient opening of judicial data sets, including publishing legal documents and legal data in standardized machine-readable formats, assigning standardized metadata to the published documents and data sets, providing both programmable and bulk access to documents and data, explicitly publishing the licenses which apply to them in a machine-readable format, and introducing a centralized portal enabling retrieval and browsing of open data sets from a single source.
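
As an illustration of the suggested actions (standardized machine-readable metadata plus an explicit machine-readable license and a bulk download), the Python sketch below builds a minimal DCAT-style data set description in JSON-LD. The portal URL, dataset title and field values are invented for illustration, not taken from the evaluated countries.

import json

# Minimal DCAT-style record for a hypothetical court-decision data set.
# All URLs and identifiers are placeholders, not real portals.
record = {
    "@context": {"dcat": "http://www.w3.org/ns/dcat#",
                 "dct": "http://purl.org/dc/terms/"},
    "@type": "dcat:Dataset",
    "dct:title": "Court decisions (example jurisdiction)",
    "dct:license": "https://creativecommons.org/licenses/by/4.0/",
    "dcat:distribution": [{
        "@type": "dcat:Distribution",
        "dcat:mediaType": "application/json",
        "dcat:downloadURL": "https://data.example.org/decisions/bulk.json",
    }],
}

print(json.dumps(record, indent=2))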


2021 ◽  
Vol 22 (14) ◽  
pp. 7590
Author(s):  
Liza Vinhoven ◽  
Frauke Stanke ◽  
Sylvia Hafkemeyer ◽  
Manuel Manfred Nietert

Various causative therapeutics for cystic fibrosis (CF) patients have been developed, but there are still no mutation-specific therapeutics for some patients, especially those with rare CFTR mutations. To address this, high-throughput screens have been performed, resulting in various candidate compounds with mostly unclear modes of action. In order to elucidate the mechanism of action of promising candidate substances and to be able to predict possible synergistic effects of substance combinations, we used a systems biology approach to create a model of the CFTR maturation pathway in cells in a standardized, human- and machine-readable format. It is composed of a core map, manually curated from small-scale experiments in human cells, and a coarse map including interactors identified in large-scale efforts. The manually curated core map includes 170 different molecular entities and 156 reactions from 221 publications. The coarse map encompasses 1384 unique proteins from four publications. The overlap between the two data sources amounts to 46 proteins. The CFTR Lifecycle Map can be used to support the identification of potential targets inside the cell and to elucidate the mode of action of candidate substances. It thereby provides a backbone for structuring available data as well as a tool for developing hypotheses regarding novel therapeutics.
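
The reported overlap of 46 proteins between the manually curated core map and the coarse interactor map is, in essence, a set intersection over protein identifiers. The toy Python sketch below shows that comparison with invented placeholder identifiers; it does not reproduce the curated CFTR Lifecycle Map data.

# Illustrative only: placeholder protein identifiers, not the curated map contents.
core_map_proteins = {"CFTR", "HSPA8", "DNAJB1", "CANX", "DERL1"}
coarse_map_proteins = {"CFTR", "CANX", "DERL1", "RNF5", "STUB1"}

overlap = core_map_proteins & coarse_map_proteins
print(f"{len(overlap)} proteins appear in both maps: {sorted(overlap)}")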


2021 ◽  
pp. 79-90
Author(s):  
Christian Zinke-Wehlmann ◽  
Amit Kirschenbaum ◽  
Raul Palma ◽  
Soumya Brahma ◽  
Karel Charvát ◽  
...  

Data is the basis for creating information and knowledge. Having data in a structured and machine-readable format facilitates the processing and analysis of the data. Moreover, metadata (data about the data) can help in discovering data based on features such as by whom they were created, when, or for which purpose. These associated features make the data more interpretable and assist in turning it into useful information. This chapter briefly introduces the concepts of metadata and Linked Data (highly structured and interlinked data), their standards and their usages, with some elaboration on the role of Linked Data in bioeconomy.
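
As a minimal sketch of "metadata as data about the data" in a Linked Data setting, the Python fragment below attaches Dublin Core creator, date and subject terms to a hypothetical data set and serializes the result as RDF. The data set URI and all field values are assumptions made for illustration.

from rdflib import Graph, URIRef, Literal
from rdflib.namespace import DCTERMS

dataset = URIRef("https://example.org/data/soil-moisture-2020")  # placeholder URI

g = Graph()
g.add((dataset, DCTERMS.title, Literal("Soil moisture measurements 2020")))
g.add((dataset, DCTERMS.creator, Literal("Example Research Group")))
g.add((dataset, DCTERMS.created, Literal("2020-11-05")))
g.add((dataset, DCTERMS.subject, Literal("bioeconomy")))

print(g.serialize(format="turtle"))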


2021 ◽  
Author(s):  
Theo Araujo ◽  
Jef Ausloos ◽  
Wouter van Atteveldt ◽  
Felicia Loecherbach ◽  
Judith Moeller ◽  
...  

The digital traces that people leave through their use of various online platforms provide tremendous opportunities for studying human behavior. However, the collection of these data is hampered by legal, ethical and technical challenges. We present a framework and tool for collecting these data through a data donation platform where consenting participants can securely submit their digital traces. This approach leverages recent developments in data rights that have given people more control over their own data, such as legislation that now requires companies to make digital trace data available on request in a machine-readable format. By transparently requesting access to specific parts of this data for clearly communicated academic purposes, the data ownership and privacy of participants are respected and researchers are less dependent on commercial organizations that store this data in proprietary archives. In this paper we outline the general design principles, the current state of the tool, and future development goals.
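
The abstract does not specify the platform's internals; as a rough sketch of the underlying idea (requesting only specific, clearly communicated parts of a machine-readable export), the Python snippet below filters a hypothetical JSON takeout file down to a consented subset of fields before it would be submitted. The file name, field names and structure are invented.

import json

# Only these top-level keys of the (hypothetical) export are requested for the study.
CONSENTED_KEYS = {"search_history", "watch_history"}

def extract_consented(export_path: str) -> dict:
    """Keep only the parts of the export the participant agreed to donate."""
    with open(export_path, encoding="utf-8") as f:
        export = json.load(f)
    return {key: value for key, value in export.items() if key in CONSENTED_KEYS}

# donation = extract_consented("takeout.json")  # then submitted via the donation platform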


1977 ◽  
Vol 35 ◽  
pp. 104-119
Author(s):  
Anne B. Underhill ◽  
Jaylee M. Mead

Many catalogues of astronomical data appear in book form as well as in a machine-readable format. The latter form is popular because of the convenience of handling large bodies of data by machine and because it is an efficient way in which to transmit and make accessible data in books which are now out of print or very difficult to obtain. Some new catalogues are prepared entirely in a machine-readable form and the book form, if it exists at all, is of secondary importance for the preservation of the data.

In this paper comments are given about the importance of prefaces for transmitting the results of a critical evaluation of a body of data, and it is noted that it is essential that this type of documentation be transferred with any machine-readable catalogue. The types of error sometimes encountered in handling machine-readable catalogues are noted. The procedures followed in developing the Goddard Cross Index of eleven star catalogues are outlined as one example of how star catalogues can be compared using computers. The classical approach to evaluating data critically is reviewed, and the types of question one should ask and answer for particular types of data are listed. Finally, a specific application of these precepts to the problem of line identifications is given.
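
The Goddard Cross Index procedure itself is not reproduced in the abstract; as a rough modern illustration of cross-indexing star catalogues by a shared identifier, the Python snippet below merges two small hypothetical catalogue tables on an HD number using pandas. The column names and values are invented for illustration.

import pandas as pd

# Two toy catalogue excerpts; identifiers and values are illustrative only.
catalogue_a = pd.DataFrame({"hd_number": [358, 432, 886], "v_mag": [2.06, 2.27, 2.83]})
catalogue_b = pd.DataFrame({"hd_number": [432, 886, 1013], "spectral_type": ["F5", "B2IV", "K0"]})

# Cross index: stars present in both catalogues, keyed on the shared HD number.
cross_index = catalogue_a.merge(catalogue_b, on="hd_number", how="inner")
print(cross_index)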


2020 ◽  
Author(s):  
Tim Cernak ◽  
Babak Mahjour

High throughput experimentation (HTE) is an increasingly important tool in the study of chemical synthesis. While the hardware for running HTE in the synthesis lab has evolved significantly in recent years, there remains a need for software solutions to navigate data-rich experiments. We have developed the software phactor™ to facilitate the performance and analysis of HTE in a chemical laboratory. phactor™ allows experimentalists to rapidly design arrays of chemical reactions in 24-, 96-, 384-, or 1,536-well plates. Users can access online reagent data, such as a lab inventory, to populate wells with experiments and produce instructions to perform the screen manually or with the assistance of a liquid handling robot. After completion of the screen, analytical results can be uploaded for facile evaluation and to guide the next series of experiments. All chemical data, metadata, and results are stored in a machine-readable format.
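
phactor™'s own data model is not described in the abstract; as a generic sketch of the underlying idea (laying out a reagent-combination screen on a plate and keeping the design machine-readable), the Python snippet below enumerates a 24-well array from two hypothetical reagent lists and serializes it to JSON. The reagents and layout are assumptions, not a phactor™ workflow.

import json
from itertools import product
from string import ascii_uppercase

# Hypothetical reagent lists; a 4 x 6 grid fills a 24-well plate.
catalysts = ["Pd(OAc)2", "Pd(dppf)Cl2", "NiCl2*glyme", "CuI"]
bases = ["K2CO3", "Cs2CO3", "K3PO4", "DBU", "Et3N", "NaOtBu"]

plate = {}
for (row, catalyst), (col, base) in product(enumerate(catalysts), enumerate(bases)):
    well = f"{ascii_uppercase[row]}{col + 1}"  # e.g. "A1" .. "D6"
    plate[well] = {"catalyst": catalyst, "base": base}

print(json.dumps(plate, indent=2))  # machine-readable screen design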


2011 ◽  
Vol 7 (4) ◽  
pp. 26-46 ◽  
Author(s):  
M. Thangamani ◽  
P. Thangaraj

The increase in the number of documents has aggravated the difficulty of classifying those documents according to specific needs. Clustering analysis in a distributed environment is a thrust area in artificial intelligence and data mining. Its fundamental task is to use document features to compute the degree of relationship between objects and to accomplish automatic classification without prior knowledge. Document clustering uses clustering techniques to group highly similar documents by computing their resemblance. Recent studies have shown that ontologies are useful in improving the performance of document clustering. An ontology is a conceptualization of a domain in an identifiable, machine-readable format containing entities, attributes, relationships, and axioms. By analyzing different techniques for document clustering, a better clustering technique based on the Genetic Algorithm (GA) is determined. A Non-Dominated Ranked Genetic Algorithm (NRGA) is used in this paper for clustering, which is capable of providing a better classification result. The experiment is conducted on the 20 Newsgroups data set to evaluate the proposed technique. The results show that the proposed approach is very effective in clustering documents in a distributed environment.
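
The NRGA formulation itself is not given in the abstract; as a minimal stand-in for the general pipeline (vectorize documents, compute their resemblance, group similar ones), the Python snippet below clusters a few toy documents with TF-IDF and k-means using scikit-learn. This is not the authors' NRGA, and the documents are invented.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

documents = [
    "the spacecraft entered orbit around the planet",
    "the rocket launch was delayed by weather",
    "the team won the baseball game in extra innings",
    "the pitcher threw a no-hitter last night",
]

vectors = TfidfVectorizer().fit_transform(documents)  # document resemblance via TF-IDF
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)
print(labels)  # e.g. [0 0 1 1]: space documents vs. baseball documents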

