CytoGPS: a web-enabled karyotype analysis tool for cytogenetics

2019 ◽  
Vol 35 (24) ◽  
pp. 5365-5366
Author(s):  
Zachary B Abrams ◽  
Lin Zhang ◽  
Lynne V Abruzzo ◽  
Nyla A Heerema ◽  
Suli Li ◽  
...  

Abstract Summary Karyotype data are the most common form of genetic data in regular clinical use. They are collected as part of the standard of care in many diseases, particularly in pediatric and cancer medicine. Karyotypes are recorded in a unique text-based format, with a syntax defined by the International System for Human Cytogenetic Nomenclature (ISCN). While human-readable, ISCN is not intrinsically machine-readable; this limitation has prevented the full use of complex karyotype data in discovery science. To enhance the utility and value of karyotype data, we developed a tool named CytoGPS. CytoGPS first parses ISCN karyotypes into a machine-readable format. It then converts each ISCN karyotype into a binary Loss-Gain-Fusion (LGF) model, which represents all cytogenetic abnormalities as combinations of loss, gain, or fusion events, in a format analyzable with modern computational methods. These data are then available for comprehensive ‘downstream’ analyses that were previously infeasible. Availability and implementation Freely available at http://cytogps.org.
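To make the LGF representation concrete, here is a minimal sketch of encoding cytogenetic events as three binary flags (loss, gain, fusion) per band. The band list and event tuples are illustrative placeholders; CytoGPS's actual parser handles the full ISCN grammar.

```python
# Minimal sketch of a binary Loss-Gain-Fusion (LGF) encoding.
# Band list and events are illustrative, not CytoGPS's real band map.

BANDS = ["5q31", "7p15", "9q34", "17p13", "22q11"]  # tiny example region list

def lgf_vector(events):
    """events: iterable of (band, kind) pairs, kind in {'loss','gain','fusion'}."""
    kinds = ("loss", "gain", "fusion")
    flags = {band: {k: 0 for k in kinds} for band in BANDS}
    for band, kind in events:
        if band in flags:
            flags[band][kind] = 1
    # Flatten to one binary row: 3 bits per band (loss, gain, fusion).
    return [flags[b][k] for b in BANDS for k in kinds]

# e.g. a karyotype with a 5q31 deletion and a t(9;22) fusion at 9q34/22q11:
row = lgf_vector([("5q31", "loss"), ("9q34", "fusion"), ("22q11", "fusion")])
print(row)  # [1,0,0, 0,0,0, 0,0,1, 0,0,0, 0,0,1]
```

Each karyotype becomes one fixed-length binary row, so a cohort of karyotypes becomes an ordinary samples-by-features matrix suitable for standard statistical and machine learning tools.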


2021 ◽  
Vol 22 (14) ◽  
pp. 7590
Author(s):  
Liza Vinhoven ◽  
Frauke Stanke ◽  
Sylvia Hafkemeyer ◽  
Manuel Manfred Nietert

Several causative therapeutics for cystic fibrosis (CF) patients have been developed, but there are still no mutation-specific therapeutics for some patients, especially those with rare mutations of the CF transmembrane conductance regulator (CFTR). For this purpose, high-throughput screens have been performed, yielding various candidate compounds with mostly unclear modes of action. In order to elucidate the mechanism of action of promising candidate substances, and to predict possible synergistic effects of substance combinations, we used a systems biology approach to create a model of the CFTR maturation pathway in cells in a standardized, human- and machine-readable format. It is composed of a core map, manually curated from small-scale experiments in human cells, and a coarse map including interactors identified in large-scale efforts. The manually curated core map includes 170 different molecular entities and 156 reactions from 221 publications. The coarse map encompasses 1384 unique proteins from four publications; the overlap between the two data sources amounts to 46 proteins. The CFTR Lifecycle Map can be used to support the identification of potential targets inside the cell and to elucidate the mode of action of candidate substances. It thereby provides a backbone for structuring available data as well as a tool for developing hypotheses regarding novel therapeutics.
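As an illustration of what such a machine-readable pathway representation enables, the sketch below stores reactions as simple records and looks up where a given protein acts. The entities, reactions, and literature references are placeholders, not the curated map itself, which uses a standardized systems biology format.

```python
# Illustrative, machine-readable sketch of pathway reactions; entries are
# placeholders and do not reproduce the CFTR Lifecycle Map's actual content.

reactions = [
    {"id": "R1", "reactants": ["CFTR_unfolded"], "products": ["CFTR_folded"],
     "modifiers": ["HSPA8", "DNAJB1"], "source": "PMID:placeholder"},
    {"id": "R2", "reactants": ["CFTR_misfolded"], "products": ["CFTR_degraded"],
     "modifiers": ["STUB1"], "source": "PMID:placeholder"},
]

def reactions_involving(protein):
    """Return reactions in which a candidate substance's target participates."""
    return [r["id"] for r in reactions if protein in r["modifiers"]]

print(reactions_involving("STUB1"))  # ['R2'] -> suggests an effect on degradation
```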


2021 ◽  
pp. 79-90
Author(s):  
Christian Zinke-Wehlmann ◽  
Amit Kirschenbaum ◽  
Raul Palma ◽  
Soumya Brahma ◽  
Karel Charvát ◽  
...  

Abstract Data is the basis for creating information and knowledge. Having data in a structured, machine-readable format facilitates its processing and analysis. Moreover, metadata (data about the data) can help in discovering data based on features such as by whom they were created, when, or for which purpose. These associated features make the data more interpretable and assist in turning them into useful information. This chapter briefly introduces the concepts of metadata and Linked Data (highly structured and interlinked data), their standards, and their usages, with some elaboration on the role of Linked Data in the bioeconomy.
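As a small illustration of these concepts, the following sketch uses the rdflib library to describe a hypothetical data set with Dublin Core metadata and link it to an external vocabulary. All URIs here are invented for the example.

```python
# Sketch: describing a data set with Dublin Core metadata as Linked Data.
# The data set, person, and concept URIs below are made up for illustration.
from rdflib import Graph, Literal, URIRef
from rdflib.namespace import DCTERMS, XSD

g = Graph()
dataset = URIRef("https://example.org/datasets/field-trial-42")

g.add((dataset, DCTERMS.title, Literal("Winter wheat field trial")))
g.add((dataset, DCTERMS.creator, URIRef("https://example.org/people/jane-doe")))
g.add((dataset, DCTERMS.created, Literal("2021-03-01", datatype=XSD.date)))
# Linking out to a shared vocabulary concept (placeholder URI) is what
# makes this *Linked* Data rather than an isolated record:
g.add((dataset, DCTERMS.subject, URIRef("https://example.org/vocab/wheat")))

print(g.serialize(format="turtle"))
```

Because creator, creation date, and subject are expressed with shared vocabularies and resolvable URIs, other systems can discover and interpret the data set without any prior agreement with its producer.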


2021 ◽  
Author(s):  
Theo Araujo ◽  
Jef Ausloos ◽  
Wouter van Atteveldt ◽  
Felicia Loecherbach ◽  
Judith Moeller ◽  
...  

The digital traces that people leave through their use of various online platforms provide tremendous opportunities for studying human behavior. However, the collection of these data is hampered by legal, ethical, and technical challenges. We present a framework and tool for collecting these data through a data donation platform where consenting participants can securely submit their digital traces. This approach leverages recent developments in data rights that have given people more control over their own data, such as legislation that now mandates companies to make digital trace data available on request in a machine-readable format. By transparently requesting access to specific parts of these data for clearly communicated academic purposes, the data ownership and privacy of participants are respected, and researchers are less dependent on commercial organizations that store these data in proprietary archives. In this paper, we outline the general design principles, the current state of the tool, and future development goals.
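The filtering step at the heart of such a platform can be sketched as follows. The file name and record keys are hypothetical stand-ins for a platform's machine-readable data export, not any specific vendor's format.

```python
# Sketch of data minimization in a donation workflow: keep only the fields
# needed for the research question before anything leaves the participant.
# "takeout/browsing_history.json" and its keys are hypothetical examples.
import json

KEEP_KEYS = {"timestamp", "url"}  # the clearly communicated research scope

def minimize(record):
    """Drop every field outside the consented scope (names, titles, etc.)."""
    return {k: v for k, v in record.items() if k in KEEP_KEYS}

with open("takeout/browsing_history.json", encoding="utf-8") as f:
    records = json.load(f)

donation = [minimize(r) for r in records]

with open("donation.json", "w", encoding="utf-8") as out:
    json.dump(donation, out, indent=2)
```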


1977 ◽  
Vol 35 ◽  
pp. 104-119
Author(s):  
Anne B. Underhill ◽  
Jaylee M. Mead

Abstract Many catalogues of astronomical data appear in book form as well as in a machine-readable format. The latter form is popular because of the convenience of handling large bodies of data by machine and because it is an efficient way in which to transmit and make accessible data in books which are now out of print or very difficult to obtain. Some new catalogues are prepared entirely in a machine-readable form and the book form, if it exists at all, is of secondary importance for the preservation of the data.

In this paper comments are given about the importance of prefaces for transmitting the results of a critical evaluation of a body of data, and it is noted that it is essential that this type of documentation be transferred with any machine-readable catalogue. The types of error sometimes encountered in handling machine-readable catalogues are noted. The procedures followed in developing the Goddard Cross Index of eleven star catalogues are outlined as one example of how star catalogues can be compared using computers. The classical approach to evaluating data critically is reviewed, and the types of question one should ask and answer for particular types of data are listed. Finally, a specific application of these precepts to the problem of line identifications is given.
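As a toy illustration of comparing catalogues by machine, the sketch below cross-indexes two small catalogues on a shared designation (here an HD number). The records are invented, and the actual Goddard Cross Index covered eleven catalogues with far more careful error checking.

```python
# Toy sketch: cross-indexing two machine-readable star catalogues on a
# shared HD designation. All records below are invented for illustration.

catalogue_a = {224700: {"vmag": 5.6}, 358: {"vmag": 2.1}}       # keyed by HD number
catalogue_b = {358: {"sp_type": "B8IV"}, 3369: {"sp_type": "B5V"}}

cross_index = {
    hd: {**catalogue_a[hd], **catalogue_b[hd]}       # merge fields per star
    for hd in catalogue_a.keys() & catalogue_b.keys()  # stars in both catalogues
}
print(cross_index)  # {358: {'vmag': 2.1, 'sp_type': 'B8IV'}}
```

Stars appearing in only one catalogue fall out of the intersection, which is exactly the kind of discrepancy a critical evaluation, documented in the catalogue's preface, should flag and explain.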


2020 ◽  
Author(s):  
Tim Cernak ◽  
Babak Mahjour

High throughput experimentation (HTE) is an increasingly important tool in the study of chemical synthesis. While the hardware for running HTE in the synthesis lab has evolved significantly in recent years, there remains a need for software solutions to navigate data-rich experiments. We have developed the software phactor™ to facilitate the performance and analysis of HTE in a chemical laboratory. phactor™ allows experimentalists to rapidly design arrays of chemical reactions in 24-, 96-, 384-, or 1,536-wellplates. Users can access online reagent data, such as a lab inventory, to populate wells with experiments and produce instructions to perform the screen manually or with the assistance of a liquid handling robot. After completion of the screen, analytical results can be uploaded for facile evaluation and to guide the next series of experiments. All chemical data, metadata, and results are stored in a machine-readable format.
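As an illustration of what a machine-readable plate design might look like, the sketch below lays out a catalyst/base screen across a 96-well plate. phactor™'s actual schema is not described here, so the field names and reagents are hypothetical.

```python
# Hypothetical sketch of a machine-readable 96-well reaction array;
# field names and reagents are illustrative, not phactor's real schema.
import itertools
import json

rows, cols = "ABCDEFGH", range(1, 13)          # 8 x 12 = 96 wells
catalysts = ["Pd(OAc)2", "Pd2(dba)3"]
bases = ["K2CO3", "Cs2CO3", "Et3N"]

plate = []
for (row, col), (cat, base) in zip(
        itertools.product(rows, cols),                       # well positions
        itertools.cycle(itertools.product(catalysts, bases))):  # 6 conditions, tiled
    plate.append({"well": f"{row}{col}", "catalyst": cat, "base": base,
                  "yield_pct": None})          # filled in after analysis

with open("plate_96.json", "w") as out:
    json.dump(plate, out, indent=2)
```

Keeping the design, metadata, and (later) the analytical results in one record per well is what makes the screen trivially analyzable and reusable downstream.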


2011 ◽  
Vol 7 (4) ◽  
pp. 26-46 ◽  
Author(s):  
M. Thangamani ◽  
P. Thangaraj

The increase in the number of documents has aggravated the difficulty of classifying them according to specific needs. Clustering analysis in a distributed environment is an active area in artificial intelligence and data mining. Its fundamental task is to use object features to compute the degree of relationship between objects and to accomplish automatic classification without prior knowledge. Document clustering groups highly similar documents together by computing document similarity. Recent studies have shown that ontologies are useful in improving the performance of document clustering. An ontology is a conceptualization of a domain in an individually identifiable, machine-readable format containing entities, attributes, relationships, and axioms. By analyzing different techniques for document clustering, a better clustering technique based on the Genetic Algorithm (GA) is determined. The Non-Dominated Ranked Genetic Algorithm (NRGA) is used in this paper for clustering, as it has the capability of providing a better classification result. An experiment is conducted on the 20 Newsgroups data set to evaluate the proposed technique. The results show that the proposed approach is very effective in clustering documents in a distributed environment.
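For illustration, the sketch below implements a much-simplified, single-objective genetic algorithm for cluster assignment over a precomputed document-similarity matrix (e.g., cosine similarity of TF-IDF vectors, possibly ontology-enriched). The paper's NRGA additionally applies non-dominated ranking over multiple objectives, so this is a stand-in for the idea, not the authors' method.

```python
# Simplified single-objective GA for document cluster assignment.
# NRGA uses non-dominated ranking over multiple objectives; this sketch
# optimizes one fitness (mean within-cluster similarity) for illustration.
import random

def fitness(labels, sims):
    """Mean pairwise similarity within clusters (higher is better)."""
    score, pairs = 0.0, 0
    for i in range(len(labels)):
        for j in range(i + 1, len(labels)):
            if labels[i] == labels[j]:
                score += sims[i][j]
                pairs += 1
    return score / pairs if pairs else 0.0

def ga_cluster(sims, k=2, pop=30, gens=50):
    n = len(sims)
    population = [[random.randrange(k) for _ in range(n)] for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=lambda ind: fitness(ind, sims), reverse=True)
        parents = population[: pop // 2]             # rank-based selection
        children = []
        while len(children) < pop - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, n)             # one-point crossover
            child = a[:cut] + b[cut:]
            child[random.randrange(n)] = random.randrange(k)  # point mutation
            children.append(child)
        population = parents + children
    return max(population, key=lambda ind: fitness(ind, sims))

# sims[i][j]: similarity between documents i and j (e.g., TF-IDF cosine)
sims = [[1.0, 0.9, 0.1, 0.2],
        [0.9, 1.0, 0.2, 0.1],
        [0.1, 0.2, 1.0, 0.8],
        [0.2, 0.1, 0.8, 1.0]]
print(ga_cluster(sims, k=2))  # e.g. [0, 0, 1, 1] (labels equal up to renaming)
```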

