Section 7: Bioinformatics: Linkage of Heterogeneous Clinical and Genomic Information in Support of Personalized Medicine

2007 ◽  
Vol 16 (01) ◽  
pp. 98-105
Author(s):  
V. Maojo ◽  
J. A. Mitchell ◽  
L. J. Frey

Summary
Biomedical Informatics as a whole faces a difficult epistemological task, since there is no foundation that explains the complexities of modeling clinical medicine and the many relationships between genotype, phenotype, and environment. This paper discusses current efforts to investigate such relationships, intended to lead to better diagnostic and therapeutic procedures and to the development of treatments that could make personalized medicine a reality.

To achieve this goal, a number of issues must be overcome. Foremost is the rapidly growing number of heterogeneous data sources that must be integrated to support personalized medicine. Solutions based on domain-driven information models of heterogeneous data sources, used in conjunction with controlled ontologies and terminologies, are described, and a number of such applications are discussed.

Researchers have realized that many dimensions of biology and medicine aim to understand and model the informational mechanisms that support more precise clinical diagnostic, prognostic, and therapeutic procedures. As data continue to grow exponentially, novel Biomedical Informatics approaches and tools are needed to manage them. Although researchers are typically able to manage this information within specific, usually narrow contexts of clinical investigation, novel approaches for both training and clinical usage must be developed.

After some initially overoptimistic expectations, it now seems clear that genetics alone cannot transform medicine. To achieve this, heterogeneous clinical and genomic data sources must be integrated into scientifically meaningful and productive systems. This will include hypothesis-driven scientific research systems along with well-understood information systems to support such research, which in turn will enable the faster advancement of personalized medicine.
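The abstract does not give implementation details; purely as an illustration, the following minimal Python sketch shows one common pattern behind this kind of integration: renaming source-specific fields and local codes to a shared information model keyed by a controlled terminology, then grouping harmonized records by patient. All field names, codes, and record structures here are hypothetical.

```python
# Minimal sketch (hypothetical field names and codes) of integrating two
# heterogeneous record sources into one patient-centric information model
# keyed by a controlled vocabulary.

from collections import defaultdict

# Hypothetical mapping from source-local fields to shared model concepts.
LOCAL_TO_SHARED = {
    ("clinical", "dx_code"): "condition_code",      # local diagnosis field -> shared concept
    ("genomic", "variant_id"): "sequence_variant",  # local variant field -> shared concept
}

def to_shared_model(source: str, record: dict) -> dict:
    """Rename source-specific fields to the shared information model."""
    shared = {"patient_id": record["patient_id"], "source": source}
    for field, value in record.items():
        key = LOCAL_TO_SHARED.get((source, field))
        if key:
            shared[key] = value
    return shared

def integrate(clinical_records, genomic_records):
    """Group harmonized records from both sources by patient identifier."""
    merged = defaultdict(list)
    for rec in clinical_records:
        merged[rec["patient_id"]].append(to_shared_model("clinical", rec))
    for rec in genomic_records:
        merged[rec["patient_id"]].append(to_shared_model("genomic", rec))
    return dict(merged)

if __name__ == "__main__":
    clinical = [{"patient_id": "p1", "dx_code": "E11.9"}]
    genomic = [{"patient_id": "p1", "variant_id": "rs7903146"}]
    print(integrate(clinical, genomic))
```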

2016 ◽  
Vol 53 ◽  
pp. 172-191 ◽  
Author(s):  
Eduardo M. Eisman ◽  
María Navarro ◽  
Juan Luis Castro

iScience ◽  
2021 ◽  
pp. 103298
Author(s):  
Anca Flavia Savulescu ◽  
Emmanuel Bouilhol ◽  
Nicolas Beaume ◽  
Macha Nikolski

2015 ◽  
Author(s):  
Lisa M. Breckels ◽  
Sean Holden ◽  
David Wojnar ◽  
Claire M. Mulvey ◽  
Andy Christoforou ◽  
...  

Abstract
Sub-cellular localisation of proteins is an essential post-translational regulatory mechanism that can be assayed using high-throughput mass spectrometry (MS). These MS-based spatial proteomics experiments enable us to pinpoint the sub-cellular distribution of thousands of proteins in a specific system under controlled conditions. Recent advances in high-throughput MS methods have yielded a plethora of experimental spatial proteomics data for the cell biology community. Yet, there are many third-party data sources, such as immunofluorescence microscopy or protein annotations and sequences, which represent a rich and vast source of complementary information. We present a unique transfer learning classification framework that utilises a nearest-neighbour or support vector machine system to integrate heterogeneous data sources and considerably improve on the quantity and quality of sub-cellular protein assignment. We demonstrate the utility of our algorithms through the evaluation of five experimental datasets, from four different species, in conjunction with four different auxiliary data sources to classify proteins to tens of sub-cellular compartments with high generalisation accuracy. We further apply the method to an experiment on pluripotent mouse embryonic stem cells to classify a set of previously unknown proteins, and validate our findings against a recent high-resolution map of the mouse stem cell proteome. The methodology is distributed as part of the open-source Bioconductor pRoloc suite for spatial proteomics data analysis.

Abbreviations
LOPIT: Localisation of Organelle Proteins by Isotope Tagging
PCP: Protein Correlation Profiling
ML: Machine learning
TL: Transfer learning
SVM: Support vector machine
PCA: Principal component analysis
GO: Gene Ontology
CC: Cellular compartment
iTRAQ: Isobaric tags for relative and absolute quantitation
TMT: Tandem mass tags
MS: Mass spectrometry
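The published method is implemented in R as part of the Bioconductor pRoloc suite; the Python sketch below (scikit-learn, with entirely hypothetical feature matrices) only illustrates the general idea of SVM-based transfer learning by weighting a primary MS-derived kernel against an auxiliary annotation-derived kernel, not the authors' exact algorithm.

```python
# Minimal sketch of SVM classification combining a primary (MS quantitation)
# kernel with an auxiliary (e.g. GO annotation) kernel via a weighted sum.
# Hypothetical data; the published implementation is the R/Bioconductor pRoloc suite.

import numpy as np
from sklearn.svm import SVC
from sklearn.metrics.pairwise import rbf_kernel, cosine_similarity

rng = np.random.default_rng(0)
n_train, n_test = 200, 50

# Hypothetical primary data: quantitative MS profiles (e.g. fraction abundances).
X_primary_train = rng.random((n_train, 10))
X_primary_test = rng.random((n_test, 10))
# Hypothetical auxiliary data: binary GO cellular-compartment annotations.
X_aux_train = rng.integers(0, 2, (n_train, 300)).astype(float)
X_aux_test = rng.integers(0, 2, (n_test, 300)).astype(float)
y_train = rng.integers(0, 5, n_train)  # sub-cellular compartment labels

def combined_kernel(Xp, Xp_ref, Xa, Xa_ref, theta):
    """Weighted sum of a primary (RBF) and an auxiliary (cosine) kernel."""
    return theta * rbf_kernel(Xp, Xp_ref) + (1.0 - theta) * cosine_similarity(Xa, Xa_ref)

theta = 0.7  # weight on the primary data; tuned by cross-validation in practice
K_train = combined_kernel(X_primary_train, X_primary_train,
                          X_aux_train, X_aux_train, theta)
K_test = combined_kernel(X_primary_test, X_primary_train,
                         X_aux_test, X_aux_train, theta)

clf = SVC(kernel="precomputed").fit(K_train, y_train)
predicted_compartments = clf.predict(K_test)
print(predicted_compartments[:10])
```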


2018 ◽  
Vol 42 (1) ◽  
pp. 39-61 ◽  
Author(s):  
Marko Gulić ◽  
Marin Vuković

Ontology matching plays an important role in the integration of heterogeneous data sources that are described by ontologies. To determine correspondences between ontologies, a set of matchers can be used. After these matchers are executed and their results aggregated, a final alignment method is run to select appropriate correspondences between the entities of the compared ontologies. The final alignment method is an important part of the ontology matching process because it directly determines the output of that process. In this paper we improve our iterative final alignment method by introducing an automatic adjustment of the final alignment threshold, as well as a new rule for identifying false correspondences whose similarity values exceed the adjusted threshold. The method is evaluated on the test ontologies of the OAEI evaluation contest and compared with other final alignment methods.
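The abstract does not describe the algorithm in detail; as a rough illustration only, the Python sketch below shows a generic final alignment step: greedily selecting one-to-one correspondences from an aggregated similarity matrix and keeping only pairs above a threshold. The automatic threshold heuristic here (mean plus a fraction of the standard deviation) is a placeholder, not the iterative adjustment proposed by the authors.

```python
# Minimal sketch (not the authors' exact method) of a final alignment step:
# greedy one-to-one selection from an aggregated similarity matrix, keeping
# only correspondences above an automatically derived threshold.

import numpy as np

def final_alignment(similarity: np.ndarray, margin: float = 0.1):
    """Return a list of (source_idx, target_idx, score) correspondences.

    The threshold is derived here as mean + margin * std of all similarity
    values; the paper adjusts the threshold iteratively instead.
    """
    threshold = similarity.mean() + margin * similarity.std()
    sim = similarity.astype(float).copy()
    alignment = []
    while True:
        i, j = np.unravel_index(np.argmax(sim), sim.shape)
        score = sim[i, j]
        if score < threshold:
            break
        alignment.append((int(i), int(j), float(score)))
        sim[i, :] = -np.inf  # enforce one-to-one matching
        sim[:, j] = -np.inf
    return alignment

if __name__ == "__main__":
    # Hypothetical aggregated matcher scores for 4 x 4 entities.
    S = np.array([[0.9, 0.2, 0.1, 0.0],
                  [0.1, 0.8, 0.3, 0.2],
                  [0.0, 0.3, 0.4, 0.7],
                  [0.2, 0.1, 0.6, 0.3]])
    print(final_alignment(S))
```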

