Efficient Results in Semantic Interoperability for Health Care

2016, Vol 25 (01), pp. 184-187
Author(s): J. Charlet, L. F. Soualmia

Summary Objectives: To summarize excellent current research in the field of Knowledge Representation and Management (KRM) within the health and medical care domain. Method: We provide a synopsis of the 2016 IMIA selected articles as well as a related synthetic overview of current and future field activities. A first selection step was performed by querying MEDLINE with a list of MeSH descriptors completed by a list of terms adapted to the KRM section. In the second step, the two section editors separately evaluated the set of 1,432 articles. The third step was a collective effort that merged the evaluation results to retain 15 articles for peer review. Results: The selection and evaluation process of this Yearbook's section on Knowledge Representation and Management yielded four excellent and interesting articles on semantic interoperability for health care, achieved by gathering heterogeneous sources (knowledge and data) and auditing ontologies. In the first article, the authors present a solution based on standards and Semantic Web technologies to access distributed and heterogeneous datasets in the domain of breast cancer clinical trials. The second article describes a knowledge-based recommendation system that relies on ontologies and Semantic Web rules for the dietary management of chronic diseases. The third article applies concept recognition and text mining to derive a model of common human diseases and a phenotypic network of common diseases. In the fourth article, the authors highlight the need for auditing SNOMED CT and propose a crowd-based method for ontology engineering. Conclusions: The current research activities further illustrate the continuous convergence of Knowledge Representation and Medical Informatics, with a focus this year on dedicated tools and methods that advance clinical care by addressing the problem of semantic interoperability. Indeed, there is a need for powerful tools able to manage and interpret complex, large-scale, and distributed datasets and knowledge bases, but also for user-friendly tools that support clinicians in their daily practice.
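To make the interoperability pattern behind the first selected article concrete: querying distributed clinical datasets through a shared vocabulary is typically done with SPARQL. The sketch below uses Python's SPARQLWrapper against a hypothetical endpoint; the URL, class name, and property name are invented for illustration and are not the authors' actual system.

```python
# Hedged illustration of semantic interoperability: the same SPARQL
# query can run against any endpoint that exposes the shared (here
# invented) ex: vocabulary. Endpoint URL and terms are hypothetical.
from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("https://example.org/sparql")  # hypothetical endpoint
sparql.setQuery("""
PREFIX ex: <http://example.org/clinical#>
SELECT ?trial ?patientCount WHERE {
  ?trial a ex:BreastCancerTrial ;
         ex:enrolledPatients ?patientCount .
}
""")
sparql.setReturnFormat(JSON)

# Each binding maps variable names to values.
for row in sparql.query().convert()["results"]["bindings"]:
    print(row["trial"]["value"], row["patientCount"]["value"])
```

Because each participating site would expose the same vocabulary, the identical query can be posed to every site's endpoint, which is the essence of accessing distributed, heterogeneous datasets through shared semantics.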

2017, Vol 26 (01), pp. 188-192
Author(s): H. Dauchel, T. Lecroq

Summary Objective: To summarize excellent current research and propose a selection of the best papers published in 2016 in the field of Bioinformatics and Translational Informatics with applications in the health domain and clinical care. Methods: We provide a synopsis of the articles selected for the IMIA Yearbook 2017, from which we attempt to derive a synthetic overview of current and future activities in the field. As in 2016, a first selection step was performed by querying MEDLINE with a list of MeSH descriptors completed by a list of terms adapted to the section coverage. Each section editor separately evaluated the set of 951 articles returned, and the evaluation results were merged to retain 15 candidate best papers for peer review. Results: The selection and evaluation process of papers published in the Bioinformatics and Translational Informatics field yielded four excellent articles focusing this year on the secondary use and massive integration of multi-omics data for cancer genomics and non-cancer complex diseases. The papers present methods to study the functional impact of genetic variations, either at the level of transcription or at the level of pathways and networks. Conclusions: Current research activities in Bioinformatics and Translational Informatics with applications in the health domain continue to explore new algorithms and statistical models to manage, integrate, and interpret large-scale genomic datasets. As addressed by some of the selected papers, future trends include the international collaborative sharing of clinical and omics data and the implementation of intelligent systems to enhance routine medical genomics.


Author(s): Christopher Walton

In the introductory chapter of this book, we discussed the means by which knowledge can be made available on the Web, that is, the representation of knowledge in a form in which it can be automatically processed by a computer. To recap, we identified two essential steps necessary to achieve this task:

1. We discussed the need to agree on a suitable structure for the knowledge that we wish to represent. This is achieved through the construction of a semantic network, which defines the main concepts of the knowledge and the relationships between these concepts. We presented an example network that contained the main concepts to differentiate between kinds of cameras. Our network is a conceptualization, or an abstract view of a small part of the world. A conceptualization is defined formally in an ontology, which is in essence a vocabulary for knowledge representation.

2. We discussed the construction of a knowledge base, which is a store of knowledge about a domain in machine-processable form; essentially a database of knowledge. A knowledge base is constructed through the classification of a body of information according to an ontology. The result is a store of facts and rules that describe the domain. Our example described the classification of different camera features to form a knowledge base. The knowledge base is expressed formally in the language of the ontology over which it is defined.

In this chapter we elaborate on these two steps to show how we can define ontologies and knowledge bases specifically for the Web. This will enable us to construct Semantic Web applications that make use of this knowledge. The chapter is devoted to a detailed explanation of the syntax and pragmatics of the RDF, RDFS, and OWL Semantic Web standards. The Resource Description Framework (RDF) is an established standard for knowledge representation on the Web. Taken together with the associated RDF Schema (RDFS) standard, we have a language for representing simple ontologies and knowledge bases on the Web.
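The two steps can be shown concretely with the rdflib library in Python. This is a minimal sketch of the chapter's camera example; the exact class and property names (Camera, SLR, hasLens) are illustrative assumptions, not Walton's vocabulary.

```python
# Step 1 (ontology) and step 2 (knowledge base) with rdflib.
from rdflib import Graph, Literal, Namespace, RDF, RDFS

EX = Namespace("http://example.org/cameras#")  # hypothetical vocabulary

g = Graph()
g.bind("ex", EX)

# Ontology: a small class hierarchy (the "semantic network").
g.add((EX.Camera, RDF.type, RDFS.Class))
g.add((EX.SLR, RDF.type, RDFS.Class))
g.add((EX.SLR, RDFS.subClassOf, EX.Camera))

# A property relating cameras to lens descriptions.
g.add((EX.hasLens, RDF.type, RDF.Property))
g.add((EX.hasLens, RDFS.domain, EX.Camera))

# Knowledge base: classify an individual according to the ontology.
g.add((EX.myCamera, RDF.type, EX.SLR))
g.add((EX.myCamera, EX.hasLens, Literal("75-300mm zoom")))

print(g.serialize(format="turtle"))
```

An RDFS-aware reasoner can then infer from the subClassOf axiom that ex:myCamera is also an ex:Camera, which is exactly the kind of automated processing the chapter is building toward.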


1994, Vol 03 (03), pp. 319-348
Author(s): Chitta Baral, Sarit Kraus, Jack Minker, V. S. Subrahmanian

During the past decade, it has become increasingly clear that the future generation of large-scale knowledge bases will consist not of one single isolated knowledge base, but of a multiplicity of specialized knowledge bases that contain knowledge about different domains of expertise. These knowledge bases will work cooperatively, pooling together their varied bodies of knowledge so as to solve complex problems that no single knowledge base, by itself, would have been able to address successfully. In any such situation, inconsistencies are bound to arise. In this paper, we address the question: "Suppose we have a set of knowledge bases, KB1, …, KBn, each of which uses default logic as the formalism for knowledge representation, and a set of integrity constraints IC. What knowledge base constitutes an acceptable combination of KB1, …, KBn?"
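The paper's default-logic machinery is beyond a short example, but the flavor of the combination problem can be shown propositionally. One natural candidate notion of an acceptable combination, sketched below under that simplifying assumption, keeps each maximal subset of the pooled facts that is internally consistent and satisfies the integrity constraints; every name in the code is hypothetical.

```python
# Illustrative sketch only: facts are string literals, "-p" denotes
# the negation of "p", and an integrity constraint is a set of
# literals that must not all hold together.
from itertools import combinations

def consistent(facts):
    """No literal appears together with its negation."""
    return not any(("-" + f) in facts for f in facts if not f.startswith("-"))

def satisfies(facts, ics):
    """No constraint's literals all hold at once."""
    return not any(ic <= facts for ic in ics)

def acceptable_combinations(kbs, ics):
    pooled = set().union(*kbs)
    good = [
        set(c)
        for r in range(len(pooled), 0, -1)
        for c in combinations(sorted(pooled), r)
        if consistent(set(c)) and satisfies(set(c), ics)
    ]
    # Keep only the maximal candidates (not strictly contained in another).
    return [s for s in good if not any(s < t for t in good)]

kb1 = {"bird(tweety)", "flies(tweety)"}
kb2 = {"penguin(tweety)", "-flies(tweety)"}
ic = [{"penguin(tweety)", "flies(tweety)"}]  # penguins must not fly

for combo in acceptable_combinations([kb1, kb2], ic):
    print(sorted(combo))
```

On this example the sketch returns two maximal combinations, one keeping flies(tweety) and one keeping penguin(tweety) with -flies(tweety), mirroring the kind of ambiguity the paper's formal framework is designed to adjudicate.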


2020, Vol 38 (14), pp. 1602-1607
Author(s): Monica M. Bertagnolli, Brian Anderson, Kelly Norsworthy, Steven Piantadosi, Andre Quina, ...

Wide adoption of electronic health records (EHRs) has raised the expectation that data obtained during routine clinical care, termed “real-world” data, will be accumulated across health care systems and analyzed on a large scale to produce improvements in patient outcomes and the use of health care resources. To facilitate a learning health system, EHRs must contain clinically meaningful structured data elements that can be readily exchanged, and the data must be of adequate quality to draw valid inferences. At the present time, the majority of EHR content is unstructured and locked into proprietary systems that pose significant challenges to conducting accurate analyses of many clinical outcomes. This article details the current state of data obtained at the point of care and describes the changes necessary to use the EHR to build a learning health system.
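To ground the phrase "clinically meaningful structured data elements that can be readily exchanged": the dominant exchange standard today is HL7 FHIR, although the article does not prescribe a specific one. Below is a minimal sketch of a FHIR R4-style Observation; the patient reference and measured value are invented for illustration.

```python
# A structured, coded lab result (LOINC 718-7 is the standard code
# for blood hemoglobin). Patient reference and value are invented.
import json

observation = {
    "resourceType": "Observation",
    "status": "final",
    "code": {
        "coding": [{
            "system": "http://loinc.org",
            "code": "718-7",
            "display": "Hemoglobin [Mass/volume] in Blood",
        }]
    },
    "subject": {"reference": "Patient/example"},
    "valueQuantity": {
        "value": 13.2,
        "unit": "g/dL",
        "system": "http://unitsofmeasure.org",
        "code": "g/dL",
    },
}

print(json.dumps(observation, indent=2))
```

A free-text note such as "Hgb 13.2" conveys the same fact to a human reader but cannot be reliably aggregated across systems; the coded form can, which is the precondition for the learning health system the authors describe.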


2016, Vol 25 (01), pp. 207-210
Author(s): T. Lecroq, H. Dauchel

Summary Objectives: To summarize excellent current research and propose a selection of the best papers published in 2015 in the field of Bioinformatics and Translational Informatics with applications in the health domain and clinical care. Method: We provide a synopsis of the articles selected for the IMIA Yearbook 2016, from which we attempt to derive a synthetic overview of current and future activities in the field. As in the previous year, a first selection step was performed by querying MEDLINE with a list of MeSH descriptors completed by a list of terms adapted to the section. Each section editor separately evaluated the set of 1,566 articles, and the evaluation results were merged to retain 14 articles for peer review. Results: The selection and evaluation process of this Yearbook's section on Bioinformatics and Translational Informatics yielded four excellent articles, mainly new method-based papers, focusing this year on the management of large-scale datasets and genomic medicine. Three articles explore the high potential of re-analyzing previously collected data, here from The Cancer Genome Atlas (TCGA) project, and one article presents an original analysis of genomic data from sub-Saharan African populations. Conclusions: Current research activities in Bioinformatics and Translational Informatics with applications in the health domain continue to explore new algorithms and statistical models to manage and interpret large-scale genomic datasets. From population-wide genome sequencing for cataloging genomic variants to understanding the functional impact on pathways and molecular interactions for a given pathology, making sense of large genomic datasets requires a sustained effort to address the issue of clinical translation for precise diagnosis and personalized medicine.


2020, Vol 29 (01), pp. 163-168
Author(s): Ferdinand Dhombres, Jean Charlet

Objective: To select, present, and summarize the best papers in the field of Knowledge Representation and Management (KRM) published in 2019. Methods: A comprehensive and standardized review of the biomedical informatics literature was performed to select the most interesting KRM papers published in 2019, based on PubMed and ISI Web of Knowledge queries. Results: Four best papers were selected among the 1,189 publications retrieved, following the usual International Medical Informatics Association Yearbook reviewing process. In 2019, the research areas covered by the pre-selected papers were the design of semantic resources (methods, visualization, curation) and the application of semantic representations to the integration and enrichment of biomedical data. Besides new ontologies and sound methodological guidance for rethinking knowledge base design, we observed large-scale applications, promising results for phenotype characterization, semantic-aware machine learning solutions for biomedical data analysis, and semantic provenance representations for evaluating scientific reproducibility. Conclusion: In the 2019 KRM selection, research on knowledge representation demonstrated significant contributions both in the design and in the application of semantic resources. Semantic representations serve a great variety of applications across many medical domains, with actionable results.


Pharmaceutics, 2020, Vol 12 (9), pp. 809
Author(s): Emiliene B. Tata, Melvin A. Ambele, Michael S. Pepper

Clinical research in high-income countries is increasingly demonstrating the cost-effectiveness of clinical pharmacogenetic (PGx) testing in reducing the incidence of adverse drug reactions and improving overall patient care. Medications are prescribed based on an individual's genotype (pharmacogenes), which underlies a specific phenotypic drug response. The advent of cost-effective high-throughput genotyping techniques, coupled with the existence of Clinical Pharmacogenetics Implementation Consortium (CPIC) dosing guidelines for pharmacogenetic "actionable variants", has increased the clinical applicability of PGx testing. The implementation of clinical PGx testing in sub-Saharan African (SSA) countries can significantly improve health care delivery, considering the high incidence of communicable diseases, the increasing incidence of non-communicable diseases, and the high degree of genetic diversity in these populations. However, the implementation of PGx testing has been sluggish in SSA, prompting this review, the aim of which is to document the existing barriers. These include under-resourced clinical care logistics, a paucity of pharmacogenetics clinical trials, scientific and technical barriers to genotyping pharmacogene variants, and socio-cultural as well as ethical issues regarding health care stakeholders, among other barriers. Investing in large-scale SSA PGx research and governance, establishing biobanks and bio-databases coupled with clinical electronic health systems, and encouraging the uptake of PGx knowledge by health care stakeholders will ensure the successful implementation of pharmacogenetically guided treatment in SSA.


Author(s): Bernardo Cuenca Grau, Adolfo Plasencia

In this dialogue, Bernardo Cuenca Grau, a computer scientist at the Department of Computer Science, University of Oxford, begins by explaining his research in ontology-based technology and knowledge representation, somewhere between mathematics, philosophy, and computer science. He goes on to argue why we need to represent knowledge in a form that can be processed by a computer, thereby enabling automated reasoning over this knowledge using artificial intelligence. Later he explains how his investigation probes the limits of mathematics to find the most appropriate languages for developing practical applications, for example, the large-scale processing of structured information linked to comprehensive health systems. Bernardo is supportive of collective tools such as Wikipedia. He also discusses why, in his opinion, the success of a scientific or technological idea depends very much on luck, and why the Semantic Web has not yet been fully realized. Furthermore, he argues why bureaucracy confuses process with progress.


2016, Vol 4, pp. 205031211562433
Author(s): Janet S Carpenter, Marc B Rosenman, Mitchell R Knisely, Brian S Decker, Kenneth D Levy, ...

Objective: Prior to implementing a trial to evaluate the economic costs and clinical outcomes of pharmacogenetic testing in a large safety-net health care system, we determined the number of patients taking targeted medications and their clinical care encounter sites. Methods: Using one year of electronic medical record data, we evaluated the number of patients who had started one or more of 30 known pharmacogenomically actionable medications and the number of care encounter sites the patients had visited. Results: We identified 7,039 unique patients who started one or more of the target medications within the 12-month period, with visits to 73 care sites within the system. Conclusion: The findings suggest that the type of large-scale, multi-drug, multi-gene approach to pharmacogenetic testing we are planning is widely relevant, and that successful implementation will require wide-scale education of prescribers and other personnel involved in medication dispensing and handling.
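The cohort count described in the Methods can be sketched in a few lines of pandas. The file and column names below are invented (the study's actual EMR schema is not described here), and the medication list is a small CPIC-actionable subset used only for illustration.

```python
# Hypothetical sketch: count unique patients who started any target
# medication in a one-year EMR extract, and the distinct care sites
# they visited. All file and column names are invented.
import pandas as pd

TARGET_MEDS = {"clopidogrel", "warfarin", "simvastatin"}  # illustrative subset

# The extract is assumed to already cover a single 12-month window.
rx = pd.read_csv("prescriptions_1yr.csv")

starts = rx[rx["medication"].str.lower().isin(TARGET_MEDS)]

n_patients = starts["patient_id"].nunique()
n_sites = starts["encounter_site"].nunique()

print(f"{n_patients} unique patients started a target medication "
      f"across {n_sites} care sites")
```

The same two aggregate numbers (7,039 patients, 73 sites) are what the study reports, which is why the authors conclude that implementation would touch prescribers and dispensing staff across the whole system rather than a few clinics.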

